Contents^
Items^
Fields where Native Americans farmed a thousand years ago discovered in Michigan
Tangentially related: I'm trying to make LiDAR data in Switzerland more accessible, see https://github.com/r-follador/delta-relief
There's some interesting examples in the Readme.
Does LIDAR work underwater?
FWIU in Grand Traverse Bay, Lake Michigan, there's a 9,000-year-old Stonehenge-like structure 40 feet underwater; that's about 4,000 years older than Stonehenge and about 6,000 years older than the Osireion and the Pyramids.
/? Michigan underwater stonehenge: https://www.google.com/search?q=michigan+underwater+stonehen...
There's not even a name or a Wikipedia page for the site? There are various presumed Clovis sites which are now underwater in TN, as well.
A.I. Is Poised to Rewrite History. Literally.
DARPA program sets distance record for power beaming
Is it possible to steer the weather by heating the atmosphere with power beaming microwaves?
Is it possible to cancel the vortical formation of a tornado or a hurricane with microwave power beam(s)?
Does heating the atmosphere with microwaves change the weather, or the jet stream, or the cloud cover?
What sort of a fluidic weather simulator could answer this question?
Is there a fluid simulation device that allows for precise wireless heating of certain points in the fluid?
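As a toy illustration of the kind of solver involved, here's a minimal 2D heat-diffusion grid with one continuously heated cell. This is only a sketch of the concept; a real weather simulator would need full fluid dynamics (advection, Coriolis forces, moisture), not just diffusion.

```python
import copy

def diffuse(grid, source, rate=0.1, steps=50):
    """Explicit finite-difference heat diffusion on a 2D grid.

    grid: list of lists of temperatures; source: (row, col) cell held hot,
    a stand-in for a localized microwave heating point in the fluid.
    """
    n, m = len(grid), len(grid[0])
    for _ in range(steps):
        new = copy.deepcopy(grid)
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                lap = (grid[i-1][j] + grid[i+1][j] + grid[i][j-1]
                       + grid[i][j+1] - 4 * grid[i][j])
                new[i][j] = grid[i][j] + rate * lap
        new[source[0]][source[1]] = 100.0  # continuous localized heating
        grid = new
    return grid

field = [[0.0] * 9 for _ in range(9)]
result = diffuse(field, source=(4, 4))
```

After 50 steps the heat has spread from the heated cell into its neighborhood, which is the effect the question is about, measured at grid scale.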
If so, there could be international space law to study and control for the known and presumed risks of space-based microwave power beaming.
Cheap yet ultrapure titanium might enable widespread use in industry (2024)
> Unfortunately, producing ultrapure titanium is significantly more expensive than manufacturing steel (an iron alloy) and aluminum, owing to the substantial use of energy and resources in preparing high-purity titanium. Developing a cheap, easy way to prepare it—and facilitate product development for industry and common consumers—is the problem the researchers aimed to address.
"Direct production of low-oxygen-concentration titanium from molten titanium" (2024) https://www.nature.com/articles/s41467-024-49085-4
Any comments from someone in the metals industry? The paper shows this process being done at lab scale. It needs to be scaled up to steel mill size. How hard does that look?
What a useful question though. I hadn't realized that the cost of titanium is due to lack of a process for removing oxygen.
What is the most efficient and sustainable alternative to yttrium for removing oxygen from titanium?
process(TiO2, …) => Ti, …
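For context, the incumbent Kroll process that such alternatives aim to displace runs, to my understanding, in two steps (carbochlorination of the ore, then magnesiothermic reduction):

```latex
\mathrm{TiO_2 + 2\,Cl_2 + 2\,C \rightarrow TiCl_4 + 2\,CO}
\mathrm{TiCl_4 + 2\,Mg \rightarrow Ti + 2\,MgCl_2}
```

Both steps are energy-intensive and batchwise, which is much of why titanium costs so much more than steel.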
From the Gemini 2.5 Pro AI "expert", with human review:
> For primary titanium production (from ore): Molten Salt Electrolysis (Direct Electrochemical Deoxygenation, FFC Cambridge, OS processes, etc.) and calciothermic reduction in molten salts
> They aim to [sic.] revolutionize titanium production by moving away from the energy-intensive and environmentally impactful Kroll process, directly reducing TiO2 and offering the potential for closed-loop systems.
> For recycling titanium scrap and deep deoxidation: Hydrogen plasma arc melting and calcium-based deoxidation techniques (especially electrochemical calcium generation) are highly promising. Hydrogen offers extreme cleanliness, while calcium offers potent deoxidizing power.
...
> Magnesium Hydride Reduction (e.g., University of Utah's reactor)
> Solid-State Reduction (e.g., Metalysis process)
Are there more efficient, sustainable methods of titanium production?
Also, TIL Ti is a catalyst for CNT (carbon nanotube) production; and, alloying CNTs with Ti leaves vacancies.
> From the Gemini 2.5 Pro AI "expert", with human review:
You don't know enough about the subject to answer the question on your own, do you? So your "review" is really just cutting and pasting shit you also don't understand, which may or may not be true.
Thanks for your service.
Do you have a factual dispute with what I posted?
Are any of those alternatives hallucinations, in your opinion?
I feel no obligation to assist further with these.
> Do you have a factual dispute with what I posted?
I don't have nearly enough knowledge or experience in the subject to talk about the factual accuracy of what you posted. The whole point of my comment was, neither do you.
So, to your knowledge there is no factual inconsistency with what I have posted?
You have made an assumption that I didn't review the content that I prepared to post. You have alleged in ignorance and you have disrespectfully harassed without due process.
I did not waste your time with spammy unlabeled AI BS.
I have given my "review search results" time for free; and, in this case too, I have delivered value. You made this a waste of my time. You have caused me loss with such harassment. I have not caused you loss by posting such preliminary research (which checks out).
Did others in this thread identify and share alternative solutions for getting oxygen out of titanium? I believe it was fair to identify and share alternative solutions to the OT which I (re-) posted because this is an unsolved opportunity.
I believe it's fair and advisable to consult and clearly cite AI.
Why would people cite their use of AI? Isn't that what we want?
Which helped solve the OT problem?
Given such behavior toward me in this forum, I should omit such insightful research (into "efficient and sustainable alternatives") to deny them such advantage.
This was interesting to me and worth spending my personal time on also because removing oxygen from graphene oxide wafers is also a billion dollar idea. Does "hydrogen plasma" solve for deoxidizing that too?
Cargo fuzz: a cargo subcommand for fuzzing with libFuzzer
The other day I noticed the fuzzing support in the zip crate.
How does cargo-fuzz compare to cargo-afl and the `cargo afl fuzz` command?
Rust Fuzz Book > Tutorial: https://rust-fuzz.github.io/book/afl/tutorial.html#start-fuz...
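Both tools wrap coverage-guided fuzzers (libFuzzer for cargo-fuzz, AFL for cargo-afl), so the core loop is the same; they differ mainly in instrumentation and workflow. A toy Python sketch of that loop, with the target's return value standing in for real edge coverage:

```python
import random

def toy_fuzz(target, seed_input=b"hi", iterations=200):
    """Minimal sketch of the coverage-guided loop that both libFuzzer
    (cargo-fuzz) and AFL (cargo-afl) implement: pick a corpus entry,
    mutate it, keep it if it reaches new 'coverage'."""
    rng = random.Random(0)
    corpus, seen, crashes = [seed_input], set(), []
    for _ in range(iterations):
        data = bytearray(rng.choice(corpus))
        if data:
            data[rng.randrange(len(data))] = rng.randrange(256)  # flip one byte
        if rng.random() < 0.3:
            data.append(rng.randrange(256))                      # grow the input
        data = bytes(data)
        try:
            outcome = target(data)
        except Exception:
            crashes.append(data)       # a real fuzzer saves this as a crash case
            continue
        if outcome not in seen:        # proxy for "new edge coverage"
            seen.add(outcome)
            corpus.append(data)
    return corpus, crashes

def target(data):
    if data[:1] == b"\xff":            # 'bug' reachable only via one byte value
        raise ValueError("boom")
    return len(data) % 4

corpus, crashes = toy_fuzz(target)
```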
Show HN: Spark, An advanced 3D Gaussian Splatting renderer for Three.js
I'm the co-creator and maintainer of https://aframe.io/ and long time Web 3D graphics dev.
Super excited about new techniques to author / render / represent 3D. Spark is an open-source library I worked on with some friends to easily integrate Gaussian splats into your THREE.js scene. I hope you find it useful.
Looking forward to hearing what features / rendering techniques you would love to see next.
PartCAD can export CAD models to Three.js.
The OCP CAD viewer extension for build123d and cadquery models, for example, is also built on Three.js. https://github.com/bernhard-42/vscode-ocp-cad-viewer
New Hydrogel Turns Toxic Wastewater and Algae Blooms into Garden Gold
Whey protein fibrils, Yeast-laden hydrogels, brewing waste, oleophilic hemp aerogels, chitosan and magnesium;
But not real gold. For that, you'd need whey protein;
From "Turning waste into gold" (2024) https://www.sciencedaily.com/releases/2024/02/240229124612.h... :
> Researchers have recovered gold from electronic waste. Their highly sustainable new method is based on a protein fibril sponge, which the scientists derive from whey, a food industry byproduct. ETH Zurich researchers have recovered the precious metal from electronic waste [in solution]
From "Brewing tea removes lead from water" https://news.ycombinator.com/item?id=43165095 :
> "Yeast-laden hydrogel capsules for scalable trace lead removal from water" (2024) https://pubs.rsc.org/en/content/articlelanding/2024/su/d4su0...
> "Application of brewing waste as biosorbent for the removal of metallic ions present in groundwater and surface waters from coal regions" (2018) https://www.sciencedirect.com/science/article/abs/pii/S22133...
Chitosan + magnesium, or hydrogels: which wins, given efficiency and sustainability as criteria?
"Simultaneous ammonium and phosphate removal with Mg-loaded chitosan carbonized microsphere: Influencing factors and removal mechanism" (2023) https://www.sciencedirect.com/science/article/abs/pii/S00139...
ScholarlyArticle: "Molecular Insights into Novel Struvite–Hydrogel Composites for Simultaneous Ammonia and Phosphate Removal" (2025) https://pubs.acs.org/doi/10.1021/acs.est.4c11700
Supporting Information for "Molecular Insights into Novel Struvite– Hydrogel Composites for Simultaneous Ammonia and Phosphate Removal" (2025) https://acs.figshare.com/ndownloader/files/54930495
Publish a Python Wheel to GCP Artifact Registry with Poetry
GCP Artifact Registry is an OCI Container Image Registry.
It looks like there are a few GitHub Actions for pushing container image artifacts to GCP Artifact Registry: https://github.com/marketplace?query=artifact+registry&type=...
FWIW, though it may not be necessary for a plain Python package, "pypa/cibuildwheel" is the easy way to build a Python package for various platforms in CI.
SLSA.dev, Sigstore;
GitHub supports artifact attestations and storing attestations in an OCI Image store FWIU. Does GCP Artifact Registry support attestations?
"Using artifact attestations to establish provenance for builds" https://docs.github.com/en/actions/security-for-github-actio...
> GCP Artifact Registry is an OCI Container Image Registry.
That is one of the supported formats (and maybe most common), but not the only one.
https://cloud.google.com/artifact-registry/docs/supported-fo...
The Python one behaves just like PyPI, you just need to specify the URL and provide credentials.
GitHub specifically doesn't have Python package index (PEP 503, PEP 740) support on their roadmap: https://github.com/github/roadmap/issues/94#issuecomment-158...
GitLab has Python package registry support (503?): https://docs.gitlab.com/user/packages/pypi_repository/
Gitea has Python package registry support (503?): https://docs.gitea.com/usage/packages/pypi
PyPI supports attestations for Python packages when built by Trusted Publishers: https://docs.pypi.org/attestations/ :
> PyPI uses the in-toto Attestation Framework for the attestations it accepts. [ in-toto/attestation spec: https://github.com/in-toto/attestation/blob/main/spec/README... ]
> Currently, PyPI allows the following attestation predicates:
> SLSA Provenance, PyPI Publish
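For illustration, the Statement layer of the in-toto Attestation Framework is just JSON with a subject digest and a typed predicate. A sketch with made-up digest and predicate values (the envelope shape follows the spec; the contents here are hypothetical):

```python
import json

statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{
        "name": "example_pkg-1.0.0-py3-none-any.whl",   # hypothetical wheel
        "digest": {"sha256": "deadbeef" * 8},           # placeholder digest
    }],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        # placeholder; real SLSA provenance records the build definition
        "buildDefinition": {"buildType": "https://example.org/ci"},
    },
}

encoded = json.dumps(statement, sort_keys=True)
```

In practice this statement gets wrapped in a DSSE envelope and signed (e.g. via Sigstore) before an index like PyPI will accept it.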
Artifact Registry > Artifact Registry documentation > Guides > Manage Python packages: https://cloud.google.com/artifact-registry/docs/python/manag... :
> [Artifact Registry] private repositories use the canonical Python repository implementation, the simple repository API (PEP 503), and work with installation tools like pip.
PEP 503 – Simple Repository API: https://peps.python.org/pep-0503/
PEP 740 – Index support for digital attestations: https://peps.python.org/pep-0740/
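A PEP 503 project page is deliberately simple: one anchor tag per distribution file. A minimal stdlib sketch of parsing one (the sample HTML and URLs below are made up):

```python
from html.parser import HTMLParser

class SimpleIndexParser(HTMLParser):
    """Collects (filename, href) pairs from a PEP 503 project page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
    def handle_data(self, data):
        if self._href and data.strip():
            self.links.append((data.strip(), self._href))
            self._href = None

page = """<html><body>
<a href="https://files.example/pkg-1.0.tar.gz#sha256=abc123">pkg-1.0.tar.gz</a>
<a href="https://files.example/pkg-1.0-py3-none-any.whl#sha256=def456">pkg-1.0-py3-none-any.whl</a>
</body></html>"""

parser = SimpleIndexParser()
parser.feed(page)
```

The `#sha256=...` URL fragment is how PEP 503 conveys file hashes, which is why pip can verify downloads from any conforming index, Artifact Registry included.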
Quantum physicists unveil most 'trustworthy' random-number generator yet
What is the bitrate?
Presumably it has passed the NIST randomness tests in paranoid_crypto.
From https://news.ycombinator.com/item?id=43497414 :
> 100Gbit/s is faster than qualifying noise from a [90 meters large] quantum computer?
>> google/paranoid_crypto.lib.randomness_tests
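For reference, the simplest of the NIST SP 800-22 tests that suites like paranoid_crypto include, the frequency (monobit) test, fits in a few lines; a sketch:

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: are 0s and 1s balanced?
    A p-value >= 0.01 is conventionally taken as 'looks random'."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)     # +1 per one-bit, -1 per zero-bit
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

balanced = [0, 1] * 500   # perfectly balanced stream
biased = [1] * 1000       # all ones: should fail decisively
```

Passing monobit is of course necessary but nowhere near sufficient; the full suites run dozens of such statistics.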
Window-sized device taps the air for safe drinking water
AKA dehumidifier
[deleted]
CISOs urged to push vendors for roadmaps on post-quantum cryptography readiness
How do people test post-quantum cryptography? How do they verify their encryption can not be defeated by quantum computing without having access to real world quantum computing? Are they basing everything off theories?
Best be ready, for Q-day:
> The looming ‘Q-Day’ should also be used as the stick to get approval to carry out a cryptographic inventory and roll out projects that foster cryptographic agility more generally.
> Q-Day will not be announced and businesses need to take action now in the face of a growing threat.
[...]
> “An orderly transition will cost less than emergency planning,” Holmqvist said. “It’s like Y2K but without an actual date.”
Q-Day is in a superposition! It's every day from today to the heat death of the universe. We won't know for sure until we measure it.
Could be software, could be hardware. AI could be bringing it nearer in time.
Practically,
How to add the PQ library or upgrade to the version with PQ ciphers?
How to specify that PQ Ciphers are optional in addition to non-PQ Ciphers (TLS 1.3, downgrade risk) or necessary? Where is the configuration file with the cipher list parameter?
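Worth noting: in TLS 1.3 the post-quantum knob is usually the key-exchange group list (hybrid groups such as X25519MLKEM768), not the cipher suite list. A hedged openssl.cnf sketch, assuming an OpenSSL build with ML-KEM support; group names vary by version and provider, so verify against your build:

```
[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
# Hybrid PQ group first, classical fallbacks after => PQ is preferred
# but optional; listing only PQ groups would make it mandatory.
Groups = X25519MLKEM768:X25519:P-256
```

Application configs (nginx, Apache, etc.) typically expose the same choice through their own group/curve directives.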
Despite Rising Concerns, 95% of Organizations Lack a Quantum Computing Roadmap
"CISOs urged to push vendors for roadmaps on post-quantum cryptography readiness" https://news.ycombinator.com/item?id=44226486
The first part of the roadmap: Build a useful Quantum Computer ;)
How about this:
"New Quantum Algorithm Factors Numbers with One Qubit" (2025-06) https://news.ycombinator.com/item?id=44225120
Or this one:
"How Close Is Commercial Quantum Computing?" (2025-06) https://news.ycombinator.com/item?id=44226205
I read all of Cloudflare's Claude-generated commits
> Reading through these commits sparked an idea: what if we treated prompts as the actual source code? Imagine version control systems where you commit the prompts used to generate features rather than the resulting implementation.
Please god, no, never do this. For one thing, why would you not commit the generated source code when storage is essentially free? That seems insane for multiple reasons.
> When models inevitably improve, you could connect the latest version and regenerate the entire codebase with enhanced capability.
How would you know if the code was better or worse if it was never committed? How do you audit for security vulnerabilities or debug with no source code?
My work has involved a project that is almost entirely generated code for over a decade. Not AI generated, the actual work of the project is in creating the code generator.
One of the things we learned very quickly was that having generated source code in the same repository as actual source code was not sustainable. The nature of reviewing changes is just too different between them.
Another thing we learned very quickly was that attempting to generate code, then modify the result is not sustainable; nor is aiming for a 100% generated code base. The end result of that was that we had to significantly rearchitect the project for us to essentially inject manually crafted code into arbitrary places in the generated code.
Another thing we learned is that any change in the code generator needs to have a feature flag, because someone was relying on the old behavior.
> One of the things we learned very quickly was that having generated source code in the same repository as actual source code was not sustainable.
Keeping a repository with the prompts, or other commands separate is fine, but not committing the generated code at all I find questionable at best.
If you can 100% reproduce the same generated code from the same prompts, even 5 years later, given the same versions and everything, then I'd say "Sure, go ahead and don't save the generated code, we can always regenerate it". As someone who spent some time in frontend development: we've been doing it like that for a long time with (MB+) generated code; keeping it in scm just isn't feasible long-term.
But given this is about LLMs, which people tend to run with temperature>0, this is unlikely to be true, so then I'd really urge anyone to actually store the results (somewhere, maybe not in scm specifically) as otherwise you won't have any idea about what the code was in the future.
Temperature > 0 isn’t a problem as long as you can specify/save the random seed and everything else is deterministic. Of course, “as long as” is still a tall order here.
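A toy sketch of that point: with a fixed seed (and an otherwise deterministic pipeline), temperature-style weighted sampling reproduces exactly, while a different seed diverges:

```python
import random

def sample_tokens(vocab, weights, n, seed):
    """Weighted sampling over a tiny 'vocabulary'; a stand-in for
    temperature>0 token sampling with a saved RNG seed."""
    rng = random.Random(seed)
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

vocab = ["the", "a", "cat", "sat"]
weights = [0.4, 0.3, 0.2, 0.1]
run1 = sample_tokens(vocab, weights, 10, seed=42)
run2 = sample_tokens(vocab, weights, 10, seed=42)  # identical to run1
run3 = sample_tokens(vocab, weights, 10, seed=7)   # generally differs
```

The "everything else is deterministic" caveat is exactly what hosted LLMs break: batching, kernels, and hardware all leak into the result.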
My understanding is that the implementation of modern hosted LLMs is nondeterministic even with known seed because the generated results are sensitive to a number of other factors including, but not limited to, other prompts running in the same batch.
Gemini, for example, launched implicit caching on or about 2025-05-08: https://developers.googleblog.com/en/gemini-2-5-models-now-s... :
> Now, when you send a request to one of the Gemini 2.5 models, if the request shares a common prefix as one of previous requests, then it’s eligible for a cache hit. We will dynamically pass cost savings back to you, providing the same 75% token discount.
> In order to increase the chance that your request contains a cache hit, you should keep the content at the beginning of the request the same and add things like a user's question or other additional context that might change from request to request at the end of the prompt.
From https://news.ycombinator.com/item?id=43939774 re: same:
> Does this make it appear that the LLM's responses converge on one answer when actually it's just caching?
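A toy prefix cache makes the mechanic concrete (illustrative only, not Gemini's implementation): the stable context at the front of the request is computed once, and only the changing suffix is recomputed per request.

```python
class PrefixCache:
    def __init__(self):
        self.entries = {}              # prefix -> cached computation
        self.hits = self.misses = 0

    def get_or_compute(self, prefix, suffix, compute):
        """prefix = stable start of the request; suffix = the part that
        changes (the user's question), per the guidance quoted above."""
        if prefix in self.entries:
            self.hits += 1
            state = self.entries[prefix]
        else:
            self.misses += 1
            state = self.entries[prefix] = compute(prefix)
        return state + compute(suffix)

cache = PrefixCache()
expensive_calls = []

def compute(text):
    expensive_calls.append(text)       # track how often we do "real" work
    return text.upper()

system = "You are a helpful assistant. "
a = cache.get_or_compute(system, "q1?", compute)
b = cache.get_or_compute(system, "q2?", compute)
```

Note the cached prefix state is shared across different questions, which is why responses to similarly-prefixed requests can look like they "converge" even though only the prefix computation was reused.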
A Lean companion to Analysis I
A Lean textbook!
Why no HoTT, though?
"Should Type Theory (HoTT) Replace (ZFC) Set Theory as the Foundation of Math?" https://news.ycombinator.com/item?id=43196452
Additional Lean resources from HN this week:
"100 theorems in Lean" https://news.ycombinator.com/item?id=44075061
"Google-DeepMind/formal-conjectures: collection of formalized conjectures in lean" https://news.ycombinator.com/item?id=44119725
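For a taste of what Lean proofs look like, a self-contained Lean 4 example (core library only, no Mathlib; my own sketch, not from the companion):

```lean
-- Commutativity of addition on the naturals, by induction.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih]
```

The companion builds up the reals and analysis theorems in the same theorem/tactic style, just at much greater length.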
> Why no HoTT, though?
Sort of a weird question to ask imo.
Terence Tao has a couple of analysis textbooks and this is his companion to the first of those books in Lean. He doesn't have a type theory textbook, so that's why no homotopy type theory - it's not what he's trying to do at all.
If HoTT is already proven, and sets, categories, and types are already proven, I agree that it's not necessary to prove the same in an applied analysis book; though it is another opportunity to verify HoTT in actual application domains.
"Is this consistent with HoTT?" a tool could ask.
But none of this is what he’s trying to do.
He wrote a book to go with his course in undergraduate real analysis. <= this does not contain HoTT because HoTT is not part of undergraduate real analysis
He’s making a companion to that book for lean <= so this also doesn’t contain HoTT.
Just like it doesn’t contain anything about raising partridges or doing skydiving, it doesn’t have anything about HoTT because that’s not what he’s trying to write a book about. He’s writing a lean companion to his analysis textbook. I get that you are interested in HoTT. If so you should probably get a book on it. This isn’t that book.
He is a working mathematician showing how a proof assistant can be used as part of accomplishing a very specific mainstream mathematical task (ie proving the foundational theorems of analysis). He's not trying to write something about the theoretical basis for proof assistants.
Are there types presented therein?
Presumably, left-to-right MRO solves for diamond-inheritance because of a type theory.
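The Python version of the left-to-right MRO mentioned above can be observed directly: C3 linearization resolves a diamond by ordering parents left-to-right, children before parents, each class exactly once (a property of the linearization algorithm itself):

```python
class Base:
    def who(self): return "Base"

class Left(Base):
    def who(self): return "Left"

class Right(Base):
    def who(self): return "Right"

class Child(Left, Right):   # diamond: Base reached via both parents
    pass

# C3 linearization: Child -> Left -> Right -> Base -> object
mro_names = [c.__name__ for c in Child.__mro__]
```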
I suppose it doesn't matter if HoTT is the most sufficient type / category / information-theoretic set theory for inductively-presented real analysis in classical spaces at all.
But then why present in Lean, if the precedent formalisms are irrelevant? (Fairly also, is HoTT proven as a basis for Lean itself, though?)
Are there types presented therein?
No. Analysis typically is presented using naive set theory to build the natural numbers, integers, rationals and (via Dedekind cuts) real numbers. For avoidance of doubt, in the standard construction of maths these are sets, not types. Then from there a standard first course in analysis deals with the topology of the reals, sequences and series, limits and continuity of real functions, derivatives and the integral. Then if people study analysis further it would typically involve complex numbers and functions, how you do calculus with them, Fourier analysis etc, but types and type theory wouldn't form part of any standard treatment of analysis that I'm aware of.

Types are not a mainstream topic in pure maths. Type theory was part of Bertrand Russell's attempt to solve the problems of set theory, which ended up as a bit of a sideshow because ZF/ZFC and naive set theory turned out to require far less of a rewrite of all the rest of maths and so became the standard approach. Types come up I think if people go deeply into formal logic or category theory (neither of which is a type of analysis), but category theory is touched on in abstract algebra, after dealing with the more conventional topics (groups, rings, fields, modules), sometimes.

Most people I know who know about type theory came at it from a computer science angle rather than pure maths. You might learn some type theory if you do the type of computer science course where you learn lambda calculus.
why present in Lean, if the precedent formalisms are irrelevant?
If someone gave a powerpoint presentation, do they have to use the presentation to talk about how powerpoint is made? If someone writes a paper in latex, does the paper have to be about latex and mathematical typesetting? Or are those tools that you're allowed to use to accomplish some goal?

He's presenting in Lean because that's the proof assistant that he actually uses and he's interested in proof assistants and other tools and how they help mathematicians. He's not presenting about Lean and how it's made. He's showing how you can use it to do proofs in analysis.
Well, FWIU, what you refer to as "topology" is founded in HoTT "type theory".
> for avoidance of doubt in the standard construction of maths these are sets, not types. Then from there a standard first course in analysis deals with the topology of the reals, sequences and series, limits and continuity of real functions, derivatives and the integral.
HoTT is precedent to these as well.
So, Lean isn't proven with HoTT either.
Is type theory useful in describing a Hilbert hotel infinite continuum of Reals or for describing quantum values like qubits and qudits?
I don't think I'm overselling HoTT as a useful axiomatic basis for things proven in absence of type and information-theoretic foundations.
From https://news.ycombinator.com/item?id=43191103#43201742 , which I may have already linked to in this thread :
> "Should Type Theory Replace Set Theory as the Foundation of Mathematics?" (2023) https://link.springer.com/article/10.1007/s10516-023-09676-0
That would include Real Analysis (which indeed one isn't at all obligated to prove with a prover and a language for proofs down to {Null, 0, 1} and <0.5|-0.8>).
Are types checked at runtime, with other Hoare logic preconditions and postconditions?
U.K. lab promises air conditioner revolution without polluting gases
> barocaloric solids
Barocaloric material: https://en.wikipedia.org/wiki/Barocaloric_material
Show HN: GPT image editing, but for 3D models
Hey HN!
I’m Zach one of the co-founders of Adam (https://www.adamcad.com). We're building AI-powered tools for CAD and 3D modeling [1].
We’ve recently been exploring a new way to bring GPT-style image editing directly into 3D model generation and are excited to showcase this in our web-app today. We’re calling it creative mode and are intrigued by the fun use cases this could create by making 3D generation more conversational!
For example, you can enter a prompt such as “an elephant”, then follow it up with “have it ride a skateboard”, and it preserves the context and identity and maintains consistency with the previous model. We believe this lends itself better to an iterative design process when prototyping creative 3D assets or models for printing.
We’re offering everyone 10 free generations to start (ramping up soon!). Here’s a short video explaining how it works: https://www.loom.com/share/cf9ab91375374a4f93d6cc89619a043b
We’d also love you to try our parametric mode (free!), which uses LLMs to create a conversational interface for solid modeling, as touched on in a recent HN thread [2]. We are leveraging the code generation capabilities of these models to generate OpenSCAD code (an open-source script-based CAD language) and are surfacing the variables as sliders the user can adjust to tweak their design. We hope this can give a glimpse into what it could be like to “vibe-CAD”. We will soon be releasing our results on Will Patrick's Text to CAD eval [3] and adding B-rep compatible export!
We’d love to hear what you think and where we should take this next :)
[1]https://x.com/zachdive/status/1882858765613228287
[2]https://news.ycombinator.com/item?id=43774990
[3]https://willpatrick.xyz/technology/2025/04/23/teaching-llms-...
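The parametric-mode idea (have the LLM emit OpenSCAD with top-level variables, then surface those variables as sliders) can be sketched roughly like this; all names are illustrative, not Adam's actual implementation:

```python
import re

def generate_openscad(params):
    """Emit OpenSCAD source with top-level variables a UI can surface
    as sliders (stand-in for the LLM's code generation step)."""
    lines = [f"{name} = {value};" for name, value in params.items()]
    lines.append("cylinder(h=height, r=radius, $fn=64);")
    return "\n".join(lines)

def extract_sliders(source):
    """Find top-level `name = number;` assignments to expose as sliders."""
    return {
        m.group(1): float(m.group(2))
        for m in re.finditer(r"^(\w+) = ([\d.]+);", source, re.MULTILINE)
    }

scad = generate_openscad({"height": 20, "radius": 5})
sliders = extract_sliders(scad)
```

Re-rendering with a moved slider is then just regenerating the source with the new parameter values, no LLM round-trip needed.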
Build123d and Cadquery procedural CAD output and input support would be a cool feature.
Copilot + GPT-4 can generate build123d Python code; procedural CAD genai. It's spatially incoherent sometimes, as if there's not much RLHF for 3D in the coding LLM model, but the code examples are close to compiling and helpful. There's also an ocp-vscode extension to render build123d and cadquery CAD models in a tab in the vscode IDE: https://build123d.readthedocs.io/en/latest/external.html#ocp...
PartCAD is probably useful for your application as well.
Are there metadata standards for CAD parts, assemblies, and scenes? For example in PartCAD or in a BIM Building Information Model that catalogs part numbers and provenance for the boardroom hinge screws and maybe also 3D-printable or otherwise fab-able CAD models?
nething is an LLM with build123d output: https://nething.xyz/
FWIU it's possible to transform B-rep (boundary representation) to NURBS with a different tool.
B-rep: https://en.wikipedia.org/wiki/Boundary_representation
NURBS: https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline
glTF 2.0 > Adoption of: https://en.wikipedia.org/wiki/GlTF#Adoption_of_glTF_2.0
PartCAD > Features > Generative AI: https://partcad.readthedocs.io/en/latest/features.html#gener... :
> Individual parts, assemblies and scenes can also can be exported into 3D model file formats, including: STEP, BREP, STL, 3MF, ThreeJS, OBJ, GLTF, IGES
Thanks! We've done a good amount of experimenting with cadquery and build123d and actually got it to the point where it can reliably generate compiling code; the problem is the generation quality is still more limited than with OpenSCAD. We may still release it so that users can export to STEP/BREP formats, but we have been going back and forth internally on how highly to prioritize it. We definitely want to build out the ability to generate multiple parts in parallel, and assemblies. Thanks for all this :)
Multiple export formats ftw, watermarking, and metadata
(/?hnlog "NURBS") ... https://news.ycombinator.com/item?id=40131766 :
> ai-game-development tools lists a few CAD LLM apps like blenderGPT and blender-GPT: https://github.com/Yuan-ManX/ai-game-development-tools#3d-mo...
Justifying OCCT CAD as an alternative; from comments on a post re: "GhostSCAD: Marrying OpenSCAD and Golang" https://news.ycombinator.com/item?id=30940563 :
> > CadQuery's CAD kernel Open CASCADE Technology (OCCT) is much more powerful than the CGAL used by OpenSCAD. Features supported natively by OCCT include NURBS, splines, surface sewing, STL repair, STEP import/export, and other complex operations, in addition to the standard CSG operations supported by CGAL
> Ability to import/export STEP and the ability to begin with a STEP model, created in a CAD package, and then add parametric features. This is possible in OpenSCAD using STL, but STL is a lossy format.
> [...] CadQuery scripts can build STL, STEP, and AMF faster than OpenSCAD.
AMF: Additive manufacturing file format: https://en.wikipedia.org/wiki/Additive_manufacturing_file_fo...
Watermarking:
/? synthID site:github.com https://www.google.com/search?q=SynthID+site%3Agithub.com
/? SynthID site:github.com inurl:awesome https://www.google.com/search?q=SynthID+site%3Agithub.com+in... :
- and-mill/Awesome-GenAI-Watermarking: https://github.com/and-mill/Awesome-GenAI-Watermarking
CAD Metadata; CAD Linked Data:
/? "CAD" Metadata standards: https://www.google.com/search?q="CAD"+metadata+standards
/? CAD 3d design formats and their metadata and RDF: https://www.google.com/search?q=CAD+3d+design+formats+and+th...
- OCX; shipbuilding BIM metadata; https://3docx.org/en/
/? "CAD" "BIM" "RDF": https://www.google.com/search?q=%22CAD%22+%22BIM%22+%22RDF%2... :
- "Scan-to-graph: Semantic enrichment of existing building geometry" (2020) https://www.sciencedirect.com/science/article/abs/pii/S09265... .. https://scholar.google.com/scholar?cites=1199367363418102338... :
> Scan-to-Graph focuses on creating an RDF-based model of an existing asset.
schema.org/CreativeWork > :Drawing, :SoftwareSourceCode
YAML-LD is JSON-LD (RDF) in YAML; with a W3C spec.
PartCAD is plain YAML; but there could probably be an @context to map each YAML metadata attribute to a URI so that Linked Data can be parsed and indexed (for example by search engines which index some schema.org RDFS Classes and Properties).
PartCAD index / electronics / sbcs / partcad.yml: https://github.com/partcad/partcad-index/blob/main/electroni...
partcad.yml: partcad/partcad-electronics-sbcs-intel/blob/main/partcad.yaml: https://github.com/partcad/partcad-electronics-sbcs-intel/bl...
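As a sketch, a hypothetical @context could map plain partcad.yaml-style keys onto schema.org terms, so the same metadata parses as JSON-LD. The mapping below is made up for illustration, not an actual PartCAD context:

```python
import json

# Hypothetical @context: maps YAML metadata keys to schema.org URIs.
context = {
    "@context": {
        "@vocab": "https://schema.org/",       # bare keys default to schema.org
        "desc": "https://schema.org/description",
        "url": "https://schema.org/url",
    }
}

# Metadata as it might appear after parsing a plain YAML manifest:
part = {
    "name": "sbc-bracket",
    "desc": "Mounting bracket",
    "url": "https://example.org/part",         # placeholder URL
}

jsonld = {**context, **part}                   # now valid JSON-LD
encoded = json.dumps(jsonld)
```

Search engines and RDF tools could then index the same file that PartCAD already consumes, with no second copy of the metadata to keep in sync.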
In construction, the building inspector requires the spec sheet for basically every part of a house.
Linked Data is presumably advantageous for traceability, given linked provenance metadata from CAD design (w or w/o AI), part manufacturing w/ batch number, assembly, and supply chain lineage.
Properties for a https://schema.org/CreativeWork > :CADModel RDFS Class to increase reusability?:
- Property: isParametric: bool
- Property: parametricSize
- Property: parameters: [...]
- Property: ~ sourceCodeOf ; source URL w/ (git) revhash or version tag
- Property: ~ signedOffOnBy ( to W3C Verifiable Credentials)
- Property: ~ promptsAndModels
SBOM Software Bill of Materials tools can work with and generate JSON-LD RDF for the software found installed on containers, VMs, and hosts. The same for CAD models would be advantageous.
So, identify a portable software packaging format manifest that already supports cryptographic signatures (for https://SLSA.dev e.g. with https://sigstore.dev/ ) with additional (schema.org RDFS in RDFa HTML,) Linked Data properties to skip having to grep for whether the described and indexed CAD model is parametric with respect to size, and where its origin is for example:
- Property: originPoint and signed axes layout
Show HN: Air Lab – A portable and open air quality measuring device
Hi HN!
I’ve been working on an air quality measuring device called Air Lab for the past three years. It measures CO2, temperature, relative humidity, air pollutants (VOC, NOx), and atmospheric pressure. You can log and analyze the data directly on the device — no smartphone or laptop needed.
To better show what the device can do and what it feels like, I spent the past week developing a web-based simulator using Emscripten. It runs the stock firmware with most features available, except for networking. Check it out and let me know what you think!
The firmware will be open-source and available once the first batch of devices ships. We’re currently finishing up our crowdfunding campaign on CrowdSupply. If you want to get one, now is the time to support the project: https://www.crowdsupply.com/networked-artifacts/air-lab
We started building the Air Lab because most air quality measuring devices we found were locked-down or hard to tinker with. Air quality is a growing concern, and we’re hoping a more open, playful approach can help make the topic more accessible. It is important to us that there is a low bar for customizing and extending the Air Lab. Until we ship, we plan to create rich documentation and further tools, like the simulator, to make this as easy as possible.
The technical: The device is powered by the popular ESP32S3 microcontroller, equipped with a precise CO2, temperature, and relative humidity sensor (SCD41) as well as a VOC/NOx (SGP41) and atmospheric pressure sensor (LPS22). The support circuitry provides built-in battery charging, a real-time clock, an RGB LED, buzzer, an accelerometer, and capacitive touch, which makes Air Lab a powerful stand-alone device. The firmware itself is written on top of esp-idf and uses LVGL for rendering the UI.
If you seek more high-level info, here are also some videos covering the project: - https://www.youtube.com/watch?v=oBltdMLjUyg (Introduction) - https://www.youtube.com/watch?v=_tzjVYPm_MU (Product Update)
Would love your feedback — on the device, hardware choices, potential use cases, or anything else worth improving. If you want to get notified on project updates, subscribe on Crowd Supply.
Happy to answer any questions!
I wish it had Zigbee support so I could pair it with other open data-aggregation systems like Home Assistant. AirGradient, another cool air quality monitor, for example, doesn't have this either.
Matter protocol support would also be a useful feature.
Potential integration: Run HVAC fans and/or an attic fan and/or a crawlspace fan if indoor AQI is worse than outdoor AQI
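A minimal sketch of that comparison rule (the function name and margin are hypothetical; a real controller would also debounce sensor readings and add hysteresis so the fan doesn't chatter):

```python
def should_ventilate(indoor_aqi: float, outdoor_aqi: float, margin: float = 10.0) -> bool:
    """Run HVAC/attic/crawlspace fans only when outdoor air is
    meaningfully cleaner than indoor air."""
    return indoor_aqi > outdoor_aqi + margin

# Indoor CO2/VOC buildup with cleaner air outside: ventilate.
print(should_ventilate(indoor_aqi=120, outdoor_aqi=45))  # True
# Nearly equal readings: leave the fans off.
print(should_ventilate(indoor_aqi=50, outdoor_aqi=48))   # False
```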
This says that Air Quality Sensor support was added to the Matter protocol in 2023: https://csa-iot.org/newsroom/matter-1-2-arrives-with-nine-ne... :
> Air Quality Sensors – Supported sensors can capture and report on: PM1, PM2.5, PM10, CO2, NO2, VOC, CO, Ozone, Radon, and Formaldehyde. Furthermore, the addition of the Air Quality Cluster enables Matter devices to provide AQI information based on the device’s location
/? matter protocol Air Quality Cluster: https://www.google.com/search?q=matter+protocol+Air+Quality+...
We'll definitely look into supporting Matter in the future, as it would allow integration with the most common home automation platforms/apps out there.
FWIU Thread and Matter work better when there is a "Border Router" ('hub') in the system; https://news.ycombinator.com/item?id=32167256#32186688
United States Digital Service Origins
Hopefully these services can be restored in 3.5 years. This was one of the greatest things to happen to the aging infrastructure of this country in the last 10 years, as a lot of our systems are based on aging military systems.
We just need it; there's no question of the benefits, and there were no negatives to speak of.
Thank you President Barack Obama! A true leader and patriot.
I mean, I liked what they made, but it's kinda sad that the greatest thing to happen to our nation's infrastructure was some nice websites.
Meanwhile, healthcare, housing, and education got way more expensive and taxes for the wealthy went down.
And a lot of bridges fell into disrepair, roads got worse, etc. Gridlock has made funding anything pretty hard in the last decade, and certain parties are so anti-spending they won't try to fix it.
Anti-spending? The deficit has increased under every Republican president going back almost to WWII. Their "two Santas" strategy seems to work well at confusing people, though.
/? https://www.google.com/search?q=The+deficit+has+increased+un... :
- History of the United States public debt: https://en.wikipedia.org/wiki/History_of_the_United_States_p...
- https://www.politifact.com/factchecks/2019/jul/29/tweets/rep... (2019) :
> The deficit is the difference between the money that the government makes and the money it spends. If the government spends more than it collects in revenues, then it’s running a deficit.
> The federal debt is the running total of the accumulated deficits. [Or surpluses]
"Federal Surplus or Deficit [-] (FYFSD)" https://fred.stlouisfed.org/series/FYFSD#
"Federal Surplus or Deficit [-] as Percent of Gross Domestic Product (FYFSGDA188S)" https://fred.stlouisfed.org/series/FYFSGDA188S
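The deficit/debt relationship defined above is just a running sum; a minimal sketch with made-up yearly figures (deficits negative, surpluses positive):

```python
from itertools import accumulate

# Hypothetical yearly surplus(+)/deficit(-) figures, in $bn.
yearly_balance = [-100, -150, 50, -200]

# The federal debt at any point is the running total of
# accumulated deficits (offset by any surpluses).
debt = list(accumulate(-b for b in yearly_balance))
print(debt)  # [100, 250, 200, 400]
```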
Dr. Sbaitso
Dr. Sbaitso: https://en.wikipedia.org/wiki/Dr._Sbaitso
...
Clean Language questions might be good.
Clean Language: https://en.wikipedia.org/wiki/Clean_language
From https://news.ycombinator.com/item?id=40085678 :
> Clean Language questions : https://cleanlearning.co.uk/blog/discuss/clean-language-ques...
Bootstrapping HTTP/1.1, HTTP/2, and HTTP/3
From the article; the HTTP response header that results in an upgrade to HTTP/3:
alt-svc: h3=":443", h2=":443"
> The HTTP Alt-Svc header it received in packet 25 included the directive h3=":443", so when we then reloaded the page (note: not shift-reload, which would have caused Chrome to "forget" the Alt-Svc for this site), Chrome could switch over to QUIC (packets 31 onwards) and then make the request using HTTP/3 (packets 44-45).

Show HN: GribStream – Query Weather Forecasts Like a Database
Hey HN,
I shared GribStream here a while ago when it was just a NOAA forecast API. It’s grown into something bigger: a full weather data archive you can query just like a database, by time (with as-of/time-travel!), location, or more interestingly by specific weather conditions.
Use built-in expressions directly in your API requests to calculate derived metrics like wind chill, dew point, or more complex ones like crop-specific photothermal unit capture (see demos below). Computation is fully server-side, ideal for web apps, dashboards, or mobile apps with limited resources.
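Wind chill, one of the derived metrics mentioned, has a standard closed-form definition (the NWS wind chill index, in °F and mph); a minimal sketch of the same computation done client-side:

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS wind chill index; defined for temp <= 50 F and wind >= 3 mph."""
    v16 = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v16 + 0.4275 * temp_f * v16

# 20 F with a 15 mph wind feels like about 6 F.
print(round(wind_chill_f(20, 15), 1))
```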
Demos ( https://gribstream.com/demo ):
NBM Snow Accumulation: Real-time ski conditions across North America
HRRR Corn Growth Simulator: Predict crop development stages precisely
GFS Storm Chaser: Track storms globally using dynamic filtering
NBM Wind Field Explorer: Explore wind patterns interactively

Other Improvements:
Added models: NBM, GFS, HRRR, RAP, GEFS, CFS, and Google's GraphCast GFS
Bounding box queries at custom resolution. Perfect for interactive maps
Significant price reduction: over 50% cheaper, plus 90% discount for cached requests
Efficient bulk queries (up to 500 points per request) at same quota cost
Comprehensive Quickstart guide and OpenAPI spec for easier onboarding
Sneak peek at what I'm currently doing: Realtime streaming of grib files as video. This is raw, not productionized at all, just an alpha-toy but it is cool so I'd like to share a few examples. This does no caching at all, it is generated on the fly. So you can play with the times, try other weather variables, tweak the scaling and framerate. I'll be monitoring networking in case I need to shut it down in a hurry, please be patient. Demo for Hurricane Milton
Wind speed at 80m
https://gribstream.com/video?fromTime=2024-10-08T00:00:00Z&UntilTime=2024-10-12T00:00:00Z&name=WIND&level=80%20m%20above%20ground&info=&scaleMin=0&scaleMax=40&fps=6
Convective available potential energy at the surface
https://gribstream.com/video?fromTime=2024-10-08T00:00:00Z&UntilTime=2024-10-12T00:00:00Z&name=CAPE&level=surface&info=&scaleMin=0&scaleMax=3000&fps=6
# wave height milton
https://gribstream.com/video?fromTime=2024-10-08T00:00:00Z&UntilTime=2024-10-12T00:00:00Z&name=HTSGW&level=surface&info=&scaleMin=0&scaleMax=7&fps=6
Coming Soon:
Aggregations over time/space
Lookups by city, zipcode, airport, or custom shapes
New response formats (Parquet, PNG, MP4)
Threshold-based notifications (webhooks/emails)
Full GEFS/CFS ensemble data for probabilistic forecasts

Side-project hunting? If you're looking for your next indie-hack, here are a few ideas that GribStream makes ridiculously easy:
Agriculture: Crop growth modeling and irrigation planning
Renewable Energy: Forecasting for solar and wind energy production
Logistics: Weather-informed routing and delivery scheduling
Insurance: Risk modeling based on historical weather patterns
Event Planning: Scheduling and resource allocation for outdoor events
Hopefully DOGE will let NOAA keep running smoothly so I can ship faster!
Would appreciate your feedback, feature requests, or any ideas on what you'd love to see next.
Thank you!
Was going to mention GenCast and then realized you already have the models live.
Herbie support for the GribStream API might be worthwhile; https://news.ycombinator.com/item?id=42470493 :
> Are there error and cost benchmarks for these predictive models?
ENH: TODO list / PIM integration potential: task notifications for specifically labeled tasks for when it's going to rain or going to be dry for days; when to plant, mow, or work on construction projects in various states.
todo.txt labels: +goingtorain +goingtobedry
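A minimal sketch of such label-driven filtering over todo.txt-style lines (the forecast flags and task texts are made up; a real integration would query a forecast API like GribStream to set the flags):

```python
# Hypothetical forecast flags; a real integration would derive
# these from an actual forecast query.
forecast = {"goingtorain": True, "goingtobedry": False}

tasks = [
    "cover the compost +goingtorain",
    "water the garden +goingtobedry",
    "mow the lawn",
]

def due_now(task: str, flags: dict) -> bool:
    # A task fires when any of its +labels matches an active forecast flag.
    labels = [w[1:] for w in task.split() if w.startswith("+")]
    return any(flags.get(label, False) for label in labels)

print([t for t in tasks if due_now(t, forecast)])
```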
FarmOS is an open source agtech product that could have Historical and Predictive weather built-in.
"Show HN: We open-sourced our compost monitoring tech" https://news.ycombinator.com/item?id=42201207 ; compost and mulch businesses can save money by working with the rain
https://news.ycombinator.com/item?id=42201251 ; "crop monitoring system" site:github.com , digital agriculture, precision agriculture, SIEM
"Satellite images of plants' fluorescence can predict crop yields" https://news.ycombinator.com/item?id=40234890
Having been inundated with ads on weather apps that must somehow pay for their notifications, I've tried a few: TWC, AccuWeather, Wx, WeatherMaster, and WeatherRadar.
Wx is too complex for the average bear. (Where is animated local radar, for example?)
WeatherMaster is nice and open source, but isn't on F-Droid and doesn't have animated radar.
WeatherRadar is open source, is on F-Droid, has a chart that shows {Precipitation, Hi, and Lo} all on one chart, and already requires users to get and configure a uniquely-identifying third-party API token.
Radio Astronomy Software Defined Radio (Rasdr)
Are there Rydberg antenna SDRs for radio astronomy?
What sensitivity (?) is necessary to navigate by the EMF of stars?
FWIU this is called astrometry (or celestial navigation)? There's probably a better word than "astral"?:
From https://news.ycombinator.com/item?id=44054783 :
> How many astral signals does a receiver need to fix to determine lat/long/altitude given the current time?
> How many astral signals does a receiver need to fix in order to infer the current time, given geometrically-impossible triangulation and trilateration solutions and the known geometry of the cosmos and the spherical shape of the earth?
Microsandbox: Virtual Machines that feel and perform like containers
Thanks for sharing!
I'm the creator of microsandbox. If there is anything you need to know about the project, let me know.
This project is meant to make creating microvms from your machine as easy as using Docker containers.
Ask me anything.
Only did a quick skim of the readme, but a few questions which I would like some elaboration.
How is it so fast? Is it making any trade offs vs a traditional VM? Is there potential the VM isolation is compromised?
Can I run a GUI inside of it?
Do you think of this as a new Vagrant?
How do I get data in/out?
> How is it so fast? Is it making any trade offs vs a traditional VM? Is there potential the VM isolation is compromised?
It is a lightweight VM and uses the same technology as Firecracker.
> Can I run a GUI inside of it?
It is planned but not yet implemented. But it is absolutely possible.
> Do you think of this as a new Vagrant?
I would consider Docker for VMs instead. In a similar way, it focuses on dev-ops-type use cases like deploying apps, etc.
> How do I get data in/out?
There is an SDK and server that help do that, and file streaming is planned. But right now, you can execute commands in the VM and get the result back via the server.
> I would consider Docker for VMs instead.
Native Containers would probably solve here, too.
From https://news.ycombinator.com/item?id=43553198 :
>>> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/
And also from that thread:
> How should a microkernel run (WASI) WASM runtimes?
What is the most minimal microvm for WASM / WASI, and what are the advantages to running WASM workloads with firecracker or microsandbox?
> What is the most minimal microvm for WASM / WASI,
By setting up an image with wasmtime for example.
> and what are the advantages to running WASM workloads with firecracker or microsandbox?
I can think of stronger isolation or when you have legacy stuff you need to run alongside.
From https://e2b.dev/blog/firecracker-vs-qemu
> AWS built [Firecracker (which is built on KVM)] to power Lambda and Fargate [2], where they need to quickly spin up isolated environments for running customer code. Companies like E2B use Firecracker to run AI generated code securely in the cloud, while Fly.io uses it to run lightweight container-like VMs at the edge [4, 5].
"We replaced Firecracker with QEMU" (2023) https://news.ycombinator.com/item?id=36666782
"Firecracker's Kernel Support Policy" describes compatible kernel configurations; https://github.com/firecracker-microvm/firecracker/blob/main...
/? wasi microvm kernel [github] https://www.google.com/search?q=wasi+microvm+kernel+GitHub :
- "Mewz: Lightweight Execution Environment for WebAssembly with High Isolation and Portability using Unikernels" (2024) https://arxiv.org/abs/2411.01129 similar: https://scholar.google.com/scholar?q=related:b3657VNcyJ0J:sc...
MCP Jupyter: AI-powered Jupyter collaboration
Re: the _ai_repr_() Jupyter JEP and also MCP, and _repr_jsonld_: https://github.com/jupyter/enhancement-proposals/pull/129#is...
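For context, Jupyter/IPython's rich display protocol already probes objects for `_repr_*_` methods such as `_repr_html_` and `_repr_json_`; an AI-oriented repr like the one discussed in the linked JEP would presumably follow the same pattern. A sketch (the `_ai_repr_` method name is illustrative of the proposal, not a settled spec):

```python
import json

class Experiment:
    def __init__(self, name, runs):
        self.name, self.runs = name, runs

    def _repr_json_(self):
        # Standard IPython rich-repr hook: return JSON-able data.
        return {"name": self.name, "runs": self.runs}

    def _ai_repr_(self):
        # Hypothetical hook per the JEP discussion: a compact
        # text summary intended for an LLM/agent consumer.
        return f"Experiment {self.name!r} with {self.runs} runs"

e = Experiment("baseline", 3)
print(json.dumps(e._repr_json_()))
print(e._ai_repr_())
```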
Efficient superconducting diodes and rectifiers for quantum circuitry
ScholarlyArticle: "Efficient superconducting diodes and rectifiers for quantum circuitry" (2025) https://www.nature.com/articles/s41928-025-01375-5
NewsArticle: "Superconducting diode bridge efficiently converts AC to DC for quantum circuits" https://phys.org/news/2025-05-superconducting-diode-bridge-e... :
> Their superconducting diode bridge, introduced in a paper published in Nature Electronics, was found to perform remarkably well at cryogenic temperatures, achieving rectification efficiencies as high as 42% ± 5%.
Yeah, what I see is

> The bridge can function as a full-wave rectifier with an efficiency up to 42 ± 5%, and offers alternating current (a.c.) to direct current (d.c.) signal conversion capabilities at frequencies up to 40 kHz

Ordinary bridge rectifiers are almost twice as efficient as that. It's an unusual question to be concerned about how high a frequency they can work at; past 10 kHz the switching time of diodes is a problem, but I hear you can get diodes that switch around 1 MHz if you had to. https://electronics.stackexchange.com/questions/152225/what-...
DC current is 0 Hz.
Isn't 110V AC typically 0.06 kHz (60 Hz) in the US?
0.06 kHz < 40 kHz
Yeah, people usually use bridge rectifiers with 50 or 60 Hz or maybe 400 Hz in aviation applications.
However, the way people build power supplies has changed completely since the 1970s when I was a kid reading ham radio books that told you to get a big transformer, hook it to a bridge rectifier, and have a filter with some large capacitors in it. Today the state of the art is
https://en.wikipedia.org/wiki/Switched-mode_power_supply
https://en.wikipedia.org/wiki/Buck%E2%80%93boost_converter
My son was looking for an 18V-1A power supply for a guitar gadget and looking around the house he told me "there doesn't seem to be any connection with the voltage and current and the size of the supply" and it's precisely because of that revolution. Even if the input power is 60 Hz the power supply switches at 100 kHz or more which means the filter network is vastly smaller and you don't have the really dangerous big capacitors you might have seen in the power supply of a PDP-11 or something like that.
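The reason 100 kHz switching shrinks the filter: for a full-wave rectifier feeding a capacitor, ripple scales roughly as ΔV ≈ I/(2fC), so the required capacitance falls linearly with frequency. A rough sizing sketch (the 2fC relation is the usual first-order rule of thumb, not an exact model):

```python
def filter_cap_farads(load_amps: float, ripple_volts: float, freq_hz: float) -> float:
    """First-order full-wave rectifier sizing: C ~ I / (2 * f * dV)."""
    return load_amps / (2 * freq_hz * ripple_volts)

# 1 A load, 1 V of allowed ripple:
c_60hz = filter_cap_farads(1.0, 1.0, 60)       # ~8300 uF: a big can capacitor
c_100khz = filter_cap_farads(1.0, 1.0, 100e3)  # ~5 uF: a tiny part
print(c_60hz / c_100khz)  # the filter shrinks by the frequency ratio
```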
Are there AC-to-USB-C PD power adapters smaller than GaN ones?
GaN > Transistors and power ICs: https://en.wikipedia.org/wiki/Gallium_nitride#Transistors_an...
Rectifiers > Rectifier circuits, Rectification technologies: https://en.wikipedia.org/wiki/Rectifier#Rectification_techno...
https://news.ycombinator.com/item?id=40693984#40720147 :
> Is there a rectifier for gravitational waves, and what would that do?
Basically just abs() absolute value eh
IDK what the SOTA efficiency of these is: "Nanoscale spin rectifiers for harvesting ambient radiofrequency energy" https://news.ycombinator.com/item?id=43234022
BespokeSynth is an open-source "software modular synth" DAW that can host LV2 and VST3 plugins like Guitarix, which can add signal transforms like guitar effects pedals. Tried searching for an apparently exotic 1A universal power supply. Apparently also exotic: a board of guitar pedals with (IDK) one USB-A and a USB-C adapter, with OSC and MIDI support; USB MIDI trololo pedals.
OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU)
Can [OpenTPU] TPUs be fabricated out of graphene, with nanoimprinting or a more efficient approach?
From https://news.ycombinator.com/item?id=42314333 :
>> From "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
>>> Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.
What about QPUs though?
Can QPUs (Quantum Processing Units) built with electrons in superconducting graphene ever be faster than photons in integrated nanophotonics?
There are integrated parametric single-photon emitters and detectors.
Is there a lower cost integrated nanophotonic coherent light source for [quantum] computing than a thin metal wire?
"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33493885
Singularities in Space-Time Prove Hard to Kill
My non-canonical and possibly wrong view of black hole singularities is that they can form in finite time (sort of) only in their own frame of reference, but in any other frame they require infinite time; so from our point of view no singularity ever formed or ever will form. And in practice, in this infinite time something might disrupt their collapse, like getting hit with a similarly massive amount of anti-matter and turning into photons, which might destabilize the whole process and let them get out in a massively energetic event similar to the Big Bang. Which I believe was the case with ours, so no white-hole singularity either.
This view is dismissed by physicists because in GR there's no way to unambiguously define simultaneity, so they don't even attempt to consider what's "before" and what's "after" regarding remote events in strong gravitational fields; saying that the singularity forms "after" everything else that ever happens in the universe is a hard sell.
We don't know if singularities are even possible. Maybe the universe has some crazy repulsive force when atoms or subatomic particles get really really close (closer than in neutron stars, where atoms are femtometers apart).
You can get this easy by reformulating gravity's effect on spacetime as slowing down the speed of light/causality and putting a natural bound that asymptotically approaches zero. It should agree with GR everywhere except at extremes like black holes.
Looking at gravity as a slowdown of c is appealing because it suggests a computational cost of massive particles. As stuff gets more dense, the clock of the universe must slow down.
GR does not describe the interior topology of black holes, beyond predicting a singularity. Is there a hard boundary with no hair, or is there a [knotted or braided] fluidic attractor system with fluidic turbulence at the boundary?
SQR Superfluid Quantum Relativity seems to suggest that there is no hard event horizon boundary.
I don't understand how any model that lacks descriptions of phase states in BEC superfluids could sufficiently describe the magneto-hydro-thermo-gravito dynamics of a black hole system and things outside of it?
It is unclear whether mass/energy/information is actually drawn into a supermassive or a microscopic black hole; couldn't it be that things are only ever captured into attractor paths that are outside of the event horizon?
Does Hawking radiation disprove that black holes don't absorb mass/energy/information?
Signatures of chiral superconductivity in rhombohedral graphene
"Signatures of chiral superconductivity in rhombohedral graphene" (2025) https://www.nature.com/articles/s41586-025-09169-7 :
> Chiral superconductors are unconventional superconducting states that break time reversal symmetry spontaneously and typically feature Cooper pairing at non-zero angular momentum. Such states may host Majorana fermions and provide an important platform for topological physics research and fault-tolerant quantum computing [1–7]. Despite intensive search and prolonged studies of several candidate systems [8–26], chiral superconductivity has remained elusive so far. Here we report the discovery of robust unconventional superconductivity in rhombohedral tetra- and penta-layer graphene without moiré superlattice effects. [...] We also observed a critical B⊥ of 1.4 Tesla, higher than any graphene superconductivity and indicates a strong-coupling superconductivity close to the BCS-BEC crossover [27]. Our observations establish a pure carbon material for the study of topological superconductivity, with the promise to explore Majorana modes and topological quantum computing.
Are "strange metals" really necessary for the study of topological superconductivity, if comparable effects are now multiply-demonstrated with various forms of graphene?
"'Strange metals' point to a whole new way to understand electricity" (2025) https://news.ycombinator.com/item?id=44087916
'Strange metals' point to a whole new way to understand electricity
So electrons are just like photons, being a wave/particle? The article seems to suggest that in strange metals their particle properties are absent and only 'electron field' gradients move, as if electrons exchanged their 'charge'.
Electrons are not just like photons. It's tempting to say that, but there are some significant differences that can lead you into error if you think in this picture.
First of all, if you think of a photon as some small ball, that's not what it is. Mathematically a photon is defined as a state of the EM field (which has been quantised into a set of harmonic oscillators called "normal modes") in which there is exactly one quantum of excitation of a specific normal mode (with given wavevector and frequency). Depending on which kind of modes you consider, a photon could be a gaussian beam, or even a plane wave, so not something localised like you would say of a particle.
Unlike photons, electrons have a position operator, so in principle you can measure and say where one electron is. The same is impossible for photons. Also, electrons have a mass, but photons are massless. This means you can have motionless electrons, but this is impossible for photons: they always move at the speed of light. Electrons have a non-relativistic classical limit, while photons do not.
W. E. Lamb used to say that people should be required a license for the use of the word "photon", because it can be very misleading.
Why don't photons have a position operator?
It’s really not accurate to say that a photon has no position at all. How would a photodiode work? You have to be careful with this stuff. https://physics.stackexchange.com/questions/492711/whats-the...
Photons certainly appear to have a real physical location with 1e9 FPS imaging capabilities:
"Visualizing video at the speed of light — one trillion frames per second" (2012) https://youtube.com/watch?v=EtsXgODHMWk&
But is there an identity function for a photon(s), and is "time-polarization" necessary for defining an identity function for photons?
Peer Programming with LLMs, for Senior+ Engineers
What are some of the differences between Peer Programming with LLMs and Vibe Coding?
This is the origin of vibe coding: https://x.com/karpathy/status/1886192184808149383
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. (...) I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. (...)
Pair programming still very much deals with code and decisions.
So, pair programming continues to emphasize software quality (especially with LLMs) but "vibe coding" is more of a "whoo, I'm a reckless magician" (in a less risky application domain) sort of thing?
But doesn't a 'vibe-coding' "we'll just sort out the engineering challenges later" ensure that there will be re-work and thus less overall efficiency?
Hacker News now runs on top of Common Lisp
So, Hacker News was not rewritten in Common Lisp. Instead they reimplemented the Arc Runtime in Common Lisp.
And that's the sort of thing Lisp excels at.
Jupiter was formerly twice its current size, had a much stronger magnetic field
How is the size defined for a gas planet? The gas density just keeps dropping; where do you draw the line (isosurface, rather)? Earth's radius is always quoted without Earth's atmosphere.
The density falls off pretty steeply at the “edge”, so the exact definition only makes little difference for the radius: https://www.researchgate.net/figure/Density-vs-radius-for-a-...
This is because of Newtonian gravity being inversely proportional to the square of the radius, right?
Gravity changes little over that distance - it's more because of the compounding effect of atmospheric pressure (the deeper you go, the more air you have above you which raises the pressure, raising the density and meaning that pressure increases exponentially faster).
What makes that curve exponential?
Hydrostatic equilibrium under Newtonian gravity (classical mechanics).
Two-body gravitational attraction is an inverse-square power law; gravitational attraction decreases with the square of the distance, but over the height of an atmosphere that variation is small.
g, the gravitational acceleration at Earth's surface, is about 9.8 m/s^2 and is nearly constant over atmospheric heights; the exponential pressure/density profile comes from each layer of gas bearing the weight of all the gas above it (the barometric formula).
Atmospheric pressure: https://en.wikipedia.org/wiki/Atmospheric_pressure#:~:text=P... :
> Pressure (P), mass (m), and acceleration due to gravity (g) are related by P = F/A = (m*g)/A, where A is the surface area. Atmospheric pressure is thus proportional to the weight per unit area of the atmospheric mass above that location.
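The hydrostatic argument can be made concrete with the isothermal barometric formula (a sketch; a real atmosphere is not isothermal, so the standard-atmosphere constants here are only approximations):

```python
import math

def pressure_at_altitude(h_m: float, p0_pa: float = 101325.0) -> float:
    """Isothermal barometric formula: P(h) = P0 * exp(-M*g*h / (R*T)).
    g is treated as constant; the exponential comes from hydrostatic
    balance, not from the inverse-square law."""
    M, g, R, T = 0.0289644, 9.80665, 8.31446, 288.15  # kg/mol, m/s^2, J/(mol K), K
    return p0_pa * math.exp(-M * g * h_m / (R * T))

# Around 5.5 km the pressure is roughly half of sea level.
print(round(pressure_at_altitude(5500)))
```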
Accelerating Docker Builds by Halving EC2 Boot Time
RUN --mount=type=cache can also significantly reduce build times if there is inter-build locality; i.e. if container build jobs run on the same nodes so that cache mounts can be reused by subsequent build jobs.
Examples of docker image cache mounts:
# yum + dnf (example install commands shown after the cache mounts)
RUN --mount=type=cache,id=yum-cache,target=/var/cache/yum,sharing=shared \
    --mount=type=cache,id=dnf-cache,target=/var/cache/dnf,sharing=shared \
    dnf install -y gcc

# apt
RUN --mount=type=cache,id=aptcache,target=/var/cache/apt,sharing=shared \
    apt-get update && apt-get install -y gcc

# pip
RUN --mount=type=cache,id=pip-cache,target=${apphome}/.cache/pip,sharing=shared \
    pip install -r requirements.txt

# cargo w/ uid=1000
RUN --mount=type=cache,id=cargocache,target=${apphome}/.cargo/registry,uid=1000,sharing=shared \
    cargo build --release
"Optimize cache usage in builds" https://docs.docker.com/build/cache/optimize/

Starlink being evaluated to enhance GPS PNT capabilities
Aren't there stable astral radio signal sources detectable with next-gen quantum sensors, such as Rydberg antenna arrays?
How many astral signals does a receiver need to fix to determine the time of day on earth (~theta) given lat/long?
How many astral signals does a receiver need to fix to determine lat/long/altitude given the current time?
How many astral signals does a receiver need to fix in order to infer the current time, given geometrically-impossible triangulation and trilateration solutions and the known geometry of the cosmos and the spherical shape of the earth?
There is a regular monotonic tick in quantum waves; but not a broadcast planetary time offset/reference?
"Simple Precision Time Protocol at Meta" https://news.ycombinator.com/item?id=39306209 :
> FWIW, from "50 years later, is two-phase locking the best we can do?" https://news.ycombinator.com/item?id=37712506 :
>> TIL there's a regular heartbeat in the quantum foam; there's a regular monotonic heartbeat in the quantum Rydberg wave packet [photoionization] interference; and that should be useful for distributed applications with and without vector clocks and an initial time synchronization service
"Quantum watch and its intrinsic proof of accuracy" (2022) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
..
Re: ntpd-rs and higher-resolution network time protocols {WhiteRabbit (CERN), SPTP (Meta)} and NTP NTS : https://news.ycombinator.com/item?id=40785484 :
> "RFC 8915: Network Time Security for the Network Time Protocol" (2020)
Cavity quantum electrodynamics with moiré photonic crystal nanocavity
Would such an integrated nanophotonic light source substitute for quantum dots, for cavity QED QC and sensors?
"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33490730 :
"Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2022) https://doi.org/10.21203/rs.3.rs-1572967/v1
Show HN: Olelo Foil - NACA Airfoil Sim
Hi HN!
A while back, I started exploring ways to make aerodynamic simulation more interactive and visual for the web. I wanted something that felt immediate—intuitive enough for students, fast enough for hobbyists, and hackable enough for engineers. That’s how Olelo Foil was born.
Foil is a browser-based airfoil simulator written in JavaScript using Three.js and WebGL. It lets you interactively explore how airfoils behave under different conditions, all rendered in real time. Right now, it uses simplified fluid models, but I’m working toward integrating Navier-Stokes for more accurate simulations—and I’d love help from anyone interested in fluid dynamics, GPU compute, or numerical solvers.
I’m also building Olelo Honua, an educational platform focused on Hawaiian STEM content and digital tools. Foil is one piece of that larger vision—bringing STEM education into the browser with open, accessible tools.
Check it out, and if you're interested in collaborating (especially on the physics side), I’d love to connect!
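For reference, the symmetric NACA 4-digit thickness distribution that airfoil simulators like this typically start from is a closed-form polynomial; a minimal sketch:

```python
import math

def naca4_thickness(x: float, t: float = 0.12) -> float:
    """Half-thickness of a symmetric NACA 4-digit airfoil (e.g. NACA 0012)
    at chord fraction x in [0, 1]; t is max thickness as a chord fraction."""
    return 5 * t * (0.2969 * math.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1015 * x**4)

# Upper-surface ordinates of a NACA 0012 sampled along the chord:
xs = [i / 10 for i in range(11)]
upper = [naca4_thickness(x) for x in xs]
print(max(upper))  # peak half-thickness, near 0.06 (12% total) around x ~ 0.3
```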
Notes re: CFD, Navier Stokes,
"Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations" https://news.ycombinator.com/item?id=31049608
https://github.com/chennachaos/awesome-FEM4CFD?tab=readme-ov...
>> Numerical methods in fluid mechanics: https://en.wikipedia.org/wiki/Numerical_methods_in_fluid_mec...
jax-cfd mentions phiflow
jax-cfd > Other awesome projects: https://github.com/google/jax-cfd#other-awesome-projects
PhiFlow: https://github.com/tum-pbs/PhiFlow/
We had a monochrome green aerodynamic simulation app literally on floppy disks in middle school in approximately 1999 that was still cool then. IIRC various keyboard keys adjusted various parameters of the 2d hull that was tested to eventually - after floppy disc noises - yield a drag coefficient.
TIL that the teardrop shape maximizes volume and minimizes drag coefficient, but some spoiler wings do generate downward lift to maintain traction at speed.
A competitive game with scores and a leaderboard might be effective.
...
Navier-Stokes is for compressible and incompressible fluids, but it's a field of vortices with curl, so SQG/SQR Superfluid Quantum Gravity/Relativity has Gross-Pitaevskii for modeling emergent dynamics like fluidic attractor systems in exotic states like superfluids, superconductors, and supervacuum.
TIL the Mpemba effect suggests that the phase diagram for water is incomplete, because one needs the initial water temperature to predict the time to freeze or boil; those have to be manifold charts, like HVAC.
There's a Gross-Pitaevskii model of the solar system; gravity results in n-body fluidic vortices which result in and from the motions of the planets and other local masses.
/?hnlog "CFD" :
From "FFT-based ocean-wave rendering, implemented in Godot" https://news.ycombinator.com/item?id=41683990 :
> Can this model a fluid vortex between 2-liter bottles with a 3d-printable plastic connector?
> Curl, nonlinearity, Bernoulli, Navier-Stokes, and Gross-Pitaevskii are known tools for CFD computational fluid dynamics with Compressible and Incompressible fluids.
> "Ocean waves grow way beyond known limits" (2024-09) https://news.ycombinator.com/item?id=41631177#41631975
Also, recently I learned that longitudinal waves in superfluids (and plasmas) are typically faster than transverse standing waves that we observe in fluid at Earth pressures.
FreeBASIC is a free/open source BASIC compiler for Windows DOS and Linux
This one emulates GW-BASIC as PC-BASIC so old BASIC programs for the IBM PC DOS systems can run on modern systems: https://robhagemans.github.io/pcbasic/
FreeBASIC is like Microsoft's QuickBASIC.
More BASIC Languages: https://www.thefreecountry.com/compilers/basic.shtml
It really isn't - from the docs themselves:
FreeBASIC gives you the FreeBASIC compiler program (fbc or fbc.exe),
plus the tools and libraries used by it. fbc is a command line program
that takes FreeBASIC source code files (*.bas) and compiles them into
executables. In the combined standalone packages for windows, the main
executable is named fbc32.exe (for 32-bit) and fbc64.exe (for 64-bit)
The magic of QuickBasic was that it was an editor, interpreter, and help system all rolled up into a single EXE file. Punch F5 and watch your BAS file execute line-by-line.
> The magic of QuickBasic was that it was an editor, interpreter, and help system all rolled up into a single EXE file. Punch F5 and watch your BAS file execute line-by-line.
That's still how vscode works; F5 to debug and Ctrl-[Shift]-P like CtrlP.vim: https://code.visualstudio.com/docs/debugtest/debugging
FWICS,
The sorucoder.freebasic vscode extension has syntax highlighting: https://marketplace.visualstudio.com/items?itemName=sorucode...
There's also a QB64Official/vscode extension that has syntax highlighting and keyboard shortcuts: https://github.com/QB64Official/vscode
re: how qb64 and C-edit are like EDIT.COM, and GORILLA.BAS: https://news.ycombinator.com/item?id=41410427
C-edit: https://github.com/velorek1/C-edit
'Edit' - a CLI/TUI text editor similar to EDIT.COM but written in rust - is now open source https://news.ycombinator.com/item?id=44031529
Laser-Induced Graphene from Commercial Inks and Dyes
What is the advantage of making graphene from inks and dyes instead of from flash-heated unsorted recycled plastic or lasered fruit peels?
/? Graphene laser fruit peel: https://www.google.com/search?q=graphene%20laser%20fruit%20p...
This is covered in the Introduction.
They apply the ink/dye to a surface and then use a laser on it, leaving the graphene behind.
> A versatile “Paint & Scribe” methodology is introduced, enabling to integrate LIG tracks onto any wettable surface, and in particular onto printed and flexible electronics. A process for obtaining freestanding and transferrable LIG is demonstrated by dissolving acrylic paint in acetone and floating LIG in water. This advancement offers novel avenues for diverse applications that necessitate a transfer process of LIG.
I still don't understand why that's possible with commercial inks and dyes but not also with aromatic fruit peels.
Show HN: Turn any workflow diagram into compilable, running and stateful code
Hi HN folks, I'm a co-creator of the Dapr CNCF project and co-founder of Diagrid. Today we announced a free-to-use web app that takes any form of workflow diagram (UML, BPMN, a scribble in your favorite drawing tool or even on paper) and generates code that runs in any IDE and can be deployed to Kubernetes and other container-based systems, based on Dapr's durable execution workflow engine. This essentially allows you to run durable workflows in minutes and takes the guesswork out of how to structure, code, and optimize a code-first workflow app. I'm happy for you to give this a try and provide feedback!
This reminds me of the UML/RUP era from the early 2000s. Is that an attempt to revive or even resurrect UML diagrams and the Rational Unified Process by blending them with AI? I would bet it's all dead forever. I'm skeptical about diagram-driven development making a comeback. In my experience, developers today prefer more agile, code-first approaches because requirements change rapidly and maintaining diagram-code synchronization is an unbearable challenge.
I believe in UML's usefulness as a whiteboard/blackboard language. A fun way to explain what you need or what you imagine to be a good architecture, but that's all; it's a drafting tool. But then, why not use it as a communication tool? You would draft something on the board, and the LLM would generate the program. Sometimes it is simpler to draw 5 rectangles, name them, and show their relationships in UML class modeling than to explain it textually.
UML class diagrams in mermaid syntax require less code than just defining actual classes with stubbed attrs and methods in some programming languages.
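For instance (my own sketch, not from the comment), a one-to-many relationship with stubbed attributes fits in a few lines of mermaid classDiagram syntax, versus the equivalent class definitions plus a separate association in most languages:

```mermaid
classDiagram
    class Customer {
        +name: str
    }
    class Order {
        +items: list
        +total() float
    }
    Customer "1" --> "*" Order : places
```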
Years ago I tried ArgoUML for generating plone classes/models, but there was a limit to how much custom code could be round-tripped and/or stuffed into UML XML IIRC.
Similarly, no-code tools are all leaky abstractions: they model with UI metaphors only a subset of patterns possible in the target programming language, and so round-trip isn't possible after adding code to the initial or periodic code-generation from the synced abstract class diagram.
Instead, it's possible to generate [UML class] diagrams from minimal actual code. For example, the graph_models management command in Django-extensions generates GraphViz diagrams from subclasses of django.models.Model. With code to diagram workflow (instead of the reverse), you don't need to try and stuff code in the extra_code attr of a modeling syntax so that the generated code can be patched and/or transformed after every code generation from a diagram.
https://django-extensions.readthedocs.io/en/latest/graph_mod...
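A typical invocation (per the django-extensions docs; the app and file names here are hypothetical) looks like:

```shell
# Requires: pip install django-extensions pygraphviz
# and 'django_extensions' added to INSTALLED_APPS.
python manage.py graph_models -a -o all_models.png   # diagram every app
python manage.py graph_models myapp -o myapp.dot     # one app, DOT output
```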
I wrote something similar to generate (IDEF1X) diagrams from models for the specified waterfall process for an MIS degree capstone course.
It may be easier to prototype object-oriented code with UML class diagrams in mermaid syntax, but actual code shouldn't be that tough to generate diagrams from.
IIRC certain journals like ACM have their own preferred algorithmic pseudocode and LaTeX macros.
Brandon's Semiconductor Simulator
Which other simulators show electron charge density and heat dissipation?
Can this simulate this?:
"Synaptic and neural behaviours in a standard silicon transistor" (2025) https://www.nature.com/articles/s41586-025-08742-4 .. https://news.ycombinator.com/item?id=43506198
What about (graphene) superconductors though?
On my info page (https://brandonli.net/semisim/info) there's a list of things my simulation can and can't do. After taking a look at the paper you mentioned, I think simulating it may very well be possible, however it might take a bit of effort. As for graphene, its band structure is different enough that I don't think it would work.
Note that my simulation is intended for educational purposes only, not scientific research.
- Brandon
Thanks, quite the useful simulator; I hadn't found that page yet. Additional considerations for circuit simulators:
What does the simulator say about signal delay and/or propagation in electronic circuits and their fields? How long does it take for a lightbulb to turn on after a switch is thrown, given the length of the circuit and the real distance between points in it?
(I learned this gap in our understanding of electron behavior from this experiment, which had never been done FWIU: "How Electricity Actually Works" (2022) https://www.youtube.com/watch?v=oI_X2cMHNe0 )
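As a back-of-the-envelope check (my own sketch), the lower bound on that delay is just the straight-line source-to-load gap divided by c, since the energy crosses the gap through the fields rather than travelling the length of the wire:

```python
# Lower bound on when the load can first respond: straight-line
# source-to-load distance divided by the speed of light.
C = 299_792_458.0  # m/s

def earliest_response_time(gap_m: float) -> float:
    return gap_m / C

print(earliest_response_time(1.0))   # ~3.3e-9 s for a 1 m gap (the video's setup)
print(earliest_response_time(1e-6))  # ~3.3e-15 s (a few fs) for a 1 um gap
```

The full current still takes wire-length/c (plus reflections) to settle; this only bounds the first response.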
FWIW, additionally:
Hall Effect and Quantum Anomalous Hall Effect;
"Tunable superconductivity and Hall effect in a transition metal dichalcogenide" (2025) https://news.ycombinator.com/item?id=43347319
ScholarlyArticle: "Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6
NewsArticle: "Anomalous Hall crystal made from twisted graphene" (2025) https://physicsworld.com/a/anomalous-hall-crystal-made-from-...
From "Single-chip photonic deep neural network with forward-only training" https://news.ycombinator.com/item?id=42314581 :
"Fractional quantum anomalous Hall effect in multilayer graphene" (2024) https://www.nature.com/articles/s41586-023-07010-7
"Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 .. https://news.ycombinator.com/item?id=39365579
> "Room-temperature quantum coherence of entangled multiexcitons in a metal-organic framework" (2024) https://www.science.org/doi/10.1126/sciadv.adi3147
Electrons (and photons and phonons and other fields of particles) are more complex than that though.
I recreated Veritasium's setup in my simulator and measured the current through the load resistor, the results of which are here: https://imgur.com/a/sxVihf0
The gap between the wires is about 1 micrometer, so light should take about 3 fs to propagate through. The simulation output approximately matches this prediction, and over the first few tens of femtoseconds the current increases, with a jump at around 70 fs due to the reflected wave. All of this is pretty much in line with the results of Veritasium's experiment.
Thanks for bringing it up. I might include this as another example in my sim.
Nice.
These are cool _ wave propagation vids too; Nils Berglund wave visualizations: https://youtu.be/v0cZjOIfwos?si=07w2Wd4dPlGmNxHp
_: photon, fluid, standing transverse, plasma
What about longitudinal waves in plasma, superconductors, and superfluids though? https://www.google.com/search?q=What+about+longitudinal+wave...
I suppose vorticity doesn't matter that much for classical electronic circuits.
Arduino is at work to make bio-based PCBs
From the actual paper: "The resulting yield of PCB production was around 50%. Signal analysis was successful with analogue data acquisition (voltage) and low-frequency (4 kHz) tests, indistinguishable from sample FR4 boards. Eventually, the samples were subjected to highly accelerated stress test (HAST). HAST tests revealed limitations compared to traditional FR4 printed circuit materials. After six cycles, the weight loss was around 30% in the case of PLA/Flax, and as three-point bending tests showed, the possible ultimate strength (25 MPa at a flexural state) was reduced by 80%."[1]
This sort of problem has come up many times with attempts to put some biological filler material into a composite. Most biological materials absorb and release water, and change size and weight as they do. This causes trouble for anything exposed to humidity changes. The classic "hemp/soybean car" ran into this problem.[2] In 1941, plastics were more expensive, and there were attempts to find some cheap material to use as filler. That never got beyond a prototype. Modern attempts at bio-composites seem to hit the same problem.[3]
This might have potential for cheap disposable toys, where expected lifetime is in months and disposal as ordinary trash is desirable.
[1] https://iopscience.iop.org/article/10.1088/1361-6528/ad66d3
> The classic "hemp/soybean car" ran into this problem.[2] In 1941,
Damn, I always thought that Cheech and Chong's hemp car was fiction.
Ford hired George Washington Carver. They heated soybeans to develop a bioplastic.
Soybean car > History, Internet video, Car ingredients: https://en.wikipedia.org/wiki/Soybean_car#Car_ingredients :
> The exact ingredients of the plastic are not known since there were no records kept of the plastic itself. Speculation is that it was a combination of soybeans, wheat, hemp, flax and ramie. Lowell Overly, the person who had the most influence in creating the car, says it was "...soybean fiber in a phenolic resin with formaldehyde used in the impregnation." [16]
What are the binders for aerospace-grade hemp plastic these days? I don't think that formaldehyde is required anymore.
Hempitecture has a salt-treated, fire-retardant hemp batting home insulation product which competes with fiberglass and cellulose batting and fill, and cork.
FWIU treated polyurethane foam (like old seat cushions) absorbs oil (OleoSponge),
Kestrel has a modern vehicle made of hemp plastic.
Name of the 75% hemp aircraft made by Hempearth scientist from Canada doing engineering in the US:
Radar (ROC curve in ML, too) and these days Infrared signatures for hemp vehicles and crafts:
Hemp plastic would have been an advantage in WWII if:
These days many major auto manufacturers use hemp parts in production automobiles for its durability, cost, and sustainability in terms of carbon cost for example.
Hemp bast fiber competes with graphene in ultracapacitor anode applications, and IDK why not normal capacitors and batteries too. Hemp anodes are possibly more sustainable than graphene anodes (in supercapacitors and solid state batteries) due to the environmental and health hazards of graphene production and the relative costs of production.
YouTube has videos of hemp batteries; batteries made of hemp. https://www.youtube.com/results?sp=mAEA&search_query=hemp+ba...
Dimensional Hemp Wood lumber is real, and it has a formaldehyde-free sustainable binder FWIU.
So - and this is what Kestrel and Hempearth are going for - it's probably possible to make closer to 100% of a vehicle or an aircraft with biocomposites in general, or even hemp-only.
> FWIU treated polyurethane foam (like old seat cushions) absorbs oil (OleoSponge),
And Hemp Aerogels are even more oil absorbent than polyurethane foam.
"Hemp plastic door panel sledgehammer test"; History Channel: https://youtube.com/watch?v=Hx8OTH0eEM0
Re: dandelion (taraxagum) rubber instead of synthetic rubber (plastic) https://news.ycombinator.com/item?id=40892109
Graphene is free when you flash heat unsorted recycled plastic and sell or use the Hydrogen.
Graphene can be produced from CO2.
CO2 is overly-abundant and present in emissions that need to be filtered anyway.
What types of graphene and other forms of carbon do not conduct electricity, are biodegradable, and would be usable as a graphene PCB for semiconductors and superconductors?
Graphene oxide (low cost of production); graphane (hydrogenated; high cost of production); diamond (lowering cost of production, also useful for nitrogen-vacancy (NV) quantum computing, probably in part due to the resistivity of the molecular lattice).
How could graphene oxide PCBs be made fire-proof?
Non-conductive flame retardants: phosphorus, nitrogen (melamine), intumescent systems, inorganic fillers
Is there a bio-based flame-retardant organic filler for [Graphene Oxide] PCBs?
Greenwashing - this kind of idea has been floating around for years and I don’t think it’s really that big of a problem
No, we have environmentally and financially unsustainable supply chain dependencies on silicon-grade sand and other gases and minerals.
PCBs are not biodegradable but could be. What is the problem?
You haven't pointed out anything specific to FR4, which is what this would be replacing. This is merely a ploy at getting funding, and I'm very skeptical about it because I've seen 2 or 3 companies do the exact same pitch and fail before.
> The goal: to design and test bio-based multilayer PCBs that reduce environmental impact, without compromising on functionality or performance.
What about cost?
And so instead,
What is a sustainable flame retardant for Graphene Oxide PCBs; and is that a filler?
"Study of properties of graphene oxide nanoparticles obtained by laser ablation from banana, mango, and tangerine peels" (2025) https://www.sciencedirect.com/science/article/pii/S266697812...
LLMs get lost in multi-turn conversation
Humans also often get lost in multi-turn conversation.
I have experienced that in person many, many times. Jumps in context that seem easy for one person to follow, but very hard for others.
So, assuming the paper is legit (arXiv, you never know...), it's more like something that could be improved than a fundamental difference from human beings.
Subjectively, the "getting lost" feels totally different from human conversations. Once there is something bad in the context, it seems almost impossible to get back on track. All subsequent responses get a lot worse and it starts contradicting itself. It is possible that with more training this problem can be improved, but what is interesting to me isn't that it's worse than humans in this way, but that this sort of difficulty scales differently than it does in humans. I would love to get some more objective descriptions of these subjective notions.
Contradictions are normal. Humans make them all the time. They're even easy to induce, due to the simplistic nature of our communication (lots of ambiguities, semantic disputes, etc).
I don't see how that's a problem.
Subjectivity is part of human communication.
Algorithmic convergence and caching :: Consensus in conversational human communication
Any sufficiently large amount of information exchange could be interpreted as computational if you see it as separated parts. It doesn't mean that it is intrinsically computational.
Seeing human interactions as computer-like is a side effect of our most recent shiny toy. In the last century, people saw everything as gears and pulleys. All of these perspectives are essentially the same reductionist thinking, recycled over and over again.
We've seen men promising that they would build a gear-man, resurrect the dead with electricity, and all sorts of (now) crazy talk. People believed it for some time.
If data integrity is assured, and thus there is no change in the data to store/transfer, then that's the opposite of computationally transforming the data?
How do we see robots, AI, and helping interactions in film, TV, and games?
A curated list of films for consideration:
Mary Shelley's "Frankenstein" or "The Modern Prometheus" (1818), Metropolis (1927), I, Robot (1940-1950; Three Laws of Robotics, robopsychology), Macy Conferences (1941-1960; Cybernetics), Tobor the Great (1954), Here Comes Tobor (1956), Jetsons' maid's name: Rosie (1962), Lost in Space (1965), 2001: A Space Odyssey (1968), THX 1138 (1971), Star Wars (1977), Terminator (1984), Driving Miss Daisy (1989), Edward Scissorhands (1990), Flubber (1997, 1961), Futurama (TV, 1999-), Star Wars: Phantom Menace (1999), The Iron Giant (1999), Bicentennial Man (1999), A.I. Artificial Intelligence (2001), Minority Report (2003), I, Robot (2004), Team America: World Police (2004), Wall-E (2008), Iron Man (2008), Eagle Eye (2008), Moon (2009), Surrogates (2009), Tron: Legacy (2010), Hugo (2011), Django Unchained (2012), Her (2013), Transcendence (2014), Chappie (2015), Tomorrowland (2015), The Wild Robot (2016, 2024), Ghost in the Shell (2017),
Giant f robots: Gundam (1979), Transformers (TV: 1984-1987, 2007-), Voltron (1984-1985), MechWarrior (1989), The Matrix Revolutions (2003), Avatar (2009, 2022, 2025), Pacific Rim (2013-), RoboCop (1987, 2014), Edge of Tomorrow (2014),
~AI vehicle: Herbie, The Love Bug (1968-), Knight Rider (TV, 1982-1986), Thunder in Paradise (TV, 1993-95), Heat Vision and Jack (1999), Transformers (2007), Bumblebee (2018)
Games: Portal (2007), LEGO Bricktales (2022), While True: learn() (2018), "NPC" Non-Player Character
Category:Films_about_artificial_intelligence : https://en.wikipedia.org/wiki/Category:Films_about_artificia...
List of artificial intelligence films: https://en.wikipedia.org/wiki/List_of_artificial_intelligenc...
Category:Films_about_robots: https://en.wikipedia.org/wiki/Category:Films_about_robots
Category:American_robot_films: https://en.wikipedia.org/wiki/Category:American_robot_films
Objcurses – ncurses 3D object viewer using ASCII in console
How does objcurses compare to display3d in implementation and features?
Textual has neat shell control characters for CLI utilities that might be useful.
FSV is an open clone of FSN (the 3d file browser from Jurassic Park), but it requires OpenGL.
Pretty cool! I honestly hadn’t seen display3d before, not when I was researching similar projects, nor while working on my own and debugging issues. Just checked it out now, and as someone currently learning Rust, I really liked it and definitely starred the repo. Love the Unicode rendering idea.
Textual looks fun too; it feels very much like a Python equivalent of Rust's ratatui. I also have a project with that library. Definitely something I might explore for building overlays or adding interactive controls around the core renderer, though curses can also render basic buttons and menus.
As for FSV, yeah, that's more in the OpenGL/GPU territory. My goal was to stay purely terminal-based. By the way, I wasn't sure if you brought up FSV just for the retro-3D vibe comparison, or if you had something more specific in mind? Curious what you meant there.
Just found display3d today, too.
Maybe it was an ascii CLI video of a 3d scene that I remember seeing.
Maybe molecule visualizations? Is it possible to discern handedness from an objcurses render of a molecule like a sugar or an amino acid?
Could a 3D CLI file-browsing interface, useful enough for a computer green screen in a movie like Jurassic Park or Hackers, be built with objcurses? wgpu compiles to WASM and WebGL.
Oracle VM VirtualBox – VM Escape via VGA Device
For the record: Oracle does not consider that the 3D feature should be enabled when the VM is untrusted. It's still classified as experimental and will likely be so for another decade at least.
https://news.ycombinator.com/item?id=43067347 :
> Still hoping for SR-IOV in retail GPUs.
> Not sure about vCPU functionality in GPUs
> Process isolation on vCPUs with or without SR-IOV is probably not as advanced as secure enclave approaches
[Which just fell to post-spectre side channels]
>> Is there sufficient process isolation in GPUs?
/? Sr-iov iommu: https://www.google.com/search?q=sr-iov+iommu
Is there branch prediction in GPUs? What about other side channels between insufficiently-isolated GPU processes?
I see that vgpu_unlock no longer works for technical reasons.
The first year of free-threaded Python
> Instead, many reach for multiprocessing, but spawning processes is expensive
Agreed.
> and communicating across processes often requires making expensive copies of data
SharedMemory [0] exists. Never understood why this isn’t used more frequently. There’s even a ShareableList which does exactly what it sounds like, and is awesome.
[0]: https://docs.python.org/3/library/multiprocessing.shared_mem...
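A minimal sketch of that ShareableList (the variable names here are mine): the segment lives in shared memory, so another process can attach to it by name and mutate it in place, with no pickling or copying.

```python
from multiprocessing import shared_memory

# Create a shared list; the segment is identified by sl.shm.name,
# which you'd pass to the other process (e.g. as a Process argument).
sl = shared_memory.ShareableList([1, 2.0, "three"])

# A second process attaches to the same block by name
# (done here in-process for brevity):
peer = shared_memory.ShareableList(name=sl.shm.name)
peer[0] = 42          # write through one handle...
print(sl[0])          # 42 - ...visible through the other, no copy made

peer.shm.close()
sl.shm.close()
sl.shm.unlink()       # release the segment once every process is done
```

Note that ShareableList holds fixed-size int/float/str/bytes/bool/None items; it's not a general object store.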
Spawning processes generally takes much less than 1 ms on Unix
Spawning a PYTHON interpreter process might take 30 ms to 300 ms before you get to main(), depending on the number of imports
It's 1 to 2 orders of magnitude difference, so it's worth being precise
This is a fallacy with, say, CGI. A CGI program in C, Rust, or Go works perfectly well.
e.g. sqlite.org runs with a process PER REQUEST - https://news.ycombinator.com/item?id=3036124
> Spawning a PYTHON interpreter process might take 30 ms to 300 ms
Which is why, at least on Linux, Python's multiprocessing doesn't do that but fork()s the interpreter, which takes low-single-digit ms as well.
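A rough way to see the gap on a Unix machine (my own sketch; the absolute numbers vary a lot by system, the ratio is the point):

```python
import os
import subprocess
import sys
import time

def time_fork() -> float:
    """fork() a child that exits immediately; time the round trip."""
    t0 = time.perf_counter()
    pid = os.fork()
    if pid == 0:
        os._exit(0)            # child: exit without interpreter cleanup
    os.waitpid(pid, 0)
    return time.perf_counter() - t0

def time_fresh_interpreter() -> float:
    """Start a brand-new interpreter that does nothing."""
    t0 = time.perf_counter()
    subprocess.run([sys.executable, "-c", "pass"], check=True)
    return time.perf_counter() - t0

print(f"fork+exit:       {time_fork() * 1e3:.2f} ms")
print(f"new interpreter: {time_fresh_interpreter() * 1e3:.2f} ms")
```

Adding real imports to the `-c` payload widens the gap further.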
Even when the 'spawn' strategy is used (default on Windows, and can be chosen explicitly on Linux), the overhead can largely be avoided. (Why choose it on Linux? Apparently forking can cause problems if you also use threads.) Python imports can be deferred (`import` is a statement, not a compiler or pre-processor directive), and child processes (regardless of the creation strategy) name the main module as `__mp_main__` rather than `__main__`, allowing the programmer to distinguish. (Being able to distinguish is of course necessary here, to avoid making a fork bomb - since the top-level code runs automatically and `if __name__ == '__main__':` is normally top-level code.)
But also keep in mind that cleanup for a Python process also takes time, which is harder to trace.
Refs:
https://docs.python.org/3/library/multiprocessing.html#conte... https://stackoverflow.com/questions/72497140
I really wish Python had a way to annotate things you don't care about cleaning up. I don't know what the API would look like, but I imagine something like:
l = list(cleanup=False)
for i in range(1_000_000_000): l.append(i)
telling the runtime that we don't need to individually GC each of those tiny objects and can just let the OS's process model free the whole thing at once. Sure, close TCP connections before you kill the whole thing. I couldn't care less about most objects, though.
There's already a global:
import gc
gc.disable()
So I imagine putting more in there to remove objects from the tracking. That can go a long way, so long as you remember to manually GC the handful of things you do care about.
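The stdlib already has something close to "remove objects from the tracking" (a sketch; gc.freeze() is Python 3.7+): it moves everything currently alive into a permanent generation that later collections skip entirely.

```python
import gc

gc.disable()
data = [[i] for i in range(100_000)]  # many small cycle-trackable objects
gc.freeze()                           # exempt everything alive right now
print(gc.get_freeze_count())          # > 0: objects the GC will now skip
gc.unfreeze()
gc.enable()
```

CPython's reference counting still frees acyclic garbage either way; freeze only suppresses the cyclic-GC passes over those objects.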
Is there a good way to add __del__() methods or to wrap Context Manager __enter__()/__exit__() methods around objects that never needed them because of the gc?
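One stdlib route (my sketch, not mentioned in the thread) is weakref.finalize, which attaches a cleanup callback to an existing object without touching its class, plus contextlib for bolting on enter/exit behavior after the fact:

```python
import contextlib
import weakref

class Plain:
    """Some class with no __del__ and no context-manager protocol."""

events = []

obj = Plain()
weakref.finalize(obj, events.append, "finalized")  # runs at collection/exit
del obj                                            # CPython: runs immediately
print(events)  # ['finalized']

# For __enter__/__exit__, contextlib.closing wraps anything with .close():
class HasClose:
    def close(self):
        events.append("closed")

with contextlib.closing(HasClose()):
    pass
print(events)  # ['finalized', 'closed']
```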
Hadn't seen this:
import gc
gc.disable()
Cython has __dealloc__() instead of __del__()?
Also, there's a recent proposal to add explicit resource management to JS: "JavaScript's New Superpower: Explicit Resource Management" https://news.ycombinator.com/item?id=44012227
Large Language Models Are More Persuasive Than Incentivized Human Persuaders
> Abstract: [...] Overall, our findings suggest that AI's persuasion capabilities already exceed those of humans that have real-money bonuses tied to performance. Our findings of increasingly capable AI persuaders thus underscore the urgency of emerging alignment and governance frameworks.
P.22:
> 4. Implications for AI Regulation and Ethical Considerations
Getting AI to write good SQL
From "Show HN: We open sourced our entire text-to-SQL product" (2024) https://news.ycombinator.com/item?id=40456236 :
> awesome-Text2SQL: https://github.com/eosphoros-ai/Awesome-Text2SQL
> Awesome-code-llm > Benchmarks > Text to SQL: https://github.com/codefuse-ai/Awesome-Code-LLM#text-to-sql
Zinc Microcapacitors Are the Best of Both Worlds
Why Zinc if Carbon is sufficient?
From "Eco-friendly artificial muscle fibers can produce and store energy" https://news.ycombinator.com/item?id=42942421 :
> "Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
That's with mechanical twisting though; graphene supercapacitors in general have lower energy density than (micro-) capacitors?
Show HN: SQL-tString a t-string SQL builder in Python
SQL-tString is a SQL builder that utilises the recently accepted PEP-750, https://peps.python.org/pep-0750/, t-strings to build SQL queries, for example,
from sql_tstring import sql
val = 2
query, values = sql(t"SELECT x FROM y WHERE x = {val}")
assert query == "SELECT x FROM y WHERE x = ?"
assert values == [2]
db.execute(query, values) # Most DB engines support this
The placeholder ? protects against SQL injection, but cannot be used everywhere. For example, a column name cannot be a placeholder. If you try this, SQL-tString will raise an error:
col = "x"
sql(t"SELECT {col} FROM y") # Raises ValueError
To proceed you'll need to declare what the valid values of col can be:
from sql_tstring import sql_context
with sql_context(columns="x"):
query, values = sql(t"SELECT {col} FROM y")
assert query == "SELECT x FROM y"
assert values == []
Thus allowing you to protect against SQL injection.
As t-strings are format strings you can safely format the literals you'd like to pass as variables,
text = "world"
query, values = sql(t"SELECT x FROM y WHERE x LIKE '%{text}'")
assert query == "SELECT x FROM y WHERE x LIKE ?"
assert values == ["%world"]
This is especially useful when used with the Absent rewriting value.
SQL-tString is a SQL builder, and as such you can use special RewritingValues to alter and build the query you want at runtime. This is best shown by considering a query where you sometimes want to search by one column a, sometimes by b, and sometimes both,
def search(
*,
a: str | AbsentType = Absent,
b: str | AbsentType = Absent
) -> tuple[str, list[str]]:
return sql(t"SELECT x FROM y WHERE a = {a} AND b = {b}")
assert search() == ("SELECT x FROM y", [])
assert search(a="hello") == ("SELECT x FROM y WHERE a = ?", ["hello"])
assert search(b="world") == ("SELECT x FROM y WHERE b = ?", ["world"])
assert search(a="hello", b="world") == (
"SELECT x FROM y WHERE a = ? AND b = ?", ["hello", "world"]
)
Specifically, Absent (which is an alias of RewritingValue.ABSENT) will remove the expression it is present in, and if there are no expressions left after the removal it will also remove the clause.
The other rewriting values I've included handle the frustrating case of comparing to NULL. For example, the following is valid but won't work as you'd likely expect,
optional = None
sql(t"SELECT x FROM y WHERE x = {optional}")
Instead you can use IsNull to achieve the right result:
from sql_tstring import IsNull
optional = IsNull
query, values = sql(t"SELECT x FROM y WHERE x = {optional}")
assert query == "SELECT x FROM y WHERE x IS NULL"
assert values == []
There is also an IsNotNull for the negated comparison.
The final feature allows for complex query building by nesting a t-string within the existing one,
inner = t"x = 'a'"
query, _ = sql(t"SELECT x FROM y WHERE {inner}")
assert query == "SELECT x FROM y WHERE x = 'a'"
This library can be used today without Python 3.14's t-strings, with some limitations, https://github.com/pgjones/sql-tstring?tab=readme-ov-file#pr..., and I've been doing so this year. Thoughts and feedback very welcome.
Just took a quick look, and it seems like the parser is hand-written, which is great, but you probably want to build a lexer and parser based on the BNF grammar. Take a look at how I do it here https://github.com/elixir-dbvisor/sql/tree/main/lib and do conformance testing with https://github.com/elliotchance/sqltest
Thanks, do you have a reference for SQL grammar - I've had no success finding an official source.
Ibis has sqlglot for parsing and rewriting SQL query graphs; and there's sql-to-ibis: https://github.com/ibis-project/ibis/issues/9529
sqlglot: https://github.com/tobymao/sqlglot :
> SQLGlot is a no-dependency SQL parser, transpiler, optimizer, and engine [written in Python]. It can be used to format SQL or translate between 24 different dialects like DuckDB, Presto / Trino, Spark / Databricks, Snowflake, and BigQuery. It aims to read a wide variety of SQL inputs and output syntactically and semantically correct SQL in the targeted dialects.
Open Hardware Ethernet Switch project, part 1
There are 48+2 port switches with OpenWRT support.
Re: initial specs for the (4-port) OpenWrt One, which is built by Banana Pi and supports U-Boot: https://www.cnx-software.com/2024/01/12/openwrt-one-ap-24-xy... .. https://openwrt.org/toh/openwrt/one:
> The non-open-source components include the 2.5GbE PHY and WiFi firmware with blobs running on separate cores that are independent of the main SoC where OpenWrt is running. The DRAM calibration routines are closed-source binaries as well.
Software for FPGA switch, probe, and GHz oscilloscope projects?
/? inurl:awesome vivado https://www.google.com/search?q=inurl%3Aawesome+vivado :
awesome-hdl: https://github.com/drom/awesome-hdl :
sphinx-hwt:
d3-wave probably won't do GHz in realtime. https://github.com/Nic30/d3-wave
Pyqtgraph probably can't realtime plot GHz probe data without resampling either?
pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
The hwtLib README says Vivado supports IP-XACT format.
hwtLib: https://github.com/Nic30/hwtLib :
> hwtLib is the library of hardware components written using the hwt library. Any component can be exported as Xilinx Vivado (IP-exact) or Quartus IPcore using IpPackager or as raw Verilog / VHDL / SystemC code and constraints by to_rtl() function. Target language is specified by keyword parameter serializer.
IP-XACT: https://en.wikipedia.org/wiki/IP-XACT
hwtlib docs > hwtLib.peripheral.ethernet package: https://hwtlib.readthedocs.io/en/latest/hwtLib.peripheral.et...
hwtLib.peripheral.uart package: https://hwtlib.readthedocs.io/en/latest/hwtLib.peripheral.ua...
It looks like there are CRC implementations in hwtLib. Which CRC or hash does U-Boot use for firmware flashing? https://www.google.com/search?q=Which+CRC+or+hash+does+U-boo... ... Looks like CRC32, like .zip files but not .tar.gz files.
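If it is the zip-style CRC-32 (as the search suggests), the Python stdlib computes the same CRC-32/ISO-HDLC polynomial (a quick sketch):

```python
import zlib

data = b"123456789"
# CRC-32/ISO-HDLC (zip, gzip, PNG); the standard check value
# for the input "123456789" is 0xCBF43926:
print(hex(zlib.crc32(data)))  # 0xcbf43926

# CRCs can also be computed incrementally over firmware chunks,
# by feeding the previous CRC back in as the second argument:
crc = 0
for chunk in (b"1234", b"56789"):
    crc = zlib.crc32(chunk, crc)
assert crc == zlib.crc32(data)
```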
U-boot: https://github.com/u-boot/u-boot
OpenWRT docs > "Failsafe mode, factory reset, and recovery mode": https://openwrt.org/docs/guide-user/troubleshooting/failsafe...
Open vSwitch: https://en.wikipedia.org/wiki/Open_vSwitch :
> Open vSwitch can operate both as a software-based network switch running within a virtual machine (VM) hypervisor, and as the control stack for dedicated switching hardware; as a result, it has been ported to multiple virtualization platforms, switching chipsets, and networking hardware accelerators.[7]
"Porting Open vSwitch to New Software or Hardware": https://docs.openvswitch.org/en/latest/topics/porting/
awesome-open-source-hardware: https://github.com/aolofsson/awesome-opensource-hardware
awesome-open-hardware: https://github.com/delftopenhardware/awesome-open-hardware :
> Journal of Open Hardware (JOH), HardwareX Journal,
There are also xilinx (now AMD) FPGA modules in hwtlib:
hwtLib.xilinx package: https://hwtlib.readthedocs.io/en/latest/hwtLib.xilinx.html#
New stainless steel pulls green hydrogen directly out of seawater
... NewsArticle: "New ultra stainless steel for hydrogen production" ; SS-H2 https://www.sciencedaily.com/releases/2023/11/231117102539.h...
ScholarlyArticle: "A sequential dual-passivation strategy for designing stainless steel used above water oxidation." (2023) https://www.sciencedirect.com/science/article/abs/pii/S13697... .. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22...
SS-H2 resists chloride ions (in saltwater) with manganese and chromium in order to sustain electrolysis.
Aluminum requires gallium to sustain hydrolysis FWIU. Is there a laser or other treatment of aluminum that achieves the same effect as gallium? Could a low voltage be applied to sustain hydrolysis, like an air brake?
Beyond qubits: Meet the qutrit (and ququart)
ScholarlyArticle: "Quantum error correction of qudits beyond break-even" (2025) https://www.nature.com/articles/s41586-025-08899-y
"Logical states for fault-tolerant quantum computation with propagating light" (2024) https://www.science.org/doi/10.1126/science.adk7560 .. "A physical qubit with built-in error correction" https://news.ycombinator.com/item?id=39243929
"Layer codes" (2024) https://www.nature.com/articles/s41467-024-53881-3 .. https://news.ycombinator.com/item?id=42264340 ; 3D layer codes now instead of 2D surface codes
From https://news.ycombinator.com/item?id=42723063 :
> /? How can fractional quantum hall effect be used for quantum computing https://www.google.com/search?q=How+can+a+fractional+quantum...
>> Non-Abelian Anyons, Majorana Fermions are their own anti particles, Topologically protected entanglement [has lower or no error and thus less need for QEC quantum error correction]
Additive manufacturing of zinc biomaterials for biodegradable in vivo use
NewsArticle: "Additive manufacturing of zinc biomaterials opens new possibilities for biodegradable medical implants" (2025) https://3dprintingindustry.com/news/additive-manufacturing-o...
Ultrasound deep tissue in vivo sound printing
NewsArticle: "Using Ultrasound to Print Inside the Body: Caltech Unveils Deep Tissue In Vivo Sound Printing Technique" (2025) https://3dprintingindustry.com/news/using-ultrasound-to-prin...
The world could run on older hardware if software optimization was a priority
Code bloat: https://en.wikipedia.org/wiki/Code_bloat
Software bloat > Causes: https://en.wikipedia.org/wiki/Software_bloat#Causes
Program optimization > Automated and manual optimization: https://en.wikipedia.org/wiki/Program_optimization#Automated...
Making PyPI's test suite faster
I get that pytest has features that unittest does not, but how is scanning for test files in a directory considered appropriate for what the article calls a high-security application?
For high security applications the test suite should be boring and straightforward. pytest is full of magic, which makes it so slow.
Python in general has become so complex, informally specified, and bug-ridden that it only survives because of AI, while silencing critics in their bubble.
The complexity includes PSF development processes, which lead to:
https://www.schneier.com/blog/archives/2024/08/leaked-github...
strace is one way to determine how many stat calls a process makes.
Developers avoid refactoring costs by using dependency inversion, fixtures and functional test assertions without OO in the tests, too.
Pytest collection could be made faster with ripgrep; does it even need the AST? A thread here mentions that it's possible to prepare a list of .py test files containing functions that start with "test_" (for example with ripgrep) and pass that list to pytest.
One day I did too much work refactoring tests to minimize maintenance burden and wrote myself a functional test runner that captures AssertionErrors and outputs with stdlib only.
It's possible to use unittest.TestCase() assertion methods functionally:
assert 0 == 1
# AssertionError
import unittest
test = unittest.TestCase()
test.assertEqual(0, 1)
# AssertionError: 0 != 1
unittest.TestCase assertion methods have default error messages, but the `assert` keyword does not. In order to support one-file stdlib-only modules, I have mocked pytest.mark.parametrize a number of times.
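A minimal stdlib-only stand-in for pytest.mark.parametrize might look like this (a sketch using unittest.subTest; real pytest generates a separate test item per case):

```python
# Hypothetical parametrize shim: runs each case as a subTest inside one
# test method, stdlib only.
import functools
import unittest

def parametrize(argnames, argvalues):
    names = [n.strip() for n in argnames.split(",")]
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self):
            for values in argvalues:
                if len(names) == 1:
                    values = (values,)
                with self.subTest(**dict(zip(names, values))):
                    func(self, *values)
        return wrapper
    return decorator

class TestAddition(unittest.TestCase):
    @parametrize("a,b,expected", [(1, 1, 2), (2, 3, 5)])
    def test_add(self, a, b, expected):
        self.assertEqual(a + b, expected)
```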
chmp/ipytest is one way to transform `assert a == b` to `assertEqual(a,b)` like Pytest in Jupyter notebooks.
Python continues to top language use and popularity benchmarks.
Python is not a formally specified language, mostly does not have constant time operations (or documented complexity in docstring attrs), has a stackless variant, supported asynchronous coroutines natively before C++, now has some tail-call optimization in 3.14, now has nogil mode, and is GPU accelerated in many different ways.
How best could they scan for API tokens committed to public repos?
Google launches 'implicit caching' to make accessing latest AI models cheaper
Progress toward fusion energy gain as measured against the Lawson criteria
It should be noted that "breakeven" is often misleading.
There's "breakeven" as in "the reaction produces more energy than put into it", and there's breakeven as in "the entire reactor system produces more energy than put into it", which isn't quite the same thing.
In the laser business, the latter is called "wall plug efficiency," which is laser power out per electrical power in.
"Uptime Percentage", "Operational Availability" (OA), "Duty Cycle"
Availability (reliability engineering) https://en.wikipedia.org/wiki/Availability
Terms from other types of work: kilowatt-hour (kWh), weight per rep, number of reps, total time under tension
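The availability figure mentioned above reduces to a one-line formula:

```python
# Steady-state availability in the reliability-engineering sense:
# A = MTBF / (MTBF + MTTR), i.e. the fraction of time the system is up.
# ("Duty cycle" is the analogous on-time fraction for pulsed systems.)
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. a system that runs 99 hours between faults and needs 1 hour of
# recovery is 99% available.
```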
Mass spectrometry method identifies pathogens within minutes instead of days
Would be interesting how much resolution they need for the diagnosis to be reliable.
Because high resolution mass spectrometers cost millions of dollars, and "minutes" for a diagnosis can mean that one spectrometer can only run 3 samples per hour - or 72 per day.
And while a research university can afford a million-dollar spectrometer (and the grad students that run it), even a small hospital will create 72 bacterial swabs per hour, while absolutely not having the money to get 10 spectrometers with the corresponding technicians.
And the incumbent/competitor - standard bacterial cultures - is cheap!
MRI machines cost up to 0.5 million, and take more than minutes for a scan. So this is in the upper realm of reasonable. At this point it is an engineering problem to get costs down.
There are 0.05 Tesla MRI machines that almost work with a normal 15A 110V outlet now FWIU; https://news.ycombinator.com/item?id=40965068 :
> "Whole-body magnetic resonance imaging at 0.05 Tesla" [1800W] https://www.science.org/doi/10.1126/science.adm7168 .. https://news.ycombinator.com/item?id=40335170
Other emerging developments in __ spectroscopy:
/?hnlog Spectro:
NIRS;
> Are there implied molecular structures that can be inferred from low-cost {NIRS, Light field, [...]} sensor data?
NIRS would be low cost, but the wavelength is long compared to the sample size.
From https://news.ycombinator.com/item?id=38528844 :
> "Reversible optical data storage below the diffraction limit (2023)" [at cryogenic temperatures] https://news.ycombinator.com/item?id=38528844 :
> [...] have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
"Eye-safe laser technology to diagnose traumatic brain injury in minutes" https://news.ycombinator.com/item?id=38510092 :
> "Window into the mind: Advanced handheld spectroscopic eye-safe technology for point-of-care neurodiagnostic" (2023) https://www.science.org/doi/10.1126/sciadv.adg5431
> multiplex resonance Raman spectroscopy
Holotomographic imaging is yet another imaging method that could be less costly than MRI; https://news.ycombinator.com/item?id=40819864
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054 :
> "Terahertz spectroscopy of collective charge density wave dynamics at the atomic scale" (2024) https://www.nature.com/articles/s41567-024-02552-7
Microservices are a tax your startup probably can't afford
> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains. Before that? You’re paying the price without getting the benefit: duplicated infra, fragile local setups, and slow iteration. For example, Segment eventually reversed their microservice split for this exact reason — too much cost, not enough value.
Basically this. Microservices are a design pattern for organisations rather than technology. Sounds wrong, but the technology change should follow the organisational breakout into multiple teams delivering separate products or features. And this isn't a first step. You'll have a monolith; it might break out into frontend, backend, and a separate service for async background jobs (e.g. PDF creation is often a background task because of how long it takes to produce). After that you might end up with more services, and then you have this sprawl of things where you start to think about standardisation, architecture patterns, etc. Before that it's a death sentence, and if your business survives I'd argue it didn't survive because of microservices but in spite of them. The dev time lost in the beginning, say sub-200 engineers, is significant.
Some resume driven developers will choose microservices for startups as a way to LARP a future megacorp job. Startup may fail, but they at least got some distributed system experience. It takes extremely savvy technical leadership to prevent this.
In my experience, it seems the majority of folks know the pitfalls of microservices, and have since like... 2016? Maybe I'm just blessed to have been at places with good engineering, technical leadership, and places that took my advice seriously, but I feel like the majority of folks I've interacted with all have experienced some horror story with microservices that they don't want to repeat.
Does [self-hosted, multi-tenant] serverless achieve similar separation of concerns in comparison to microservices?
Should the URLs contain a version; like /api/v1/ ?
FWIU OpenAPI schemas enable e.g. MCP service discovery, but not multi-API workflows or orchestrations.
(Edit: "The Arazzo Specification - A Tapestry for Deterministic API Workflows" by OpenAPI; src: https://github.com/OAI/Arazzo-Specification .. spec: https://spec.openapis.org/arazzo/latest.html (TIL by using this comment as a prompt))
People are losing loved ones to AI-fueled spiritual fantasies
Have we invited Wormwood to counsel us? To speak misdirected or even malignant advice that we readily absorb?
An LLM trained on all other science before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.
The idea that people in medieval times believed in a flat Earth is a myth that was invented in the 1800s. See https://en.wikipedia.org/wiki/Myth_of_the_flat_Earth for more.
Galileo Galilei: https://en.wikipedia.org/wiki/Galileo_Galilei :
> Galileo's championing of Copernican heliocentrism was met with opposition
... By the most published majority, whose texts would've been used to train science LLMs at the time back then.
And that most published majority believed in the Ptolemaic model. Which as https://en.wikipedia.org/wiki/Geocentric_model#Ptolemaic_mod... says:
> Ptolemy argued that the Earth was a sphere in the center of the universe...
Note. Spherical Earth. Not flat.
But could the Greeks sail?
Did ancient (Eastern?) Jacob's Staff surveying and navigation methods account for the curvature of the earth? https://www.google.com/search?q=Did%20ancient%20(Eastern%3F)... :
- History of geodesy: https://en.wikipedia.org/wiki/History_of_geodesy
FWIU Egyptian sails are Phoenician in origin.
The criticism being directed at this comment because in fact most pre-Copernican scholarship believed the world spherical is missing the point.
Musk's xAI in Memphis: 35 gas turbines, no air pollution permits
>methane gas turbines
>nitrogen oxides
Can someone explain to me how those produce nitrogen oxides? High school chem taught me CH4 + 2 O2 -> CO2 + 2 H2O... where's the nitrogen oxide?
The turbine produces high temperatures which then causes reactions with the nitrogen that's present in the atmosphere. There's also nitrogen present in the fuel. It's not 100% pure methane.
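The high-temperature pathway described above is the thermal (Zeldovich) mechanism, in which hot O and OH radicals break up atmospheric N2:

```
N2 + O  -> NO + N
N  + O2 -> NO + O
N  + OH -> NO + H
```

So NOx comes mostly from the combustion air, not from the fuel equation itself, which is why the idealized CH4 + 2 O2 stoichiometry never shows it.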
Does AGR Acidic Gas Reduction work with methane turbines?
Shouldn't all methane-powered equipment have this AGR (or similar) new emission reduction technology?
From https://www.ornl.gov/news/add-device-makes-home-furnaces-cle... :
> ORNL’s award-winning ultraclean condensing high-efficiency natural gas furnace features an affordable add-on technology that can remove more than 99.9% of acidic gases and other emissions. The technology can also be added to other natural gas-driven equipment.
FWIU basically no generators have catalytic converters, because that requires computer controlled fuel ignition.
...
FWIU, in data centers, "100% Green" means "100% offset by PPAs" (power-purchase agreement); so "200% green" could mean "100% directly-sourced clean energy".
Should they pack up, pay out, and start over elsewhere with enough clean energy, [or should they force the methane generators to comply?]
It sounds like that's what they're adding for their permanent generators. Unfortunate that their portable "temporary" generators weren't equipped with this technology.
Though it's not a methane leak, in their case the problem is the poisonous byproducts of combustion FWIU;
Looks like methane.jpl.nasa.gov isn't up anymore? https://methane.jpl.nasa.gov/
We invested tax dollars in NASA space-based methane leak imaging, because methane is a potent greenhouse gas.
Is this another casualty of their distracting, wasteful, and saboteurial hit on NASA Earth Science funding?
I decided to pay off a school’s lunch debt
On the off chance you're interested in school lunches I highly recommend watching videos of Japanese school lunches on YouTube. There's a bunch out there now and if you were raised in the American system it will probably blow your mind. The idea that lunches can be freshly made, on site, out of healthy ingredients and children are active participants in serving and cleaning up is just crazy. When I encountered it for the first time I felt like a big part of my childhood had been sold to the lowest bidder.
> The idea that lunches can be freshly made, on site, out of healthy ingredients and children
Excellent garden path sentence.
A friend who's a pre-school teacher has this excellent t-shirt (I LMAO the first time I saw it):
let's eat, kids.
let's eat kids.
punctuation saves lives.
https://m.media-amazon.com/images/I/B1pppR4gVKL._CLa%7C2140%...
"Eats, Shoots & Leaves" https://en.wikipedia.org/wiki/Eats,_Shoots_%26_Leaves
A new hairlike electrode for long-term, high-quality EEG monitoring
This is actually an improvement on the lead electrode technology, making them smaller and improving the scalp adhesion for better fidelity. Ostensibly, you would still need an array of them for medical diagnoses, like isolating and/or monitoring a seizure to a particular portion of the brain.
Isn't that circular pad the electrode, and the "hair" just the lead which can be replaced by any copper wire?
Terrible headline. The single hair-like electrode outperforms a single electrode from a standard 21-lead EEG in connection performance (longevity / signal-to-noise).
It's not just the headline. "... a single electrode that looks just like a strand of hair and is more reliable than the standard, multi-electrode version." "The researchers tested the device’s long-term adhesion and electrical performance and compared it to the current, standard EEG using multiple electrodes."
I read the story three times and I'm still confused. But I'm sure you're right, and I think it's the author who's confused.
My second mental image was an alligator clip connected to a hair on a person’s head.
https://www.psu.edu/news/research/story/future-brain-activit...
Better link from Penn State. My reading of this seems to suggest that these electrodes are better than the standard one, NOT that one electrode is better than 24 leads.
OK, thanks, we’ve changed the URL to this from https://newatlas.com/medical-devices/3d-printed-hairlike-eeg....
[flagged]
That's not what an EEG is, and they have been around for decades.
Did they read and write brain signals with AI? What would the goal of that even be?
Presuming a higher bandwidth interface than what we currently have (voice/text chat).
Many Black Mirror episodes explore this.
The brain machine interface concept seems very useful. My question is the AI part. Aside from the machine learning likely needed to decode brain signals meaningfully at all, why would we want to hook up something with any resemblance to current AI to a brain directly?
The current AI stuff can easily be described as a breakthrough in natural language interfaces, a field engineers have been working on for many years. It's easy to imagine that the methods used to develop current AI could be applied to other types of interfaces we've been stuck on.
No
Nitrogen runoff leads to preventable health outcomes, experts say (2021)
/? Nitrogen: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... :
- "Discovery of nitrogen-fixing corn variety could reduce need for added fertilizer" (2018) https://news.ycombinator.com/item?id=17721741
I should have loved biology too
I should write a blog post entitled "I should have loved computer science"
Do you do bioinformatics?
Bioinformatics: https://en.wikipedia.org/wiki/Bioinformatics
Health informatics: https://en.wikipedia.org/wiki/Health_informatics
Genetic algorithm: https://en.wikipedia.org/wiki/Genetic_algorithm :
> Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation.
AP®/College Biology: https://www.khanacademy.org/science/ap-biology
AP®/College Biology > Unit 6: Gene Expression and Regulation > Lesson 6: Mutations: https://www.khanacademy.org/science/ap-biology/gene-expressi...
AP®/College Biology > Unit 7: Natural selection: https://www.khanacademy.org/science/ap-biology/natural-selec...
Rosalind.info has free CS algorithms applied bioinformatics exercises in Python; in a tree or a list; including genetic combinatorics. https://rosalind.info/problems/list-view/
FWICS there is not a "GA with code exercise" in the AP Bio or Rosalind curricula.
YouTube has videos of simulated humanoids learning to walk with mujoco and genetic algorithms that demonstrate goal-based genetic programming with Cost / Error / Fitness / Survival functions.
Mutating source code AST is a bit different from mutating parameters of a defined optimization problem; though the task is basically the same: minimize error between input and output, and then XAI.
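A toy goal-based GA with those pieces (selection, crossover, mutation, and a fitness function) can be sketched in a few lines; a hypothetical bitstring example, not mujoco:

```python
# Evolve a 20-bit string toward an all-ones target. Fitness counts
# matching bits; elitist selection keeps the top 10, then one-point
# crossover and per-bit mutation produce the next generation.
import random

random.seed(0)
TARGET = [1] * 20

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in ind]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]                    # survival / selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(20)]       # crossover + mutation
    pop = parents + children
best = max(pop, key=fitness)
```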
Trump says Harvard will lose tax exempt status
What prevents them from instead justifying tax-exempt status as a faith-based nonprofit for their FSM-related charitable work, say?
I think that assumes that there's some fair evaluating of tax status going on.
Harvard defied Trump and that's why they're making this push. I don't think any given argument to the feds will change that math.
If you work for the feds, I think you know that if you were to follow a process and find that Harvard should retain their tax exempt status then you won't have a job long.
Show HN: Frecenfile – Instantly rank Git files by edit activity
frecenfile is a tiny CLI tool written in Rust that analyzes your Git commit history in order to identify "hot" or "trending" files using a frecency score that incorporates both the frequency and recency of edits.
It is fast enough to give an instant response in most repositories, and a reasonably fast response in practically any repository.
It can be useful for getting a list of "recent"/important files when Git history is the only "usage" history you have available.
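A hedged sketch of such a frecency score over `git log` output (frecenfile's actual formula may differ): each commit touching a file adds an exponentially decayed weight, so frequent and recent edits both raise the score.

```python
# Parse `git log --pretty=format:%ct --name-only` output: timestamp lines
# set the current commit time; file lines accumulate exp(-age / tau).
import math
from collections import defaultdict

def score_log(lines, now, tau=30 * 86400):
    scores = defaultdict(float)
    ts = now
    for line in lines:
        stripped = line.strip()
        if stripped.isdigit():
            ts = int(stripped)       # commit timestamp line
        elif stripped:
            scores[stripped] += math.exp(-(now - ts) / tau)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

In a real repo, the input would come from `subprocess.run(["git", "log", "--pretty=format:%ct", "--name-only"], capture_output=True, text=True)`, split into lines.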
Felix86: Run x86-64 programs on RISC-V Linux
What remaining challenges are you most interested in solving for felix86—GPU driver support, full 32-bit compatibility, better Wine integration, or something else entirely?
The felix86 compatibility list also lists SuperTux and SuperTuxCart.
"lsteamclient: Add support for ARM64." https://github.com/ValveSoftware/Proton/commit/8ff40aad6ef00... .. https://news.ycombinator.com/item?id=43847860
/? box86: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
"New box86 v0.3.2 and Box64 v0.2.4 released – RISC-V and WoW64 support" (2023) https://news.ycombinator.com/item?id=37197074
/? box64 is:pr RISC-V is:closed: https://github.com/ptitSeb/box64/pulls?q=is%3Apr+risc-v+is%3...
Wikipedia says it will use AI, but not to replace human volunteers
>Scaling the onboarding of new Wikipedia volunteers
could be pretty helpful. I edit a bit and it's confusing in a number of ways, some I still haven't got the hang of. There's very little "thank you for your contribution but it needs a better source - why not this?" and usually just your work reverted without thanks or much explanation.
Could AI sift through removals and score them as biased or vandalism?
And then what to do about "original research" that should've been moved to a different platform or better (also with community review) instead of being deleted?
Wikipedia:No_original_research: https://en.wikipedia.org/wiki/Wikipedia:No_original_research #Using_sources
I'm guessing it could advise about that even if it didn't make decisions.
Healthy soil is the hidden ingredient
I'm a gardening and landscaping enjoyer, but I am constantly confused by the borderline magical thinking surrounding dirt, among other aspects of growing things.
If you look at hydroponics/aeroponics, plants basically need water, light, and fertilizer (N (nitrogen) P (phosphorous) K (potassium), and a few trace minerals). It can be the most synthetic process you've ever seen, and the plants will grow amazingly well.
The other elements regarding soil health, etc, would be much better framed in another way, rather than as directly necessary for plant health. The benefits of maintaining a nice living soil is that it makes the environment self-sustaining. You could just dump synthetic fertilizer on the plant, with some soil additives to help retain the right amount of drainage/retention, and it would do completely fine. But without constant optimal inputs, the plants would die.
If you cultivate a nice soil, such that the plants own/surrounding detritus can be broken down effectively, such that the nutrients in the natural processes can be broken down and made available to the plant, and the otherwise nonoptimal soil texture characteristics could be brought to some positive characteristics by those same processes, then you can theoretically arrive at a point that requires very few additional inputs.
Sure, we can make them grow well in a lab, but a natural system is so much simpler and more elegant.
Plants absorb nitrogen and CO2 from the air and store it in their roots; plants fertilize soil.
If you only grow plants with externally-sourced nutrients, that is neither sustainable nor permaculture.
Though it may be more efficient to grow without soil; soil depletion isn't prevented by production processes that do not generate topsoil.
JADAM is a system developed by a chemicals engineer given what is observed to work in JNF/KNF. https://news.ycombinator.com/item?id=38527264
Where do soil amendments come from, and what would deplete those stocks (with consideration for soil depletion)?
(Also, there are extremely efficient ammonia/nitrogen fertilizer generators, but there's still the algae-from-runoff problem. FWIU we should be asking farmers to please produce granulated fertilizer instead of liquid.)
The new biofuel subsidies require no-till farming practices; which other countries are further along at implementing them (in order to prevent or reverse soil depletion)?
Tilling turns topsoil to dirt due to loss of moisture, oxidation, and solar radiation.
The vast majority of plants do not absorb nitrogen from the air. Legumes are the well-known exception.
I think that's why it's good to rotate beans or plant clover cover crop.
Three Sisters: Corn, Beans, Squash: https://en.wikipedia.org/wiki/Three_Sisters_(agriculture)
Companion planting: https://en.wikipedia.org/wiki/Companion_planting
Nitrogen fixation: https://en.wikipedia.org/wiki/Nitrogen_fixation
Most plants do not absorb atmospheric nitrogen, but need external nitrogen fertilizer to grow! That causes serious groundwater pollution!
> The new biofuel subsidies require no-till farming practices
This actually depletes soil of nitrogen!
Why do you believe that no-till farming practices deplete soil of nitrogen more than tilling?
A plausible hypothesis: tilling destroys the bacteria that get nitrogen to plant roots.
Isn't runoff erosion the primary preventable source of nitrogen depletion?
FWIU residue mulch initially absorbs atmospheric nitrogen instead of the soil absorbing it, but that residue and its additional nitrogen eventually decays into the soil.
I have heard that it takes something like five years to successfully transform acreage with no-till; and then it's relatively soft, easily plantable, and not compacted, so it absorbs and holds water.
No-till farmers are not lacking soil samples.
What would be a good test of total change in soil nitrogen content (and runoff) given no-till and legacy farming practices?
With pressure-injection seeders and laser weeders, how many fewer chemicals are necessary for pro farming?
String Types Considered Harmful
But a lack of string types (or tagged strings) results in injection vulnerabilities: OS, SQL, XSS (JS, CSS, HTML), XML, URI, query string, etc.
How should template autoescaping be implemented [in Zig without string types or type-tagged strings]?
E.g. Jinja2 implements autoescaping with MarkupSafe; strings wrapped in a Markup() type will not be autoescaped because they already have an .__html__() method.
MarkupSafe: https://pypi.org/project/MarkupSafe/
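The MarkupSafe approach can be sketched stdlib-only (an illustrative `Markup`/`escape` pair, not MarkupSafe's actual implementation): values carrying an `__html__()` method are trusted and passed through; everything else is escaped before being spliced into HTML.

```python
# Tagged-string autoescaping: Markup marks a string as already-safe HTML;
# escape() neutralizes anything that isn't.
import html

class Markup(str):
    def __html__(self):
        return str(self)

def escape(value):
    if hasattr(value, "__html__"):
        return Markup(value.__html__())
    return Markup(html.escape(str(value), quote=True))

user_input = "<script>alert(1)</script>"   # untrusted -> escaped
trusted = Markup("<b>bold</b>")            # tagged -> passed through
```

Because `escape()` returns a `Markup`, it is idempotent: already-escaped output is never double-escaped.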
Some time ago, I started to create a project called "strypes" to teach or handle typed strings and escaping correctly.
"Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')" https://cwe.mitre.org/data/definitions/74.html
How to Write a Fast Matrix Multiplication from Scratch with Tensor Cores (2024)
Multiplication algorithm: https://en.wikipedia.org/wiki/Multiplication_algorithm
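The blocking/tiling idea at the heart of fast (tensor-core) matmuls can be illustrated in pure Python; a toy sketch for locality only, since real kernels tile into registers and shared memory:

```python
# Multiply in tile x tile blocks so each block of A, B, and C is reused
# while it is "hot" (in cache / shared memory on a GPU).
def matmul_tiled(A, B, tile=2):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        s = C[i][j]
                        for kk in range(k0, min(k0 + tile, k)):
                            s += A[i][kk] * B[kk][j]
                        C[i][j] = s
    return C
```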
From https://news.ycombinator.com/item?id=40519828 re: LLMs and matrix multiplication with tensors:
> "You Need to Pay Better Attention" (2024) https://arxiv.org/abs/2403.01643 :
>> Our first contribution is Optimised Attention, which performs similarly to standard attention, but has 3/4 as many parameters and one matrix multiplication fewer per head. Next, we introduce Efficient Attention, which performs on par with standard attention with only 1/2 as many parameters and two matrix multiplications fewer per head and is up to twice as fast as standard attention. Lastly, we introduce Super Attention, which surpasses standard attention by a significant margin in both vision and natural language processing tasks while having fewer parameters and matrix multiplications.
From "Transformer is a holographic associative memory" (2025) https://news.ycombinator.com/item?id=43029899 .. https://westurner.github.io/hnlog/#story-43028710 :
>>> Convolution is in fact multiplication in Fourier space (this is the convolution theorem [1]) which says that Fourier transforms convert convolutions to products.
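That statement can be sanity-checked numerically with a naive stdlib DFT (illustrative only; real code uses FFTs): the DFT of a circular convolution equals the elementwise product of the DFTs.

```python
# Verify DFT(circ_conv(x, y)) == DFT(x) * DFT(y), elementwise.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def circ_conv(x, y):
    n = len(x)
    return [sum(x[k] * y[(t - k) % n] for k in range(n)) for t in range(n)]

x, y = [1, 2, 3, 4], [0, 1, 0, 0]
lhs = dft(circ_conv(x, y))
rhs = [a * b for a, b in zip(dft(x), dft(y))]
```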
From https://news.ycombinator.com/item?id=41322088 :
> "A carbon-nanotube-based tensor processing unit" (2024)
"Karatsuba Matrix Multiplication and Its Efficient Hardware Implementations" (2025) https://arxiv.org/abs/2501.08889 .. https://news.ycombinator.com/item?id=43372227
Google Search to redirect its country level TLDs to Google.com
I wonder if this is related to the first party cookie security model. That is supposedly why Google switched maps from maps.google.com to www.google.com/maps. Running everything off a single subdomain of a single root domain should allow better pooling of data.
Subdomains were chosen historically because it was the sane way to run different infrastructure for each service. Nowadays, with the globally distributed frontends that Google's Cloud offers, path routing and subdomain routing are mostly equivalent. Subdomains are archaic, they are exposing the architecture (separate services) to users out of necessity. I don't think cookies were the motivation but it's probably a nice benefit.
https://cloud.google.com/load-balancing/docs/url-map-concept...
Has anything changed about the risks of running everything with the same key, on the apex domain?
Why doesn't Google have DNSSEC?
To a first approximation, nobody has DNSSEC. It's not very good.
DNSSEC is necessary like GPG signatures are necessary; though also there are DoH/DoT/DoQ and HTTPS.
Google doesn't have DNSSEC because they've chosen not to implement it, FWIU.
/? DNSSEC deployment statistics: https://www.google.com/search?q=dnssec+deployment+statistics...
If not DNSSEC, then they should push another standard for signing DNS records (so that they are signed at rest (and encrypted in motion)).
Do DS records or multiple TLDs and x.509 certs prevent load balancing?
Were there multiple keys for a reason?
So, not remotely necessary at all? Neither DNSSEC nor GPG have any meaningful penetration in any problem domain. GPG is used for package signing in some language ecosystems, and, notoriously, those signatures are all busted (Python is the best example). They're both examples of failed 1990s cryptosystems.
Do you think the way Debian uses gpg signatures for package verification is also broken?
Red Hat too.
Containers, pip, and conda packages have TUF and now there's sigstore.dev and SLSA.dev. W3C Verifiable Credentials is the open web standard JSONLD RDF spec for signatures/attestations.
IDK how many reinventions of GPG there are.
Do all of these systems differ only in key distribution and key authorization, ceteris paribus?
The Chemistry Trick Poised to Slash Steel's Carbon Footprint
> Their process, which uses saltwater and iron oxide instead of carbon-heavy blast furnaces, has been optimized to work with naturally sourced materials. By identifying low-cost, porous iron oxides that dramatically boost efficiency, the team is laying the groundwork for large-scale, eco-friendly steel production. And with help from engineers and manufacturers, they’re pushing this green tech closer to the real world.
ScholarlyArticle: "Pathways to Electrochemical Ironmaking at Scale Via the Direct Reduction of Fe2O3" (2025) https://pubs.acs.org/doi/10.1021/acsenergylett.5c00166 https://doi.org/10.1021/acsenergylett.5c00166
Hypertext TV
thought this was going to be about things like https://en.wikipedia.org/wiki/Hybrid_Broadcast_Broadband_TV (hypertext on tv)
Same. Thanks; TIL about HbbTV: Hybrid Broadcast Broadband TV: https://en.wikipedia.org/wiki/Hybrid_Broadcast_Broadband_TV
Had been wondering how to add a game clock below the TV that syncs acceptably; a third screen: https://news.ycombinator.com/item?id=30890265
Show HN: A VS Code extension to visualise Rust logs in the context of your code
We made a VS Code extension [1] that lets you visualise logs and traces in the context of your code. It basically lets you recreate a debugger-like experience (with a call stack) from logs alone.
This saves you from browsing logs and trying to make sense of them outside the context of your code base.
We got this idea from endlessly browsing traces emitted by the tracing crate [3] in the Google Cloud Logging UI. We really wanted to see the logs in the context of the code that emitted them, rather than switching back-and-forth between logs and source code to make sense of what happened.
It's a prototype [2], but if you're interested, we’d love some feedback.
---
References:
[1]: VS Code: marketplace.visualstudio.com/items?itemName=hyperdrive-eng.traceback
[2]: Github: github.com/hyperdrive-eng/traceback
[3]: Crate: docs.rs/tracing/latest/tracing
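The core idea, mapping a log line back to the source location that emitted it, can be sketched in a few lines of Python (the log format here is hypothetical, not the tracing crate's actual output):

```python
import re

# Hypothetical log format: "LEVEL target: message (src/file.rs:42)"
LOG_RE = re.compile(r"^(?P<level>\w+)\s+(?P<target>[\w:]+): (?P<msg>.*) "
                    r"\((?P<file>[^:]+):(?P<line>\d+)\)$")

def locate(log_line):
    """Map a log line back to the source location that emitted it,
    the trick behind showing logs 'in the context of your code'."""
    m = LOG_RE.match(log_line)
    return (m["file"], int(m["line"]), m["msg"]) if m else None

print(locate("INFO my_app::db: connection pool ready (src/db.rs:87)"))
# → ('src/db.rs', 87, 'connection pool ready')
```

With (file, line) pairs in hand, an editor extension can decorate the matching source lines or reconstruct a call stack from nested span entries.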
Good idea!
This probably saves resources by eliminating the need to re-run code to step through error messages again.
Integration with time-travel debugging would be even more useful: https://news.ycombinator.com/item?id=30779019
From https://news.ycombinator.com/item?id=31688180 :
> [ eBPF; Pixie, Sysdig, Falco, kubectl-capture, stratoshark ]
> Jaeger (Uber contributed to CNCF) supports OpenTracing, OpenTelemetry, and exporting stats for Prometheus.
From https://news.ycombinator.com/item?id=39421710 re: distributed tracing:
> W3C Trace Context v1: https://www.w3.org/TR/trace-context-1/#overview
Thanks for sharing all these links, super handy! I really appreciate it.
NP; improving QA feedback loops with IDE support is probably as useful as test coverage and test result metrics
/? vscode distributed tracing: https://www.google.com/search?q=vscode+distributed+tracing :
- jaegertracing/jaeger-vscode: https://github.com/jaegertracing/jaeger-vscode
/? line-based display of distributed tracing information in vs code: https://www.google.com/search?q=line-based%20display%20of%20... :
- sprkl personal observability platform: https://github.com/sprkl-dev/use-sprkl
Theoretically it should be possible to correlate deployed code changes with the logs and traces preceding 500 errors; and then recreate the failure condition given a sufficient clone of production (in CI) to isolate and verify the fix before deploying new code.
Practically then, each PR generates logs, traces, and metrics when tested in a test deployment and then in production. FWIU that's the "personal" part of sprkl.
Thanks for sharing, first time I hear about sprkl.dev
'Cosmic radio' detector could discover dark matter within 15 years
From https://news.ycombinator.com/item?id=42376759 :
> FWIU this Superfluid Quantum Gravity rejects dark matter and/or negative mass in favor of supervaucuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?
From https://news.ycombinator.com/item?id=42371946 :
> Dark fluid: https://en.wikipedia.org/wiki/Dark_fluid :
>> Dark fluid goes beyond dark matter and dark energy in that it predicts a continuous range of attractive and repulsive qualities under various matter density cases. Indeed, special cases of various other gravitational theories are reproduced by dark fluid, e.g. inflation, quintessence, k-essence, f(R), Generalized Einstein-Aether f(K), MOND, TeVeS, BSTV, etc. Dark fluid theory also suggests new models, such as a certain f(K+R) model that suggests interesting corrections to MOND that depend on redshift and density
High-voltage hydrogel electrolytes enable safe stretchable Li-ion batteries
Isolated Execution Environment for eBPF
From https://news.ycombinator.com/item?id=43553198 .. https://news.ycombinator.com/item?id=43564972 :
> Can [or should] a microkernel run eBPF? [or WASM?]
The performance benefits of running eBPF in the kernel are substantial and arguably justify it, but how much should a kernel or a microkernel do?
Ask HN: Why is there no better protocol support for WiFi captive portals?
I'm curious why we still rely on hacky techniques like requesting captive.apple.com and waiting for interception, rather than having proper protocol-level support built into WPA. Why can't the WPA protocol simply announce that authentication requires a captive portal?
It seems like every public hotspot I connect to is flaky and will sometimes report it's connected when it still requires captive portal auth. Or even when it does work, there's a 15-second delay before the captive screen pops up. Shouldn't this have been solved properly by now?
Does anyone have insight into the technical or historical reasons this remains so messy? If the wireless protocol could announce to the client, through some standard, that it has to complete auth via HTTP, I feel clients could implement a much better experience.
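For context, a minimal sketch of the probe heuristic clients use today, modeled on the captive.apple.com (200 + "Success" body) and Android generate_204 styles; the classifier function and names are hypothetical:

```python
def classify_probe(status, body, expected="Success"):
    """Classify a captive-portal probe response: did our request reach the
    real endpoint, or did a portal intercept it?"""
    if status in (301, 302, 303, 307, 308):
        return "captive"   # portal intercepted and redirected us
    if status == 204 and not body:
        return "open"      # Android-style generate_204 succeeded
    if status == 200 and expected in body:
        return "open"      # genuine known body reached us
    return "captive"       # rewritten content: something is in the way

print(classify_probe(200, "<HTML>Success</HTML>"))  # → open
print(classify_probe(302, ""))                      # → captive
```

The flakiness complained about above is inherent to this design: the client can only infer interception after the fact, rather than being told up front.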
Related issue: secured DNS must downgrade/fallback to unsecured DNS because of captive portal DNS redirection (because captive portals block access to DNS until the user logs in, and the user can't log into the captive portal without DNS redirection that is prevented by DoH, DoT, and DoQ).
Impact: if you set up someone's computer to use secured DNS only, and their device doesn't have per-SSID connection profiles, then they can't use captive portal hotspot Wi-Fi without knowing how to disable secured DNS.
"Do not downgrade to unsecured DNS unless it's an explicitly authorized captive portal"
IIRC there's a new-ish way to configure DNS-over-HTTPS over DHCP like there is for normal DNS.
Bilinear interpolation on a quadrilateral using Barycentric coordinates
/? Barycentric
From "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
Phase from second order amplitude FWIU
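Re: the titular technique, a minimal sketch assuming the quad is parameterized on the unit square: the four corner weights are non-negative on the quad and always sum to 1, so they behave like generalized barycentric coordinates:

```python
def bilinear(corners, u, v):
    """Bilinear interpolation on a quad via the four corner weights.
    corners are values at (0,0), (1,0), (1,1), (0,1) in parameter space."""
    w = [(1 - u) * (1 - v), u * (1 - v), u * v, (1 - u) * v]
    assert abs(sum(w) - 1.0) < 1e-12  # barycentric-style partition of unity
    return sum(wi * c for wi, c in zip(w, corners))

print(bilinear([0.0, 1.0, 2.0, 1.0], 0.5, 0.5))  # → 1.0 (center = average)
```

For a general (non-rectangular) quad the extra step, not shown here, is inverting the bilinear map to recover (u, v) from a world-space point.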
Universal photonic artificial intelligence acceleration
"Universal photonic artificial intelligence acceleration" (2025) https://www.nature.com/articles/s41586-025-08854-x :
> Abstract: [...] Here we introduce a photonic AI processor that executes advanced AI models, including ResNet [3] and BERT [20,21], along with the Atari deep reinforcement learning algorithm originally demonstrated by DeepMind [22]. This processor achieves near-electronic precision for many workloads, marking a notable entry for photonic computing into competition with established electronic AI accelerators [23] and an essential step towards developing post-transistor computing technologies.
Photon-based chips vs electron-based chips? It's interesting that photons and electrons are closely related: two neutral photons can become an electron-positron pair and vice versa.
The paper mentions that their photonic chip is less precise than an electronic one, but this looks like an advantage for AI. In fact, the stubborn precision of electron-based processors that erase the quantum nature of electrons is what I think is preventing the creation of real AI. In other words, if a microprocessor is effectively a deterministic complex mechanism, it won't become AI no matter what, but if the quantum nature is let loose, at least slightly, interesting things will start to happen.
There are certainly infinitely non-halting automata on deterministic electronic processors.
Abstractly, in terms of constructor theory, the non-halting task is independent from a constructor, which is implemented with a computation medium.
FWIU from reading a table on wikipedia about physical platforms for QC it would be possible to do quantum computing with just electron voltage but typical component quality.
So phase; certain phases.
And now parametric single photon emission and detection.
"Low-noise balanced homodyne detection with superconducting nanowire single-photon detectors" (2024) https://news.ycombinator.com/item?id=39537236
"A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
"Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 .. https://arxiv.org/abs/2302.05967 .. "Study of photons in QC reveals photon collisions in matter create vortices" https://news.ycombinator.com/item?id=40600736
"How do photons mediate both attraction and repulsion?" (2025) [as phonons in matter] https://news.ycombinator.com/item?id=42661511 notes re: recent findings with photons ("quanta")
Fedora change aims for 99% package reproducibility
This goal feels like a marketing OKR to me. A proper technical goal would be "all packages, except the ones that have a valid reason, such as signatures, not to be reproducible".
As someone who dabbles a bit in the RHEL world, IIRC all packages in Fedora are signed. In addition, the DNF/Yum metadata is also signed.
IIRC Debian packages themselves are not signed, but the apt metadata is signed.
I learned this from an ansible molecule test env setup script for use in containers and VMs years ago; because `which` isn't necessarily installed in containers for example:
type -p apt && (set -x; apt install -y debsums; debsums | grep -v 'OK$') || \
type -p rpm && rpm -Va # --verify --all
dnf reads .repo files from /etc/yum.repos.d/ [1] which have various gpg options; here's an /etc/yum.repos.d/fedora-updates.repo:
[updates]
name=Fedora $releasever - $basearch - Updates
#baseurl=http://download.example/pub/fedora/linux/updates/$releasever/Everything/$basearch/
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
enabled=1
countme=1
repo_gpgcheck=0
type=rpm
gpgcheck=1
metadata_expire=6h
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
From the dnf conf docs [1], there are actually even more per-repo gpg options:
gpgkey
gpgkey_dns_verification
repo_gpgcheck
localpkg_gpgcheck
gpgcheck
1. https://dnf.readthedocs.io/en/latest/conf_ref.html#repo-opti...
2. https://docs.ansible.com/ansible/latest/collections/ansible/... lists a gpgcakey parameter for the ansible.builtin.yum_repository module
For Debian, Ubuntu, Raspberry Pi OS and other dpkg .deb and apt distros:
man sources.list
man sources.list | grep -i keyring -C 10
# trusted:
# signed-by:
# /etc/apt/ trusted.gpg.d/
man apt-secure
man apt-key
apt-key help
less "$(type -p apt-key)"
signing-apt-repo-faq: https://github.com/crystall1nedev/signing-apt-repo-faq
From "New requirements for APT repository signing in 24.04" (2024) https://discourse.ubuntu.com/t/new-requirements-for-apt-repo... :
> In Ubuntu 24.04, APT will require repositories to be signed using one of the following public key algorithms: [ RSA with at least 2048-bit keys, Ed25519, Ed448 ]
> This has been made possible thanks to recent work in GnuPG 2.4 82 by Werner Koch to allow us to specify a “public key algorithm assertion” in APT when calling the gpgv tool for verifying repositories.
The Mutable OS: Why Isn't Windows Immutable in 2025?
Hey all—this is something I’ve been thinking about for a while in my day-to-day as a desktop support tech. We’ve made huge strides in OS security, but immutability is still seen as exotic, and I don’t think it should be. Curious to hear thoughts or counterpoints from folks who’ve wrestled with these same issues.
I'm working with rpm-ostree distros on workstations. The Universal Blue (Fedora Atomic (CoreOS)) project has OCI images that install as immutable host images.
We were able to install programs as admin on Windows in our university computer lab because of DeepFreeze, almost 20 years ago
"Is DeepFreeze worth it?" https://www.reddit.com/r/sysadmin/comments/18zn3jn/is_deepfr...
TIL Windows has UWF built-in:
"Unified Write Filter (UWF) feature" https://learn.microsoft.com/en-us/windows/configuration/unif...
Re: ~immutable NixOS and SELinux and Flatpaks' chroot filesystems not having SELinux labels like WSL2 either: https://news.ycombinator.com/item?id=43617363
Huh, I had no idea that UWF was a feature of Windows and I'm kind of surprised not to see more widespread adoption for workstation rollouts. DeepFreeze was great (excepting updates and other minor issues) and actively reduced a lot of nuisance issues that we might otherwise have had to deal with when I worked for a school.
> On September 20, 2024, Microsoft announced that Windows Server Update Service would no longer be developed starting with Windows Server 2025.[4] Microsoft encourages business to adopt cloud-based solution for client and server updates, such as Windows Autopatch, Microsoft Intune, and Azure Update Manager. [5]
WSUS Offline installer is also deprecated now.
And then to keep userspace updated too, a package manager like Chocolatey NuGet and this power shell script: https://github.com/westurner/dotfiles/blob/develop/scripts/s...
Universal Blue immutable OCI images;
ublue-os/main: https://github.com/ublue-os/main :
> OCI base images of Fedora with batteries included
ublue-os/image-template: https://github.com/ublue-os/image-template :
> Build your own custom Universal Blue Image!
Microsoft hired Lennart Poettering (systemd), who also devs on Fedora FWIU.
systemd/particleos is an immutable Linux distribution built with mkosi:
"systemd ParticleOS" (2025) https://news.ycombinator.com/item?id=43649088
Immutable: cannot be modified in place after it is built.
Idempotent:
Ansible is designed for idempotent tasks that do not further change state if re-run.
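A minimal sketch of idempotence in the Ansible sense; ensure_line is a hypothetical lineinfile-style task, not an Ansible API:

```python
import os
import tempfile
from pathlib import Path

def ensure_line(path, line):
    """Idempotent task: converge the file to contain `line`.
    Running it again changes nothing and reports 'no change'."""
    p = Path(path)
    text = p.read_text() if p.exists() else ""
    if line in text.splitlines():
        return False  # already in desired state
    p.write_text(text + line + "\n")
    return True  # changed

path = os.path.join(tempfile.mkdtemp(), "conf")
print(ensure_line(path, "PermitRootLogin no"))  # → True (first run changes state)
print(ensure_line(path, "PermitRootLogin no"))  # → False (second run is a no-op)
```

This describe-the-end-state style is what distinguishes idempotent configuration management from an imperative "append this line" script.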
Windows Containers are relatively immutable. Docker Desktop and Podman Desktop include a copy of Kubernetes and also kubectl IIRC
Do GUI apps run in Windows containers?
The MSIX apps can opt to run in a sandbox. It's not perfect, but it's _something_. Plus MSIX helps ensure clean install/uninstall as well as delta updates.
Again, not perfect, but serviceable.
fedora/toolbox and distrobox create containers that can access the X socket and /dev/dri/ to run GUI apps from containers.
Flatpaks share (GNOME,KDE,NVIDIA,podman,) runtimes, by comparison.
Re: MSIX https://news.ycombinator.com/item?id=23394302 :
> MSIX only enforces a sandbox if an application doesn’t elect to use the restricted capabilities that allow it to run without. File system and registry virtualization can be disabled quite easily with a few lines in the package manifest, as well as a host of other isolation features.
Flatseal and KDE and Gnome can modify per-flatpak permissions. IDK if there's a way to do per-flatpak-instance permissions, like containers.
RUN --mount=type=cache
Man pages are great, man readers are the problem
I disagree. I have been writing man pages for a while, and mastering the language is hard. The documentation for both the mdoc and mandb formats does not cover the whole language, and the only remaining reference for roff itself seems to be the book by Brian Kernighan. mdoc and mandb are like macro sets on top of roff.
Just this week I considered proposing to $distro to convert all manpages to markdown as part of the build process, and then use a markdown renderer on the shipped system to display them. This would allow the distro to stop shipping *roff by default.
Markdown benefits from a much larger amount of tooling. There are a ton of WYSIWYG editors that would allow non-technical users to write such documentation. I imagine we would all benefit if creating manual pages were that easy.
On the other side, Markdown is even less formalized. It's like 15 different dialects from different programs that differ in their feature sets and parsing rules. I do not believe things like "How do I quote an asterisk or underscore so it is rendered verbatim" can be portably achieved in Markdown.
RST/Sphinx solves that problem in that there is a single canonical dialect, and it's already effectively used in many very large projects, including the Linux kernel and (obviously) the Python programming language.
Another big plus in my book is that rST has a much more well defined model for extensions than Markdown.
Running CERN httpd 3.0A from 1996 (2022)
A CVE from the year 2000: https://www.cve.org/CVERecord?id=CVE-2000-0079
Do SBOM tools identify web servers that old?
EngFlow Makes C++ Builds 21x Faster and Software a Lot Safer
Fixing the Introductory Statistics Curriculum
I took AP Stat in HS and then College Stats in college unnecessarily. In HS it was TI-83 calculators, and in college it was Excel with optional Minitab. R was new then.
I remember ANOVA being the pinnacle of AP Stat. There are newer ANOVA-like statistical procedures like CANOVA;
"Efficient test for nonlinear dependence of two continuous variables" (2015) https://pmc.ncbi.nlm.nih.gov/articles/PMC4539721/
Estimators:
Yellowbrick implements the scikit-learn Estimator API: https://www.scikit-yb.org/en/latest/
"Developing scikit-learn estimators" https://scikit-learn.org/stable/developers/develop.html#esti... :
> All estimators implement the fit method:
estimator.fit(X, y)
> Out of all the methods that an estimator implements, fit is usually the one you want to implement yourself. Other methods such as set_params, get_params, etc. are implemented in BaseEstimator, which you should inherit from. You might need to inherit from more mixins, which we will explain later.
https://scikit-learn.org/stable/modules/generated/sklearn.pi...
Sklearn Glossary > estimator: https://scikit-learn.org/stable/glossary.html#term-estimator
https://news.ycombinator.com/item?id=41311052
https://news.ycombinator.com/item?id=28523442
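A minimal sketch of the estimator fit/predict contract without depending on scikit-learn; MeanEstimator is a made-up name, not a real sklearn class:

```python
class MeanEstimator:
    """Sklearn-style estimator: learns the mean of y in fit, returns it in predict."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)  # trailing underscore marks a fitted attribute
        return self                   # fit returns self by convention (enables chaining)

    def predict(self, X):
        return [self.mean_] * len(X)

est = MeanEstimator().fit([[1], [2], [3]], [10.0, 20.0, 30.0])
print(est.predict([[4], [5]]))  # → [20.0, 20.0]
```

The convention (fit learns state into `*_` attributes, predict/transform use it) is what lets pipelines, cross-validation, and visualizers like Yellowbrick treat all estimators interchangeably.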
What is GridSearchCV, and what are ways to find optimal hyperparameters faster than brute-force grid search?
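Brute-force grid search is just a cross product over parameter values; a pure-Python sketch, where the score function is a toy stand-in for cross-validated model scoring:

```python
from itertools import product

def grid_search(param_grid, score):
    """Evaluate every combination in the grid; return the best params and score."""
    keys = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy objective with its peak at alpha=0.1, depth=3 (hypothetical):
score = lambda p: -(p["alpha"] - 0.1) ** 2 - (p["depth"] - 3) ** 2
best, _ = grid_search({"alpha": [0.01, 0.1, 1.0], "depth": [2, 3, 4]}, score)
print(best)  # → {'alpha': 0.1, 'depth': 3}
```

Faster alternatives sample rather than enumerate: randomized search and successive halving (sklearn's RandomizedSearchCV and HalvingGridSearchCV), or Bayesian optimization over the parameter space.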
IRL case studies with applied stats and ML and AI:
- "ML and LLM system design: 500 case studies to learn from" https://news.ycombinator.com/item?id=43629360
Additional stats resources:
- "Seeing Theory" Brown https://seeing-theory.brown.edu/
- "AP®/College Statistics" Khan Academy https://www.khanacademy.org/math/ap-statistics
- "Think Stats: 3rd Edition" notebooks https://github.com/AllenDowney/ThinkStats/tree/v3
And physics and information theory in relation to stats as a field with many applications:
- "Information Theory: A Tutorial Introduction" (2019) https://arxiv.org/abs/1802.05968 .. https://g.co/kgs/sPha7qR
- Entropy > Statistical mechanics: https://en.wikipedia.org/wiki/Entropy#Statistical_mechanics
- Statistical mechanics: https://en.wikipedia.org/wiki/Statistical_mechanics
- Quantum Statistical mechanics: https://en.wikipedia.org/wiki/Quantum_statistical_mechanics
Statistical procedures are inferential procedures.
There are inductive, deductive, and abductive methods of inference.
Statistical inference: https://en.wikipedia.org/wiki/Statistical_inference
Statistical literacy: https://en.wikipedia.org/wiki/Statistical_literacy
Also, the ROC curve Wikipedia sidebar: https://en.wikipedia.org/wiki/Receiver_operating_characteris...
What is the difference between accuracy, precision, specificity, and sensitivity?
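Those four terms fall straight out of the confusion matrix; a quick sketch with made-up counts:

```python
def confusion_metrics(tp, fp, tn, fn):
    """The four ROC-adjacent rates, from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),  # all correct / all cases
        "precision":   tp / (tp + fp),   # of predicted positives, how many are real
        "sensitivity": tp / (tp + fn),   # recall / true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

print(confusion_metrics(tp=8, fp=2, tn=85, fn=5))
```

Note that with rare positives (as here), accuracy can look high while sensitivity is poor, which is exactly why the ROC curve plots sensitivity against 1 - specificity instead.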
Khan Academy has curriculum alignment codes on their OER content, but not yet Schema.org/about and/or :educationalAlignment for their https://schema.org/LearningResource (s)
A step towards life on Mars? Lichens survive Martian simulation in new study
"Ionizing radiation resilience: how metabolically active lichens endure exposure to the simulated Mars atmosphere" (2025) https://imafungus.pensoft.net/article/145477/
We've outsourced our confirmation biases to search engines
To write an unbiased systematic review, there are procedures.
In context to the scientific method, isn't RAG also wrong?
Generate a bad argument and then source support for it with RAG or by only searching for confirmation biased support.
I suppose I make this mistake too; I don't prepare systematic reviews, so my research meta-procedure has always been inadequate.
Usually I just [...] but a more scientific procedure would be [...].
Baby Steps into Genetic Programming
Genetic programming: https://en.wikipedia.org/wiki/Genetic_programming
Evolutionary computation > History: https://en.wikipedia.org/wiki/Evolutionary_computation#History
- "The sad state of property-based testing libraries" re coverage-guided fuzzing and other Hilbert spaces: https://news.ycombinator.com/item?id=40884466
- re: MOSES and Combo (a Lisp) and now Python too in re: "Show HN: Codemodder – A new codemod library for Java and Python" and libCST and AST and FST; https://news.ycombinator.com/item?id=39139198
- opencog/asmoses implements MOSES on AtomSpace, a hypergraph for algorithmic graph rewriting: https://github.com/opencog/asmoses
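A toy (1+1) evolutionary loop, about the simplest member of the family above; this mutates a bitstring rather than evolving program trees as GP does:

```python
import random

def evolve(target, generations=5000, seed=0):
    """(1+1) evolutionary algorithm: mutate one bit, keep the child if no worse."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in target]
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    for _ in range(generations):
        child = parent[:]
        child[rng.randrange(len(child))] ^= 1  # flip one random bit
        if fitness(child) >= fitness(parent):
            parent = child                      # accept non-worsening mutations
        if fitness(parent) == len(target):
            break
    return parent

target = [1, 0, 1, 1, 0, 0, 1, 0]
print(evolve(target) == target)  # converges on this tiny problem
```

Genetic programming replaces the bitstring with a program tree and adds crossover, but the select-mutate-evaluate loop is the same skeleton.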
SELinux on NixOS
There's also the Fedora SELinux policy and the container-selinux policy set.
Rootless containers with podman lack labels like virtualenvs and NixOS.
Distrobox and toolbox set a number of options for rootless and regular containers;
--userns=keepid
--security-opt=label=disable
-v /tmp/path:/tmp/path:Z
-v /tmp/path:/tmp/path:z
--gpus all
"How Nix Works"
https://nixos.org/guides/how-nix-works/ :
How Nix works: builds have a cache key that's a hash of all build parameters - /nix/store/<cachekey> - so that atomic upgrades and rollback work.
How NixOS works: config files are also snapshotted at a cache key for rollback and atomic upgrades that don't fail if interrupted mid-package-install.
> A big implication of the way that Nix/NixOS stores packages is that there is no /bin, /sbin, /lib, /usr, and so on. Instead all packages are kept in /nix/store. (The only exception is a symlink /bin/sh to Bash in the Nix store.) Not using ‘global’ directories such as /bin is what allows multiple versions of a package to coexist. Nix does have a /etc to keep system-wide configuration files, but most files in that directory are symlinks to generated files in /nix/store.
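A toy illustration of the store-path idea; the hashing scheme here is simplified (real Nix hashes derivations, not JSON blobs):

```python
import hashlib
import json

def store_path(name, inputs):
    """Toy version of Nix's scheme: the output path is derived from a hash of
    *all* build inputs, so any change yields a new path that can coexist."""
    blob = json.dumps(inputs, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}"

a = store_path("hello-2.12", {"src": "sha256-abc", "cc": "gcc-13"})
b = store_path("hello-2.12", {"src": "sha256-abc", "cc": "gcc-14"})
print(a != b)  # different compiler → different store path; both can coexist
```

Because upgrades only ever add new paths and flip a symlink, an interrupted upgrade leaves the old generation intact, which is what makes rollback atomic.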
But which files should be assigned which extended filesystem attribute labels; signed packages only, local builds of [GPU drivers, out-of-tree modules,], not-yet-packaged things like ProtonGE;
I remembered hearing that Bazzite ships with Nix.
This [1] describes installing with the nix-determinate-installer [2] and then [3] installing home-manager :
nix run nixpkgs#home-manager -- switch --flake nix/#$USER
[1] https://universal-blue.discourse.group/t/issue-installing-ce...
[2] https://github.com/DeterminateSystems/nix-installer
[3] https://nixos.wiki/wiki/Home_Manager
Working with rpm-ostree; upgrading the layered firefox RPM without a reboot requires -A/--apply-live (which runs twice) and upgrading the firefox flatpak doesn't require a reboot, but SELinux policies don't apply to flatpaks which run unconfined FWIU.
"SEC: Flatpak Chrome, Chromium, and Firefox run without SELinux confinement?" https://discussion.fedoraproject.org/t/sec-flatpak-chrome-ch...
From https://news.ycombinator.com/item?id=43564972 :
> Flatpaks bypass selinux and apparmor policies and run unconfined (on DAC but not MAC systems) because the path to the executable in the flatpaks differs from the system policy for /s?bin/* and so wouldn't be relabeled with the necessary extended filesystem attributes even on `restorecon /` (which runs on reboot if /.autorelabel exists).
Having written a venv_relabel.sh script to copy selinux labels from /etc onto $VIRTUAL_ENV/etc , IDK; consistently-built packages are signed and maintained. The relevant commands from that script: https://github.com/westurner/dotfiles/blob/develop/scripts/v... :
sudo semanage fcontext --modify -e "${_equalpth}" "${_path}";
sudo restorecon -Rv "${_path}";
Is something like this necessary for every /nix/store/<key> directory?
sudo semanage fcontext --modify -e "/etc" "${VIRTUAL_ENV}/etc";
sudo restorecon -Rv "${VIRTUAL_ENV}/etc";
Or is there a better way to support chroots with selinux, with or without extending the existing SELinux functionality?
Max severity RCE flaw discovered in widely used Apache Parquet
Does anyone know if pandas is affected? I serialize/deserialize dataframes, and pandas uses parquet under the hood.
Pandas doesn't use the parquet python package under the hood: https://pandas.pydata.org/docs/reference/api/pandas.read_par...
> Parquet library to use. If ‘auto’, then the option io.parquet.engine is used. The default io.parquet.engine behavior is to try ‘pyarrow’, falling back to ‘fastparquet’ if ‘pyarrow’ is unavailable.
Those should be unaffected.
Python pickles have the same issue but it is a design decision per the docs.
Python docs > library > pickle: https://docs.python.org/3/library/pickle.html
Re: a hypothetical pickle parser protocol that doesn't eval code at parse time; "skipcode pickle protocol 6: "AI Supply Chain Attack: How Malicious Pickle Files Backdoor Models" .. "Insecurity and Python Pickles" : https://news.ycombinator.com/item?id=43426963
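The Python docs' recommended mitigation ("Restricting Globals") is to override Unpickler.find_class so a pickle can't resolve and call arbitrary callables; a sketch that forbids all globals:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Refuse to resolve any global, so a pickle can't import or execute code.
    Plain containers and scalars still round-trip fine."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def safe_loads(data):
    return SafeUnpickler(io.BytesIO(data)).load()

print(safe_loads(pickle.dumps({"a": [1, 2]})))  # → {'a': [1, 2]}
try:
    safe_loads(b"cos\nsystem\n(S'echo pwned'\ntR.")  # classic os.system payload
except pickle.UnpicklingError as e:
    print("blocked:", e)
```

A real allowlist would permit a few known-safe classes instead of rejecting everything, but the principle (parse-time resolution is the attack surface) is the same one behind the Parquet CVE.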
But python pickle is only supposed to be used with trusted input, so it’s not a vulnerability.
[deleted]
A university president makes a case against cowardice
[flagged]
Then they would need to tax nonprofit religious organizations too.
Why don't they just make the special interests pay their own multi-trillion dollar war bills instead of sabotaging US universities with surprise taxes?
If you increase expenses and cut revenue, what should you expect for your companies?
Why not just make a flat tax for everyone and end all the special interest pandering and exceptions for the rich. It is a poisonous misapplication of the time of our government to constantly be fiddling with tax code to favor one group or another.
Because a lot of people, including many economists, believe capital accumulating endlessly to the same class of thousand-ish people is bad. A flat income tax exacerbates wealth inequality considerably.
Our tax now is worse than flat. Warren Buffett brags about paying a lower rate than his secretary.
Either compare ideal tax structures with “no loopholes” (none of these exist in the real world) or compare actually-existing tax structures.
Comparing your ideal flat income tax with the current system is apples to oranges.
>>Why don't they just make the special interests pay their own multi-trillion dollar war bills instead of sabotaging US universities with surprise taxes?
>Either compare ideal tax structures with “no loopholes” (none of these exist in the real world) or compare actually-existing tax structures.
Hence I cannot compare your suggestion with the current system, as it is apples to oranges because loopholes would exist.
My thesis is that a flat tax would help minimize the very loopholes you damn. The larger the tax code and the more it panders to particular interests, generally the more opportunity for 'loopholes.'
I don't want to work for a business created by, uh, upper class folks that wouldn't have done it if not for temporary tax breaks by a pandering grifter executive.
I believe in a strong middle class and upward mobility for all.
I don't think we want businesses that are dependent on war, hate, fear, and division for continued profitability.
I don't know whether a flat or a regressive or a progressive tax system is more fair or more total society optimal.
I suspect it is true that higher-income individuals receive more total subsidies than lower-income individuals.
You don't want a job at a firm that an already-wealthy founder could only pull off due to short-term tax breaks and wouldn't have founded if taxes go any higher.
You want a job at a firm run by people who are going to keep solving for their mission regardless of high taxes due to immediately necessary war expenses, for example.
In the interests of long-term economic health and national security of the United States, I don't think they should be cutting science and medical research funding.
Science funding has positive returns. Science funding has greater returns than illegal wars (that still aren't paid for).
Find 1980 on these charts of tax receipts, GDP, and income inequality: https://news.ycombinator.com/item?id=43140500 :
> "Federal Receipts as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/FYFRGDA188S
> "Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
From https://news.ycombinator.com/item?id=43220833 re: income inequality:
> GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
Find 1980 on a GINI index chart.
Yeah, I mean, I think we agree on most points.
I think there’s too many confounding economic factors to look at GINI alone and conclude the 1980 turning point was caused by nerfing the top income tax bracket. But a compelling argument could probably be made with more supporting data, which of course this margin is too narrow to contain and etc.
Better would be to abolish inheritance at death, instead distributing that wealth among the citizenry equally.
List of countries by inheritance tax rates: https://en.wikipedia.org/wiki/List_of_countries_by_inheritan...
InitWare, a portable systemd fork running on BSDs and Linux
Shoot. Almost there, at least for us cybersecurity-minded folks.
Default-deny-all, then selecting what a process needs, gives better security granularity.
Today's default-allow-all is too problematic for today's (and future) security needs.
Cuts down on the compliance paperwork too.
DAC: Discretionary Access Control: https://en.wikipedia.org/wiki/Discretionary_access_control :
> The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control).
Which permissions and authorizations can be delegated?
DAC is the out of the box SELinux configuration for most Linux distros; some processes are confined, but if the process executable does not have the necessary extended filesystem attribute labels the process runs unconfined; default allow all.
You can see which processes are confined with SELinux contexts with `ps -Z`.
MAC is default deny all;
MAC: Mandatory Access Control: https://en.wikipedia.org/wiki/Mandatory_access_control
The biggest problem is that an SELinux compiler turns policy source into components understood only by the SELinux engine.
It does not help when the SELinux source text is not buildable function by function: it operates at its grittiest granularity, which ironically is the best kind of security, but only if composed by the savviest SELinux system admins.
It often requires full knowledge of any static/dynamic libraries, any additional dynamic libraries they call, and their resource usage.
Additional frontend UI will be required to proactively determine suitability with those dynamic libraries before SELinux deployment gets any easier.
For now, it is trial and error, in part, for intermediate system admins or younger.
From https://news.ycombinator.com/item?id=30025477 :
> [ audit2allow, https://stopdisablingselinux.com/ ]
Applications don't need to be compiled with selinux libraries unless they want to bypass CLI tools like chcon and restorecon (which set extended filesystem attributes according to the system policy; typically at package install time if the package provenance is sufficient) by linking with libselinux.
Deterministic remote entanglement using a chiral quantum interconnect
From https://news.ycombinator.com/item?id=43044159 :
>>> Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
ScholarlyArticle: "Deterministic remote entanglement using a chiral quantum interconnect" (2025) https://www.nature.com/articles/s41567-025-02811-1
The state of binary compatibility on Linux and how to address it
I don't understand why they don't just statically link their binaries. First, they said this:
> Even if you managed to statically link GLIBC—or used an alternative like musl—your application would be unable to load any dynamic libraries at runtime.
But then they immediately said they actually statically link all of their deps aside from libc.
> Instead, we take a different approach: statically linking everything we can.
If they're statically linking everything other than libc, then using musl or statically linking glibc will finish the job. Unless they have some need for loading shared libs at runtime which they didn't already have linked into their binary (i.e. manual dlopen), this solves the portability problem on Linux.
What am I missing (assuming I know of the security implications of statically linked binaries -- which they didn't mention as a concern)?
And please, statically linking everything is NOT a solution -- the only reason I can run some games from 20 years ago still on my recent Linux is because they didn't decide to stupidly statically link everything, so I at least _can_ replace the libraries with hooks that make the games work with newer versions.
As long as the library is available.
Neither static nor dynamic linking is looking to solve the 20 year old binaries issue, so both will have different issues.
But I think it's easier for me to find a 20 year old ISO of a Red Hat/Slackware where I can simply run the statically linked binary. Dependency hell for older distros becomes really difficult when the older packages are not archived anywhere anymore.
It's interesting to think how a 20 year old OS plus one program is probably a smaller bundle size than many modern Electron apps ostensibly built "for cross platform compatibility". Maybe microkernels are the way.
How should a microkernel run (WASI) WASM runtimes?
Docker can run WASM runtimes, but I don't think podman or nerdctl can yet.
From https://news.ycombinator.com/item?id=38779803 :
docker run \
--runtime=io.containerd.wasmedge.v1 \
--platform=wasi/wasm \
secondstate/rust-example-hello
From https://news.ycombinator.com/item?id=41306658 :
> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
Native containers run on the host and can host normal containers if a container engine is installed. Compared to an Electron runtime, IDK how minimal a native container with systemd and podman, and WASM runtimes, and portable GUI rendering libraries could be.
CoreOS - which was for creating minimal host images that host containers - became Fedora CoreOS, and the same rpm-ostree tooling underlies the Fedora Atomic Desktops: Silverblue, Kinoite, Sericea; and downstreams like Bazzite and Secureblue.
Secureblue has a hardened_malloc implementation.
From https://jangafx.com/insights/linux-binary-compatibility :
> To handle this correctly, each libc version would need a way to enumerate files across all other libc instances, including dynamically loaded ones, ensuring that every file is visited exactly once without forming cycles. This enumeration must also be thread-safe. Additionally, while enumeration is in progress, another libc could be dynamically loaded (e.g., via dlopen) on a separate thread, or a new file could be opened (e.g., a global constructor in a dynamically loaded library calling fopen).
FWIU, ROP (Return-Oriented Programming) and gadget-finding approaches have implementations of things like dynamic header discovery of static and dynamic libraries at runtime, to compile more at runtime. That isn't safe, though: nothing reverifies what's mutated after loading the PE into process space, after NX tagging or not, before and after secure enclaves and LD_PRELOAD (which some Go binaries don't respect, for example).
Can a microkernel do eBPF?
What about a RISC machine for WASM and WASI?
"Customasm – An assembler for custom, user-defined instruction sets" (2024) https://news.ycombinator.com/item?id=42717357
Maybe that would shrink some of these flatpaks which ship their own Electron runtimes instead of like the Gnome and KDE shared runtimes.
Python's manylinux project specifies a number of libc versions that manylinux packages portably target.
Manylinux requires a tool called auditwheel for Linux (delocate for macOS, delvewheel for Windows);
Auditwheel > Overview: https://github.com/pypa/auditwheel#overview :
> auditwheel is a command line tool to facilitate the creation of Python wheel packages for Linux (containing pre-compiled binary extensions) that are compatible with a wide variety of Linux distributions, consistent with the PEP 600 manylinux_x_y, PEP 513 manylinux1, PEP 571 manylinux2010 and PEP 599 manylinux2014 platform tags.
> auditwheel show: shows external shared libraries that the wheel depends on (beyond the libraries included in the manylinux policies), and checks the extension modules for the use of versioned symbols that exceed the manylinux ABI.
> auditwheel repair: copies these external shared libraries into the wheel itself, and automatically modifies the appropriate RPATH entries such that these libraries will be picked up at runtime. This accomplishes a similar result as if the libraries had been statically linked without requiring changes to the build system. Packagers are advised that bundling, like static linking, may implicate copyright concerns
github/choosealicense.com: https://github.com/github/choosealicense.com
From https://news.ycombinator.com/item?id=42347468 :
> A manylinux_x_y wheel requires glibc>=x.y. A musllinux_x_y wheel requires musl libc>=x.y; per PEP 600
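A sketch of what that tag convention encodes, per PEP 600 (legacy aliases like manylinux1/manylinux2014 map to fixed glibc versions; only the modern `manylinux_x_y` form is handled here):

```python
# PEP 600: a manylinux_x_y_<arch> tag means the wheel's binaries
# require glibc >= x.y on the target system.
def min_glibc(tag: str) -> tuple:
    parts = tag.split("_")
    assert parts[0] == "manylinux"
    return (int(parts[1]), int(parts[2]))

def compatible(tag: str, system_glibc: tuple) -> bool:
    # Tuple comparison handles (major, minor) ordering correctly.
    return system_glibc >= min_glibc(tag)

print(min_glibc("manylinux_2_28_x86_64"))            # (2, 28)
print(compatible("manylinux_2_28_x86_64", (2, 31)))  # True
print(compatible("manylinux_2_28_x86_64", (2, 17)))  # False
```

This is the check pip's tag resolution effectively performs against the installed glibc before selecting a wheel.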
Return oriented programming: https://en.wikipedia.org/wiki/Return-oriented_programming
/? awesome return oriented programming site:github.com https://www.google.com/search?q=awesome+return+oriented+prog...
This can probably find multiple versions of libc at runtime, too: https://github.com/0vercl0k/rp :
> rp++ is a fast C++ ROP gadget finder for PE/ELF/Mach-O x86/x64/ARM/ARM64 binaries.
> How should a microkernel run (WASI) WASM runtimes?
Same as any other kernel—the runtime is just a userspace program.
> Can a microkernel do eBPF?
If it implements it, why not?
Should a microkernel implement eBPF and WASM, or, for the same reasons that justify a microkernel, should eBPF and most other things be confined or segregated in userspace, in terms of microkernel goals like separation of concerns, least privilege, and then performance?
Linux containers have process isolation features that userspace sandboxes like bubblewrap and runtimes don't.
Flatpaks bypass SELinux and AppArmor policies and run unconfined (on DAC but not MAC systems) because the path to the executable in a Flatpak differs from the system policy's patterns for */s?bin/*, and so it wouldn't be relabeled with the necessary extended filesystem attributes even by `restorecon /` (which runs on reboot if /.autorelabel exists).
Thus, e.g. Firefox from a signed package in a container on the host, and Firefox from a package on the host are more process-isolated than Firefox in a Flatpak or from a curl'ed statically-linked binary because one couldn't figure out the build system.
Container-selinux, Kata containers, and GVisor further secure containers without requiring the RAM necessary for full VM virtualization with Xen or Qemu; and that is possible because of container interface standards.
Linux machines run ELF binaries, which could include WASM instructions.
/? ELF binary WASM : https://www.google.com/search?q=elf+binary+wasm :
mewz-project/wasker https://github.com/mewz-project/wasker :
> What's new with Wasker is, Wasker generates an OS-independent ELF file where WASI calls from Wasm applications remain unresolved.
> This unresolved feature allows Wasker's output ELF file to be linked with WASI implementations provided by various operating systems, enabling each OS to execute Wasm applications.
> Wasker empowers your favorite OS to serve as a Wasm runtime!
Why shouldn't we container2wasm everything? Because (rootless) Linux containers better isolate the workload than any current WASM runtime in userspace.
Non-Abelian Anyons and Non-Abelian Vortices in Topological Superconductors
"Non-Abelian Anyons and Non-Abelian Vortices in Topological Superconductors" (2023) https://arxiv.org/abs/2301.11614 :
> Abstract: Anyons are particles obeying statistics of neither bosons nor fermions. Non-Abelian anyons, whose exchanges are described by a non-Abelian group acting on a set of wave functions, are attracting a great attention because of possible applications to topological quantum computations. Braiding of non-Abelian anyons corresponds to quantum computations. The simplest non-Abelian anyons are Ising anyons which can be realized by Majorana fermions hosted by vortices or edges of topological superconductors, ν=5/2 quantum Hall states, spin liquids, and dense quark matter. While Ising anyons are insufficient for universal quantum computations, Fibonacci anyons present in ν=12/5 quantum Hall states can be used for universal quantum computations. Yang-Lee anyons are non-unitary counterparts of Fibonacci anyons. Another possibility of non-Abelian anyons (of bosonic origin) is given by vortex anyons, which are constructed from non-Abelian vortices supported by a non-Abelian first homotopy group, relevant for certain nematic liquid crystals, superfluid 3He, spinor Bose-Einstein condensates, and high density quark matter. Finally, there is a unique system admitting two types of non-Abelian anyons, Majorana fermions (Ising anyons) and non-Abelian vortex anyons. That is 3P2 superfluids (spin-triplet, p-wave paring of neutrons), expected to exist in neutron star interiors as the largest topological quantum matter in our universe.
> nematic liquid crystals,
- "Tunable entangled photon-pair generation in a liquid crystal" (2024) https://www.nature.com/articles/s41586-024-07543-5 .. https://news.ycombinator.com/item?id=40815388 .. SPDC entangled photon generation with nematic liquid crystal
NewsArticle: "A liquid crystal source of photon pairs" (2024) https://www.sciencedaily.com/releases/2024/06/240614141916.h...
Sadly, despite a stem PhD, I have no way of assessing whether this is an April Fools submission or not. I recognise some of the words in the abstract, like 'non-Abelian', but that's it.
Maybe it's time to retire.
It is not a joke. Topological anyons are the real deal, someday.
This is an article I found while preparing a comment about superfluid quantum gravity; https://www.reddit.com/r/AskPhysics/comments/1iqvxn0/comment...
I was collecting relevant articles and found this one.
TIL about the connection between superfluid quantum gravity, supersolid spin nematic crystals, and spin-nematic liquid crystals (LCDs,) for waveguiding entangled photons.
And also TIL about anyons in neutron stars, which can be the progenitors of black holes.
And also, hey, "3He" again.
Show HN: Offline SOS signaling+recovery app for disasters/wars
A couple of months ago, I built this app to help identify people stuck under rubble.
First responders have awesome tools. But in tough situations, even common folks need to help.
After what happened in Myanmar, we need something like this that works properly.
It has only been tested in controlled environments. It can also be improved; I know BLE is not _that_ effective under rubble.
If you have any feedback or can contribute, don't hold back.
> It can also be improved; I know BLE is not _that_ effective under rubble.
It's a tough problem to solve because you're up against the laws of physics and the very boring (and often counterintuitive) "Antenna Theory". Bluetooth is in the UHF band, and UHF isn't good for penetrating anything, let alone concrete rubble.
To penetrate rubble effectively you really want to be in the ELF-VLF bands, (That's what submarines/mining bots/underground seismic sensors use to get signals out).
Obviously that's ridiculous. Everything from ELF to even HF is impossible to use in an "under the rubble" situation because of physics[1]. Bluetooth (UHF) might be "better than nothing" but you're losing at least 25-30 dB (which is like 99.9% of the signal) in 12 inches of concrete rubble. VHF (like a handheld radio) can buy you another 5 inches.
Honestly I think sound waves travel further in such a medium than RF waves.
[1]: Your "standard reference dipole" antenna needs to be 1/2 or 1/4 your wave length to resonate. At ELF-VLF range you need an antenna that's 10k-1k feet long. You can play with inductors and loops to electrically lengthen your antenna without physically lengthening it, but you're not gonna get that below 500-200 feet. The length of a submarine is an important design consideration when deciding on what type of radio signal it needs to be able to receive/transmit vs how deep it needs to be for stealth.
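The footnote's numbers can be sanity-checked with two one-liners: converting a dB loss to the remaining power fraction, and computing a half-wave dipole length from frequency (the frequencies below are chosen for illustration):

```python
C = 299_792_458.0  # speed of light, m/s

def remaining_power_fraction(loss_db: float) -> float:
    # Every 10 dB of loss divides power by 10; 30 dB leaves 0.1%.
    return 10 ** (-loss_db / 10)

def half_wave_dipole_m(freq_hz: float) -> float:
    # Resonant half-wave dipole length is (lambda / 2) = (c / f) / 2.
    return (C / freq_hz) / 2

print(remaining_power_fraction(30))         # 0.001
print(round(half_wave_dipole_m(2.4e9), 3))  # Bluetooth: 0.062 (m)
print(round(half_wave_dipole_m(10e3)))      # 10 kHz VLF: 14990 (m)
```

A ~6 cm antenna fits in a phone; a ~15 km one does not, which is the whole trade-off the footnote describes.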
What about muon imaging?
What about Rydberg sensors for VLF earth penetrating imaging, at least?
From "3D scene reconstruction in adverse weather conditions via Gaussian splatting" https://news.ycombinator.com/item?id=42900053 :
> Is it possible to see microwave ovens on the ground with a Rydberg antenna array for in-cockpit drone Remote ID signal location?
With a frequency range from below 1 KHz to multiple THz fwiu, Rydberg antennae can receive VLF but IDK about ELF.
IIRC there's actually also a way to transmit with or like Rydberg antennae now; probably with VLF if that's best for the application and there's not already enough backscatter to e.g. infer occlusion with? https://www.google.com/search?q=transmit+with+Rydberg+antenn....
This is pretty cool! I need to learn more about both.
I imagine such fancier tools would be less available among common folks, and more among first responders.
NASA already has the tech to detect heartbeats under rubble using radar [1]. No additional equipment is needed by the rescued. The problem is emergency response can get overwhelmed in large disasters.
If Rydberg sensors would be more common, and new tech is added to mobile devices, this can seriously shift the playing field.
I will look into this, because we need out of the box solutions. Thank you!
[1]: https://www.dhs.gov/archive/detecting-heartbeats-rubble-dhs-...
"The Dark Knight" (Batman, 2008) is the one with the phone-based - is it wifi backscatter imaging? - and the societal concerns.
FWIU there are contractor-grade imaging capabilities, and there are military-grade see-through-walls-and-earth capabilities that law enforcement also have, but there are challenges with due process.
At the right time of day, with the right ambient temperature, it's possible to see the studs in the walls with consumer IR but only at a distance.
Also, FWIU it's possible to find plastic in the ground - i.e. banned mines - with thermal imaging at certain times of day.
Could there be a transceiver on a post in the truck at the road, with other flying drones to measure backscatter and/or transceiver emissions?
Hopefully NASA or another solvent company builds a product out of their FINDER research (from JPL).
How many of such heartbeat detection devices were ever manufactured? Did they ever add a better directional mic, like one that can read heartbeats from hundreds of meters away? Is it mountable on a motorized tripod?
It sounds like there is a signal separation challenge that applied onboard AI could help with.
From "Homemade AI drone software finds people when search and rescue teams can't" https://news.ycombinator.com/item?id=41764486#41775397 :
> {Code-and-Response, Call-for-Code}/DroneAid: https://github.com/Code-and-Response/DroneAid
> "DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2010) https://github.com/Code-and-Response/DroneAid
> All but one of the DroneAid Symbol Language Symbols are drawn within upward pointing triangles.
> Is there a simpler set of QR codes for the ground that could be made with sticks or rocks or things the wind won't bend?
I am a solo founder developing this new social media platform
Hey everyone! I am a 19 y/o founder taking a gap year to build Airdel, a social media platform startup. I've noticed many of today's social media platforms aren't action-oriented or optimized for real-time collaboration, which is important for people who have personal goals and are working on impactful real-world missions and projects (humanitarian events, e.g. California wildfire victims, emergency healthcare deliveries, climbing Mt. Everest, working on research/patents, or even just building cool projects). That's why I'm building a social media platform that connects your needs with people who can help, whether it's industry stakeholders, professionals, or individuals with a common solution to your problem.
Airdel contains a feed where you can post your mission progress, updates, and achievements, an online directory where you can search for needs and missions to collaborate and partner with others on, and a professional working platform with productivity tools to collaborate on missions until completion. The platform tracks the entire life of your mission/project from start to finish which is recorded on your personal profile. Collaborators can be anonymously reviewed, allowing others to judge trustworthiness for future collaborations.
People who have signed up are in the humanitarian, healthcare, tech, science, business, entertainment, sports, and creative sectors, needing specific urgent solutions, connections, and resources. They span NGOs, institutions, individuals, suppliers, sellers, organizations, and government.
Let me know what you guys think! I will be answering every one of your questions. Shoot!
Landing page: www.getairdel.com
Interested in helping out? Contact me at shannon@getairdel.com
Best, Shannon
From https://news.ycombinator.com/item?id=33302599 :
> If you label things with #GlobalGoal hashtags, others can find solutions to the very same problems.
The Global Goals are the UN Sustainable Development Goals for 2015-2030.
There are 17 Goals, 169 Targets and 247 Indicators.
There have been social media campaigns with hashtags, so there are already "hames" (hashtaggable names) for the high level goals.
Anyone can implement #hashtags, +tags, WikiWords, or similar. The twitter-text library is open source, for example.
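A minimal sketch of hashtag extraction (the real twitter-text library handles Unicode word classes and many edge cases this regex ignores):

```python
import re

# Match '#' followed by word characters, not preceded by a word
# character (so "foo#bar" is not a tag but "see #SDG6!" is).
HASHTAG = re.compile(r"(?<!\w)#(\w+)")

def hashtags(text: str) -> list:
    # Lowercase for case-insensitive matching of campaign tags.
    return [tag.lower() for tag in HASHTAG.findall(text)]

print(hashtags("Clean water and sanitation: #GlobalGoals #SDG6"))
# ['globalgoals', 'sdg6']
```

Normalized tags like these could then be joined against the UN SDG identifier vocabulary for matching.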
Re: Schema.org/Mission, :Goal, :Objective [...] linked data: https://news.ycombinator.com/item?id=12525141
"Ask HN: Any well funded tech companies tackling big, meaningful problems?" https://news.ycombinator.com/item?id=24412493
From https://github.com/thegreenwebfoundation/carbon.txt/issues/3... re: a proposed carbon.txt:
> Is there a way for sites to sign a claim with e.g. ld-proofs and then have an e.g. an independent auditor - maybe also with a W3C DID Decentralized Identifier - sign to independently verify?
When an auditor confirms a sustainability report, they could sign it and award a signed blockcert.
(Other ideas considered for similar matching, recommendation, and expert-finding objectives: a StackExchange site for SDG Q&A with upvotes and downvotes.)
I saw that you're not interested in (syndicating content for inbound links with) AT protocol? Is it perceived development cost and estimation of inbound traffic?
What differentiates your offering from existing platforms like LinkedIn? How could your mission be achieved with existing solutions?
There is an SDG vocabulary to link problems and solutions with: https://unsceb.org/common-digital-identifiers-sdgs :
> The common identifiers for Sustainable Development Goals, Targets and Indicators are available online. The portal is provided by the UN Library, which also hosts the UN Bibliographic Information System (UN BIS) Taxonomy. The IRIs have also been mapped to the UN Environment SDG Interface Ontology (SDGIO, by UNEP) and to the UN Bibliographic Information System vocabulary, to enable the use of these resources seamlessly in linking documents and data from different sources to Goals, Targets, Indicators, and closely related concepts.
The UN SDG vocabulary: https://metadata.un.org/sdg/?lang=en
"A Knowledge Organization System for the United Nations Sustainable Development Goals" (2021) https://link.springer.com/chapter/10.1007/978-3-030-77385-4_...
Researchers get spiking neural behavior out of a pair of transistors
> Specifically, the researchers operate a transistor under what are called "punch-through conditions." This happens when charges build up in a semiconductor in a way that can allow bursts of current to cross through the transistor even when it's in the off state. Normally, this is considered a problem, so processors are made so that this doesn't occur. But the researchers recognized that a punch-through event would look a lot like the spike of a neuron's activity.
> The team found that, when set up to operate on the verge of punch-through mode, it was possible to use the gate voltage to control the charge build-up in the silicon, either shutting the device down or enabling the spikes of activity that mimic neurons. Adjustments to this voltage could allow different frequencies of spiking. Those adjustments could be made using spikes as well, essentially allowing spiking activity
> [...] All of this simply required standard transistors made with CMOS processes, so this is something that could potentially be put into practice fairly quickly.
ScholarlyArticle: "Synaptic and neural behaviours in a standard silicon transistor" (2025) https://www.nature.com/articles/s41586-025-08742-4
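The spike/reset behaviour described above is the same qualitative dynamic as the textbook leaky integrate-and-fire neuron; a toy software sketch (an analogy only, not the paper's device physics; leak and threshold values are arbitrary):

```python
# Leaky integrate-and-fire: the state leaks each step, integrates the
# input, and emits a spike (then resets) on crossing the threshold.
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    v, out = 0.0, []
    for i in inputs:
        v = leak * v + i
        if v >= threshold:
            out.append(1)
            v = 0.0          # reset after spiking
        else:
            out.append(0)
    return out

print(lif_spikes([0.3] * 10))       # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(sum(lif_spikes([0.6] * 10)))  # stronger drive, higher rate: 5
```

The second line mirrors the article's point that adjusting the control voltage changes the spiking frequency.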
How do these compare to memristors?
Memristors: https://en.wikipedia.org/wiki/Memristor
From "A Chip Has Broken the Critical Barrier That Could Ultimately Begin the Singularity" (2025) https://www.aol.com/chip-broken-critical-barrier-could-17000... :
> Here we report an analogue computing platform based on a selector-less analogue memristor array. We use interfacial-type titanium oxide memristors with a gradual oxygen distribution that exhibit high reliability, high linearity, forming-free attribute and self-rectification. Our platform — which consists of a selector-less (one-memristor) 1 K (32 × 32) crossbar array, peripheral circuitry and digital controller — can run AI algorithms in the analogue domain by self-calibration without compensation operations or pretraining.
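The analogue-domain trick in a memristor crossbar is that Ohm's law multiplies and Kirchhoff's current law accumulates: driving row voltages into a grid of conductances reads out a matrix-vector product as column currents. A plain-Python sketch of that arithmetic (values illustrative):

```python
# I_j = sum_i V_i * G[i][j]: each cell multiplies (Ohm's law),
# each column wire sums (Kirchhoff's current law).
def crossbar_mvm(G, V):
    cols = len(G[0])
    return [sum(V[i] * G[i][j] for i in range(len(G)))
            for j in range(cols)]

G = [[0.25, 0.5],   # conductances (the stored weights)
     [0.5, 0.25]]
V = [1.0, 2.0]      # applied row voltages
print(crossbar_mvm(G, V))  # [1.25, 1.0]
```

Spreading activation over a graph is the same loop: the weighted adjacency matrix plays the role of G, with the output thresholded and fed back each round.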
Can't these components model spreading activation?
Spreading activation: https://en.wikipedia.org/wiki/Spreading_activation
Memristors are cool, and all-in-one (including memory), but they are not a standard CMOS process.
That is the whole difference.
Anyway, any of these techs will need some sort of CCD matrix (maybe something like modern Flash as storage, or a DRAM cell with refresh), but CMOS is very straightforward to produce and use.
Why I mention CCD: it is analog storage with multiple levels, organized as multiple lines with an output line on one side. It could also be used as a solid-state circular buffer to access separate cells.
So these CMOS transistors will work as the neuron, but the weights will be stored as analog values in a CCD.
Is that more debuggable?
Re: the Von Neumann bottleneck, debuggability, and I guess any form of computation in RAM; https://news.ycombinator.com/item?id=42312971
It seems like memristors have been n years away for quite a while now; maybe like QC.
Wonder if these would work for spiking neural behavior with electronic transistors:
"Breakthrough in avalanche-based amorphization reduces data storage energy 1e-9" (2024) https://news.ycombinator.com/item?id=42318944
Cerebras WSE is probably the fastest RAM bus, though it's not really a bus; it's just multiple addressed chips on the same wafer, FWIU.
> Is that more debuggable?
I've seen many approaches to computing in my life: optical, mechanical, hydro, even pneumatic. Classic digital based on CMOS is the most universal, with a huge range of mature debugging instruments.
CMOS digital is so universal that it can even be worth paying orders of magnitude worse power consumption until the best structure is found, and then, sure, using something less debuggable but with better consumption.
Unfortunately, I don't have enough data to state which will be better on power, CMOS or memristors. Right now CMOS is mature COTS tech, but memristors are still a few years from COTS.
Cerebras, as far as I know, is based on digital CMOS, just with some tricks to use nearly the whole wafer. BTW, Sir Clive Sinclair tried a similar approach to make wafer-scale storage, but unsuccessfully.
> it's not really a bus it's just addressed multiple chips on the same wafer
I'm an electronics engineer, and I have even baked one chip layer in a semiconductor practicum, so I'm aware of the technologies.
As I said about Sinclair, a few companies have tried to do something new in the semiconductor market, and some have even succeeded.
RAM manufacturers have long used the approach of making multiple chips on one wafer: most RAM packages actually contain 4-6 RAM dies, but some of them fail tests and are disabled by fuses, so chips appear with 2 or 4 RAM dies enabled, and even with odd numbers of enabled dies.
It looks like Cerebras uses an approach similar to the RAM manufacturers', just for another niche.
Compiler Options Hardening Guide for C and C++
While all of these are very useful, you'll find that a lot of these are already enabled by default in many distributions of the gcc compiler. Sometimes they're embedded in the compiler itself through a patch or configure flag, and sometimes they're added through CFLAGS variables during the compilation of distribution packages. I can only really speak of gentoo, but here's a non-exhaustive list:
* -fPIE is enabled with --enable-default-pie in GCC's ./configure script
* -fstack-protector-strong is enabled with --enable-default-ssp in GCC's ./configure script
* -Wl,-z,relro is enabled with --enable-relro in Binutils' ./configure script
* -Wp,-D_FORTIFY_SOURCE=2, -fstack-clash-protection, -Wl,-z,now and -fcf-protection=full are enabled by default through patches to GCC in Gentoo.
* -Wl,--as-needed is enabled through the default LDFLAGS
For reference, here's the default compiler flags for a few other distributions. Note that these don't include GCC patches:
* Arch Linux: https://gitlab.archlinux.org/archlinux/packaging/packages/pa...
* Alpine Linux: https://gitlab.alpinelinux.org/alpine/abuild/-/blob/master/d...
* Debian: It's a tiny bit more obscure, but running `dpkg-buildflags` on a fresh container returns the following: CFLAGS=-g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/<myuser>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection
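A small sketch for auditing a flag string like the `dpkg-buildflags` output above against the hardening options under discussion (the flag list here is a sample, not the guide's complete set):

```python
# Report which hardening flags are absent from a CFLAGS string.
HARDENING = [
    "-fstack-protector-strong",
    "-fstack-clash-protection",
    "-fcf-protection",
    "-Werror=format-security",
]

def missing_hardening(cflags: str) -> list:
    present = cflags.split()
    # Accept exact matches and "=value" variants like -fcf-protection=full.
    return [f for f in HARDENING
            if not any(p == f or p.startswith(f + "=") for p in present)]

debian = ("-g -O2 -Werror=implicit-function-declaration "
          "-fstack-protector-strong -fstack-clash-protection "
          "-Wformat -Werror=format-security -fcf-protection")
print(missing_hardening(debian))    # []
print(missing_hardening("-O2 -g"))  # all four reported missing
```

Note this only checks what the build environment passes explicitly; flags baked into the compiler via configure options or distro patches won't appear in CFLAGS at all.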
From https://news.ycombinator.com/item?id=38505448 :
> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).
Is there a good reference for comparing these compile-time build flags and their defaults with Make, CMake, Ninja Build, and other build systems, on each platform and architecture?
From https://news.ycombinator.com/item?id=41306658 :
> From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :
>> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.
But those are per-arch performance flags, not security flags.
In my experience distributions only patch GCC or modify the package building environment variables to add compiler flags. You can be certain that the compiler flags used in build systems like cmake and meson will be vanilla.
Make adds no additional compiler flags (check the output of "make -n -p"). Neither does Ninja.
Autotools is extremely conservative with compiler flags and will only really add -O2 -g, as well as include paths and defines specified by the developer.
CMake has some default compiler flags, depending on your CMAKE_BUILD_TYPE, mostly affecting optimization, and disabling asserts() with Release (-DNDEBUG). It also has some helpers for precompiled headers and link-time optimizations that enable the relevant flags.
Meson uses practically the same flags as cmake, with the exception of not passing -DNDEBUG unless the developer of the meson build really wants it to.
These are all the relevant build systems for linux packages. I'm not familiar with gn, bazel, etc. In general, build systems dabble a bit in optimization flags, but pay no mind to hardening.
Show HN: Make SVGs interactive in React with 1 line
Hey HN
I built svggles (npm: interactive-illustrations), a React utility that makes it easy to add playful, interactive SVGs to your frontend.
It supports mouse-tracking, scroll, hover, and other common interactions, and it's designed to be lightweight and intuitive for React devs.
The inspiration came from my time playing with p5.js — I loved how expressive and fun it was to create interactive visuals. But I also wanted to bring that kind of creative freedom to everyday frontend work, in a way that fits naturally into the React ecosystem.
My goal is to help frontend developers make their UIs feel more alive — not just functional, but fun. I also know creativity thrives in community, so it's open source and I’d love to see contributions from artists, developers, or anyone interested in visual interaction.
Links: Website + Docs: svggles.vercel.app
GitHub: github.com/shantinghou/interactive-illustrations
NPM: interactive-illustrations
Let me know what you think — ideas, feedback, and contributions are all welcome
MDN docs > SVG and CSS: https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorials/S...
Python lock files have officially been standardized
What does this mean for pip-tools' requirements.in, Pipfile.lock, pip constraints.txt, Poetry.lock, pyproject.toml, and uv.lock?
I think the plan is to replace all of those.
The PEP has buy in from all the major tools.
It doesn't mention them in alphabetical order.
From last week re: GitHub Dependabot and conda/mamba/micromamba/pixi support:
"github dependabot, meta.yaml, environment.yml, conda-lock.yaml, pixi.lock" https://github.com/regro/cf-scripts/issues/3920#issuecomment... https://github.com/dependabot/dependabot-core/issues/2227#is... incl. links to the source of dependabot
Quantum advantage for learning shallow NNs with natural data distributions
ScholarlyArticle: "Quantum advantage for learning shallow neural networks with natural data distributions" (2025) https://arxiv.org/abs/2503.20879
NewsArticle: "Google Researchers Say Quantum Theory Suggests a Shortcut for Learning Certain Neural Networks" (2025) https://thequantuminsider.com/2025/03/31/google-researchers-... :
> Using this model, [Quantum Statistical Query (QSQ) learning,] the authors design a two-part algorithm. First, the quantum algorithm finds the hidden period in the function using a modified form of quantum Fourier transform — a core capability of quantum computers. This step identifies the unknown weight vector that defines the periodic neuron. In the second part, it applies classical gradient descent to learn the remaining parameters of the cosine combination. The algorithm is shown to require only a polynomial number of steps, compared to the exponential cost for classical learners. [...]
> The researchers carefully address several technical challenges. For one, real-valued data must be discretized into digital form to use in a quantum computer.
Quantum embedding:
> Another way to put this: real-world numbers must be converted into digital chunks so a quantum computer can process them. But naive discretization can lose the periodic structure, making it impossible to detect the right signal. The authors solve this by designing a pseudoperiodic discretization. This approximates the period well enough for quantum algorithms to detect it.
> They also adapt an algorithm from quantum number theory called Hallgren’s algorithm to detect non-integer periods in the data. While Hallgren’s method originally worked only for uniform distributions, the authors generalize it to work with “sufficiently flat” non-uniform distributions like Gaussians and logistics, as long as the variance is large enough.
There is not yet a Wikipedia article on (methods of) "Quantum embedding".
How many qubits are necessary to roll 2, 8, or 6-sided quantum dice?
Embedding (mathematics) https://en.wikipedia.org/wiki/Embedding
Embedding (machine learning) https://en.wikipedia.org/wiki/Embedding_(machine_learning)
/? quantum embedding review: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=qua...
Continuous embedding: https://en.wikipedia.org/wiki/Continuous_embedding
Continuous-variable quantum information; https://en.wikipedia.org/wiki/Continuous-variable_quantum_in...
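Back-of-envelope on the dice question (my sketch, not from any of the linked pages): a fair n-sided quantum die needs ceil(log2(n)) qubits, with rejection sampling when n is not a power of two:

```python
import math

def qubits_for_die(n_sides: int) -> int:
    # Minimum qubits whose 2**q measurement outcomes can index n sides
    return math.ceil(math.log2(n_sides))

assert qubits_for_die(2) == 1
assert qubits_for_die(8) == 3
# 6 sides: 3 qubits give 8 outcomes; a fair die rejects and re-rolls 2 of them
assert qubits_for_die(6) == 3
```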
Symmetry between up and down quarks is more broken than expected
BTW isospin is actually a count of how many up vs. down quarks there are. It's not a fundamental property like spin or charge.
It's an old term that was created before they knew that up and down quarks existed.
Personally I find the term outdated because there are 4 other quarks, and isospin only talks about two of them.
Isospin symmetry: https://en.wikipedia.org/wiki/Isospin #History :
> Isospin is also known as isobaric spin or isotopic spin.
Supersymmetry: https://en.wikipedia.org/wiki/Supersymmetry
Does the observed isospin asymmetry disprove supersymmetry, if isospin symmetry is an approximate symmetry?
Stochastic Reservoir Computers
"Stochastic reservoir computers" (2025) https://www.nature.com/articles/s41467-025-58349-6 :
> Abstract: [...] This allows the number of readouts to scale exponentially with the size of the reservoir hardware, offering the advantage of compact device size. We prove that classes of stochastic echo state networks form universal approximating classes. We also investigate the performance of two practical examples in classification and chaotic time series prediction. While shot noise is a limiting factor, we show significantly improved performance compared to a deterministic reservoir computer with similar hardware when noise effects are small.
Glutamate Unlocks Brain Cell Channels to Enable Thinking and Learning
ScholarlyArticle: "Glutamate gating of AMPA-subtype iGluRs at physiological temperatures" (2025) https://www.nature.com/articles/s41586-025-08770-0 ; CryoEM
From https://news.ycombinator.com/item?id=39836127 :
> High glutamine-to-glutamate ratio predicts the ability to sustain motivation [...]
> MSG [...]
> So, IIUC, when you feed LAB glutamate, you get GABA?
Pixar One Thirty
Tessellation #History, #Overview: https://en.wikipedia.org/wiki/Tessellation :
> More formally, a tessellation or tiling is a cover of the Euclidean plane by a countable number of closed sets, called tiles, such that the tiles intersect only on their boundaries.
Category:Tileable textures: https://commons.wikimedia.org/wiki/Category:Tileable_texture...
Can PxrRoundCube be called from RenderManForBlender, with BlenderMCP? https://github.com/prman-pixar/RenderManForBlender
Texture synthesis > Methods: https://en.wikipedia.org/wiki/Texture_synthesis#Methods
Khan Academy > Computing > Pixar in a Box > Unit 8: Patterns: https://www.khanacademy.org/computing/pixar/pattern
Matrix Calculus (For Machine Learning and Beyond)
> The class involved numerous example numerical computations using the Julia language, which you can install on your own computer following these instructions. The material for this class is also located on GitHub at https://github.com/mitmath/matrixcalc
Mathematical Compact Models of Advanced Transistors [pdf]
This is from 2018, anyone in the field know if it's still state of the art or a historic curiosity? I know that we've started using euv since then which seems like it would change things.
State of the art in transistors?
- "Researchers get spiking neural behavior out of a pair of [CMOS] transistors" (2025) https://news.ycombinator.com/item?id=43503644
- Memristors
- Graphene-based transistors
EUV and nanolithography?
SOTA alternatives to EUV for nanolithography include NIL nanoimprint lithography (at 10-14nm at present fwiu), nanoassembly methods like atomic/molecular deposition and optical tweezers, and a new DUV solid-state laser light source at 193nm.
How to report a security issue in an open source project
Security.txt is a standard for sharing vulnerability disclosure information, served at /.well-known/security.txt or /security.txt.
security.txt: https://en.wikipedia.org/wiki/Security.txt
Responsible disclosure -> CVD: Coordinated Vulnerability Disclosure: https://en.wikipedia.org/wiki/Coordinated_vulnerability_disc...
OWASP Vulnerability Disclosure Cheat Sheet: https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...
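A minimal security.txt, per RFC 9116 (all values here are placeholders):

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security-policy
```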
Publishers trial paying peer reviewers – what did they find?
> USD $250
How much deep research does $250 yield by comparison?
Knowledge market > Examples; Google Answers, Yahoo Answers, https://en.wikipedia.org/wiki/Knowledge_market#Examples
I'm not sure why one would compare reviews by acknowledged experts in a field with stuff written by anonymous randos, and it seems highly unlikely that anyone with the appropriate qualifications would be lurking on some mechanical turk-like site.
I'm also deeply suspicious of the confidentiality of anything sent to one of those sites.
However this does suggest the idea that a high-powered university in a low-income country might be able to cut a deal to provide reviewing services...
You can get 50 reviews on Fiverr for that price!
Tracing the thoughts of a large language model
XAI: Explainable artificial intelligence: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Inside arXiv–The Most Transformative Platform in All of Science
ArXiv: https://en.wikipedia.org/wiki/ArXiv
ArXiv accepts .ps (PostScript), .tex (LaTeX source), and .pdf (PDF) ScholarlyArticle uploads.
ArXiv docs > Formats for text of submission: https://info.arxiv.org/help/submit/index.html#formats-for-te...
The internet and the web are the most transformative platforms in all of science, though.
Chimpanzees act as 'engineers', choosing materials to make tools
Tool use by non-humans > Primates > Chimpanzees and bonobos: https://en.wikipedia.org/wiki/Tool_use_by_non-humans#Chimpan...
Accessible open textbooks in math-heavy disciplines
"BookML: automated LaTeX to bookdown-style HTML and SCORM, powered by LaTeXML" https://vlmantova.github.io/bookml/
LaTeXML: https://en.wikipedia.org/wiki/LaTeXML :
LaTeXML emits XML from a parse of LaTeX, implemented in Perl.
SCORM is a standard for educational content in ZIP packages which is supported by Moodle, ILIAS, Sakai, Canvas, and a number of other LMS Learning Management Systems.
SCORM: https://en.wikipedia.org/wiki/Sharable_Content_Object_Refere...
xAPI (aka Experience API, aka TinCan API) is a successor spec to SCORM for sending event messages to an LRS Learning Record Store. Like SCORM, xAPI is stewarded by ADL.
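A minimal xAPI statement is an actor / verb / object triple; the verb ID below is from the ADL verb registry, the activity ID is hypothetical:

```json
{
  "actor": {"mbox": "mailto:learner@example.org", "name": "Example Learner"},
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": {"en-US": "completed"}
  },
  "object": {
    "id": "http://example.org/activities/algebra-quiz-1",
    "objectType": "Activity"
  }
}
```

An LRS stores such statements and answers queries over them.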
re: xAPI, schema.org/Action, and JSON-LD: https://github.com/RusticiSoftware/TinCanSchema/issues/7
schema.org/Action describes potential actions: https://schema.org/docs/actions.html
For example, from the Schema.org "Potential Actions" doc: https://schema.org/docs/actions.html :
    {
      "@context": "https://schema.org",
      "@type": "Movie",
      "name": "Footloose",
      "potentialAction": {
        "@type": "WatchAction"
      }
    }

That could be a syllabus. ActionTypes include: BuyAction, AssessAction > ReviewAction,
Schema.org > "Full schema hierarchy" > [Open hierarchy] > Action and rdfs:subClassOf subclasses thereof: https://schema.org/docs/full.html
What Linked Data should [math textbook] publishing software include when generating HTML for the web?
https://schema.org/CreativeWork > Book, Audiobook, Article > ScholarlyArticle, Guide, HowTo, Blog, MathSolver
The schema.org Thing > CreativeWork > LearningResource RDFS class has the :assesses, :competencyRequired, :educationalLevel, :educationalAlignment, and :teaches RDFS properties; https://schema.org/LearningResource
You can add bibliographic metadata and curricular Linked Data to [OER LearningResource] HTML with schema.org classes and properties as JSON-LD, RDFa, or Microdata.
The schema.org/about property has a domain which includes CreativeWork and a range which includes Thing, so a :CreativeWork is :about a :Thing which could be a subclass of :CreativeWork.
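For example, a hypothetical LearningResource described with those properties as JSON-LD:

```json
{
  "@context": "https://schema.org",
  "@type": "LearningResource",
  "name": "Linear Algebra, Chapter 3: Eigenvalues",
  "educationalLevel": "undergraduate",
  "teaches": "Eigenvalues and eigenvectors",
  "assesses": "Diagonalization of matrices",
  "about": {"@type": "Thing", "name": "Linear algebra"}
}
```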
I work with MathJax and LaTeX in notebooks a bit, and have generated LaTeX and then PDF with Sphinx and texlive like the ReadTheDocs docker container which already has the multiple necessary GB of LaTeX installed to render a README.rst as PDF without pandoc:
The Jupyter Book docs now describe how that works.
Jupyter Book docs > Customize LaTeX via Sphinx: https://jupyterbook.org/en/stable/advanced/pdf.html#customiz...
How to build the docs with the readthedocs docker image oneself: https://github.com/jupyter-book/jupyter-book/issues/991
ReadTheDocs > Dev > Design > Build Images > Time required to install languages at build time [with different package managers with varying performance] https://docs.readthedocs.com/dev/latest/design/build-images....
The jupyter-docker-stacks, binderhub, and condaforge/miniforge3 images build with micromamba now IIRC.
condaforge/miniforge3: https://hub.docker.com/r/condaforge/miniforge3
Recently, I've gotten into .devcontainer/devcontainer.json; which allows use of one's own Dockerfile or a preexisting docker image, installs LSP and vscode on top, and then runs the onCreateCommand and postStartCommand hooks.
A number of tools support devcontainer.json: https://containers.dev/supporting
Devcontainers could be useful for open textbooks in math-heavy disciplines; so that others can work within, rebuild, and upgrade the same container env used to build the textbook.
Re: MathJax, LaTeX, and notebooks:
To left-align a LaTeX expression in a (Jupyter,Colab,VScode,) notebook wrap the expression with single dollar signs. To center-align a LaTeX expression in a notebook, wrap it with double dollar signs:
$ \alpha_{\beta_1} $
$$ \alpha_{\beta_2} $$
Textbooks, though? Interactive is what they want. How can we make textbooks interactive?
It used to be that textbooks were meant to be copied down from; copy by hand from the textbook.
To engage and entertain this generation: ManimCE, scriptable 3d simulators with test assertions, Thebelab,
Jupyter Book docs > "Launch into interactive computing interfaces" > BinderHub ( https://mybinder.org ), JupyterHub, Colab, Deepnote: https://jupyterbook.org/en/stable/interactive/launchbuttons....
JupyterLite-xeus builds a jupyterlite static site from an environment.yml; such that e.g. the xeus-python kernel and other packages are compiled to WebAssembly (WASM) so that you can run Jupyter notebooks in a browser without a server:
repo2jupyterlite works like repo2docker, which powers BinderHub, which generates a container with a current version of Jupyter installed after building the container according to one or more software dependency requirement specification files in /.binder or the root of the repo.
repo2jupyterlite: https://github.com/jupyterlite/repo2jupyterlite
jupyterlite-xeus: https://jupyterlite-xeus.readthedocs.io/en/latest/
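A sketch of the input jupyterlite-xeus consumes, assuming the packaging described in its docs (the channel URL and package names should be checked against the current jupyterlite-xeus documentation):

```yaml
# environment.yml read by jupyterlite-xeus at `jupyter lite build` time
name: xeus-lite-env
channels:
  - https://repo.prefix.dev/emscripten-forge-dev
  - conda-forge
dependencies:
  - xeus-python   # Python kernel compiled to WebAssembly
  - numpy
```

`jupyter lite build` then emits a static site that any web server or GitHub Pages can host.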
Getting hit by lightning is good for some tropical trees
Also re: lightning and living things, from https://news.ycombinator.com/item?id=43044159 :
> "Gamma radiation is produced in large tropical thunderstorms" (2024)
> "Gamma rays convert CH4 to complex organic molecules [like glycine,], may explain origin of life" (2024)
Harnessing Quantum Computing for Certified Randomness
ScholarlyArticle: "Certified randomness using a trapped-ion quantum processor" (2025) https://www.nature.com/articles/s41586-025-08737-1
Re: different - probably much less expensive - approaches to RNG;
From "Cloudflare: New source of randomness just dropped" (2005) https://news.ycombinator.com/item?id=43321797 :
> Another source of random entropy better than a wall of lava lamps:
>>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
100Gbit/s is faster than qualifying noise from a 56 qubit quantum computer?
>> google/paranoid_crypto.lib.randomness_tests
There's no USB or optical-interconnect RNG based on quantum vacuum fluctuations yet, though.
Building and deploying a custom site using GitHub Actions and GitHub Pages
I just set up an apt repo of uv packages using GitHub Pages via Actions and wrote up some notes here: https://linsomniac.com/post/2025-03-18-building_and_publishi...
Looks like Simon and I were working on very similar things simultaneously.
There should be a "Verify signature of checksums and Verify checksums" at "Download latest uv binary".
GitHub has package repos; "GitHub Packages" for a few package formats, and OCI artifacts. https://docs.github.com/en/packages/working-with-a-github-pa...
OCI Artifacts with labels include signed Container images and signed Packages.
Packages can be hosted as OCI image repository artifacts with signatures; but package managers don't natively support OCI image stores, though many can install packages hosted at GitHub Packages URLs which are referenced by a signed package repository Manifest on a GitHub Pages or GitLab Pages site (e.g. with CF DNS and/or CloudFlare Pages in front)
> There should be a "Verify signature of checksums and Verify checksums"
I get it, but verifying checksums of downloads via https from github releases, downloaded across github's architecture (admittedly Azure) to github's runners seems like overkill. Especially as I don't see any signatures of the checksums, so the checksums are being retrieved via that same infrastructure with the same security guarantees. What am I missing?
If downloading over https and installing were enough, we wouldn't need SLSA, TUF, or sigstore.
.deb and .rpm Package managers typically reference a .tar.gz with a checksum; commit that to git signed with a GPG key; and then the package is signed by a (reproducible) CI build with the signing key for that repo's packages.
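A minimal checksum-verification step in Python (a sketch; in a real pipeline the expected digest comes from a checksums file whose signature is verified first):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file through SHA-256 and return the hex digest
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_checksum(path: str, expected_hex: str) -> None:
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch: {actual} != {expected_hex}")
```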
conda-forge can automatically send a Pull Request to update the upstream archive URL and cached checksum in the package manifest when there is a new upstream package.
Actually, the conda-forge bot does more than that;
From https://github.com/regro/cf-scripts/issues/3920#issuecomment... re: (now github) dependabot maybe someday scanning conda environment.yml, conda-lock.yml, and/or feedstock meta.yml:
> So the bot here does a fair bit more than update dependencies and/or versions.
> We have globally pinned ABIs which have to migrated in a specific order. We also use the bot here to start the migrations, produce progress / error outputs on the status page, and then close the migrations. The bot here also does specific migrations of dependencies that have been renamed, recipe maintenance, and more.
Dependabot scans for vulnerable software package versions in software dependency specification documents for a number of languages and package formats, when you git push to a repo which has dependabot configured in their dependabot.yml:
From the dependabot.yml docs: https://docs.github.com/en/code-security/dependabot/working-... :
> Define one package-ecosystem element for each package manager that you want Dependabot to monitor for new versions
Dependabot supports e.g. npm, pip, gomod, cargo, docker, github-actions, devcontainers,
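A minimal .github/dependabot.yml covering two of those ecosystems:

```yaml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"           # where requirements.txt / pyproject.toml live
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```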
SBOM tools have additional methods of determining which packages are installed on a specific server or container instance.
Static sites are sometimes build-and-forget projects that also need regular review of their statically-compiled-in dependencies for known vulns.
E.g. jupyterlite-xeus builds WASM static sites from environment.yml; though Hugo is probably much faster.
AI Supply Chain Attack: How Malicious Pickle Files Backdoor Models
From "Insecurity and Python Pickles" (2024) https://news.ycombinator.com/item?id=39685128 :
> There should be a data-only pickle serialization protocol (that won't serialize or deserialize code).
> How much work would it be to create a pickle protocol that does not exec or eval code?
"Title: Pickle protocol version 6: skipcode pickles" https://discuss.python.org/t/create-a-new-pickle-protocol-ve...
I have to agree with Chris Angelico there:
> Then the obvious question is: Why? Why use pickle? The most likely answer is “because <X> can’t represent what I need to transmit”, but for that to be at all useful to your proposal, you need to show examples that won’t work in well-known safe serializers.
Code in packages should be signed.
Code in pickles should also be signed.
I have no need for the pickle module now, but years ago thought there might have been safer way to read data that was already in pickles.
For backwards compatibility, skipcode=False would have to be the default, were someone to implement a pickle parser that doesn't exec or eval code.
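Short of a new protocol, the documented find_class hook can already refuse every global, which blocks code-bearing pickles while still loading builtin containers (a sketch of the skipcode idea, not the proposed protocol 6):

```python
import io
import pickle

class DataOnlyUnpickler(pickle.Unpickler):
    # Refuse to resolve any global, so only plain data types can load
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def data_only_loads(data: bytes):
    return DataOnlyUnpickler(io.BytesIO(data)).load()

# Plain containers round-trip fine
assert data_only_loads(pickle.dumps({"a": [1, 2], "b": (3.0, None)})) == {
    "a": [1, 2], "b": (3.0, None)}

try:
    data_only_loads(pickle.dumps(len))  # a reference to a global callable
except pickle.UnpicklingError:
    pass  # code-bearing pickle rejected
```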
JS/ES/TS Map doesn't map to JSON.
Pickle still is good for custom objects (JSON loses methods and also order), Graphs & circular refs (JSON breaks), Functions & lambdas (Essential for ML & distributed systems) and is provided out of box.
We're contemplating protocols that don't evaluate or run code; that rules out serializing functions or lambdas (i.e., code).
Custom objects in Python don't have "order" unless they're using `__slots__` - in which case the application already knows what they are from its own class definition. Similarly, methods don't need to be serialized.
A general graph is isomorphic to a sequence of nodes plus a sequence of vertex definitions. You only need your own lightweight protocol on top.
Because globals(), locals(), Classes and classInstances are backed by dicts, and dicts are insertion ordered in CPython since 3.6 (and in the Python spec since 3.7), object attributes are effectively ordered in Python.
Object instances with __slots__ do not have a dict of attributes.
__slots__ attributes of Python classes are ordered, too.
(Sorting and order; Python 3 objects must define at least __eq__ and __lt__ in order to be sorted. @functools.total_ordering https://docs.python.org/3/library/functools.html#functools.t... )
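For example, @functools.total_ordering fills in the remaining comparison methods from __eq__ and __lt__ (the Version class is illustrative):

```python
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, major: int, minor: int):
        self.major, self.minor = major, minor
    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

# sorted() only needs __lt__; total_ordering derives <=, >, >= as well
assert sorted([Version(2, 0), Version(1, 9)])[0] == Version(1, 9)
assert Version(1, 9) <= Version(2, 0)
```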
Are graphs isomorphic if their nodes and edges are in a different sequence?
    assert dict(a=1, b=2) == dict(b=2, a=1)  # dicts compare by contents, not order
    from collections import OrderedDict as odict
    assert odict(a=1, b=2) != odict(b=2, a=1)  # OrderedDicts also compare order
To cryptographically sign RDF in any format (XML, JSON, JSON-LD, RDFa), a canonicalization algorithm is applied to normalize the input data prior to hashing and signing. Like Merkle hashes of tree branches, a cryptographic signature of a normalized graph is a substitute for more complete tests of isomorphism. RDF Dataset Canonicalization algorithm: https://w3c-ccg.github.io/rdf-dataset-canonicalization/spec/...
Also, pickle stores the class name to unpickle data into as a (variously-dotted) str. If the version of the object class is not in the class name, pickle will unpickle data from appA.Pickleable into appB.Pickleable (or PickleableV1 into PickleableV2 objects, as long as PickleableV2=PickleableV1 is specified in the deserializer).
So do methods need to be pickled? No for security. Yes because otherwise the appB unpickled data is not isomorphic with the pickled appA.Pickleable class instances.
One Solution: add a version attribute on each object, store it with every object, and discard it before testing equality by other attributes.
Another solution: include the source object version in the class name that gets stored with every pickled object instance, and try hard to make sure the dest object is the same.
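A sketch of the first solution — a version attribute stored with every instance and consulted in __setstate__ to migrate old pickles forward (the Record class and its v1 "val" field name are hypothetical):

```python
import pickle

class Record:
    VERSION = 2
    def __init__(self, value):
        self.version = self.VERSION  # stored with every pickled instance
        self.value = value
    def __setstate__(self, state):
        # Migrate older pickled states forward instead of loading them blindly
        if state.get("version", 1) < self.VERSION:
            state["value"] = state.pop("val", None)  # v1 called the field "val"
            state["version"] = self.VERSION
        self.__dict__.update(state)

r = pickle.loads(pickle.dumps(Record("x")))
assert (r.version, r.value) == (2, "x")
```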
Experimental test of the nonlocal energy alteration between two quantum memories
ScholarlyArticle: "Test of Nonlocal Energy Alteration between Two Quantum Memories" (2025) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
Notebooks as reusable Python programs
i wanted to like marimo, but the best notebook interface i've tried so far is vscode's interactive window [0]. the important thing is that it's a python file first, but you can divide up the code into cells to run in the jupyter kernel either all at once or interactively.
0: https://code.visualstudio.com/docs/python/jupyter-support-py
Spyder also has these, possibly for longer than vscode [0]. I don't know who had this idea first but I remember some vim plugins doing that long ago, so maybe the vim community?
[0] https://docs.spyder-ide.org/current/panes/editor.html#code-c...
Jupytext docs > The percent format: https://github.com/mwouts/jupytext/blob/main/docs/formats-sc... :
# %% [markdown]
# Another Markdown cell
# %%
# This is a code cell
    class A():
        def one():
            return 1
# %% Optional title [cell type] key="value"
MyST Markdown has: https://mystmd.org/guide/notebooks-with-markdown :

```{code-cell} LANGUAGE
:key: value
CODE TO BE EXECUTED
```
And:

---
kernelspec:
  name: javascript
  display_name: JavaScript
---
# Another markdown cell
```{code-cell} javascript
// This is a code cell
console.log("hello javascript kernel");
```
But also does not store the outputs in the markdown.

Thanks, this is a good read. I did not know MyST, it is very cool.
The vim plugin I was talking about was vim-slime [0], which seems to date from 2007 and does have regions delimited with #%%.
Slime comes from Emacs originally, but I could not find if the original Emacs slime has regions.
Matlab also has those, which they call code sections [1]. Hard to find when they were introduced. Maybe 2021, but I suspect older.
None of those stores the output of the command.
[0] https://vimawesome.com/plugin/vim-slime
[1] https://www.mathworks.com/help/matlab/matlab_prog/create-and...
The Spyder docs list other implementations of the percent format for notebooks as markdown and for delimiting runnable blocks within source code:
# %%
"^# %%"
Org-mode was released in 2003:
https://en.wikipedia.org/wiki/Org-mode

Org-mode supports code blocks: https://orgmode.org/manual/Structure-of-Code-Blocks.html :
#+BEGIN_SRC <language> and
#+END_SRC.
Literate programming; LaTeX: https://en.wikipedia.org/wiki/Literate_programming

Notebook interface; Markdown + MathTeX: https://en.wikipedia.org/wiki/Notebook_interface ;
$ \delta_{2 3 4} = 5 $
$$ \Delta_\text{3 4 5} = 56 $$
Show HN: Aiopandas – Async .apply() and .map() for Pandas, Faster API/LLMs Calls
Can this be merged into pandas?
Pandas does not currently install tqdm by default.
pandas-dev/pandas//pyproject.toml [project.optional-dependencies] https://github.com/pandas-dev/pandas/blob/8943c97c597677ae98...
Dask solves for various adjacent problems; IDK if pandas, dask, or dask-cudf would be faster with async?
Dask docs > Scheduling > Dask Distributed (local) https://docs.dask.org/en/stable/scheduling.html#dask-distrib... :
> Asynchronous Futures API
Dask docs > Deploy Dask Clusters; local multiprocessing poll, k8s (docker desktop, podman-desktop,), public and private clouds, dask-jobqueue (SLURM,), dask-mpi: https://docs.dask.org/en/stable/deploying.html#deploy-dask-c...
Dask docs > Dask DataFrame: https://docs.dask.org/en/stable/dataframe.html :
> Dask DataFrames are a collection of many pandas DataFrames.
> The API is the same. The execution is the same.
> [concurrent.futures and/or @dask.delayed]
tqdm.dask: https://tqdm.github.io/docs/dask/#tqdmdask .. tests/tests_pandas.py: https://github.com/tqdm/tqdm/blob/master/tests/tests_pandas.... , tests/tests_dask.py: https://github.com/tqdm/tqdm/blob/master/tests/tests_dask.py
tqdm with dask.distributed: https://github.com/tqdm/tqdm/issues/1230#issuecomment-222379... , not yet a PR: https://github.com/tqdm/tqdm/issues/278#issuecomment-5070062...
dask.diagnostics.progress: https://docs.dask.org/en/stable/diagnostics-local.html#progr...
dask.distributed.progress: https://docs.dask.org/en/stable/diagnostics-distributed.html...
dask-labextension runs in JupyterLab and has a parallel plot visualization of the dask task graph and progress through it: https://github.com/dask/dask-labextension
dask-jobqueue docs > Interactive Use > Viewing the Dask Dashboard: https://jobqueue.dask.org/en/latest/clusters-interactive.htm...
https://examples.dask.org/ > "Embarrassingly parallel Workloads" tutorial re: "three different ways of doing this with Dask: dask.delayed, concurrent.Futures, dask.bag": https://examples.dask.org/applications/embarrassingly-parall...
Thank you for the input! To be honest, I don’t use Dask often, and as a regular Pandas user, I don’t feel the most qualified to comment—but here we go.
Can this be merged into Pandas?
I’d be honored if something I built got incorporated into Pandas! That said, keeping aiopandas as a standalone package has the advantage of working with older Pandas versions, which is useful for workflows where upgrading isn’t feasible. I also can’t speak to the downstream implications of adding this directly into Pandas.
Pandas does not install tqdm by default.
That makes sense, and aiopandas doesn’t require tqdm either. You can pass any class with __init__, update, and close methods as the tqdm argument, and it will work the same. Keeping dependencies minimal helps avoid unnecessary breakage.
What about Dask?
I’m not a regular Dask user, so I can’t comment much on its internals. Dask already supports async coroutines (Dask Async API), but for simple async API calls or LLM requests, aiopandas is meant to be a lightweight extension of Pandas rather than a full-scale parallelization framework. If you’re already using Dask, it probably covers most of what you need, but if you’re just looking to add async support to Pandas without additional complexity, aiopandas might be a more lightweight option.
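The core pattern aiopandas wraps can be sketched with the stdlib alone — fan out one coroutine per row and gather the results in order (fetch here is a stand-in for a real API/LLM call):

```python
import asyncio

async def fetch(x):
    await asyncio.sleep(0)  # stand-in for an awaited API/LLM request
    return x * 2

async def amap(fn, items):
    # Run one coroutine per item concurrently, preserving input order
    return await asyncio.gather(*(fn(i) for i in items))

results = asyncio.run(amap(fetch, [1, 2, 3]))
assert results == [2, 4, 6]
```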
Fair benchmarks would justify merging aiopandas into pandas. Benchmark grid axes: aiopandas, dtype_backend="pyarrow", dask-cudf
pandas pyarrow docs: https://pandas.pydata.org/docs/dev/user_guide/pyarrow.html
/? async pyarrow: https://www.google.com/search?q=async+pyarrow
/? repo:apache/arrow async language:Python : https://github.com/search?q=repo%3Aapache%2Farrow+async+lang... :
test_flight_async.py https://github.com/apache/arrow/blob/main/python/pyarrow/tes...
pyarrow/src/arrow/python/async.h: https://github.com/apache/arrow/blob/main/python/pyarrow/src... : "Bind a Python callback to an arrow::Future."
--
dask-cudf: https://docs.rapids.ai/api/dask-cudf/stable/ :
> Neither Dask cuDF nor Dask DataFrame provide support for multi-GPU or multi-node execution on their own. You must also deploy a dask.distributed cluster to leverage multiple GPUs. We strongly recommend using Dask-CUDA to simplify the setup of the cluster, taking advantage of all features of the GPU and networking hardware.
cudf.pandas > FAQ > "When should I use cudf.pandas vs using the cuDF library directly?" https://docs.rapids.ai/api/cudf/stable/cudf_pandas/faq/#when... :
> cuDF implements a subset of the pandas API, while cudf.pandas will fall back automatically to pandas as needed.
> Can I use cudf.pandas with Dask or PySpark?
> [Not at this time, though you can change the dask df to e.g. cudf, which does not implement the full pandas dataframe API]
--
dask.distributed docs > Asynchronous Operation; re Tornado or asyncio: https://distributed.dask.org/en/latest/asynchronous.html#asy...
--
tqdm.dask, tqdm.notebook: https://github.com/tqdm/tqdm#ipythonjupyter-integration
    import time
    from tqdm.notebook import trange, tqdm
    for n in trange(10):
        time.sleep(1)
--

But then TPUs instead of or in addition to async GPUs;
TensorFlow TPU docs: https://www.tensorflow.org/guide/tpu
Optimization by Decoded Quantum Interferometry
ScholarlyArticle: "Optimization by Decoded Quantum Interferometry" (2025) https://arxiv.org/abs/2408.08292
NewsArticle: "Quantum Speedup Found for Huge Class of Hard Problems" (2025) https://www.quantamagazine.org/quantum-speedup-found-for-hug...
High-fidelity entanglement between telecom photon and room-temp quantum memory
ScholarlyArticle: "High-fidelity entanglement between a telecom photon and a room-temperature quantum memory" (2025) https://arxiv.org/html/2503.11564v1
NewsArticle: "Scientists Achieve Telecom-Compatible Quantum Entanglement with Room-Temperature Memory" (2025) https://thequantuminsider.com/2025/03/19/scientists-achieve-...
Sound that can bend itself through space, reaching only your ear in a crowd
Which quantum operators can be found in [ultrasonic acoustic] wave convergences?
Surround sound systems must be calibrated in order to place the sweet spots of least cacophony.
There also exist ultrasonic scalpels that enable noninvasive subcutaneous surgical procedures that function by wave guiding to cause convergence.
"Functional ultrasound through the skull" (2025) https://news.ycombinator.com/item?id=42086408
"Neurosurgeon pioneers Alzheimer's, addiction treatments using ultrasound [video]" (2024) https://news.ycombinator.com/item?id=39556615
Did the Particle Go Through the Two Slits, or Did the Wave Function?
According to modern QFT, there are no particles except as an approximation. There are no fields except as mathematical formalisms. There's no locality. There is instead some kind of interaction of graph nodes, representing quantum interactions, via "entanglement" and "decoherence".
In this model, there are no "split particle" paradoxes, because there are no entities that resemble the behavior of macroscopic bodies, with our intuitions about them.
Imagine a Fortran program, with some neat index-based FOR loops, and some per-element computations on a bunch of big arrays. When you look at its compiled form, you notice that the neat loops are now something weird, produced by automatic vectorization. If you try to find out how it runs, you notice that the CPU not only has several cores that run parts of the loop in parallel, but the very instructions in one core run out of order, while still preserving the data dependency invariants.
"But did the computation of X(I) run before or after the computation of X(I+1)?!", you ask in desperation. You cannot tell. It depends. The result is correct though, your program has no bugs and computes what it should. It's counter-intuitive, but the underlying hardware reality is counter-intuitive. It's not illogical or paradoxical though.
This is incorrect. There are particles. They are excitations in the field.
There still is the 'split particle paradox' because QFT does not solve the measurement problem.
The 'some kind of interaction of graph nodes' by which I am guessing you are referring to Feynman diagrams are not of a fundamental nature. They are an approximation known as 'perturbation theory'.
I think what they must be referring to is the fact that particles are only rigorously defined in the free theory. When coupling is introduced, how the free theory relates to the coupled theory depends on heuristic/formal assumptions.
We're leaving my area of understanding, but I believe Haag's theorem shows that the naïve approach, where the interacting and free theories share a Hilbert space, completely fails -- even stronger than that, _no_ Hilbert space could even support an interacting QFT (in the ways required by scattering theory). This is a pretty strong argument against the existence of particles except as asymptotic approximations.
Since we don't have consensus on a well-defined, non-perturbative gauge theory, mathematically speaking it's difficult to make any firm statements about what states "exist" in absolute. (I'm certain that people working on the various flavours of non-perturbative (but still heuristic) QFT -- like lattice QFT -- would have more insights about the internal structure of non-asymptotic interactions.)
Though it doesn't resolve whether a "quantum" is a particle or a measurable convergence of waves, electrons and photons are observed with high-speed imaging.
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054
There exist single photon emitters and single photon detectors.
Qualify that there are single photons if there are single photon emitters:
Single-photon source: https://en.wikipedia.org/wiki/Single-photon_source
QFT is not yet reconciled with (n-body) [quantum] gravity, which it has 100% error in predicting; no better than random chance.
IIRC, QFT cannot explain why superfluid helium walks up the sides of a container against gravity, given the mass of each particle/wave of the superfluid and of the beaker and the earth, sun, and moon; though we say that gravity at any given point is the net sum of directional vectors acting upon said given point, or actually gravitational waves with phase and amplitude.
You said "gauge theory",
"Topological gauge theory of vortices in type-III superconductors" https://news.ycombinator.com/item?id=41803662
From https://news.ycombinator.com/context?id=43081303 .. https://news.ycombinator.com/item?id=43310933 :
> Probably not gauge symmetry there, then.
Generate impressive-looking terminal output, look busy when stakeholders walk by
The amount of time I spent getting asciinema and agg to work with syntax highlighting, because IPython now has only Python Prompt Toolkit instead of readline.
In order to leave Python coding demo tut GIF/MP4 on their monitor(s) at conferences or science fairs.
stemkiosk arithmetic in Python GIF v0.1.2: https://github.com/stemkiosk/stemkiosk/blob/e8f54704c6de32fb...
Ask HN: Project Management Class Recommendations?
I need to improve my skills in defining project goals, steps, and timelines and aligning teams around them.
Can anyone recommend some of their favorite online courses in this area? Particularly in technical environments.
Justified methods?
"How did software get so reliable without proof? (1996) [pdf]" https://news.ycombinator.com/item?id=42425617
CPM: Critical Path Method, resource scheduling, agile complexity estimation, planning poker, WBS Work Breakdown Structure: https://news.ycombinator.com/item?id=33582264#33583666
Project network: https://en.wikipedia.org/wiki/Project_network
Project planning: https://en.wikipedia.org/wiki/Project_planning
PERT: Program evaluation and review technique: https://en.m.wikipedia.org/wiki/Program_evaluation_and_revie...
Project management > Approaches of project management: https://en.wikipedia.org/wiki/Project_management
PMI: Project Management Institute > Certifications: https://en.wikipedia.org/wiki/Project_Management_Institute#C...
PMBOK: Project Management Body of Knowledge > Contents: https://en.wikipedia.org/wiki/Project_Management_Body_of_Kno...
Agile software development > Methods: TDD, https://en.wikipedia.org/wiki/Agile_software_development
/? site:github.com inurl:awesome project management ; Books, Courses https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
Class Central > Project Management Courses and Certifications: https://www.classcentral.com/subject/project-management
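As a toy illustration of the critical-path idea behind the CPM and PERT links above: the project length is the longest-duration path through the task dependency DAG. The tasks, durations, and dependencies here are made up for the sketch.

```python
# Toy CPM sketch (made-up tasks/durations): compute each task's
# earliest finish time; the project length is the maximum, which
# falls on the critical path.
from functools import lru_cache

durations = {"spec": 2, "design": 3, "build": 5, "test": 2, "docs": 1}
deps = {"design": ["spec"], "build": ["design"],
        "test": ["build"], "docs": ["design"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # Earliest finish = own duration + latest earliest-finish among prerequisites.
    return durations[task] + max(
        (earliest_finish(d) for d in deps.get(task, [])), default=0)

project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 12: the critical path is spec -> design -> build -> test
```

Tasks off the critical path ("docs" here) have slack; shortening them does not shorten the project.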
Which processes scale without reengineering, once DevSecOps is automated with GitOps?
Brooks's law (observation) > Exceptions and possible solutions: https://en.wikipedia.org/wiki/Brooks%27s_law
Business process re-engineering > See also: https://en.wikipedia.org/wiki/Business_process_re-engineerin... :
Learning agenda: https://en.wikipedia.org/wiki/Learning_agenda , CLO
Agendas; Robert's Rules of Order has a procedure for motioning to add something to a meeting agenda.
Robert's Rules of Order: https://en.wikipedia.org/wiki/Robert%27s_Rules_of_Order
3 Questions; a status report to be ready with for team chat async or time bound: Since, Before, Obstacles
Stand-up meeting > Three questions: https://en.wikipedia.org/wiki/Stand-up_meeting#Three_questio...
Recursion kills: The story behind CVE-2024-8176 in libexpat
> Please leave recursion to math and keep it out of (in particular C) software: it kills and will kill again.
This is just nonsense. The issue is doing an unbounded amount of resource consuming work. Don't do an unbounded amount of resource consuming work, regardless of whether that work is expressed in a recursive or iterative form.
Any recursive function can be transformed into a tail recursive form, exchanging stack allocation for heap allocation. And any tail recursive program can be transformed into a loop (a trampoline). It's really not the language construct that is the issue here.
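A minimal Python sketch (not from the thread) of both transformations: a tail-recursive form that trades stack frames for heap-allocated thunks, and a trampoline that runs those thunks in a loop.

```python
def fact_rec(n):
    # Plain recursion: one native stack frame per call.
    return 1 if n == 0 else n * fact_rec(n - 1)

def fact_tail(n, acc=1):
    # Tail-recursive form: instead of calling directly, return a thunk
    # describing the next step (the "heap allocation" mentioned above).
    if n == 0:
        return acc
    return lambda: fact_tail(n - 1, acc * n)

def trampoline(step):
    # Run thunks in a loop; stack depth stays constant.
    while callable(step):
        step = step()
    return step

print(trampoline(fact_tail(5)))  # 120
big = trampoline(fact_tail(5000))  # past CPython's default recursion limit: no RecursionError
```

The recursive `fact_rec(5000)` would raise RecursionError under CPython's default limit; the trampolined form only grows the heap.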
> Any recursive function can be transformed into a tail recursive form, exchanging stack allocation for heap allocation.
Can't all recursive functions be transformed to stack-based algorithms? And then, generally, isn't there a flatter resource consumption curve for stack-based algorithms, unless the language supports tail recursion?
E.g. Python has sys.getrecursionlimit() == 1000 by default, RecursionError, collections.deque; and "Performance of the Python 3.14 tail-call interpreter": https://news.ycombinator.com/item?id=43317592
> Can't all recursive functions be transformed to stack-based algorithms?
Yes, you can explicitly manage the stack. You still consume memory at the same rate, but now it is a heap-allocated stack that is programmatically controlled, instead of a stack-allocated stack that is automatically controlled.
The issue is either:
* whatever the parser is doing is already tail recursive, but expressed in C in a way that consumes stack per call. In this case it's trivial, but perhaps tedious, to convert it to a form that doesn't use unbounded resources.
* whatever the parser is doing uses the intermediate results of the recursion, and hence is not trivially tail recursive. In this case any reformulation of the algorithm using different language constructs will continue to use unbounded resources, just heap instead of stack.
It's not clear how the issue was fixed. The only description in the blog post is "It used the same mechanism of delayed interpretation" which suggests they didn't translate the existing algorithm to a different form, but changed the algorithm itself to a different algorithm. (It reads like they went from eager to lazy evaluation.)
Either way, it's not recursion itself that is the problem.
> You still consume memory at the same rate, but now it is a heap-allocated stack that is programmatically controlled, instead of a stack-allocated stack that is automatically controlled
To be precise:
In C (and IIUC now Python, too), a stack frame is created for each function call (unless tail call optimization is applied by the compiler).
To avoid creating an unnecessary stack frame for each recursive call, instead create a collections.deque, add traversed nodes to either end of it, and process them in a loop within one non-recursive function.
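A sketch of that deque-based rewrite, using a made-up nested-list "tree" as the data structure:

```python
from collections import deque

def count_leaves_recursive(node):
    # Recursive form: depth limited by the native call stack
    # (sys.getrecursionlimit(), 1000 by default in CPython).
    if not isinstance(node, list):
        return 1
    return sum(count_leaves_recursive(child) for child in node)

def count_leaves_iterative(root):
    # Same traversal with an explicit heap-allocated stack:
    # memory grows at the same rate, but no RecursionError.
    stack = deque([root])
    leaves = 0
    while stack:
        node = stack.pop()
        if isinstance(node, list):
            stack.extend(node)
        else:
            leaves += 1
    return leaves

# Build a tree nested far deeper than the default recursion limit:
deep = [1]
for _ in range(10000):
    deep = [deep, 1]

print(count_leaves_iterative(deep))  # 10001; the recursive form raises RecursionError
```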
Is Tail Call Optimization faster than collections.deque in a loop in one function scope stack frame?
Yes, that is basically it. A tail call should be the same jump instruction as a loop, but performance really depends on the language implementation and is hard to make general statements about.
The Lost Art of Logarithms
Notes from "How should logarithms be taught?" (2021) https://news.ycombinator.com/item?id=28519356 re: logarithms in the Python standard library, NumPy, SymPy, TensorFlow, PyTorch, Wikipedia
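In that spirit, a few logarithm identities (change of base, product rule, power rule) checked numerically with the standard-library `math` module:

```python
import math

# Change of base, product, and power rules, checked numerically:
x, y, b = 8.0, 32.0, 2.0
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
assert math.isclose(math.log(x ** 3, b), 3 * math.log(x, b))

# math.log1p is more accurate than log(1 + t) when t is tiny:
t = 1e-12
print(math.log1p(t), math.log(1 + t))
```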
Jupyter JEP: AI Representation for tools that interact with notebooks
(TIL about) MCP: Model Context Protocol; https://modelcontextprotocol.io/introduction
MCP Specification: https://spec.modelcontextprotocol.io/specification/draft/
/? Model Context Protocol: https://hn.algolia.com/?q=Model+Context+Protocol
From https://github.com/jupyter/enhancement-proposals/pull/129#is... :
> Would a JSON-LD 'ified nbformat and `_repr_jsonld_()` solve for this too?
There's a (closed) issue to: "Add JSONLD @context to the top level .ipynb node nbformat#44"
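A hypothetical sketch of what a `_repr_jsonld_()` protocol might look like (the method name comes from the quoted comment; the class and the schema.org @context here are my assumptions, not part of the JEP):

```python
class NotebookCell:
    """Toy object sketching the proposed _repr_jsonld_() protocol."""

    def __init__(self, source):
        self.source = source

    def _repr_jsonld_(self):
        # Hypothetical: return a JSON-LD document describing the object.
        # schema.org is used for illustration; nbformat would define its own @context.
        return {
            "@context": "https://schema.org",
            "@type": "SoftwareSourceCode",
            "text": self.source,
        }

cell = NotebookCell("print('hello')")
print(cell._repr_jsonld_()["@type"])  # SoftwareSourceCode
```

Tools could then look for `_repr_jsonld_` the same way Jupyter's display machinery looks for `_repr_html_` and friends.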
Datoviz: High-Performance GPU Scientific Visualization Library with Vulkan
"Datoviz: high-performance GPU scientific data visualization C/C++/Python library" https://github.com/datoviz/datoviz
> In the long term, Datoviz will mostly be used as a VisPy 2.0 backend.
ctypes bindings for Python
Matplotlib and MATLAB colormaps
0.4: WebGPU, Jupyter
... jupyter-xeus and JupyterLite; https://github.com/jupyter-xeus/xeus
From https://news.ycombinator.com/item?id=43201706 :
> jupyter-xeus supports environment.yml with jupyterlite with packages from emscripten-forge [a Quetz repo built with rattler-build]
> emscripten-forge src: https://github.com/emscripten-forge/recipes/tree/main/recipe... web: https://repo.mamba.pm/emscripten-forge
High-performance computing, with much less code
> The researchers implemented a scheduling library with roughly 2,000 lines of code in Exo 2, encapsulating reusable optimizations that are linear-algebra specific and target-specific (AVX512, AVX2, Neon, and Gemmini hardware accelerators). This library consolidates scheduling efforts across more than 80 high-performance kernels with up to a dozen lines of code each, delivering performance comparable to, or better than, MKL, OpenBLAS, BLIS, and Halide.
exo-lang/exo: https://github.com/exo-lang/exo
Proposed Patches Would Allow Using Linux Kernel's Libperf from Python
> Those interested in the prospects of leveraging the libperf API from Python code can see this RFC patch series for all the details: https://lore.kernel.org/lkml/20250313075126.547881-1-gautam@... :
>> In this RFC series, we are introducing a C extension module to allow python programs to call the libperf API functions. Currently libperf can be used by C programs, but expanding the support to python is beneficial for python users.
Beta: Connect your Colab notebooks directly to Kaggle's Jupyter Servers
"Kaggle's notebook environment is now based on Colab's Docker image" https://news.ycombinator.com/item?id=42480582
kaggle/docker-python: https://github.com/Kaggle/docker-python
Google Colaboratory docs > Local runtimes: https://research.google.com/colaboratory/local-runtimes.html :
> https://us-docker.pkg.dev/colab-images/public/runtime
docker run --gpus=all -p 127.0.0.1:9000:8080 us-docker.pkg.dev/colab-images/public/runtime
Is Rust a good fit for business apps?
Rust is hard to get started with, but once you reach optimal development speed, I can't see how you can go back to any other language.
I have a production application that runs on Rust. It never crashed (yet), it's rock solid and does not require frequent restarts of the container due to memory leaks or whatever, and every time I need to fix something, I can jump into code after weeks of not touching it, confident that my changes won't break anything, as long as the code compiles (and is free of logical bugs).
I can't say the same about any other language, and I use a few of them in production: NodeJS, Ruby, Python, Java. Each of them has their own quirks, and I'm never 100% confident that changes in one place of the code won't cause harm in another place, or that the code is free of stupid bugs like null-pointer exceptions.
100% agreed. After writing Rust as my day job and using it in production for the last 2 years, my only criticism is the lack of a good high level standard library (like Go) and the insanity and fragmentation of Future type signatures.
Though at this point, I wish I could write everything in Rust - it's fantastic
Also, I can't wait to (practically) use it as a replacement for JavaScript on the web
So you can't write everything in rust? You mean like business apps? Just asking.
Last I checked, WASM support isn't quite there, yet, basically. I haven't checked in a little while, though.
re: noarch, emscripten-32, and/or emscripten-wasm32 WASM packages of rust on emscripten-forge a couple days ago. [1][2]
emscripten-forge is a package repo of conda packages for `linux-64 emscripten-32 emscripten-wasm32 osx-arm64 noarch` built with rattler-build and hosted with quetz: https://repo.mamba.pm/emscripten-forge
Evcxr is a Rust kernel for Jupyter; but jupyter-xeus is the new (C++) way to write jupyterlite kernels like xeus-sqlite, xeus-lua, and xeus-javascript
[1]: evcxr_jupyter > "jupyter lite and conda-forge feedstock(s)" https://github.com/evcxr/evcxr/issues/399
[2]: emscripten-forge > "recipes_emscripten/rust and evxcr_jupyter kernel" https://github.com/emscripten-forge/recipes/issues/1983
container2wasm c2w might already compile rust to WASI WASM? https://github.com/container2wasm/container2wasm
Trump's Big Bet: Americans Will Tolerate Downturn to Restore Manufacturing
Is there a reason to think a bunch of manufacturing suddenly pops up and pushes prices down?
I’m not convinced there’s any real strategy here.
The SDGs have Goals, Targets, and Indicators.
What are the goals for domestic manufacturing?
FRED series tagged "Manufacturing" https://fred.stlouisfed.org/tags/series?t=manufacturing
"Manufacturers' New Orders: Total Manufacturing (AMTMNO)" https://fred.stlouisfed.org/series/AMTMNO
The DuckDB Local UI
The UI aesthetics look similar to the excellent Rill, also powered by DuckDB: https://www.rilldata.com/
Rill has better built in visualizations and pivot tables and overall a polished product with open-source code in Go/Svelte. But the DuckDB UI has very nice Jupyter notebook-style "cells" for editing SQL queries.
Rill founder here, I have no comment on the UI similarity :) but I would emphasize our vision is building DuckDB-powered metrics layers and exploratory dashboards -- which we presented at DuckCon #6 last month, PDF below [1] -- and less on notebook style UIs like Hex and Jupyter.
Rill is fully open-source under the Apache license. [2]
[1] https://blobs.duckdb.org/events/duckcon6/mike-driscoll-rill-...
WhatTheDuck does SQL with duckdb-wasm
Pygwalker does open-source descriptive statistics and charts from pandas dataframes: https://github.com/Kanaries/pygwalker
ydata-profiling does open-source Exploratory Data Analysis (EDA) with Pandas and Spark DataFrames and integrates with various apps: https://github.com/ydataai/ydata-profiling #integrations, #use-cases
xeus-sqlite is a xeus kernel for jupyter and jupyterlite which has Vega visualizations for sql queries: https://github.com/jupyter-xeus/xeus-sqlite
jupyterlite-xeus installs packages specified in an environment.yml from emscripten-forge: https://jupyterlite-xeus.readthedocs.io/en/latest/environmen...
emscripten-forge has xeus-sqlite and pandas and numpy and so on; but not yet duckdb-wasm: https://repo.mamba.pm/emscripten-forge
duckdb-wasm "Feature Request: emscripten-forge package" https://github.com/duckdb/duckdb-wasm/discussions/1978
Scientists discover an RNA that repairs DNA damage
ScholarlyArticle: "NEAT1 promotes genome stability via m6A methylation-dependent regulation of CHD4" (2025) https://genesdev.cshlp.org/content/38/17-20/915
"Supercomputer draws molecular blueprint for repairing damaged DNA" (2025) https://news.ycombinator.com/item?id=43349021
Supercomputer draws molecular blueprint for repairing damaged DNA
"Molecular architecture and functional dynamics of the pre-incision complex in nucleotide excision repair" (2025) https://www.nature.com/articles/s41467-024-52860-y
Also, "Scientists discover an RNA that repairs DNA damage" (2025) https://news.ycombinator.com/item?id=43313781
"NEAT1 promotes genome stability via m6A methylation-dependent regulation of CHD4" (2025) https://genesdev.cshlp.org/content/38/17-20/915
Tunable superconductivity and Hall effect in a transition metal dichalcogenide
ScholarlyArticle: "Tunable superconductivity coexisting with the anomalous Hall effect in a transition metal dichalcogenide" (2025) https://www.nature.com/articles/s41467-025-56919-2
D-Wave First to Demonstrate Quantum Supremacy on Useful, Real-World Problem
ScholarlyArticle: "Beyond-classical computation in quantum simulation" (2025) https://www.science.org/doi/10.1126/science.ado6285
Move over graphene Scientists forge bismuthene and host of atoms-thick metals
From https://news.ycombinator.com/item?id=43337868 :
> The new bismuth-based transistor could revolutionize chip design, offering higher efficiency while bypassing silicon’s limitations [...]
Can bismuthene and similar be nanoimprinted?
Peer-to-peer file transfers in the browser
I keep a long list of browser based and CLI p2p file transfer tools[1].
LimeWire (which now has its own cryptocurrency, probably) has been on a rampage recently, acquiring some really good tools, including ShareDrop and SnapDrop. https://pairdrop.net/ is the last one standing so far.
[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...
/? inurl:awesome p2p site:github.com: https://google.com/search?q=inurl:awesome+p2p+site:github.co...
Peer-to-peer: https://en.wikipedia.org/wiki/Peer-to-peer
How to build your own replica of TARS from Interstellar
mujoco > "Renderer API" https://github.com/google-deepmind/mujoco/issues/2487 re: renderers in addition to Unity
Maybe a 3d robot on screen?
Low-power 2D gate-all-around logics via epitaxial monolithic 3D integration
ScholarlyArticle: "Low-power 2D gate-all-around logics via epitaxial monolithic 3D integration" (2025) https://www.nature.com/articles/s41563-025-02117-w
"China’s new silicon-free chip beats Intel with 40% more speed and 10% less energy" (2025) https://interestingengineering.com/innovation/chinas-chip-ru... :
> The new bismuth-based transistor could revolutionize chip design, offering higher efficiency while bypassing silicon’s limitations [...]
> Their newly developed 2D transistor is said to be 40% faster than the latest 3-nanometre silicon chips from Intel and TSMC while consuming 10% less energy. This innovation, they say, could allow China to bypass the challenges of silicon-based chipmaking entirely.
> “It is the fastest, most efficient transistor ever,” according to an official statement published last week on the PKU website.
Does Visual Studio rot the mind? (2005)
Absolutely it does. In the same way Socrates warned us about books, VS externalizes our memory and ability, and makes us reliant on a tool to accomplish something we have the ability to do without. This reliance goes even further, making us dependent on it as our natural ability withers from neglect.
I cannot put it more plainly: it incentivizes us to let a part of us atrophy. It would be like giving up the ability to run a mile because our reliance on cars weakened our legs and de-conditioned us to the point of making it physically impossible.
Cloudflare: New source of randomness just dropped
Another source of random entropy better than a wall of lava lamps:
>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
But is that good enough random for rngd to continually re-seed an rng with?
(Is our description of such measurable fluctuations in the quantum foam inadequate to predict what we're calling random?)
> google/paranoid_crypto.lib.randomness_tests: https://github.com/google/paranoid_crypto/tree/main/paranoid... .. docs: https://github.com/google/paranoid_crypto/blob/main/docs/ran...
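In the spirit of those test suites, here is a toy version of the NIST SP 800-22 frequency (monobit) test; a real assessment should use the linked suites, not this sketch:

```python
import math
import secrets

def monobit_p_value(bits):
    # NIST SP 800-22 frequency (monobit) test: for a good RNG, the
    # normalized sum of +/-1 bits should look like a standard Gaussian.
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

data = secrets.token_bytes(4096)
bits = [(byte >> i) & 1 for byte in data for i in range(8)]
p = monobit_p_value(bits)
print(p >= 0.01)  # a good source should pass (p >= 0.01) nearly always
```

A heavily biased stream (e.g. all ones) fails with a p-value near zero, while a perfectly balanced one scores 1.0.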
Stem cell therapy trial reverses "irreversible" damage to cornea
From https://news.ycombinator.com/item?id=37204123 (2023) :
> From "Sight for sore eyes" (2009) https://newsroom.unsw.edu.au/news/health/sight-sore-eyes :
>> "A contact lens-based technique for expansion and transplantation of autologous epithelial progenitors for ocular surface reconstruction" (2009) http://dx.doi.org/10.1097/TP.0b013e3181a4bbf2
>> In this study, they found that only one brand (Bausch and Lomb IIRC) of contact lens worked well as a scaffold for the SC
Good to see stem cell research in the US.
Stem cell laws and policy in the United States: https://en.m.wikipedia.org/wiki/Stem_cell_laws_and_policy_in...
If you witness a cardiac arrest, here's what to do
From https://news.ycombinator.com/item?id=39850383#39863280 :
> Basic life support (BLS) https://en.wikipedia.org/wiki/Basic_life_support :
>> DRSABCD: Danger, Response, Send for help, Airway, Breathing, CPR, Defibrillation
> "Drs. ABCD"
From "Defibrillation devices save lives using 1k times less electricity" (2024) https://news.ycombinator.com/item?id=42061556 :
> "New defib placement increases chance of surviving heart attack by 264%" (2024) https://newatlas.com/medical/defibrillator-pads-anterior-pos... :
>> Placing defibrillator pads on the chest and back, rather than the usual method of putting two on the chest, increases the odds of surviving an out-of-hospital cardiac arrest by more than two-and-a-half times, according to a new study.
From SBA.gov blog > "Review Your Workplace Safety Policies" (2019) https://www.sba.gov/blog/review-your-workplace-safety-polici... :
> Also, consider offering training for CPR to employees. Be sure to have an automatic external defibrillator (AED) on site and have employees trained on how to use it. The American Red Cross and various other organizations offer free or low-cost training.
Ask HN: Optical Tweezers for Neurovascular Resection?
Applications, Scale, Feasibility?
Optical tweezers: https://en.wikipedia.org/wiki/Optical_tweezers
NewsArticle: "Engineers create a chip-based tractor beam for biological particles" (2024) https://phys.org/news/2024-10-chip-based-tractor-biological-...
ScholarlyArticle: "Optical tweezing of microparticles and cells using silicon-photonics-based optical phased arrays" (2024) https://www.nature.com/articles/s41467-024-52273-x
Europe bets once again on RISC-V for supercomputing
China recently moved that direction. That would be nice collaboration to see between EU and China.
China to publish policy to boost RISC-V chip use nationwide, sources say https://www.reuters.com/technology/china-publish-policy-boos...
If you ignore the military ambitions of China and the fact they’re openly sharing technology with Russia, perhaps.
I don’t see anything but regret for Europe several decades from now if they decide to start providing China with the technical expertise they’re currently lacking in this space.
This is all about China trying to find a way to escape the pressure of sanctions from Europe and the US.
Didn't ARM start in Europe?
And RISC-V started at UC Berkeley in 2010.
RISC-V: https://en.wikipedia.org/wiki/RISC-V
"Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490 with nanoimprinting (which 10x's current gen nanolithography FWIU)
"Nanoimprint Lithography Aims to Take on EUV" (2025) https://news.ycombinator.com/item?id=42575111 :
> Called nanoimprint lithography (NIL), it’s capable of patterning circuit features as small as 14 nanometers—enabling logic chips on par with Intel, AMD, and Nvidia processors now in mass production.
Online Embedded Rust Simulator
TX for this! My older kid and I are learning Rust with Rustlings, but this will help me add a physical element to it (even though it's simulated), eventually we may get an esp32. Younger kid loves doing Microsoft microbit, he will definitely be interested in this.
Wokwi also supports Pi Pico w/ Python: https://news.ycombinator.com/item?id=38034530 , https://news.ycombinator.com/item?id=36970206
This kit connects a BBC Microbit v2 to a USB-chargeable Li-Ion battery on a mountable expansion board with connectors for Motors for LEGOs ® and a sonar:bit ultrasonic sensor: "ELECFREAKS micro:bit 32 IN 1 Wonder Building Kit, Programmable K12 Educational Learning Kit with [MOC blocks / Technics®] Building Blocks/Sensors/Wukong Expansion Board" https://shop.elecfreaks.com/products/elecfreaks-micro-bit-32...
There are docs on GitHub for the kit: https://github.com/elecfreaks/learn-en/tree/master/microbitK... .. web: https://wiki.elecfreaks.com/en/microbit/building-blocks/wond...
"Raspberry Pico Rust support" https://github.com/wokwi/wokwi-features/issues/469#issuecomm... :
> For future people who come across this issue: you can still simulate Rust on Raspberry Pi Pico with Wokwi, but you'll have to compile the firmware yourself. Then you can load it into Wokwi for VS Code or Wokwi for Intellij.
Scientists Confirm the Existence of 'Second Sound'
ScholarlyArticle: "Thermography of the superfluid transition in a strongly interacting Fermi gas" (2025) https://www.science.org/doi/10.1126/science.adg3430
New visualization but not a new phenomenon. I observed it in the 510 lab when I was getting my physics PhD at Cornell.
Armchair physicist with dissonance about superfluids (or Bose-Einstein condensates) which break all the existing models.
And my armchair physicist notes; /? superfluid https://westurner.github.io/hnlog/ #search=superfluid
I think I've probably already directly mentioned Fedi's?
Finally found this; https://news.ycombinator.com/item?id=42957014 :
> Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
- [ ] Models fluidic attractor systems
- [ ] Models superfluids
- [ ] Models n-body gravity in fluidic systems
- [ ] Models retrocausality
Re: gauge theory, superfluids: https://news.ycombinator.com/item?id=43081303
He said there's a newer version of this:
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
People Are Paying $5M and $1M to Dine with Donald Trump
Sleaziest thing I've ever heard!
Isn't that illegal influence peddling?
For the record, when Buffett auctions a meeting for pay it's for charity; and he's not on the clock as a public servant.
For context,
Change.org (2007) https://en.wikipedia.org/wiki/Change.org
We The People (2009-01) https://en.wikipedia.org/wiki/We_the_People_(petitioning_sys... :
> The right "to petition the Government for a redress of grievances" is guaranteed by the United States Constitution's First Amendment. [...]
> Overview > Thresholds: Under the Obama administration's rules, a petition had to reach 150 signatures (Dunbar's Number) within 30 days to be searchable on WhiteHouse.gov, according to Tom Cochran, former director of digital technology. [8] It had to reach 100,000 signatures within 30 days to receive an official response. [9] The original threshold was set at 5,000 signatures on September 1, 2011,[10] was raised to 25,000 on October 3, 2011, [11] and raised again to 100,000 as of January 15, 2013. [12] The White House typically would not comment when a petition concerned an ongoing investigation. [13]
> Sleaziest
Sorry, that's ad hominem (name calling) but not a valid argument.
Is it civilly or criminally illegal for the standing president to do "sell the plate" fundraisers for PACs that aren't kickbacks?
Where is the current record of such receipts?
Kickback (bribery): https://en.wikipedia.org/wiki/Kickback_(bribery)
Influence peddling: https://en.wikipedia.org/wiki/Influence_peddling
US Constitution > Article II > Section 4: https://constitution.congress.gov/constitution/article-2/#ar...
> The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
Impeachment in the United States: https://en.wikipedia.org/wiki/Impeachment_in_the_United_Stat... :
> The Constitution limits grounds of impeachment to "Treason, Bribery, or other high Crimes and Misdemeanors", [2] but does not itself define "high crimes and misdemeanors".
Presumably, Bribery needn't be defined in the Constitution because such laws and rules are the business of the Legislative and the Judicial branches, and such rules apply to all people.
Buffett is not a public servant in any way, on or off the clock. He is entirely a private citizen.
I agree. As a fund or holding company manager, Mr. Buffett has a fiduciary obligation to shareholders.
The president has a fiduciary obligation to the country.
https://harvardlawreview.org/print/vol-132/faithful-executio...
In terms of fiduciary and Constitutional - or Constitutional (including fiduciary) - obligations, is there any penalty for violating an Oath of Office?
US Constitution > Article II.S1.C8.1 "Oath of Office for the Presidency": https://constitution.congress.gov/browse/essay/artII-S1-C8-1... :
> Before he enter on the Execution of his Office, he shall take the following Oath or Affirmation:– I do solemnly swear (or affirm) that I will faithfully execute the Office of President of the United States, and will to the best of my Ability, preserve, protect and defend the Constitution of the United States.
US Constitution > Article II.S1.C8.1.5 "Violation of the Presidential Oath" https://constitution.congress.gov/browse/essay/artII-S1-C8-1...
Oath of office of the president of the United States: https://en.wikipedia.org/wiki/Oath_of_office_of_the_presiden...
Impeachment, removal from office, being barred from ever holding office again. Those are separate but need to happen in order.
That’s the limit of it, by design I think. The lack of criminality or even civil offense means the job is predicated on trust to achieve political goals. Once trust is lost, the individual must be removed from office or immense damage will ensue.
I've never heard of double jeopardy for impeachment; being impeached does not preclude criminal prosecution for the same offense.
FBI's assessment of presidential immunity should perhaps be reviewed in light of the court's recent ruling on the same, and the fact that they are an executive branch department of government. They work for the executive - as evidenced by the firing of James Comey - so we can't trust their assessment of his immunity.
AI tools are spotting errors in research papers: inside a growing movement
AI tools are introducing errors in research papers.
Wonder which is more common overall? Can AI spot more errors than it creates, or is it in equilibrium and a net zero?
Other than these quips, I am actually a fan of this movement. Anything that helps in the scientific process of peer review, as long as it does not actively annoy and delay the authors is a welcome addition to the process. Papers will be written, many of them incorrectly, with statistical or procedural errors. To have a checker that can find these quickly, ideally pre-publishing, is a great thing.
Also pro error finding with LLMs.
But concerned about tool dependency; if you can't do it without the AI, you can't support the code.
"LLMs cannot find reasoning errors, but can correct them" (2023) https://news.ycombinator.com/item?id=38353285
"New GitHub Copilot research finds 'downward pressure on code quality'" (2024) https://news.ycombinator.com/item?id=39168105
"AI generated code compounds technical debt" (2025) https://news.ycombinator.com/item?id=43185735 :
> “I don't think I have ever seen so much technical debt being created in such a short period of time"
Even if trained on only formally verified code, we should not expect LLMs to produce code that passes formal verification.
"Ask HN: Are there any objective measurements for AI model coding performance?" (2025) https://news.ycombinator.com/item?id=43206779
https://news.ycombinator.com/item?id=43061977#43069287 re: memory vulns, SAST DAST, Formal Verification, awesome-safety-critical
Covid-19 speeds up artery plaque growth, raising heart disease risk
From https://news.ycombinator.com/item?id=40681226 :
> "Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming" (2016) https://www.science.org/doi/10.1126/scitranslmed.aad6100
> "Powdered Booze Could Fix Your Clogged Arteries" (2016) https://www.popsci.com/compound-in-powdered-alcohol-can-also...
FWIU, beta-cyclodextrin is already FDA approved, and injection of betacyclodextrin reversed arterio/atherosclerosis; possibly because our arteries are caked with sugar alcohol and beta-cyclodextrin absorbs alcohol.
Asteroid fragments upend theory of how life on Earth bloomed
> Not only does Bennu contain all 5 of the nucleobases that form DNA and RNA on Earth and 14 of the 20 amino acids found in known proteins, the asteroid’s amino acids hold a surprise.
> On Earth, amino acids in living organisms predominantly have a ‘left-handed’ chemical structure. Bennu, however, contains nearly equal amounts of these structures and their ‘right-handed’, mirror-image forms, calling into question scientists’ hypothesis that asteroids similar to this one might have seeded life on Earth.
Hm, why would chirality need to be a consequence of the panspermia hypothesis? I thought the mission defined "necessary ingredients for life" as any biomarkers that might have seeded a primordial soup on Earth.
I don’t know a ton about how chirality works. Couldn’t it just be that half (or some fraction) of the asteroids contain left-handed molecules and half right-handed? We only have a sample of one. Or is there something fundamental about left-handed molecules that gives us reason to believe that if we see right-handed ones once, we would rarely see left-handed ones in similar disconnected systems?
Chirality means that there is a mirror image of a molecule that cannot be rotated into the original shape, despite having the same atoms and bonds. Due to the particular ways molecules tickle each other in living organisms to do interesting things, that means the mirror image (the enantiomer) of a molecule does something different.
In chemical synthesis, most (but not all) processes tend to preserve the chirality of molecules: replacing a bunch of atoms in a molecule with another set will tend not to flip the molecule into its mirror image. If you start from an achiral molecule (one whose mirror image can be rotated back to the original), almost all processes end up with a 50-50 mix of the two enantiomers of the product (a racemate).
In biochemistry, you can derive all of the amino acids and sugars from a single chiral molecule: glyceraldehyde. It turns out that nearly all amino acids end up in the form derived from L-glyceraldehyde and nearly all sugars come from D-glyceraldehyde. The question of why this is the case is the question of homochirality.
There's as yet no full answer to the question of homochirality. We do know that a slight excess of one enantiomer tends to amplify into a situation where only that enantiomer occurs. But we don't know if the breakdown into L-amino acids and D-sugars (as opposed to D-amino acids and L-sugars) happened by pure chance or if there is some specific reason that L-amino acids/D-sugars is preferred.
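That amplification claim can be illustrated with a toy version of Frank's classic autocatalysis model (a sketch for intuition only, with made-up unit rate constants): each enantiomer catalyzes its own production while the two destroy each other on contact, so a 1% initial excess runs away.

```python
def frank_model(L: float, D: float, steps: int = 800, dt: float = 0.01):
    """Toy Frank (1953) model: dL/dt = L - L*D, dD/dt = D - L*D.

    Each enantiomer is autocatalytic (the +L / +D terms) and the pair
    mutually annihilates (the -L*D terms); all rate constants set to 1.
    Forward-Euler integration, purely illustrative.
    """
    for _ in range(steps):
        dL = L - L * D
        dD = D - L * D
        L, D = L + dt * dL, D + dt * dD
    return L, D

# A 1% initial excess of the L form ends with L overwhelmingly dominant:
L, D = frank_model(1.01, 1.00)
```

By symmetry, starting with a 1% excess of D drives the system the other way; which enantiomer wins is decided entirely by the initial fluctuation.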
"I Applied Wavelet Transforms to AI and Found Hidden Structure" https://news.ycombinator.com/item?id=42956262 re: CODES, chirality, and chiral molecules whose chirality results in locomotion
Do any of these affect the fields that would have selected for molecules on Earth: the Sun's rotation, Earth's rotation, the direction of revolution in our by-now almost coplanar solar system, Galactic rotation?
Launch HN: Enhanced Radar (YC W25) – A safety net for air traffic control
Hey HN, we’re Eric and Kristian of Enhanced Radar. We’re working on making air travel safer by augmenting control services in our stressed airspace system.
Recent weeks have put aviation safety on everyone’s mind, but we’ve been thinking about this problem for years. Both of us are pilots — we have 2,500 hours of flight time between us. Eric flew professionally and holds a Gulfstream 280 type rating and both FAA and EASA certificates. Kristian flies recreationally, and before this worked on edge computer vision for satellites.
We know from our flying experience that air traffic management is imperfect (every pilot can tell stories of that one time…), so this felt like an obvious problem to work on.
Most accidents are the result of an overdetermined “accident chain” (https://code7700.com/accident_investigation.htm). The popular analogy here is the Swiss cheese model, where holes in every slice line up perfectly to cause an accident. Often, at least one link in that chain is human error.
We’ll avoid dissecting this year’s tragedies and take a close call from last April at DCA as an example:
The tower cleared JetBlue 1554 to depart on Runway 04, but simultaneously a ground controller on a different frequency cleared a Southwest jet to cross that same runway, putting them on a collision course. Controllers noticed the conflict unfolding and jumped in to yell at both aircraft to stop, avoiding a collision with about 8 seconds to spare (https://www.youtube.com/watch?v=yooJmu30DxY).
Importantly, the error that caused this incident occurred approximately 23 seconds before the conflict became obvious. In this scenario, a good solution would be a system that understands when an aircraft has been cleared to depart from a runway, and then makes sure no aircraft are cleared to cross (or are in fact crossing) that runway until the departing aircraft is wheels-up. And so on.
To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio. It’s trained on a large amount of our own labeled ATC audio collected from our VHF receivers located at airports around the US. We improve performance by injecting context such as airport layout details, nearby/relevant navaids, and information on all relevant aircraft captured via ADS-B.
Our product piggy-backs on the raw signal in the air (VHF radio from towers to pilots) by having our own antennas, radios, and software installed at the airport. This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.
Building models for processing ATC speech is our first step toward building a safety net that detects human error (by both pilots and ATC). The latest system transcribes the VHF control audio at ~1.1% WER (Word Error Rate), down from a previous record of ~9%. We’re using these transcripts with NLP and ADS-B (the system that tracks aircraft positions in real time) for readback detection (ensuring pilots correctly repeat ATC instructions) and command compliance.
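For readers unfamiliar with the metric: WER is the word-level edit distance divided by the reference length. A minimal sketch of scoring a transcript against a reference (illustrative only, not Yeager's pipeline):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution over five reference words -> 0.2:
score = wer("climb and maintain five thousand",
            "climb and maintain niner thousand")
```

A readback check could flag any instruction/readback pair whose score exceeds some threshold for controller review.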
There are different views about the future of ATC. Our product is naturally based on our own convictions and experience in the field. For example, it’s sometimes said that voice comms are going away — we think they aren’t (https://www.ericbutton.co/p/speech). People also point out that airplanes are going to fly themselves — in fact they already do. But passenger airlines, for example, will keep a pilot onboard (or on the ground) with ultimate control for a long time to come; the economics, the politics, and aviation's mind-boggling safety and legal standards make this inevitable. Also, while next-gen ATC systems like ASDE-X are already in place, they don’t eliminate the problem. The April 2024 scenario mentioned above occurred at DCA, an ASDE-X-equipped airport.
America has more than 5,000 public-use airports, but only 540 of these have control towers (due to cost). As a result, there are over 100 commercial airline routes that fly into uncontrolled airports, and 4.4M landings at these fields. Air traffic control from first principles looks significantly more automated, more remote-controlled, and much cheaper — and as a result, much more widespread.
We’ve known each other for 3 years, and decided independently that we needed to work on air traffic. Having started on this, we feel like it’s our mission for the next decade or two.
If you’re a pilot or an engineer who’s thought about this stuff, we’d love to get your input. We look forward to hearing everyone’s thoughts, questions, ideas!
Can any aircraft navigation system plot drone Remote ID beacons on a map?
How sensitive a sensor array is necessary to trilaterate Remote ID signals and birds for aircraft collision avoidance?
A Multispectral sensor array (standard) would probably be most robust.
From https://news.ycombinator.com/item?id=40276191 :
> Are there autopilot systems that do any sort of drone, bird, or other aerial object avoidance?
A lot of drones these days will have ADS-B. The ones that don't probably have geo-fencing to keep them away from airports. There's also all kinds of drone detection systems based on RF emittance.
The bird problem is a whole other issue. Mostly handled by PIREP today if birds are hanging out around an approach/departure path.
Computer vision here is definitely going to be useful long term.
FWIU geo-fencing was recently removed from one brand of drones.
Thermal: motors, chips, heatsinks, and batteries are warm but the air is colder around propellers; RF: motor RF, circuit RF, battery RF, control channel RF, video channel RF, RF from federally required Remote ID or ADS-B beacons, gravitational waves
Aircraft have less time to recover from e.g. engine and windshield failure at takeoff and landing; so all drones at airports must be authorized by ATC (Air Traffic Control): it is criminally illegal to fly a drone at an airport without authorization because it endangers others.
Tagging a bird on the 'dashcam'+radar+sensors feed could create a PIREP:
PIREP: Pilot's Report: https://en.wikipedia.org/wiki/Pilot_report
Looks like "birds" could be coded as /SK sky cover, /WX weather and visibility, or /RM remarks with the existing system described on Wikipedia.
Prometheus (originally developed at SoundCloud) does pull-style metrics: each monitored server serves, over HTTP(S), a document in the text-based Prometheus exposition format, and the centralized monitoring service scrapes it on its own schedule. This avoids swamping (or DoS'ing) the centralized monitoring service, which in a push-style system must scale with the number of incoming reports.
All metrics for the service are included in the one (1) Prometheus document, which prevents requests for monitoring data from exhausting the resources of the monitored server. It is up to the implementation to decide whether to omit a metric from the scrape when its sensor data is unavailable, or, for example, to fill forward with the previous value for that metric.
Solutions for birds around runways and in flight paths and around wind turbines:
- Lights
- Sounds: human audible, ultrasonic
- Thuds: birds take flight when the ground shakes
- Eyes: Paint large eyes on signs by the runways
> Sounds and Thuds [that scare birds away]
In "Glass Antenna Turns windows into 5G Base Stations" https://news.ycombinator.com/item?id=41592848 or a post linked thereunder, I mentioned ancient stone lingams on stone pedestals which apparently scare birds away from temples when they're turned.
/? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
Are some ancient stone lingams also piezoelectric voice transducer transmitters, given water over copper or gold between the lingam and pedestal and given the original shape of the stones? Also, stories of crystals mounted on pyramids and towers.
Could rotating large stones against stone scare birds away from runways?
Remote ID: https://en.wikipedia.org/wiki/Remote_ID
Airborne collision avoidance system: https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...
"Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?" https://news.ycombinator.com/item?id=42665458
Are there already "bird / not a bird" datasets?
Procedures for creating "bird on Multispectral plane radar and video" dataset(s):
Tag birds on the dashcam video with timecoded sensor data and a segmentation and annotation tool.
Pinch to zoom, auto-edge detect, classification probability, sensor status
voxel51/fiftyone does segmentation and annotation with video and possibly Multispectral data: https://github.com/voxel51/fiftyone
Oh, weather radar would help pilots too:
From https://news.ycombinator.com/item?id=43260690 :
> "The National Weather Service operates 160 weather radars across the U.S. and its territories. Radar detects the size and motion of particles in rain, snow, hail and dust, which helps meteorologists track where precipitation is falling. Radar can even indicate the presence of a tornado [...]"
From today, just now; fish my wish!
"Integrated sensing and communication based on space-time-coding metasurfaces" (2025-03) https://news.ycombinator.com/item?id=43261825
NASA uses GPS on the moon for the first time
> the Lunar GNSS Receiver Experiment (LuGRE) [is] one of the 10 projects packed aboard Blue Ghost. [...]
> However, LuGRE’s achievements didn’t only begin after touchdown on the moon. On January 21, the instrument broke NASA’s record for highest altitude GNSS signal acquisition at 209,900 miles from Earth while traveling to the moon. That record continued to rise during Blue Ghost’s journey over the ensuing days, peaking at 243,000 miles from Earth after reaching lunar orbit on February 20.
New Benchmark in Quantum Computational Advantage with 105-Qubit Processor
ScholarlyArticle: "Establishing a New Benchmark in Quantum Computational Advantage with 105-qubit Zuchongzhi 3.0 Processor" (2025) https://arxiv.org/abs/2412.11924
NewsArticle: "Superconducting quantum processor prototype operates 10^15 times faster than fastest supercomputer" https://phys.org/news/2025-03-superconducting-quantum-proces... :
> The quantum processor achieves a coherence time of 72 μs, a parallel single-qubit gate fidelity of 99.90%, a parallel two-qubit gate fidelity of 99.62%, and a parallel readout fidelity of 99.13%. The extended coherence time provides the necessary duration for performing more complex operations and computations. [...]
> To evaluate its capabilities, the team conducted an 83-qubit, 32-layer random circuit sampling task on the system. Compared to the current optimal classical algorithm, the computational speed surpasses that of the world's most powerful supercomputer by 15 orders of magnitude. Additionally, it outperforms the latest results published by Google in October of last year by 6 orders of magnitude, establishing the strongest quantum computational advantage in the superconducting system to date.
Integrated sensing and communication based on space-time-coding metasurfaces
ScholarlyArticle: "Integrated sensing and communication based on space-time-coding metasurfaces" (2025) https://www.nature.com/articles/s41467-025-57137-6
NewsArticle: "Space-time-coding metasurface could transform wireless networks with dual-functionality for 6G era" https://techxplore.com/news/2025-03-space-coding-metasurface...
How the U.K. broke its own economy
This video explains who paid for Brexit and Trump: "Canadian company tied to Brexit and Trump backers" (2019) https://youtube.com/watch?v=alNjJVpO8L4&
Cambridge Analytica > United Kingdom: https://en.wikipedia.org/wiki/Cambridge_Analytica#Channel_4_...
"New Evidence Emerges of Steve Bannon and Cambridge Analytica’s Role in Brexit" (2018) https://www.newyorker.com/news/news-desk/new-evidence-emerge...
"‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower" (2018) https://www.theguardian.com/news/2018/mar/17/data-war-whistl...
Facebook–Cambridge Analytica data scandal: https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana... ($4b FTC fine)
Furthermore,
"3.5M Voters Were Purged During 2024 Presidential Election [video]" (2025) https://news.ycombinator.com/item?id=43047349
They only "won" by like 1.5m votes.
Who's illegitimate? https://www.google.com/search?q=Trump+birtherism
Live Updates: China and Canada Retaliate Against New Trump Tariffs
I'm learning that when things go to shit smart money looks for opportunity.
Anyone want to start up a company selling pre-fab shipping-container houses with me?
More broadly, the kind of people who make money under tariff schemes already have investment managers.
I haven’t heard a Keynesian say these tariffs will achieve their stated purpose.
The US tariffs hurt US consumers and Canadian companies. The counter-tariffs do the reverse. However, the US tariffs seem to be all-product while the Canadian ones seem tailored to "encourage" selection of a non-US alternative (and are geo-targeted, as it were). Is the US ready to experience some pain? Those outside the US have been prepping for this sh*t for a while.
The counter-tariffs are usually explicitly chosen so that they don't hurt their own citizens while hurting red states, because alternative sources of the tariffed goods exist domestically or in other parts of the world. Americans need aluminum and steel and lumber and power. Canada does not need Kentucky whiskey and Harleys.
Kinda, yeah, and I get that everyone will have their pet good that they feel shouldn't be tariffed, but including fruits/vegetables/legumes very much hurts the consumer.
China seemed to focus on agricultural goods which it has been pushing for local industries for decades anyways. I think this was an olive branch in that it didn't really try to turn the screws?
Because of live updates, this snapshot is necessarily out of date: https://archive.ph/qK2mx
IKEA registered a Matter-over-Thread temperature sensor with the FCC
Matter (standard) https://en.wikipedia.org/wiki/Matter_(standard)
Connectivity Standards Alliance > Certified products: https://csa-iot.org/csa-iot_products/?p_keywords=Thermometer...
Nanoscale spin rectifiers for harvesting ambient radiofrequency energy
"Nanoscale spin rectifiers for harvesting ambient radiofrequency energy" (2024) https://www.nature.com/articles/s41928-024-01212-1
From "NUS researchers develop new battery-free technology to power electronic devices using ambient radiofrequency signals" (2024) https://news.nus.edu.sg/nus-researchers-develop-new-battery-... :
> To address these challenges, a team of NUS researchers, working in collaboration with scientists from Tohoku University (TU) in Japan and University of Messina (UNIME) in Italy, has developed a compact and sensitive rectifier technology that uses nanoscale spin-rectifier (SR) to convert ambient wireless radio frequency signals at power less than -20 dBm to a DC voltage.
> The team optimised SR devices and designed two configurations: 1) a single SR-based rectenna operational between -62 dBm and -20 dBm, and 2) an array of 10 SRs in series achieving 7.8% efficiency and zero-bias sensitivity of approximately 34,500 mV/mW. Integrating the SR-array into an energy harvesting module, they successfully powered a commercial temperature sensor at -27 dBm.
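To put the quoted dBm figures in absolute terms, power in milliwatts is 10^(dBm/10); a quick conversion sketch:

```python
def dbm_to_mw(dbm: float) -> float:
    """Convert a dBm power level to milliwatts: P(mW) = 10^(dBm/10)."""
    return 10 ** (dbm / 10)

# The rectenna's operating range, per the article:
upper = dbm_to_mw(-20)   # 0.01 mW = 10 microwatts
lower = dbm_to_mw(-62)   # ~0.63 nanowatts
sensor = dbm_to_mw(-27)  # ~2 microwatts powered the temperature sensor
```

So "ambient RF harvesting" here means usefully rectifying sub-nanowatt to low-microwatt signals.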
Passive Wi-Fi or "backscatter redirection Wi-Fi": https://en.wikipedia.org/wiki/Passive_Wi-Fi :
> The system used tens of microwatts of power, [2] one ten-thousandth (10^−4) the energy of conventional Wi-Fi devices, and one thousandth the energy of Bluetooth LE and Zigbee communications standards. [1]
Ask HN: Where are the good Markdown to PDF tools (that meet these requirements)?
I'm trying to convert a very large Markdown file (a couple hundred pages) to PDF.
It contains lots of code in code blocks and has a table of contents at the start with internal links to later pages.
I've tried lots of different Markdown-PDF converters like md2pdf and Pandoc, even trying converting it through LaTeX first; however, none of them produce working internal PDF links, offer effective syntax highlighting for HTML, CSS, JavaScript, and Python, or wrap code to fit it on the page.
I have a very long regular expression (email validation of course) that doesn't fit on one line but no solutions I have found properly break the lines on page overflow.
What tools does everyone recommend?
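Not a full answer, but one workaround is to pre-wrap overlong code-block lines before handing the file to any converter, so nothing has to break lines at render time. A sketch in Python (the wrap width and the trailing-backslash continuation marker are arbitrary choices):

```python
import textwrap

def wrap_code_blocks(markdown: str, width: int = 80) -> str:
    """Hard-wrap lines inside fenced code blocks to `width` columns.

    Long lines (e.g. a monster email-validation regex) are split with a
    trailing backslash so the break is visible in the rendered PDF.
    Prose outside the fences is left untouched.
    """
    out, in_fence = [], False
    for line in markdown.splitlines():
        if line.strip().startswith("```"):
            in_fence = not in_fence
            out.append(line)
        elif in_fence and len(line) > width:
            chunks = textwrap.wrap(line, width - 1, drop_whitespace=False,
                                   break_long_words=True)
            out.extend(c + "\\" for c in chunks[:-1])
            out.append(chunks[-1])
        else:
            out.append(line)
    return "\n".join(out)
```

Run this over the source first, and the "no tool wraps my regex" problem disappears for every converter at once.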
Understanding Smallpond and 3FS
smallpond: https://github.com/deepseek-ai/smallpond :
> A lightweight data processing framework built on DuckDB and 3FS.
[deleted]
SEC Declares Memecoins Are Not Subject to Oversight
By the SEC, or in law in the US because of DOGE?
Can QBT Qualified Blind Trusts own memecoins, or is that still a conflict of interest? https://news.ycombinator.com/item?id=43201808
Trump sets the direction of SEC enforcement and he's made it clear that he thinks he (and everyone else) should be able to rob normal people via cryptocurrency.
Rob Trump or a crony and I bet the rules of the game change fast.
These people do not have principles.
Collectible dolls aren't SEC regulated as investments. But they're still subject to laws protecting property rights.
Collectibles are also subject to financial reporting requirements that apply according to the USD value at time of purchase and sale.
As I said before in the past regarding "Elephant in the room: Quantum computers will destroy Bitcoin" https://news.ycombinator.com/item?id=43188345#43188777 :
> The market does not appear to cost infosec value, risk, or technical debt into cryptoasset prices.
> PQ or non-PQ does not predict asset price in 2025-02.
But what about DOGE?
Jeff Foxworthy, a comedian, once said:
> Sophisticated people invest their money in stock portfolios.
> Rednecks invest their money in commemorative plates.
Are NFTs or memecoins more similar to (NASCAR) commemorative plates?
That is the perfect analogy both for what NFTs are and who buys them. I’m stealing that.
So do we blame Biden or Trump for delisting XRP, or is SEC respectably independent?
What about CFTC and FTC and the CAT Consolidated Audit Trail; are collectibles over 10K exempt from KYC and AML there too?
(By comparison, banks put days-long holds on large checks.)
There's no such thing as an "independent" executive-branch agency. Congress can't by law create new branches of government.
Is that really true: isn't the Federal Reserve (intentionally designed to be) very independent from both Congress and the White House?
The Federal Reserve is a bit odd since it's a mix of private corporations & public governance. The Federal Reserve Board of Governors (BOG) is part of the government; the Federal Reserve Banks are private corporations whose officers' salaries are approved by the BOG but whose officers are elected by the Member Banks (banks in the US are required to be members). The Federal Open Market Committee (FOMC) is a mix of the BOG & some of the presidents of the various Federal Reserve Banks.
So the BOG isn't independent of the executive branch (it legally can't be), but the FOMC is partly independent since it's a mix of executive branch employees & private bank employees.
That's generally correct, but the issue is more about what kinds of powers are being exercised. Congress can create government-owned corporations, like Amtrak. Those corporations can function independently, insofar as they are not exercising executive powers.
The core function of the Fed isn't an executive power. It's not enforcing the law, or interacting with foreign countries. It's a bank that lends to other banks, and influences the market through that economic function. That's not an executive power and doesn't need to be subject to executive control.
The Fed also performs some executive functions (promulgating and enforcing various regulations). I'd argue those must be under executive control. But that doesn't address the Fed's core interest-rate-setting function.
The recent tariff spat by the tantrum-thrower is supposedly justified by the drug overdose rate.
All lives matter.
List of causes of death by rate: https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rat... :
> Substance abuse: 0.58 %
What about the other 99.42% of causes of death, in terms of federal priorities?
What about NIH and NSF funding for medical sciences research?
/? tariff authorization: https://tinyurl.com/certainplansforusall
"Are Drinking Straws Dangerous? (2017)" https://news.ycombinator.com/item?id=43041625
/? plastic straws executive order: https://www.google.com/search?q=plastic+straws+executive+ord...
The recently proposed budget would increase the deficit by 16 trillion dollars on 40 trillion?
Is an Amendment to the Constitution necessary to limit a right according to disability or to create an independent agency which is independent from the Executive (Justice), Legislative, and Judicial branches?
Define "Independent Agency"?
GSA General Services Administration: https://en.wikipedia.org/wiki/General_Services_Administratio... :
> The General Services Administration (GSA) is an independent agency of the United States government established in 1949 to help manage and support the basic functioning of federal agencies.
Independent agencies of the United States federal government: https://en.wikipedia.org/wiki/Independent_agencies_of_the_Un... :
> In the United States federal government, independent agencies are agencies that exist outside the federal executive departments (those headed by a Cabinet secretary) and the Executive Office of the President. [1] In a narrower sense, the term refers only to those independent agencies that, while considered part of the executive branch, have regulatory or rulemaking authority and are insulated from presidential control, usually because the president's power to dismiss the agency head or a member is limited.
> Established through separate statutes passed by Congress, each respective statutory grant of authority defines the goals the agency must work towards, as well as what substantive areas, if any, over which it may have the power of rulemaking. These agency rules (or regulations), when in force, have the power of federal law. [2]
However, like Acts of Congress and Executive Orders, such rules are not Constitutional Amendments.
> Examples of independent agencies: These agencies are not represented in the cabinet and are not part of the Executive Office of the president:
> [ Amtrak, CIA, FCC, FDIC, FEC, Federal Reserve, FERC, FTC, CFTC, SSA, TVA, NASA, NARA, OPM, ]
> Define "Independent Agency"?
I said independent executive-branch agency. The very first sentence of Article II says: "The executive Power shall be vested in a President of the United States of America." Congress can't create an agency that exercises "the executive Power" that's independent of the President. In the same way that Congress can't create an unelected mini-Congress that enacts laws binding on citizens, and can't create courts outside the judiciary branch that can convict people for federal crimes.
> [ Amtrak, CIA, FCC, FDIC, FEC, Federal Reserve, FERC, FTC, CFTC, SSA, TVA, NASA, NARA, OPM, ]
These entities all differ in whether they're exercising "executive power" or not. Amtrak doesn't meaningfully exercise executive power. Congress can provide for Amtrak to be independent of the President's control. Or to use another example, Congress could probably create a bank that provides student loans that's independent of Presidential control.
But the SEC is a quintessential executive-branch agency. It enacts rules that interpret the securities laws and can prosecute people for violations of securities laws.
> In a narrower sense, the term refers only to those independent agencies that, while considered part of the executive branch, have regulatory or rulemaking authority and are insulated from presidential control, usually because the president's power to dismiss the agency head or a member is limited.
Congress must confirm candidate appointees to independent agencies, otherwise they are not independent of the Executive.
Can the President terminate Congressionally-approved nominations without regard for their service? They can.
Is the President totally immune? They are not.
When can't the executive pardon themselves?
PEP 486 – Make the Python Launcher aware of virtual environments (2015)
It seems to me the py launcher could have done the same thing as uv, such as downloading and setting up Python and managing virtual environments.
It may be that there were already many distros of Python by the time that venv (and virtualenv and virtualenvwrapper) were written.
"PEP 3147 – PYC Repository Directories" https://peps.python.org/pep-3147/ :
> Linux distributions such as Ubuntu [4] and Debian [5] provide more than one Python version at the same time to their users. For example, Ubuntu 9.10 Karmic Koala users can install Python 2.5, 2.6, and 3.1, with Python 2.6 being the default. [...]
> Because these distributions cannot share pyc files, elaborate mechanisms have been developed to put the resulting pyc files in non-shared locations while the source code is still shared. Examples include the symlink-based Debian regimes python-support [8] and python-central [9]. These approaches make for much more complicated, fragile, inscrutable, and fragmented policies for delivering Python applications to a wide range of users. Arguably more users get Python from their operating system vendor than from upstream tarballs. Thus, solving this pyc sharing problem for CPython is a high priority for such vendors.
> This PEP proposes a solution to this problem.
> Proposal: Python’s import machinery is extended to write and search for byte code cache files in a single directory inside every Python package directory. This directory will be called __pycache__.
Should the package management tool also install multiple versions of the interpreter? conda, mamba, pixi, and uv do. Neither tox nor nox nor pytest cares where the Python install came from.
And then of course cibuildwheel builds binary wheels for Win/Mac/Lin and manylinux wheels for libc and/or musl libc. repairwheel, auditwheel, delocate, and delvewheel bundle shared library dependencies (.so and DLL) into the wheel, which is a .zip file with a .whl extension and a declarative manifest that doesn't require python code to run as the package installer.
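Since a wheel is just a zip archive with a declarative manifest, the standard library can build and inspect one; a sketch using a toy (hypothetical) package name:

```python
import io
import zipfile

def list_wheel(data: bytes) -> list[str]:
    """List the members of a wheel (.whl), which is a plain zip archive."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return zf.namelist()

# Build a toy wheel in memory: code plus the declarative dist-info metadata
# that installers read instead of running any setup code.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("demo/__init__.py", "")
    zf.writestr("demo-1.0.dist-info/METADATA",
                "Metadata-Version: 2.1\nName: demo\nVersion: 1.0\n")
    zf.writestr("demo-1.0.dist-info/WHEEL",
                "Wheel-Version: 1.0\nRoot-Is-Purelib: true\n")
members = list_wheel(buf.getvalue())
```

Tools like auditwheel and delocate work at exactly this level: they add vendored `.so`/`.dylib` members to the archive and rewrite the metadata, no Python execution required at install time.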
https://news.ycombinator.com/item?id=42347468
repairwheel: https://github.com/jvolkman/repairwheel :
> It includes pure-python replacements for external tools like patchelf, otool, install_name_tool, and codesign, so no non-python dependencies are required.
pip used to support virtualenvs.
pip 0.2 (2008) https://pypi.org/project/pip/0.2/ :
> pip is complementary with virtualenv, and it is encouraged that you use virtualenv to isolate your installation.
https://pypi.org/project/pip/0.2/#using-pip-with-virtualenv :
pip install -E venvpath/ pkg1 pkg2
When was the -E <virtualenv> flag removed from pip and why? pip install --help | grep "\-E"
"PEP 453 – Explicit bootstrapping of pip in Python installations" https://peps.python.org/pep-0453/#changes-to-virtual-environ... :
> Python 3.3 included a standard library approach to virtual Python environments through the venv module. Since its release it has become clear that very few users have been willing to use this feature directly, in part due to the lack of an installer present by default inside of the virtual environment. They have instead opted to continue using the virtualenv package which does include pip installed by default.
"why venv install old pip?" re: `python -m ensurepip && python -m pip install -U pip` https://github.com/python/cpython/issues/74813
> When was the -E <virtualenv> flag removed from pip and why?
Though `pip install --python=... pkg` won't work ( https://github.com/pypa/pip/pull/12068 ),
Now, there's
pip --python=$VIRTUAL_ENV/bin/python install pkg
GSA Eliminates 18F
Amazing how the idea of being able to file tax returns online, without paying for a commercial service to do it, is considered far-left extremism in the US.
You’d think that simplifying tax returns and reducing costs in the taxation system would be something the right could get behind?
no, the right wants you to keep paying Turbotax.
"Musk ally is moving to close office behind free tax filing program at IRS" (2025) https://news.ycombinator.com/item?id=43222216
18F is not the group behind Direct File, and it's very annoying that this narrative keeps getting spread. DF was created by a team of people from USDS, 18F and the IRS. It's housed within the IRS.
Inheriting is becoming nearly as important as working
People really should look at the work of Gary Stevenson here: https://youtu.be/TflnQb9E6lw
The truth is economic growth hasn’t been occurring in real terms for most people for a long time and the rich have been transferring money from the poor to themselves at a dramatic rate.
I’m starting to think the entire system is corrupt and we are headed for a destroyed Europe and a civil war in the US. Maybe I’m very pessimistic but this moment in history feels like the end of the American empire, what comes after this is extremely uncertain but people only seem to demand a fair piece of the wealth after a world war.
From https://news.ycombinator.com/item?id=43140675 :
> Gini Index: https://en.wikipedia.org/wiki/Gini_coefficient
Find 1980 on this chart of wealth inequality in the US:
> GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
Are there additional measures of wealth inequality?
Income distribution: https://en.wikipedia.org/wiki/Income_distribution
World Inequality Database: https://en.wikipedia.org/wiki/World_Inequality_Database
USA!: https://wid.world/country/usa/
Economic inequality: https://en.wikipedia.org/wiki/Economic_inequality :
> Economic inequality is an umbrella term for a) income inequality or distribution of income (how the total sum of money paid to people is distributed among them), b) wealth inequality or distribution of wealth (how the total sum of wealth owned by people is distributed among the owners), and c) consumption inequality (how the total sum of money spent by people is distributed among the spenders).
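The measures above can be made concrete: the Gini coefficient is the mean absolute difference between all pairs of incomes, normalized by twice the mean. A minimal sketch (the sample incomes are made up for illustration):

```python
def gini(incomes):
    """Gini coefficient via mean absolute difference:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

# Perfect equality -> 0; maximal concentration -> (n-1)/n
print(gini([1, 1, 1, 1]))                 # 0.0
print(round(gini([0, 0, 0, 100]), 3))     # 0.75
```

The FRED and WID series linked above are computed from much richer survey and tax data, but the underlying statistic is this simple.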
Musk ally is moving to close office behind free tax filing program at IRS
/? turbotax https://hn.algolia.com/?q=turbotax
"TurboTax’s 20-Year Fight to Stop Americans from Filing Taxes for Free" (2019) https://news.ycombinator.com/item?id=21281411
Hash Denial-of-Service Attack in Multiple QUIC Implementations
No!
> Out of these functions, only SipHash provides significant security guarantees in this context; crafting collisions for the first two can be done fairly efficiently.
You can also craft collisions against SipHash trivially. "Significant" is not significant here, and using SipHash would also make lookups much slower, comparable to a linked-list search.
The only fix for DoS attacks against hashtables is to fix the collision resolution method, e.g. from linear to logarithmic, or to detect such attacks (i.e. collision counting). Using a slower, better hash function is FUD, because every hash function will produce collisions in hash tables. Even SHA-256
https://github.com/rurban/smhasher/?tab=readme-ov-file#secur...
I know I've looked it up before, but for whatever reason I can't explain how open-addressing hashmaps work; how does a read know to check the next bucket after a collision on insert?
I know it's a dumb question, and that I've researched it before. Strange.
(I recall Python's disclosure of the hash-randomization vuln in its open-addressing hashing that resulted in the PYTHONHASHSEED env var, and then dict becoming insertion-ordered, like OrderedDict, by default.)
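To the open-addressing question above: lookup replays the same probe sequence as insert, stopping when it finds the key or an empty slot. A toy linear-probing sketch (not CPython's real dict, which uses a perturbed probe sequence, tombstones, and resizing):

```python
class LinearProbeMap:
    """Toy open-addressing hash map with linear probing.

    No deletion (needs tombstones) and no resizing, so don't fill it up.
    """
    EMPTY = object()

    def __init__(self, capacity=8):
        self.slots = [self.EMPTY] * capacity

    def _probe(self, key):
        # Insert and lookup walk the SAME slot sequence:
        # start at hash(key) % capacity, then step by 1, wrapping around.
        i = hash(key) % len(self.slots)
        while True:
            yield i
            i = (i + 1) % len(self.slots)

    def put(self, key, value):
        for i in self._probe(key):
            if self.slots[i] is self.EMPTY or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return

    def get(self, key):
        for i in self._probe(key):
            if self.slots[i] is self.EMPTY:
                raise KeyError(key)   # an empty slot ends the probe chain
            if self.slots[i][0] == key:
                return self.slots[i][1]

m = LinearProbeMap()
m.put("a", 1); m.put("b", 2)
assert m.get("b") == 2
```

Because reads stop at the first empty slot, naive deletion would break lookups for keys probed past the deleted slot; real implementations leave tombstone markers instead.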
/? The only fix for DOS attacks against hashtables is to fix the collision resolution method. https://www.google.com/search?q=The+only+fix+for+DOS+attacks...
"Defending Hash Tables from Subterfuge with Depth Charge" (2024) https://dl.acm.org/doi/fullHtml/10.1145/3631461.3631550
"Defending hash tables from algorithmic complexity attacks with resource burning" (2024) https://www.sciencedirect.com/science/article/abs/pii/S03043...
Hash collision: https://en.wikipedia.org/wiki/Hash_collision
Electric Propulsion Magnets Ready for Space Tests
How long will it be before this new space propulsion capability can be scaled in order to put the ISS on the moon instead of in the ocean?
Long after it already is. The ISS is aging, and there was every intention of retiring it even before its principal sponsors started a proxy war.
There's certainly zero chance that it could be on the moon. It wasn't designed to survive on a surface. It would not be able to support itself.
If we want something in orbit around the moon, it will still be far cheaper to build a new thing designed for that purpose.
It does seem a shame that it can't be recovered and donated to museums or preserved in some way. I assume that it would be prohibitively expensive/complicated to try to do so, but it's a huge part of the history of space research, and it's a bit of a bummer to just throw it away.
I mean, if Starship works well, it could theoretically retrieve the ISS in a piecemeal fashion.
Orbital refuelling of non-satellite spacecraft at anything beyond (IDK) a few km has not been demonstrated.
Rendezvousing with and pushing or pulling a ~925,000 lb (~420,000 kg) object in orbit has never been demonstrated.
In-orbit outer hull repair on a vessel with occupants has also never been demonstrated?
Are there NEO avoidance plans that do not involve fracturing the object into orbital debris on approach?
Violence alters human genes for generations, researchers discover
> The idea that trauma and violence can have repercussions into future generations should help people be more empathetic, help policymakers pay more attention to the problem of violence
This seems like a pretty charitable read on policymakers. We inflict violence all the time that has multigenerational downstream effects without a genetic component and we don’t really care about the human cost, why would adding a genetic component change anything?
Because later generations shouldn't be forced to pay the cost of violence that they didn't perpetrate or perpetuate.
To make it real when there's no compassion, loving kindness, or the golden rule:
War reparations: https://en.wikipedia.org/wiki/War_reparations
I think you may want to re-read the parent. They are saying that the reasons you gave (all of which were available before) barely restrained humankind from doing what it was/is/will be doing.
If compassion and war reparations are insufficient to deter unjust violence, what will change the reinforced behaviors?
You have to reach the children and rehabilitate them. This is the type of damage the children are growing up with (HBO documentary from 2004 which I highly recommend people watch, the journalist got fatally shot filming it):
https://www.youtube.com/watch?v=Isa5TRnidnk#t=30m30s
Counting the dead on your fingers. This has to be a rehabilitation effort, because it's just no way for children to talk and be.
Family therapy > Summary of theories and techniques: https://en.wikipedia.org/wiki/Family_therapy#Summary_of_theo...
Expressive therapies: https://en.wikipedia.org/wiki/Expressive_therapies
Systemic therapy: https://en.wikipedia.org/wiki/Systemic_therapy :
> Based largely on the work of anthropologists Gregory Bateson and Margaret Mead, this resulted in a shift towards what is known as "second-order cybernetics" which acknowledges the influence of the subjective observer in any study, essentially applying the principles of cybernetics to cybernetics – examining the examination.
"What are the treatment goals?"
Clean Language: https://en.wikipedia.org/wiki/Clean_language
...
In "Jack Ryan" (TV Series) Season 1 Episode 1, the children suffer from war trauma in their upbringing and that pervades their lives: https://en.wikipedia.org/wiki/Jack_Ryan_(TV_series)#Season_1...
In "Life is Beautiful" (1997) Italian children are trapped in war: https://en.wikipedia.org/wiki/Life_Is_Beautiful
Magneto from X-Men (with the helmet) > Fictional character biography > Early life: https://en.wikipedia.org/wiki/Magneto_(Marvel_Comics)#Early_...
"Chronicles of Narnia" (1939-1949) > Background and conception: https://en.wikipedia.org/wiki/The_Chronicles_of_Narnia#Backg...
War reparations gave us the Nazis, so they clearly don’t work. And compassion has given us everything we have seen thus far in history so we can conclude that too is ineffective.
Reparations certainly deterred further violent fascist statism in post WWII Germany.
Unfortunately the Berlin Wall.
WWI reparations were initially assessed by the Treaty of Versailles (1919) https://en.wikipedia.org/wiki/World_War_I_reparations
Dulles, Dawes Plan > Results: https://en.wikipedia.org/wiki/Dawes_Plan
> Dawes won the 1925 Nobel Peace Prize, and WWI reparations obligations were reduced
Then the US was lending them money and steel because it was so bad there, and then we learned they had been building tanks and bombs with our money instead of railroads and peaceful jobs.
Business collaboration with Nazi Germany > British, Swiss, US, Argentinian and Canadian banks: https://en.wikipedia.org/wiki/Business_collaboration_with_Na...
And then the free money rug was pulled out from under them, and then the ethnic group wouldn't sell their paintings to help pay the debts of the war and subsequent central economic mismanagement.
And then they invaded various continents, overextended themselves when they weren't successfully managing their own country's economy, and the Allied powers eventually found the art (and gold) and dropped the bomb developed by various ethnic groups in the desert and that was that.
Except for then WWII reparations: https://en.wikipedia.org/wiki/World_War_II_reparations
The US still stations troops in Germany, in Russia's neighborhood.
Trump was $400 million in debt to Deutsche Bank AG (of Germany, and now Russia) and had to personally guarantee said loans due to prior defaults. Nobody but Deutsche Bank would loan Trump (Trump Vodka, Trump University) money prior to 2016. Also Russian state banks like VEB, they forgot to mention.
Business projects of Donald Trump in Russia > Timeline of Trump business activities related to Russia: https://en.m.wikipedia.org/wiki/Business_projects_of_Donald_...
It looks like - despite attempted bribes of foreign heads of state with free apartment - there will not be a Trump Tower Moscow.
"Biden halts Trump-ordered US troops cuts in Germany" (2021) https://apnews.com/article/joe-biden-donald-trump-military-f...
> art (and gold)
"The Monuments Men" (2014) https://en.wikipedia.org/wiki/The_Monuments_Men
You could argue that’s the trauma expressing itself through derivative real experiences into the future.
It is the purpose of criminal and civil procedure to force parties that caused loss, violence, death and trauma to pay for their offenses.
We have a court system that is supposed to abide by Due Process so that there are costs to inflicting trauma (without perpetuating a vicious cycle).
[deleted]
Surgery implants tooth material in eye as scaffolding for lens
/? eye transplant https://hn.algolia.com/?q=eye+transplant
https://scholar.google.com/scholar?q=related:ZlcYhwhYqiUJ:sc...
From "Clinical and Scientific Considerations for Whole Eye Transplantation: An Ophthalmologist's Perspective" (2025) https://tvst.arvojournals.org/article.aspx?articleid=2802568 :
> Whereas advances in gene therapy, neurotrophic factor administration, and electric field stimulation have shown promise in preclinical optic nerve crush injury models, researchers have yet to demonstrate efficacy in optic nerve transection models—a model that more closely mimics WET. Moreover, directing long-distance axon growth past the optic chiasm is still challenging and has only been shown by a handful of approaches. [5–8]
> Another consideration is that even if RGC axons could jump across the severed nerve ending, it would be impossible to guarantee maintenance of the retinal-cortical map. For example, if the left eye were shifted clockwise during nerve coaptation, RGCs in the superior-nasal quadrant of donor retinas would end up synapsing with superior-temporal neurons in the host's geniculate nucleus. This limitation also plagues RGC-specific transplantation approaches; its effect on vision restoration is unknown.
"Combined Whole Eye and Face Transplant: Microsurgical Strategy and 1-Year Clinical Course" (2024) https://pubmed.ncbi.nlm.nih.gov/39250113/ :
> Abstract: [...] Serial electroretinography confirmed retinal responses to light in the transplanted eye. Using structural and functional magnetic resonance imaging, the integrity of the transplanted visual pathways and potential occipital cortical response to light stimulation of the transplanted eye was demonstrated. At 1 year post transplant (postoperative day 366), there was no perception of light in the transplanted eye.
"Technical Feasibility of Whole-eye Vascular Composite Allotransplantation: A Systematic Review" (2023) https://journals.lww.com/prsgo/fulltext/2023/04000/Technical... :
> With nervous coaptation, 82.9% of retinas had positive electroretinogram signals after surgery, indicating functional retinal cells after transplantation. Results on optic nerve function were inconclusive. Ocular-motor functionality was rarely addressed.
How to target NGF(s) to the optic nerve?
Magnets? RF convergence?
How to resect allotransplant and allograft optic nerve tissue?
How to stimulate neuronal growth in general?
Near-infrared stimulates neuronal growth and also there's red light therapy.
Nanotransfection stimulates tissue growth by in-vivo stroma reprogramming.
How to understand the optic nerve portion of the connectome?
The Visual and Auditory cortices are observed to be hierarchical.
Near-field imaging of [optic] nerves better than standard VEP Visual Evoked Potential tests would enable optimization of [optic nerve] transection.
VEP: https://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked...
Ophthalmologic science is important because - while it's possible to fight oxidation and aging - our eyes go.
Upper-atmospheric radiation is terrible on eyes. This could be a job for space medicine, and pilots.
Accommodating IOLs that resist UV damage better than natural tissue: Ocumetics
From "Portable low-field MRI scanners could revolutionize medical imaging" (2023) https://news.ycombinator.com/item?id=34990738 :
> Is MRI-level neuroimaging possible with just NIRS Near-Infrared Spectroscopy?
From "Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35886145 :
> So, to run the same [fMRI, NIRS,] stimulus response activation observation/burn-in again weeks or months later with the same subjects is likely necessary given Representational drift.
"Reversible optical data storage below the diffraction limit (2023)" [at cryogenic temperatures] https://news.ycombinator.com/item?id=38528844 :
> [...] have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
Optical tweezers operating below the Abbe diffraction limit are probably of use in resecting neurovascular tissue in the optic nerve (the retina and visual cortex)?
"Real-space nanophotonic field manipulation using non-perturbative light–matter coupling" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-1-1... :
> "One can write, erase, and rewrite an infinite number of times,"*
"Retinoid restores eye-specific brain responses in mice with retinal degeneration" (2022) https://news.ycombinator.com/item?id=33129531
Fluoxetine increases plasticity in the adult visual cortex; https://news.ycombinator.com/item?id=43079501
Zebrafish can regrow eyes.
From the "What if Eye...?" virtual eyes in a petri dish simulation: https://news.ycombinator.com/item?id=43044958 :
> [ mTor in Axolotls, ]
"Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://neurosciencenews.com/vision-restoration-genetic-2318... :
> “What’s interesting is that these Müller cells are known to reactivate and regenerate retina in fish,” she said. “But in mammals, including humans, they don’t normally do so, not after injury or disease. And we don’t yet fully understand why.”
/? Regenerative medicine for ophthalmologic applications
Medical treatments devised for war can quickly be implemented in US hospitals
> Our research, and that of others, found that too much oxygen can actually be harmful. Excess oxygen triggers oxidative stress – an overload of unstable molecules called free radicals that can damage healthy cells. That can lead to more inflammation, slower healing and even organ failure.
> In short, while oxygen is essential, more isn’t always better. [...]
> We discovered that severely injured patients often require less oxygen than previously believed. In fact, little or no supplemental oxygen is needed to safely care for 95% of these patients
Oxidative stress: https://en.wikipedia.org/wiki/Oxidative_stress
Antioxidant > Levels in food: https://en.wikipedia.org/wiki/Antioxidant#Levels_in_food
Anthocyanin antioxidants: https://www.google.com/search?q=anthocyanin+antioxidants
Deep sea divers know about oxygen toxicity;
Trimix (breathing gas) https://en.wikipedia.org/wiki/Trimix_(breathing_gas) :
> With a mixture of three gases it is possible to create mixes suitable for different depths or purposes by adjusting the proportions of each gas. Oxygen content can be optimised for the depth to limit the risk of toxicity, and the inert component balanced between nitrogen (which is cheap but narcotic) and helium (which is not narcotic and reduces work of breathing, but is more expensive and can increase heat loss).
> The mixture of helium and oxygen with a 0% nitrogen content is generally known as heliox. This is frequently used as a breathing gas in deep commercial diving operations, where it is often recycled to save the expensive helium component. Analysis of two-component gases is much simpler than three-component gases.
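The oxygen-depth tradeoff in the quote can be made concrete: ppO2 equals the oxygen fraction times ambient pressure, so each mix has a maximum operating depth. A sketch using the common 1.4 bar ppO2 limit and ~10 m of seawater per bar (the example mixes are illustrative, not dive-planning advice):

```python
def mod_meters(fo2, ppo2_max=1.4):
    """Maximum operating depth in seawater for an O2 fraction fo2,
    from ppO2 = fo2 * (depth/10 + 1) bar."""
    return (ppo2_max / fo2 - 1) * 10

print(round(mod_meters(0.21), 1))  # air (21% O2): 56.7 m
print(round(mod_meters(0.18), 1))  # a hypoxic trimix (18% O2): 67.8 m
```

Lowering the oxygen fraction pushes the toxicity limit deeper, which is exactly why deep trimix blends trade oxygen for helium.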
HFNC (High-Flow Nasal Cannula) oxygen therapy is recommended by various medical guidelines.
HFNC and prone positioning is one treatment protocol for COVID and ARDS (Acute Respiratory Distress Syndrome): you put patients on their stomachs and give them high-flow oxygen through a nasal cannula (instead of ventilating them on their backs).
Which treatment protocols and guidelines should be updated given these findings?
For which conditions is HFNC therapy advisable given these findings?
Heated humidified high-flow therapy: https://en.wikipedia.org/wiki/Heated_humidified_high-flow_th...
Netboot Windows 11 with iSCSI and iPXE
Hey Terin! Nice post!
I also netboot Windows this way, to run 20 machines in my house off the same base disk image, which we use for LAN parties. I have code and an extensive guide on GitHub:
https://github.com/kentonv/lanparty
It looks like you actually figured out something I failed at, though: installing Windows directly over iSCSI from the start. I instead installed to a local device, and then transferred the disk image to the server. I knew that building a WinPE environment with the right network drivers would probably help here, but I got frustrated trying to use the WinPE tools, which seemed to require learning a lot of obscure CLI commands (ironically, being Windows...).
You observed some slowness using Windows over the network. I did too, when I was doing it with 1G LAN, but I've found on 10G it pretty much feels the same as local.
BTW, a frustrating thing: The Windows 10->11 updater also seemingly fails to include network drivers and so you can't just upgrade over iSCSI. I'm still stuck on Windows 10 so I'm going to have to reinstall everything from scratch sometime this year. Maybe I'll follow your guide to use WinPE this time.
Hey Kenton!
I figured you had done something similar with the LAN Party House. If I hadn't figured it out I was going to ask/look for your setup.
> You observed some slowness using Windows over the network.
Mini-ITX makes it a bit difficult to upgrade to 10GbE (only one PCIe slot!), and the slowness isn't bad enough in-game to deal with upgrading it just yet.
> BTW, a frustrating thing: The Windows 10->11 updater also seemingly fails to include network drivers and so you can't just upgrade over iSCSI.
I've read (and also observed, now) that if you install directly on iSCSI Windows doesn't make the recovery partition. This evidently also breaks 10->11 upgrades.
God bless you for this, sir. I've been wanting to get Windows iSCSI boot working, but there's always one more thing. Did you get anything else fun working with iPXE? All the examples online seem so outdated.
If you get (i)pxe running, you can chain to https://netboot.xyz/ which lets you boot lots of open source stuff.
It's a bit of a mixed bag, because PXE environments have a way of not always being useful. On BIOS boot, there are tools from isolinux to load disk images into memory and hook the BIOS calls... but if your OS of choice doesn't use BIOS calls for storage, it needs a driver that can find the disk image in memory.
For UEFI boot, there's not a good way to do this; supposedly some UEFI environments can load disk images from the network, but AFAIK it's not something you can do from iPXE. Instead, for UEFI, the netboot.xyz folks have some other approaches: typically fetching the kernel and initrd separately or otherwise repackaging things rather than using official ISO images.
And I've run into lots of cases where while pxe seems to work, maybe the keyboard doesn't work in pxe, or something else doesn't get properly initialized and you end up having a better time if you give up and boot from USB.
System Rescue CD and Clonezilla are PXE-bootable.
"OneFileLinux: A 20MB Alpine metadistro that fits into the ESP" https://news.ycombinator.com/item?id=40915199 :
> Ventoy, signed EFIstubs, USI, UKI
TIL about https://netboot.xyz/
Elephant in the room: Quantum computers will destroy Bitcoin
Someone had to say it. Maybe the current drop is normies finally waking up and realizing that extrapolated accelerating developments in quantum computers will break encryption used in Bitcoin within 5 years.
It's also extremely naive to assume it will be easy to transfer a massive decentralized project to a post-quantum algorithm. Maybe new cryptos will be invented, but Bitcoin will not "retain value".
Things that will retain value if the entire internet is broken due to rapid deployment of quantum computers will be:
- Real estate
- Physical assets (gold, silver, etc)
- Physical stock certificates (printed on actual paper)
- Paper money
Since the internet, cards, and finance may just stop functioning one day as quantum computers break all encryption.
Feel free to prove me wrong.
There is a pending hard fork to PQ Post Quantum algorithms for all classical blockchains.
There will likely be different character lengths for account addresses and keys, so all of the DNS+HTTP web services and HTTP web forms built on top will need different form validation.
Vitalik Buterin presented on this subject a few years ago. Doubling key sizes may or may not be sufficient to limit the risk of quantum attacks on the elliptic curve encryption algorithms employed by Bitcoin and many other DLTs.
The Chromium browser now supports the ML-KEM (Kyber) PQ cipher.
Very few web servers have PQ ciphers enabled. It is as simple as changing a text configuration file to specify a different cipher on the webserver, once the ciphers are tested by time and money.
OpenSSH, for example, has shipped a hybrid PQ key exchange (sntrup761x25519-sha512) since release 9.0, though PQ signature algorithms are not yet merged in core there either.
There are PQ ciphers and there are PQ cryptographic hashes.
There are already PQ-resistant blockchains.
Should Bitcoin hard fork to double key sizes or to implement a PQ cipher and hash?
Spelunking for Bitcoin by generating all possible keys and checking their account balances is not prevented by PQ algorithms.
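On that spelunking point: enumeration isn't prevented (and doesn't need to be) because of the sheer size of the keyspace. A back-of-envelope sketch, assuming the ~2^256 secp256k1 private-key space and a made-up attacker rate:

```python
SECONDS_PER_YEAR = 31_557_600        # Julian year

keys = 2 ** 256                      # approx. secp256k1 private-key space
rate = 10 ** 18                      # hypothetical 10^18 keys checked per second
years = keys / rate / SECONDS_PER_YEAR
print(f"{years:.2e} years")          # → 3.67e+51 years
```

Even a quantum computer only Grover-accelerates unstructured search to ~2^128 steps; the practical quantum threat to Bitcoin is Shor's algorithm against exposed public keys, not key spelunking.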
Banking and Finance and Critical Infrastructure also need to upgrade to PQ ciphers. Like mining rigs, it is unlikely that existing devices can be upgraded with PQ software; we will need to buy new devices and recycle existing non-PQ devices.
If banks are on a 5-year IT refresh cycle, that means they need to be planning to upgrade everything to PQ 5 years or more before a quantum computer with a sufficient number of error-corrected qubits comes online for adversaries who steal cryptoassets from people on the internet.
> There is a pending hard fork to PQ Post Quantum algorithms for all classical blockchains.
Where is this pending HF for BTC?
How can all of this work realistically without a constant cat-and-mouse catch-up game?
> Spelunking for Bitcoin by generating all possible keys and checking their account balances is not prevented by PQ algorithms.
So there will be no protection against this? There is no protection possible?
Is there a PR yet, and also a BIP?
Other DLTs also have numbered document procedures for managing soft forks, hard forks, and changes to cipher and hash algorithms.
Litecoin, for example, is Bitcoin with the scrypt hash algorithm instead of SHA256.
Proof-of-Work mining rigs implement hashing algorithms in hardware as ASICs and FPGAs.
Stellar and Ripplenet validation servers still implement same in software FWIU?
Are there already ASICs for any of the PQ Hash and Cipher algorithms?
SSL termination is expensive but necessary for HTTPS Everywhere, and now for HSTS (HTTP Strict-Transport-Security) headers and the HSTS preload list.
Are there still ASIC or FPGA SSL accelerator cards that need to implement PQ ciphers and hashes?
Multisig and similar m:n smart contracts support requiring more keys for a transaction to complete.
Rather than in an account with one sk/pk pair, funds can be stored in escrow such that various conditions must be met before a transaction can move the funds.
Running a full node helps the network by keeping another copy online and synchronized. A full node can optionally also index by transaction id (with LevelDB).
The block and transaction messages can be logged and monitored. Though it doesn't cost anything to check a balance given authorized or forged keys.
Spending the money in a bitcoin account discloses the public key, whereas only the double hash of the pubkey is necessary to send money to an account or scriptHash.
The market does not appear to price infosec value, risk, or technical debt into cryptoasset prices.
PQ or non-PQ does not predict asset price in 2025-02.
Breach disclosures apparently hardly affect asset prices; which is unfortunate if we want them to limit their and our taxable losses.
An Experimental Study of Bitmap Compression vs. Inverted List Compression
ScholarlyArticle: "An Experimental Study of Bitmap Compression vs. Inverted List Compression" (2017) https://dl.acm.org/doi/10.1145/3035918.3064007
Inverted index > Compression: https://en.wikipedia.org/wiki/Inverted_index#Compression :
> For historical reasons, inverted list compression and bitmap compression were developed as separate lines of research, and only later were recognized as solving essentially the same problem. [7]
> and only later were recognized as solving essentially the same problem. [7]
"Hard problems that reduce to document ranking" https://news.ycombinator.com/item?id=43174910#43175540
Ctrl-F "zoo" https://westurner.github.io/hnlog/#comment-36839925 #:~:text=zoo :
> Complexity Zoo, Quantum Algorithm Zoo, Neural Network Zoo
Programming Language Zoo https://plzoo.andrej.com/
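The equivalence noted above (bitmaps vs. inverted lists) is easy to see on a toy posting list: the same document set can be stored as sorted IDs with delta-gaps or as one bit per document. A sketch (the doc IDs are made up):

```python
docs = [3, 7, 8, 20, 21, 22]   # matching doc IDs, universe of 24 docs

# Inverted list: sorted IDs, usually stored as small delta-gaps
# (then varint- or PForDelta-encoded in a real engine).
gaps = [docs[0]] + [b - a for a, b in zip(docs, docs[1:])]

# Bitmap: one bit per doc ID (packed into a single int here).
bitmap = 0
for d in docs:
    bitmap |= 1 << d

# Both encode the same set; runs of consecutive IDs show up as
# gap=1 in the list and as runs of 1-bits in the bitmap.
recovered = [i for i in range(24) if bitmap >> i & 1]
assert recovered == docs
print(gaps)                    # [3, 4, 1, 12, 1, 1]
```

Compressors on either side exploit the same structure (small gaps ↔ dense bit runs), which is the paper's point that the two lines of research solve essentially the same problem.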
IBM completes acquisition of HashiCorp
Hashicorp blog post: https://www.hashicorp.com/en/blog/hashicorp-officially-joins...
Jeff Bezos' revamp of 'Washington Post' opinions leads editor to quit
Here's a post linking to a BBC article about same that was flagged and censored here: https://news.ycombinator.com/item?id=43191562 ;
> the newspaper's opinion section will focus on supporting “personal liberties and free markets",
Free trade! Fair trade!
> and pieces opposing those views will not be published.
Boo, fascist corporate oligarchical censorship!
You might say, "actually that's not fascism, Bob" because fascism is when the government exercises control over the non-government-held corporations.
Fascism is like domming Apple's DEI policies when without Congress you can't make law, or telling people they should buy TikTok for you, with your name on all the checks.
Fascism: https://en.wikipedia.org/wiki/Fascism
I think the argument is that since the US has legal and well-established corruption, there's no difference. Meaning, a billionaire is easily more influential in legislation than a legislator, or even multiple legislators, so he is now inseparable from government and is an agent of the government.
At least they've disclosed their intent to impose editorial bias on the opinion section. It doesn't say "Fair and Balanced."
From "Fed to ban policymakers from owning individual stocks" (2021) https://news.ycombinator.com/item?id=28951646 :
> "Blind Trust" > "Use by US government officials to avoid conflicts of interest" https://en.wikipedia.org/wiki/Blind_trust :
>> The US federal government recognizes the "qualified blind trust" (QBT), as defined by the Ethics in Government Act and related regulations.[1] In order for a blind trust to be a QBT, the trustee must not be affiliated with, associated with, related to, or subject to the control or influence of the government official.
>> Because the assets initially placed in the QBT are known to the government official (who is both creator and beneficiary of the trust), these assets continue to pose a potential conflict of interest until they have been sold (or reduced to a value less than $1,000). New assets purchased by the trustee will not be disclosed to the government official, so they will not pose a conflict.
The Ethics in Government Act which created OGE was passed by Congress in 1978 in response to Watergate: https://en.wikipedia.org/wiki/Ethics_in_Government_Act
Should Type Theory (HoTT) Replace (ZFC) Set Theory as the Foundation of Math?
In a practical sense, hasn't type theory already replaced ZFC in the foundations of math? Lean is what working mathematicians currently use to formally prove theorems, and Lean is based on type theory rather than ZFC.
But HoTT is removed from lean core?
From https://news.ycombinator.com/item?id=42440016#42444882 :
> /? Hott in lean4 https://www.google.com/search?q=hott+in+lean4
https://github.com/forked-from-1kasper/ground_zero :
> Lean 4 HoTT Library
"Should Type Theory Replace Set Theory as the Foundation of Mathematics?" (2023) https://link.springer.com/article/10.1007/s10516-023-09676-0
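As a small illustration of the type-theoretic style Lean users work in (Lean 4 syntax; propositions are types and proofs are terms; HoTT-specific features like univalence come from libraries such as the one linked above, not Lean core):

```lean
-- A proposition is a type; proving it means constructing a term of that type.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Definitional equalities are closed by rfl.
example : 2 + 2 = 4 := rfl
```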
Show HN: Probly – Spreadsheets, Python, and AI in the browser
Probly was built to reduce context-switching between spreadsheet applications, Python notebooks, and AI tools. It’s a simple spreadsheet that lets you talk to your data. Need pandas analysis? Just ask in plain English, and the code runs right in your browser. Want a chart? Just ask.
While there are tools available in this space like TheBricks, Probly is a minimalist, open-source solution built with React, TypeScript, Next.js, Handsontable, Hyperformula, Apache Echarts, OpenAI, and Pyodide. It's still a work in progress, but it's already useful for my daily tasks.
TIL that Apache Echarts can generate WAI-ARIA accessible textual descriptions for charts and supports WebGL. https://echarts.apache.org/en/feature.html#aria
apache/echarts: https://github.com/apache/echarts
Marimo notebook has functionality like RxPY and ipyflow to auto-re-execute dependent cells FWIU: https://news.ycombinator.com/item?id=41404681#41406570 .. https://github.com/marimo-team/marimo/releases/tag/0.8.4 :
> With this release, it's now possible to create standalone notebook files that have package requirements embedded in them as a comment, using PEP 723's inline metadata
marimo-team/marimo: https://github.com/marimo-team/marimo
ipywidgets is another way to build event-based UIs in otherwise Reproducible notebooks.
datasette-lite doesn't work with jupyterlite and emscripten-forge yet FWIU; but does build SQLite in WASM with pyodide. https://github.com/simonw/datasette-lite
pygwalker: https://github.com/Kanaries/pygwalker .. https://news.ycombinator.com/item?id=35895899
How do you record manual interactions with ui controls and spreadsheet grids to code for reproducibility?
> "Generate code from GUI interactions; State restoration & Undo" https://github.com/Kanaries/pygwalker/issues/90
> The Scientific Method is testing, so testing (tests, assertions, fixtures) should be core to any scientific workflow system.
ipytest has a %%ipytest cell magic to run functions that start with test_ and subclasses of unittest.TestCase with the pytest test runner. https://github.com/chmp/ipytest
How can test functions with assertions be written with Probly?
Probly doesn't have built-in test assertion functionality yet, but since it runs Python (via Pyodide) directly in the browser, you can write test functions with assertions in your Python code. The execute_python_code tool in our system can run any valid Python code, including test functions.
This is something we're considering for future development, so this is a great shout!
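For example, a minimal assert-style test runs in plain Python (and hence in Pyodide) with no test runner at all; the helper below is a made-up stand-in for a spreadsheet-derived function:

```python
def normalize(values):
    """Scale a list of numbers so it sums to 1 (hypothetical helper)."""
    total = sum(values)
    return [v / total for v in values]

def test_normalize():
    out = normalize([2, 2, 4])
    assert out == [0.25, 0.25, 0.5]
    assert abs(sum(out) - 1.0) < 1e-9

test_normalize()   # raises AssertionError on failure, silent on success
print("ok")
```

A runner like pytest adds discovery and reporting on top, but the assertions themselves need nothing beyond the interpreter.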
Tests that can be copied or exported from a notebook into a .py module are advantageous for prototyping and reusability.
There are exploratory/discovery and explanatory forms and workflows for notebooks.
A typical notebook workflow: get it working with Ctrl-Enter and manually checking output; wrap it in functions with defined variable scopes and few module/notebook globals; write a test function that checks the output every time; write markdown and/or docstrings; and then decide what of this can be reused from regular modules.
nbdev has an 'export a notebook input cell to a .py module' feature, and formatted docstrings like sphinx-apidoc but in notebooks. IPython has `%pycat module.py` for pygments-style syntax highlighting of external .py files and `%save output.py` for saving input cells to a file, but there are not yet IPython magics to read from or write to specific lines within a file like nbdev.
To run the chmp/ipytest %%ipytest cell magic with line or branch coverage, it's necessary to `%pip install ipytest pytest-cov` (or `%conda install ipytest pytest-cov`)
jupyter-xeus supports environment.yml with jupyterlite with packages from emscripten-forge: https://jupyterlite-xeus.readthedocs.io/en/latest/environmen...
emscripten-forge src: https://github.com/emscripten-forge/recipes/tree/main/recipe... .. web: https://repo.mamba.pm/emscripten-forge
A Systematic Review of Quantum Computing in Finance and Blockchains
"From Portfolio Optimization to Quantum Blockchain and Security: A Systematic Review of Quantum Computing in Finance" (2023) https://arxiv.org/abs/2307.01155
The FFT Strikes Back: An Efficient Alternative to Self-Attention
Google introduced this idea in 2022 with "FNet: Mixing Tokens with Fourier Transforms" [0].
They later found that matrix multiplication on their TPUs outperformed the FFT in most scenarios.
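From the paper's description, the FNet mixing sublayer replaces self-attention with the real part of a 2D FFT over the sequence and hidden dimensions. A minimal NumPy sketch (shapes and names are illustrative):

```python
import numpy as np

def fnet_mixing(x):
    """FNet-style token mixing: FFT along the hidden dim and the sequence dim,
    keeping only the real part. Note there are no learned parameters."""
    return np.fft.fft2(x).real  # fft2 transforms the last two axes

x = np.random.randn(8, 16)  # (seq_len, d_model)
y = fnet_mixing(x)
assert y.shape == (8, 16)   # mixing preserves the shape
```

Unlike self-attention this is a fixed linear operator, which is what makes the TPU matmul-vs-FFT comparison meaningful.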
That seems like an odd comparison; specialty hardware is often better, right?
Hey, do DSPs have special hardware to help with FFTs? (I’m actually asking, this isn’t a rhetorical question, I haven’t used one of the things but it seems like it could vaguely be helpful).
(Discrete) Fast Fourier Transform implementations:
https://fftw.org/ ; FFTW: https://en.wikipedia.org/wiki/FFTW
gh topic: fftw: https://github.com/topics/fftw
xtensor-stack/xtensor-fftw is similar to numpy.fft: https://github.com/xtensor-stack/xtensor-fftw
FFTW-compatible interfaces: NVIDIA cuFFTW, amd/amd-fftw, and Intel MKL's FFTW wrappers
NVIDIA CuFFT (GPU FFT) https://docs.nvidia.com/cuda/cufft/index.html
ROCm/rocFFT (GPU FFT) https://github.com/ROCm/rocFFT .. docs: https://rocm.docs.amd.com/projects/rocFFT/en/latest/
AMD FFT, Intel FFT: https://www.google.com/search?q=AMD+FFT , https://www.google.com/search?q=Intel+FFT
project-gemmi/benchmarking-fft: https://github.com/project-gemmi/benchmarking-fft
"An FFT Accelerator Using Deeply-coupled RISC-V Instruction Set Extension for Arbitrary Number of Points" (2023) https://ieeexplore.ieee.org/document/10265722 :
> with data loading from either specially designed vector registers (V-mode) or RAM off-the-core (R-mode). The evaluation shows the proposed FFT acceleration scheme achieves a performance gain of 118 times in V-mode and 6.5 times in R-mode respectively, with only 16% power consumption required as compared to the vanilla NutShell RISC-V microprocessor
"CSIFA: A Configurable SRAM-based In-Memory FFT Accelerator" (2024) https://ieeexplore.ieee.org/abstract/document/10631146
/? dsp hardware FFT: https://www.google.com/search?q=dsp+hardware+fft
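As a sanity check on any of these libraries, a naive O(N^2) DFT can be compared against an FFT's output. A NumPy sketch (names are illustrative):

```python
import numpy as np

def dft(x):
    """Naive O(N^2) discrete Fourier transform, as a reference implementation."""
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[j, k] = exp(-2*pi*i*j*k / n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.randn(256)
# Same result as the O(N log N) FFT, up to floating-point error
assert np.allclose(dft(x), np.fft.fft(x))
```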
Ask HN: Who's been picking up trade deals due to US tariff threats?
Trump claimed tariffs are necessary under drug-war wartime authorizations, and threatened our immediate neighbors and BRICS with tariffs this year.
Which countries are winning trade and tech deals while the US is forcing itself to pay tariffs on imports to collectively punish others without due process?
Example: China is now buying Canadian oil (instead of Russian oil at a discount due to sanctions)
/? canada tariffs: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
"Trump tariffs inspire economic patriotism in Canada" https://news.ycombinator.com/item?id=42922237
FWIU they have regressed from free trade all the way to tariffs on Canada at 25% on everything but oil, which is at 10%?
And they uncancelled and re-approved the Keystone XL tar sands midwest plains liability.
As informal as it might seem to list places where the US might start losing out on trade, given the current political situation the last thing needed is a simple list that might fuel narcissistic outrage by threatening the golden one's genius plan, especially later on when (if?) it doesn't pan out and a new round of allocating blame and punishment begins. One cannot expect such people to innately understand why some reactions occur.
Just look at Musk's outburst after he burnt tw-tter, when the long-term advertisers figured the people they were selling to had probably moved on to other platforms and likewise pulled advertising from the new mess.
All we (the rest of the world) can do is hope that the initial tariffs imposed on trade will be enough to look like they're having a win.
"EU and Mexico revive stalled trade deal as Trump tariffs loom" https://www.reuters.com/world/eu-mexico-revive-stalled-trade...
FWIU the President of Mexico told Trump that they would be more concerned about migrants crossing their northern border if the US were more concerned about arms trafficked to Mexico.
That's sort of funny, but then again IMO the biggest difference to the immigration rate would be something the US doesn't seem to have - a fixed minimum rate of pay regardless of whether workers are US-born citizens or not.
Basic income experiments here have not succeeded FWIU; https://hn.algolia.com/?q=basic+income
They did not pay for the relief checks signed with his personal brand, or for the record relief-loan fraud, by cutting taxes; so that's still not paid off either.
Recipe for national debt: increase expenses and cut revenue.
"Starve the beast" said the Reaganites who increased taxes on the middle class and increased foreign war expenditures and future obligations: https://en.wikipedia.org/wiki/Starve_the_beast
"Where do people get that Reagan raised taxes 11 times? I don’t completely understand this." https://www.quora.com/Where-do-people-get-that-Reagan-raised...
"The Mostly Forgotten Tax Increases of 1982-1993" https://www.bloomberg.com/view/articles/2017-12-15/the-mostl...
Total Tax Receipts as a % of GDP would tell the story; but they also increased the debt limit 18 times.
"Federal Receipts as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/FYFRGDA188S
"Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
Combined chart: https://fred.stlouisfed.org/graph/?g=1DTZx
History of the US debt ceiling: https://en.wikipedia.org/wiki/History_of_the_United_States_d...
I don't think that debt financing onto future generations and starting conflicts that have since cost trillions made America great.
Would improving conditions in their home country change the immigration rate? But won't there still be climate refugees? It's hot, there's no water, there's no work.
Just watched an alarmed presentation on U-3 and U-6 as indicators of unemployment rate: U-6 includes underemployment at less than 25K/yr.
Ah, sorry for not being to the point; I forget just how many systems there are globally - I meant minimum wage payments.
The effect would not be immediate and would take years, though a tax break for businesses for each US citizen on their payroll would surely help speed it up. I have known too many US-based people who moan about the influx but are happy to have jobs done by illegals at a really dirt-cheap rate.
As an additional note, Australia had fair-wage pay as law for a great number of years, but somewhere along the line people plain forgot. Back in the 70s the country had Aussie-based workers who travelled the entire country, north to south and back again, following seasonal work in agriculture. However, the process for hiring an employee was a handful, and the red tape most DIY accounting folks didn't like dealing with meant that a simplified payment stream to a company providing labour was, and still is, a very attractive option, even though some of the procedures have been relaxed. (Also, until a decade or two ago, the employer would for years be paying into a black-hole fund for workers' compensation, even if their crop was wiped out or they otherwise wouldn't be needing any additional workers.)

In the 90s I got to witness all sorts of BS schemes to get non-Aussie workers into the workforce, like 457 visas so companies could fill jobs they couldn't fill locally. Eventually everyone and their dog cottoned on to the shenanigans and, for the most part, a lot went by the wayside. We still have noise here in Aussie land about how backpackers save this and that because Aussies are lazy and whatnot - nope, most here who grew up in the heat know that working in conditions too hot, or otherwise unsafe, has long-term consequences; it might be 5 years down the track before the kidney damage finally reaches a point where it needs to be addressed. Backpackers (tourists with a work visa who travel and work their way around a country) are typically long gone back in their homeland, so they're ideal for scam labour companies and the odd no-moral-compass farmer.
Andrew Yang (and Sam Altman) probably have a piece to say about basic income / wage subsidies in the context of robots and sophisticated AI taking our jobs, and tiny houses.
Migrant labor exploitation? What would our food cost? How many people does it take to mow a sand lawn?
I hope that the current hate for immigrants isn't much more than divisive political fearmongering and splitting, and that it will diminish when people regain their humility and self-respect, due to food prices and having a conscience.
Creating jobs on the other side of the border would probably be more cost-effective.
IDK, "Terminator: Dark Fate", "The Big Green", "Amistad", "McFarland, USA"
H1B competition in tech is fierce here, and the reason we don't get hired in our own country.
India and China have more gifted students than we have students.
Americans won't work the fields anymore; we'll pay Asia to manufacture robots and learn sustainable farming practices like no-till farming later.
The new sustainable aviation fuel subsidy (of the previous administration) requires sustainable farming practices so that we're not subsidizing unrealized losses due to soil depletion. It doesn't make sense to pay them to waste the land without consideration.
Your success with fighting desertification is inspiring. We're still not sure why the Colorado River is running dry for the Southwest, where it's always been dry and carelessly farmed and drilled. TIL about bunds, and about preventing water from flashing off due to compacted soil trashed by years of tilling and overgrazing.
I've a few friends who've traveled to Oceania for school and work.
There should be an Australian "The Office".
Welfare economics: https://en.wikipedia.org/wiki/Welfare_economics
I would never consider a fair minimum wage which is regardless of citizen or non citizenship as a subsidy.
As for supporting less developed countries -- around 1975 the formation of the Lima agreement[1] came to pass, a United Nations strategy to shift some manufacturing to less developed countries and share the wealth. I suspect that's yet another reason why some US companies expanded into South American countries.
As for A.I. bots taking over our simple repetitive jobs, or manual jobs like construction duties: due to complexity and unexpected job functions, it's not something that's going to be worth pursuing too hard. Construction tasks were the sort of jobs I was referencing that some people liked getting done for rock-bottom dollar by way of desperate immigrants; and though people do greatly benefit from really low rates when it's their own money and mostly-DIY projects, a bot replacement is a novel idea but ... an A.I. general-practice doctor bot will exist long (years) before that, since the data for medicating and solving health issues is a much, much larger data set, covering at least 50 years of information, most of which is still sort of relevant. As for construction - well, obviously one meets plenty of people who think it's fairly simple and that a few lines in a DIY book have it covered.

Yes, the minimum wage would mean the nature and scope of some people's personal (back yard) projects would change. Small business would obviously be impacted, but a smart govt would have something in place to balance out having to pay higher wages. Overall, it's better that people can work if they want to.
Food prices generally should not have a high labour-wage component when the produce leaves the farm gate - unless it's a back yard hobby situation. If a whopping big farm is crying foul that minimum wage rates are too high - so high they can't make a profit - 9/10 times they're doing something wrong, or they got / are getting fleeced when they sold / sell their produce to the markets. The act of fleecing a farmer or farm group is, IMO, immoral.
Actually (given the level of interest from a few people I've met or know), I would think a lot of people would love to work in the fields, so long as good farming practices are observed: safe conditions, not half killing themselves so that by day's end they are done for; a decent and liveable wage (i.e. not slave-labour rates); and time for themselves, with perhaps an hour's travel to local affordable further / higher education they can participate in part-time if they're so inclined.
We also hear the same -- that Aussies won't or don't want to work on a farm because [insert some BS excuse here], and that it's fortunate there are backpackers (young tourists) who are prepared to ... If the job is straight, pays OK, and has half-decent conditions, there will be no end of willing home-born applicants.
As for the sustainable aviation fuel subsidy - I would guess for most people it implies a focus on the more obvious oil-seed crops on otherwise good agricultural land which could be utilised for food or fibre production. But I think the trick is to work out what oil-producing crops could exist in marginal country - moreover a tree crop where it's feasible to drip-irrigate at or under the tree root, so that the very limited water isn't lost evaporating from the soil surface.
As for tackling desertification, sustainability is a new mantra in the farming sector. I can't recall any significant projects aimed at revegetating large tracts of arid land that I'd call a success. (There are, though, property owners who have done wonders to rehabilitate flogged country so it becomes more productive whilst ensuring a diverse landscape.) However, for fighting desertification I do recall, in the 90s or so, a very good example of the benefits of Permaculture as per Bill Mollison [2]: rehabilitating a small knob / small hill of ground, iirc somewhere in Africa, back into a scene of lush greenery which might have even included some banana trees, whilst the rest of the landscape was bleak and dry with a minimum of grassy vegetation.
>There should be an Australian "The Office".
Never really got into either version of The Office; however, something in the same vein but a little over the top is a workplace-based Aussie show -- Swift and Shift Couriers. [3]
[1] https://www.unido.org/sites/default/files/2014-04/Lima_Decla... [pdf]
I looked it up:
Wage subsidy: https://en.wikipedia.org/wiki/Wage_subsidy :
> A wage subsidy is a payment to workers by the state, made either directly or through their employers. Its purposes are to redistribute income and to obviate the welfare trap attributed to other forms of relief, thereby reducing unemployment. It is most naturally implemented as a modification to the income tax system.
Permaculture and Bill Mollison: perpetual spinach and Swiss Chard do well here. TIL there are videos about perennials and about companion planting. Potatoes in the soil below tomatoes, in food safe HDPE 5 gallon buckets with a patch cut into the side. But that's still plastic in the garden. Three Sisters: Corn, Beans, Squash.
Import substitution industrialization > Latin America: https://en.wikipedia.org/wiki/Import_substitution_industrial...
"The Biggest Mapping Mistake of All Time" re: Baja and the Sea of Cortez: https://youtube.com/watch?v=Hcq9_Tw2eTE&
Confessions of an Economic Hit Man is derided as untrue. See also links to "War is a Racket", which concludes with chapter "5. To Hell With War" https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit...
Structural adjustment > Criticisms: https://en.wikipedia.org/wiki/Structural_adjustment
Golden rule: https://en.wikipedia.org/wiki/Golden_Rule
The "Farming Simulator" game has the user operate multiple labor positions on the farm as concurrently as they can master.
We can't do similar backpacker-style field labor because there aren't travel visas that are also work visas, FWIU.
Volunteering while on a student visa is fine AFAIU. Non-commercial open source software development is also something that folks enjoying their travels can do for no money here.
Laser weeding is still fairly underutilized. There's not yet an organic standard for herb. Nicotine is an approved "organic" pesticide, for example.
Here we have Arbor Day Foundation, which will ship 10 free saplings for an easy annual tree survey.
TIL trees can be shipped domestically in a triangular cardboard carton, in a plastic bag, with a damp sheet of paper around their roots.
It may be the work of Greening Australia that I've heard about: https://en.wikipedia.org/wiki/Greening_Australia
One of a number of videos about planting trees on YouTube: "Australia's Remarkable Desert Regreening Project Success" https://youtube.com/watch?v=HRo0RG02Mg0&
I've heard that China has changed their approach to fighting desertification in the Gobi desert; toward a healthy diversity of undergrowth instead of a monoculture of trees; ecology instead of just planting rows of trees that won't last by comparison.
Half Moon Bunds look like they are working to regreen the Sahel. Bunds would require different farm machines than rototilling tractors that cause erosion.
"Mastering the Art of Half Moon Bunds" https://youtube.com/watch?v=XyH6dFlv9dk&
As a software dev with no real experience in hydrological engineering for sustainable agriculture, I've found Andrew Millison's light-board videos to be a great resource.
"Inside Africa's Food Forest Mega-Project" https://youtube.com/watch?v=xbBdIG--b58&
"Flight of the Conchords" (HBO) and "Eagle vs Shark" are from NZ.
From YouTube playlists: Derek from Veritasium, The Slow Mo Guys, The Engineering Mindset, Morello, Jade from upandatom, Sarah from Chuck, and Synthony are all from AU as I recall.
"Be Kind Rewind" (2008) https://en.wikipedia.org/wiki/Be_Kind_Rewind
The basic minimum wage [1] is a wage which doesn't need to be subsidised. Only when, as I described, eliminating dirt-poor / poverty-line pay rates from a region that had relied on them, would any govt perhaps need to offer a subsidy to employers to swallow the bitter pill of paying the fair rate to all workers. In Australia, employers are not entitled to any payment from the govt or other entities to continue to employ a long-term employee earning an existing wage. Some big companies in the past have had a bit of a whine and gone for a grab at some sort of handout, but -- after the last major abuse, where the govt helped a couple of poor companies out and got burnt when the sods moved overseas or restructured, effectively ending the jobs at risk that were the bargaining chip in the first instance ...
I enjoy meeting the few backpackers that visit my region and I've no issue with them working, but for a very long time I've informed any that I crossed paths with, that they are entitled to the same pay rate as anyone else here, ie don't get scammed -- iirc, all of them had been getting scammed one way or another.
I dislike (very understated) the majority of the labour-hire companies that were around arranging the on-farm work; they have in the past actively discriminated against regular Aussies looking for a job - since an Aussie is more likely to know what is right and wrong and what is not acceptable, and thus the labour hirer's long-running scam would soon come under threat. It took time, but now the govt here is looking at the bad contractors; yet we still hear noise pieces chirping that Aussies are lazy ... and ...
I am surprised there is no US visa that addresses backpackers -- I'd thought that in the free Western world, the small fraction of tourists who wanted to backpack their way across the country could do so, if they wanted that option and met the relevant criteria. As I understand it, there are generally rules such as that the prospective backpacker is not actually flat broke.
Trying to find a close-to-no-contact mechanical removal of weeds from a crop that is cheap and sustainable is somewhat the holy grail of farming. Seems like simple might win out in the end, though: small robotic platforms that traverse the field ... the existing models I've seen thus far are solar-powered / electric; they detect, identify, and then mechanically upset / remove the weed as the platform slowly travels its target paddock.
As for a massive project aimed at greening the Great Australian Desert [2], I'd not heard about it, though I don't mean to imply it's not happening - Australia is actually a pretty big place. However, the cover photo in the link suggests not desert proper but very marginal, flat, once-cleared-and-felled country whose soils were flogged after initial cropping; poor farming country that is best rehabilitated and re-vegetated with trees and shrubs.
There was actually a lot of conservation / rehabilitation work carried out in the 1920s, once it became very clear that the farming practices of the time did not suit the geology and climate of this country.
I think that as the years progress, Australia will see different farming systems come into play, like the alley cropping system. [3]
[1] https://en.wikipedia.org/wiki/Minimum_wage
[2] https://lifeboat.com/blog/2022/09/how-australia-is-regreenin...
[3] https://www.agrifarming.in/alley-cropping-system-functions-o...
Edit to fix links
Reaganomics > Policies: https://en.wikipedia.org/wiki/Reaganomics#Policies :
> The 1982 tax increase undid a third of the initial tax cut. In 1983 Reagan instituted a payroll tax increase on Social Security and Medicare hospital insurance. [25] In 1984 another bill was introduced that closed tax loopholes. According to tax historian Joseph Thorndike, the bills of 1982 and 1984 "constituted the biggest tax increase ever enacted during peacetime". [26]
Also,
>> said the Reaganites who increased taxes on the middle class and increased foreign war expenditures and future obligations:
To clarify, Reagan reduced taxes for the wealthiest the most, thus shifting the effective tax burden to the middle class and increasing inequality.
"Changes in poverty, income inequality, and the standard of living in the United States during the Reagan years" https://pubmed.ncbi.nlm.nih.gov/8500951/#:~:text=The%20rate%... :
> The rate of poverty at the end of Reagan's term was the same as in 1980. Cutbacks in income transfers during the Reagan years helped increase both poverty and inequality. Changes in tax policy helped increase inequality but reduced poverty.
Gini Index: https://en.wikipedia.org/wiki/Gini_coefficient
GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
"Make America Great Again": https://en.wikipedia.org/wiki/Make_America_Great_Again
At that time - during the 1970s and 1980s - Federal Debt as a percentage of GDP was much lower than it is today: 30-53% in the 1980s vs. about 120% of GDP in 2024:
> "Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
The need for memory safety standards
> Looking forward, we're also seeing exciting and promising developments in hardware. Technologies like ARM's Memory Tagging Extension (MTE) and the Capability Hardware Enhanced RISC Instructions (CHERI) architecture offer a complementary defense, particularly for existing code.
IIRC there's some way that a Python C extension can accidentally disable the NX bit for the whole process.. https://news.ycombinator.com/item?id=40474510#40486181 :
>>> IIRC, with CPython the NX bit doesn't work when any imported C extension has nested functions / trampolines
>> How should CPython support the mseal() syscall? [which was merged in Linux kernel 6.10]
> We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process. In addition, as outlined in our Secure by Design whitepaper and in our memory safety strategy, we are deeply committed to building security into the foundation of our products and services.
> That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.
Secureblue: https://github.com/secureblue/Trivalent has hardened_malloc.
Memory safety notes and Wikipedia concept URIs: https://news.ycombinator.com/item?id=33563857
...
A graded memory safety standard is one aspect of security.
> Tailor memory safety requirements based on need: The framework should establish different levels of safety assurance, akin to SLSA levels, recognizing that different applications have different security needs and cost constraints. Similarly, we likely need distinct guidance for developing new systems and improving existing codebases. For instance, we probably do not need every single piece of code to be formally proven. This allows for tailored security, ensuring appropriate levels of memory safety for various contexts.
> Enable objective assessment: The framework should define clear criteria and potentially metrics for assessing memory safety and compliance with a given level of assurance. The goal would be to objectively compare the memory safety assurance of different software components or systems, much like we assess energy efficiency today. This will move us beyond subjective claims and towards objective and comparable security properties across products.
Piezoelectric Catalyst Destroys Forever Chemicals
> The system’s energy requirements vary depending on the wastewater’s conditions, such as contaminant concentration, water matrix, or the customer’s discharge requirements. In one of Oxyle’s full-scale units that treats 10 cubic meters per hour, energy consumption measured less than 1 kilowatt-hour per cubic meter, according to the company.
> Using the flow of water, rather than electricity, to activate the reaction makes the method far more energy efficient than other approaches, says Mushtaq.
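Those two figures imply a modest power draw for the unit; a quick back-of-the-envelope check (values from the article):

```python
flow_m3_per_h = 10        # one full-scale Oxyle unit
energy_kwh_per_m3 = 1.0   # "less than 1 kilowatt-hour per cubic meter" (upper bound)

# kWh per hour is just kW, so power = flow * specific energy
power_kw = flow_m3_per_h * energy_kwh_per_m3
print(power_kw)  # -> 10.0, i.e. at most ~10 kW while treating 10 m^3/h
```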
Different approach with no energy/volume (kWhr/m^3) metric in the abstract:
"Photocatalytic C–F bond activation in small molecules and polyfluoroalkyl substances" (2024) https://www.nature.com/articles/s41586-024-08327-7 .. https://news.ycombinator.com/item?id=42444729
History of interactive theorem proving [pdf]
The most recent reference listed in this paper is "Truly modular (co)datatypes for Isabelle/HOL" (2014) .. https://scholar.google.com/scholar?cites=1837719396691256842...
Automated theorem proving > Related problems: https://en.wikipedia.org/wiki/Automated_theorem_proving#Rela...
awesome-code-llm > Coding for Reasoning: https://github.com/codefuse-ai/Awesome-Code-LLM#31-coding-fo...
New type of microscopy based on quantum sensors
"Optical widefield nuclear magnetic resonance microscopy" (2025) https://www.nature.com/articles/s41467-024-55003-5 :
> Crucially, each camera pixel records an NMR spectrum providing multicomponent information about the signal’s amplitude, phase, local magnetic field strengths, and gradients.
Show HN: jsonblog-schema – a JSON schema for making your blog from one file
JSON-LD or YAML-LD can be stored in the frontmatter in Markdown documents;
Schema.org is an RDFS schema with Classes and Properties:
https://schema.org/BlogPosting
Syntax examples can be found below the list of Properties on the "JSON-LD" tab.
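For example, a minimal sketch of YAML-LD frontmatter in a Markdown post, using schema.org BlogPosting properties (all field values here are illustrative):

```markdown
---
"@context": "https://schema.org"
"@type": "BlogPosting"
headline: "Hello, world"
datePublished: "2024-01-01"
author:
  "@type": "Person"
  name: "A. Author"
---

Post body in Markdown.
```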
The pydantic models for schema.org in lexiq-legal/pydantic_schemaorg aren't yet rebuilt for pydantic v2 FWIU: https://github.com/lexiq-legal/pydantic_schemaorg
W3C SHACL (Shapes Constraint Language) is the Linked Data schema validation spec, an alternative to JSON Schema, of which there are many implementations.
How core Git developers configure Git
skwp/git-workflows-book > .gitconfig appendix: https://github.com/skwp/git-workflows-book?tab=readme-ov-fil... :
[alias]
unstage = reset HEAD # remove files from index (tracking)
uncommit = reset --soft HEAD^ # go back before last commit, with files in uncommitted state
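The same aliases can be set from the command line rather than by editing .gitconfig directly. A sketch (here against a throwaway repo with `--local`; `--global` would write to ~/.gitconfig instead):

```shell
tmp=$(mktemp -d)
git -C "$tmp" init -q

# Define the two aliases from the book's appendix
git -C "$tmp" config --local alias.unstage 'reset HEAD'
git -C "$tmp" config --local alias.uncommit 'reset --soft HEAD^'

# Read one back to confirm
git -C "$tmp" config --local --get alias.unstage   # prints: reset HEAD
```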
https://learngitbranching.js.org/
charmbracelet/git-lfs-transfer: https://github.com/charmbracelet/git-lfs-transfer
jj-vcs/jj: https://github.com/jj-vcs/jj
Ggwave: Tiny Data-over-Sound Library
The acoustic modem is back in style [1]! And, of course, same frequencies (DTMF) [2], too!
DTMF has a special place in the phone signal chain (signal at these frequencies must be preserved, end to end, for dialing and menu selection), but I wonder if there's something more efficient using the "full" voice spectrum, with the various vocoders [3] in mind? Although it would be much creepier than hearing some tones.
[1] Touch tone based data communication, 1979: https://www.tinaja.com/ebooks/tvtcb.pdf
[2] touch tone frequency mapping: https://en.wikipedia.org/wiki/DTMF
[3] optimized encoders/decoders for human speech: https://vocal.com/voip/voip-vocoders/
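For reference, DTMF encodes each key as the sum of one "row" and one "column" sine. A minimal NumPy encoder/decoder sketch (sample rate, duration, and function names are illustrative):

```python
import numpy as np

ROW = [697, 770, 852, 941]        # DTMF low-group frequencies (Hz)
COL = [1209, 1336, 1477, 1633]    # DTMF high-group frequencies (Hz)
KEYS = "123A456B789C*0#D"         # keypad layout, row-major

def dtmf_tone(key, fs=8000, duration=0.2):
    """Sum of the key's row and column sinusoids."""
    i = KEYS.index(key)
    f_low, f_high = ROW[i // 4], COL[i % 4]
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * f_high * t)

def decode(tone, fs=8000):
    """Pick the strongest row and column frequency from the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(tone))
    freqs = np.fft.rfftfreq(len(tone), 1 / fs)
    def strongest(candidates):
        # energy in the FFT bin nearest each candidate frequency
        return max(candidates, key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])
    r, c = strongest(ROW), strongest(COL)
    return KEYS[ROW.index(r) * 4 + COL.index(c)]

print(decode(dtmf_tone("5")))  # -> 5
```

Real decoders typically use the Goertzel algorithm instead of a full FFT, since only eight frequencies need to be tested.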
"Using the Web Audio API to Make a Modem" (2017) https://news.ycombinator.com/item?id=15471723
Missouri woman pleads guilty to federal charge in plot to sell Graceland
Graceland > Tourist destination: https://en.wikipedia.org/wiki/Graceland
Category:Films about Elvis Presley: https://en.wikipedia.org/wiki/Category:Films_about_Elvis_Pre...
Trump Plans to Liquidate Public Lands to Finance Sovereign Wealth Fund
Once an SWF has accumulated wealth, that wealth is invested in stocks, bonds, real estate, and other financial instruments to earn even more money.
Any suggestions on what I should invest in now, knowing that a US government fund is imminent? Bitcoin? TSLA? $TRUMP? DOGE?
Or do we think that it's going to pump everything in the world?
Social Security Trust Fund: https://en.wikipedia.org/wiki/Social_Security_Trust_Fund :
> The Trust Fund is required by law to be invested in non-marketable securities issued and guaranteed by the "full faith and credit" of the federal government. These securities earn a market rate of interest.[5]
Government bond > United States: https://en.wikipedia.org/wiki/Government_bond#United_States
> [...] One projection scenario estimates that, by 2035, the Trust Fund could be exhausted. Thereafter, payroll taxes are projected to only cover approximately 83% of program obligations. [7]
> There have been various proposals to address this shortfall, including: reducing government expenditures, such as by raising the retirement age; tax increases; investment diversification [8] and, borrowing.
Robots, automation, AI, K12 Q12 STEM training, basic research, applied science
What should social security be invested in?
First, should social security ever be privatized? No, because they would steal it all in fees compared to HODL'ing Index Funds and commodities (as a hedge against inflation).
What should social security be invested in?
Low-risk investments in the interest of all of the people who plan to rely upon OASDI for retirement and accidental disability (OASDI: OA "Old Age", S "Survivors", D "Disability", I "Insurance")
What should a sovereign wealth fund be invested in?
We don't do "sovereign wealth funds" because we don't have a monarchy in the United States; we have a President, a temporary servant-leader role, and we have Congress, and they work together to prepare the budget.
The President of the United States has limited budgetary discretionary authority by design. OMB (Office of Management and Budget) must work with CBO (Congressional Budget Office) to prepare the budget.
US court upholds Theranos founder Elizabeth Holmes's conviction
I don't quite understand the reasoning for putting her in prison.
Yes, she deserves to be punished, but surely house arrest, community service etc makes more sense for a crime of this nature, rather than using tax payer money to house her for 9 years when she isn't a credible threat to society.
House arrest would make the math on “should I try fraud” lean heavily towards fraud I think.
Maybe even more so if you’ve got a nice house.
Fraud: https://en.wikipedia.org/wiki/Fraud
US Sentencing Commission > Fraud: https://www.ussc.gov/topic/fraud
What would deter fraud?
"Here’s a look inside Donald Trump’s $355 million civil fraud verdict" (2024) https://apnews.com/article/trump-fraud-letitia-james-new-yor...
"Trump hush money verdict: Guilty of all 34 counts" .. "Guilty: Trump becomes first former US president convicted of felony crimes" (2024) https://apnews.com/article/trump-trial-deliberations-jury-te...
"Trump mistakes [EJC] for [ex-wife]. #Shorts" https://youtube.com/shorts/0tq3rh6bh_8 .. https://youtu.be/lonTBp9h7Fo?si=77DIJMrpBRgLcsMK
I don’t know what that’s supposed to mean regarding the topic.
> House arrest would make the math on “should I try fraud” lean heavily towards fraud I think.
You argue that house arrest is an insufficient deterrent for the level of fraud committed by defendant A.
Is the sentencing for defendant A consistent with the US Sentencing Guidelines, and consistent with other defendants convicted of fraud?
> Maybe even more so if you’ve got a nice house.
Defendant B apparently isn't even on house arrest, and apparently sent someone else to their civil rape deposition obstructively and fraudulently.
The fact that different cases play out differently, some possibly unwise and unjust is no surprise to me.
"Lock her up!" He shouted about her. https://youtu.be/wS_Nrz5dNeU?si=XasLFHXQygx7IgSw ... /? lock her up: https://www.youtube.com/results?sp=mAEA&search_query=Lock+he...
Hard problems that reduce to document ranking
Ranking (information retrieval) https://en.wikipedia.org/wiki/Ranking_(information_retrieval...
awesome-generative-information-retrieval > Re-ranking: https://github.com/gabriben/awesome-generative-information-r...
Nixon's Revolutionary Vision for American Governance (2017)
/? nixon https://hn.algolia.com/?q=nixon ...
"New evidence that Nixon sabotaged 1968 Vietnam peace deal" (2017) https://news.ycombinator.com/item?id=13296696
(Watergate (1972-1974): https://en.wikipedia.org/wiki/Watergate_scandal )
History repeats itself!
Eisenhower, JFK, LBJ, Ford, Nixon, Carter, Reagan
https://news.ycombinator.com/item?id=42547326 .. Iran hostage crisis (1979-1981): https://en.wikipedia.org/wiki/Iran_hostage_crisis :
> The hostages were formally released into United States custody the day after the signing of the Algiers Accords, just minutes after American President Ronald Reagan was sworn into office
1980 October Surprise theory: https://en.wikipedia.org/wiki/1980_October_Surprise_theory
The Wrongs of Thomas More
Hey, Thomas More!
"The Saint" (1997) https://en.wikipedia.org/wiki/The_Saint_(1997_film) :
> Using the alias "Thomas More", Simon poses as
Larry Ellison's half-billion-dollar quest to change farming
Notes for Lanai island from an amateur:
Hemp is useful for soil remediation because it's so absorbent, which is part of why testing is important.
Is there already a composting business?
Do the schools etc. already compost waste food?
"Show HN: We open-sourced our compost monitoring tech" https://news.ycombinator.com/item?id=42201207
Canadian greenhouses? Chinese-Mongolian-Canadian greenhouses are wing shaped and set against a berm;
"Passive Solar Greenhouse Technology From China?" https://youtube.com/watch?v=FOgyK6Jieq0&
Transparent wood requires extracting the lignin.
Transparent aluminum requires a production process, too.
There are polycarbonate hurricane panels.
Reflective material on one wall of the wallipini greenhouse (and geothermal) is enough to grow citrus fruit through the winter in Alliance, Nebraska. https://news.ycombinator.com/item?id=39927538
Glass allows more wavelengths of light through than plastic or recyclable polycarbonate, including UV-C, which is sanitizing.
Hydrogen peroxide cleans out fish tanks FWIU.
Various plastics are food safe, but not when they've been in the sun all day.
To make aircrete, you add soap bubbles to concrete with an air compressor.
/? aircrete dome build in HI and ground anchors
Catalan masonry vault roofs (in Spain, Italy, Mexico, and Arizona) are strong, don't require temporary arches, and passively cool most efficiently when they have an oculus to let the heat rise out of the dome to openable vents to the wind.
U.S. state and territory temperature extremes: https://en.wikipedia.org/wiki/U.S._state_and_territory_tempe... :
> Hawaii: 15 °F (−9.4 °C) to 100 °F (37.8 °C)
> Nebraska: −47 °F (−43.9 °C) to 118 °F (47.8 °C)
"140-year-old ocean heat tech could supply islands with limitless energy" https://news.ycombinator.com/item?id=38222695 :
> OTEC: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio...
North Carolina is ranked 4th in solar, has solar farms, and has hurricanes (and totally round homes on stilts). FWIU there are hurricane-rated solar panels, flexible racks, ground mounts.
https://insideclimatenews.org/news/20092018/hurricane-floren...
"Sargablock: Bricks from Seaweed" https://news.ycombinator.com/item?id=37188180
"Turning pineapple skins into soap and other cleaning products" https://www.businessinsider.com/turning-pineapple-skins-into...
"Costa Rica Let a Juice Company Dump Their Orange Peels in the Forest—and It Helped" https://www.smithsonianmag.com/innovation/costa-rica-let-jui... https://www.sciencealert.com/how-12-000-tonnes-of-dumped-ora...
Akira Miyawaki and the Miyawaki method of forest cultivation for reforestation to fight desertification by regreening: https://en.wikipedia.org/wiki/Akira_Miyawaki :
> Using the concept of potential natural vegetation, Miyawaki developed, tested, and refined a method of ecological engineering today known as the Miyawaki method to restore native forests from seeds of native trees on very degraded soils that were deforested and without humus. With the results of his experiments, he restored protective forests in over 1,300 sites in Japan and various tropical countries, in particular in the Pacific region[8] in the form of shelterbelts, woodlands, and woodlots, including urban, port, and industrial areas. Miyawaki demonstrated that rapid restoration of forest cover and soil was possible by using a selection of pioneer and secondary indigenous species that were densely planted and provided with mycorrhiza.
Mycorrhiza spores can be seeded into soil to support root network development.
/? Mycorrhiza spore kit: https://www.google.com/search?q=Mycorrhiza+spore+kit
> Miyawaki studied local plant ecology and used species that have key and complementary roles in the normal tree community.
It also works in small patches: a self-sufficient "mini forest".
/? Miyawaki method before after [video search] https://www.google.com/search?q=miyawaki%20method%20before%2...
Brewing tea removes lead from water
/? tea bag microplastic: https://www.google.com/search?q=tea+bag+microplastic
There are glass and silver tea infusers.
"Repurposed beer yeast encapsulated in hydrogels may offer a cost-effective way to remove lead from water" https://phys.org/news/2024-05-repurposed-beer-yeast-encapsul...
"Yeast-laden hydrogel capsules for scalable trace lead removal from water" (2024) https://pubs.rsc.org/en/content/articlelanding/2024/su/d4su0...
.
"Application of brewing waste as biosorbent for the removal of metallic ions present in groundwater and surface waters from coal regions" (2018) https://www.sciencedirect.com/science/article/abs/pii/S22133...
.
https://ethz.ch/en/news-and-events/eth-news/news/2024/03/tur... :
> Protein fibril sponges made by ETH Zurich researchers [from whey protein] are hugely effective at recovering gold from electronic waste.
Aqueous-based recycling of perovskite photovoltaics
They say they use green solvents, but in the list of materials at the bottom I see Lead Iodide and Cesium Iodide, which doesn't strike me as too green of a thing to use.
Also, the abstract doesn't really make it clear whether the recycling is for complete panels or the "filling" of panels (sorry for the layman's terms). And whether this applies to any Perovskite-utilizing panel or just certain kinds.
But I am generally heartened at seeing thought regarding industrial processes which consider the end of productive life as well. I wish more products were designed to eventually be taken apart and reduced using standard processes, or even per-product processes - rather than the assumption being that more and more stuff is chucked into landfills.
Perovskite solar panels contain lead so you are going to have lead around, the question is if you can recycle the lead or if it goes in the environment.
Well there is always alternative number 3 - don't use perovskites.
Organic solar cell: https://en.wikipedia.org/wiki/Organic_solar_cell :
> 19.3%
Perovskite solar cell: https://en.wikipedia.org/wiki/Perovskite_solar_cell :
> 29.8%
If you need to use ~1.5x more area, but you avoid leaking any heavy metals or similarly toxic materials - I would say that's a win.
Of course, there are other parameters to consider.
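The ~1.5x area trade-off follows directly from the two quoted record efficiencies; a quick check:

```python
# Record cell efficiencies quoted above (from the Wikipedia articles)
perovskite_eff = 0.298   # 29.8%
organic_eff = 0.193      # 19.3%

# For equal power output, required area scales inversely with efficiency
area_ratio = perovskite_eff / organic_eff   # ≈ 1.54
```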
Show HN: Benchmarking VLMs vs. Traditional OCR
Vision models have been gaining popularity as a replacement for traditional OCR. Especially with Gemini 2.0 becoming cost competitive with the cloud platforms.
We've been continuously evaluating different models since we released the Zerox package last year (https://github.com/getomni-ai/zerox). And we wanted to put some numbers behind it. So we’re open sourcing our internal OCR benchmark + evaluation datasets.
Full writeup + data explorer here: https://getomni.ai/ocr-benchmark
Github: https://github.com/getomni-ai/benchmark
Huggingface: https://huggingface.co/datasets/getomni-ai/ocr-benchmark
Couple notes on the methodology:
1. We are using JSON accuracy as our primary metric. The end goal is to evaluate how well each OCR provider can prepare the data for LLM ingestion.
2. This methodology differs from a lot of OCR benchmarks, because it doesn't rely on text similarity. We believe text similarity measurements are heavily biased towards the exact layout of the ground truth text, and penalize correct OCR that has slight layout differences.
3. Every document goes Image => OCR => Predicted JSON. And we compare the predicted JSON against the annotated ground truth JSON. The VLMs are capable of Image => JSON directly, we are primarily trying to measure OCR accuracy here. Planning to release a separate report on direct JSON accuracy next week.
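For illustration, one simple way to score field-level JSON accuracy might look like the following (a sketch only; the benchmark's actual metric may weight fields or fuzzy-match differently):

```python
def flatten(obj, prefix=""):
    """Flatten nested JSON into {dotted.path: leaf_value} pairs."""
    if isinstance(obj, dict):
        out = {}
        for k, v in obj.items():
            out.update(flatten(v, f"{prefix}{k}."))
        return out
    if isinstance(obj, list):
        out = {}
        for i, v in enumerate(obj):
            out.update(flatten(v, f"{prefix}{i}."))
        return out
    return {prefix.rstrip("."): obj}

def json_accuracy(predicted, truth):
    """Fraction of ground-truth leaf fields the prediction matched exactly."""
    t, p = flatten(truth), flatten(predicted)
    if not t:
        return 1.0
    return sum(p.get(k) == v for k, v in t.items()) / len(t)
```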
This is a continuous work in progress! There are at least 10 additional providers we plan to add to the list.
The next big roadmap items are:
- Comparing OCR vs. direct extraction. Early results here show a slight accuracy improvement, but it’s highly variable on page length.
- A multilingual comparison. Right now the evaluation data is english only.
- A breakdown of the data by type (best model for handwriting, tables, charts, photos, etc.)
Harmonic Loss converges more efficiently on MNIST OCR: https://github.com/KindXiaoming/grow-crystals .. "Harmonic Loss Trains Interpretable AI Models" (2025) https://news.ycombinator.com/item?id=42941954
Making any integer with four 2s
> I've read about this story in Graham Farmelo's book The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius.
"The Strangest Man": https://en.wikipedia.org/wiki/The_Strangest_Man
Four Fours: https://en.wikipedia.org/wiki/Four_fours :
> Four fours is a mathematical puzzle, the goal of which is to find the simplest mathematical expression for every whole number from 0 to some maximum, using only common mathematical symbols and the digit four. No other digit is allowed. Most versions of the puzzle require that each expression have exactly four fours, but some variations require that each expression have some minimum number of fours.
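Dirac's logarithm trick for these puzzles can be checked numerically: n nested square roots of 2 equal 2^(2^-n), so -log2(log2(sqrt(...sqrt(2)))) = n for any whole number n:

```python
import math

def nested_sqrt_number(n):
    """Dirac's trick: n = -log2(log2(sqrt(...sqrt(2))))
    with n nested square roots, since sqrt^n(2) = 2^(2^-n)."""
    x = 2.0
    for _ in range(n):
        x = math.sqrt(x)
    return -math.log2(math.log2(x))

# Holds for every n (up to floating-point rounding)
for n in range(1, 10):
    assert round(nested_sqrt_number(n)) == n
```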
"Golden ratio base is a non-integer positional numeral system" (2023) https://news.ycombinator.com/item?id=37969716 :
> What about radix e^(pi*i), or just e?
The foundations of America's prosperity are being dismantled
> They warn that dismantling the behind-the-scenes scientific research programs that backstop American life could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public’s access to next-generation consumer technologies. The US took nearly a century to craft its rich scientific ecosystem; if the unraveling that has taken place over the past month continues, Americans will feel the effects for decades to come.
Flu deaths surpass Covid deaths nationwide for first time
"CDC terminates flu vaccine promotion campaign" (2025-02) https://news.ycombinator.com/item?id=43126704
General Reasoning: Free, open resource for building large reasoning models
"Can Large Language Models Emulate Judicial Decision-Making? [Paper]" (2025) https://news.ycombinator.com/item?id=42927611 ; awesome-legal-nlp, LexGLUE, FairLex, LegalBench, "Who hath done it?" exercise : {Thing done}, ({Gdo, You, Others, Unknown/Nobody} x {Ignorance, Malice, Motive, Intent}) ... Did nobody do this?
Can LLMs apply a consistent procedure for logic puzzles with logically disjunctive possibilities?
Enter: Philosoraptor the LLM
Show HN: Slime OS – An open-source app launcher for RP2040 based devices
Hey all - this is the software part of my cyberdeck, called the Slimedeck Zero.
The Slimedeck Zero is based around this somewhat esoteric device called the PicoVision which is a super cool RP2040 (Raspberry Pi Pico) based device. It outputs relatively high-res video over HDMI while still being super fast to boot with low power consumption.
The PicoVision actually uses two RP2040s - one as a CPU and one as a GPU. This gives the CPU plenty of cycles to run bigger apps (and a heavy python stack) and lets the GPU handle some of the rendering and the complex timing HDMI requires. You can do this same thing on a single RP2040, but we get a lot of extra headroom with this double setup.
The other unique thing about the PicoVision is it has a physical double-buffer - two PSRAM chips which you manually swap between the CPU and GPU. This removes any possibility of screen tearing since you always know the buffer your CPU is writing to is not being used to generate the on-screen image.
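The swap discipline of a physical double buffer can be sketched like this (a toy model, not PicoVision's actual API):

```python
class PhysicalDoubleBuffer:
    """Two framebuffers: the CPU always draws into the one the GPU is
    not scanning out, so tearing is impossible by construction."""

    def __init__(self, width, height):
        self.buffers = [bytearray(width * height), bytearray(width * height)]
        self.cpu_index = 0  # index of the buffer the CPU draws into

    @property
    def draw_buffer(self):
        return self.buffers[self.cpu_index]

    @property
    def scanout_buffer(self):
        return self.buffers[self.cpu_index ^ 1]

    def flip(self):
        """Swap roles once the CPU finishes a frame (on PicoVision this
        swaps which PSRAM chip each RP2040 is connected to)."""
        self.cpu_index ^= 1
```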
For my cyberdeck, I took a PicoVision, hacked a QWERTY keyboard from a smart TV remote, added an expansion port, and hooked it all up to a big 5" 800x480 screen (interlaced up from 400x240 internal resolution).
I did a whole Slimedeck Zero build video ( https://www.youtube.com/watch?v=rnwPmoWMGqk ) over on my channel but I really hope Slime OS can have a life of its own and fit onto multiple form-factors with an ecosystem of apps.
I've tried to make it easy and fun to write apps for. There's still a lot broken / missing / tbd but it's enough of a base that, personally, it already sparks that "programming is fun again" vibe so hopefully some other folks can enjoy it!
Right now it only runs on the PicoVision but there's no reason it couldn't run on RP2350s or other hardware - but for now I'm more interested in adding more input types (we're limited to the i2c TV remote keyboard I hacked together) and fleshing out the internal APIs so they're stable enough to make apps for it!
Multiple buffering; https://en.wikipedia.org/wiki/Multiple_buffering
Wikipedia has "page flipping" but not "physical double-buffer"? TIL about triple buffering, and quad buffering for stereoscopic applications.
Sodium-ion EV battery breakthrough pushes performance to theoretical limits
> “The binder we chose, carbon nanotubes, facilitates the mixing of TAQ [bis-tetraaminobenzoquinone] crystallites and carbon black particles, leading to a homogeneous electrode,” explained Chen.
"High-Energy, High-Power Sodium-Ion Batteries from a Layered Organic Cathode" (2025) https://pubs.acs.org/doi/10.1021/jacs.4c17713 :
> It exhibits a high theoretical capacity of 355 mAh/g per formula unit, enabled by a four-electron redox process, and achieves an electrode-level energy density of 606 Wh/kg (90 wt % active material) along with excellent cycling stability,
3.5M Voters Were Purged During 2024 Presidential Election [video]
Aren't there ZK (Zero Knowledge proof) blockchains that allow people to privately check whether their vote was counted correctly?
How can homomorphic encryption help detect and prevent illegal vote suppression?
law and politics are soft problems. correctness and math are meaningless here.
...and the interview talks about specific and named individuals using kkk tactics to block people from reaching the polls, not faceless "they" stealing votes in deep government. So I'm not even sure where your comment comes from.
Show HN: ArXiv-txt, LLM-friendly ArXiv papers
Just change arxiv.org to arxiv-txt.org in the URL to get the paper info in markdown
Example:
Original URL: https://arxiv.org/abs/1706.03762
Change to: https://arxiv-txt.org/abs/1706.03762
To fetch the raw text directly, use https://arxiv-txt.org/raw/abs/1706.03762; this is particularly useful for APIs and agents.
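The rewrite is mechanical; a hypothetical helper, assuming only the hostname swap and the /raw/ prefix shown above:

```python
def arxiv_txt_url(arxiv_url: str, raw: bool = False) -> str:
    """Rewrite an arxiv.org URL to its arxiv-txt.org equivalent;
    raw=True targets the plain-text endpoint for APIs and agents."""
    url = arxiv_url.replace("arxiv.org", "arxiv-txt.org")
    if raw:
        url = url.replace("arxiv-txt.org/", "arxiv-txt.org/raw/", 1)
    return url
```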
If you train an LLM on only formally verified code, it should not be expected to generate formally verified code.
Similarly, if you train an LLM on only published ScholarlyArticles ['s abstracts], it should not be expected to generate publishable or true text.
Traceability for Retraction would be necessary to prevent lossy feedback.
The Raspberry Pi RP2040 Gets a Surprise Speed Boost, Unlocks an Official 200MHz
"Ensuring Accountability for All Agencies" – Executive Order
Show HN: Subtrace – Wireshark for Docker Containers
Hey HN, we built Subtrace (https://subtrace.dev) to let you see all incoming and outgoing requests in your backend server—like Wireshark, but for Docker containers. It comes with a Chrome DevTools-like interface. Check out this video: https://www.youtube.com/watch?v=OsGa6ZwVxdA, and see our docs for examples: https://docs.subtrace.dev.
Subtrace lets you see every request with full payload, headers, status code, and latency details. Tools like Sentry and OpenTelemetry often leave out these crucial details, making prod debugging slow and annoying. Most of the time, all I want to see are the headers and JSON payload of real backend requests, but it's impossible to do that in today's tools without excessive logging, which just makes everything slower and more annoying.
Subtrace shows you every backend request flowing through your system. You can use simple filters to search for the requests you care about and inspect their details.
Internally, Subtrace intercepts all network-related Linux syscalls using Seccomp BPF so that it can act as a proxy for all incoming and outgoing TCP connections. It then parses HTTP requests out of the proxied TCP stream and sends them to the browser over WebSocket. The Chrome DevTools Network tab is already ubiquitous for viewing HTTP requests in the frontend, so we repurposed it to work in the browser like any other app (we were surprised that it's just a bunch of TypeScript).
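The HTTP-parsing step can be illustrated with a minimal request parser (a toy sketch; Subtrace's real parser has to handle chunked encoding, pipelining, keep-alive, and partial reads):

```python
def parse_http_request(raw: bytes):
    """Split one HTTP/1.x request into (method, path, headers, body).
    Toy version: assumes the full head is present and well-formed."""
    head, _, body = raw.partition(b"\r\n\r\n")
    request_line, *header_lines = head.decode("latin-1").split("\r\n")
    method, path, _version = request_line.split(" ", 2)
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, path, headers, body
```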
Setup is just one command for any Linux program written in any language.
You can use Subtrace by adding a `subtrace run` prefix to your backend server startup command. No signup required. Try for yourself: https://docs.subtrace.dev
Stratoshark, the Docker-container counterpart of Wireshark, may be a better match for that description.
I'd probably use a Postman-related pitch instead. This is much closer to that and looks like a nice complement to that workflow.
Stratoshark: https://wiki.wireshark.org/Stratoshark :
> Stratoshark captures and analyzes system calls and logs using libsinsp and libscap, and can share capture files with the Sysdig command line tool and Falco
Show HN: Scripton – Python IDE with built-in realtime visualizations
Hey HN, Scripton (https://scripton.dev) is a Python IDE built for fast, interactive visualizations and exploratory programming — without the constraints of notebooks.
Why another Python IDE? Scripton hopes to fill a gap in the Python development ecosystem by being an IDE that:
1. Focuses on easy, fast, and interactive visualizations (and exposes rich JS plotting libraries like Observable Plot and Plotly directly to Python)
2. Provides a tightly integrated REPL for rapid prototyping and exploration
3. Is script-centric (as opposed to, say, notebook-style)
A historical detour for why these 3 features: Not so long ago (ok, well, maybe over a decade ago...), the go-to environment for many researchers in scientific fields would have been something like MATLAB. Generating multiple simultaneous visualizations (potentially dynamic) directly from your scripts, rapidly prototyping in the REPL, all without giving up on writing regular scripts. Over time, many switched over to Python but there wasn't an equivalent environment offering similar capabilities. IPython/Jupyter notebooks eventually became the de facto replacement. And while notebooks are great for many things (indeed, it wasn't uncommon for folks to switch between MATLAB and Mathematica Notebooks), they do make certain trade-offs that prevent them from being a full substitute.
Inner workings:
- Implemented in C++ (IDE <-> Python IPC), Python, TypeScript (UI), WGSL (WebGPU-based visualizations)
- While the editor component is based off Monaco, the IDE is not a vscode fork and was written from scratch. Happy to chat about the trade-offs if anyone's interested
- Uses a custom Python debugger written from scratch (which enables features like visualizing intermediate outputs while paused in the debugger)
Scripton's under active development (currently only available for macOS but Linux and Windows support is planned). Would love for you to try it out and share your thoughts! Since this is HN, I’m also happy to chat about its internals.
I am a robotics engineer/scientist and I do a shit ton of visualization of all kinds of high-fidelity/high-rate data, often in a streaming setting - time series at a few thousand Hz, RGB/depth images from multiple cameras, debugging my models by visualizing many layer outputs, every augmentation, etc.
For a long time, I had my own observability suite - a messy library of python scripts that I use for visualizing data. I replaced all of them with rerun (https://rerun.io/) and if you are someone who thinks Scripton is exciting, you should def try rerun too!
I use cursor/vscode for my development and add a line or two to my usual workflows in python, and rerun pops up in its own window. It's a simple pip installable library, and just works. It's open source, and the founders run a very active forum too.
Edit: One slightly related tidbit that might be interesting to HN folks. rerun isn't that old, and is in active development, with some breaking changes and new features that come up every month. It means that LLMs are pretty bad at rerun code gen, beyond the simple boilerplate. Recently, it kind of made my life hell as all of my interns refuse to use docs, try using LLMs for rerun code generation, and come to me with messy code spaghetti. It's both sad and hilarious. To make my life easier, I asked the rerun folks to create and host machine-readable docs somewhere, and they never got to it. So I just scrape their docs into a markdown file and ask my interns to paste the docs in their prompt before they query LLMs, and it works like a charm now.
For a magnet levitation project I am dumping data to a CSV on a rpi and then reading it over ssh into matplotlib on my desktop. It works but it's choppy. Probably because of the ssh.
Could I drop rerun into this to improve my monitoring?
From "Show HN: We open-sourced our [rpi CSV] compost monitoring tech" https://news.ycombinator.com/item?id=42201207 :
Nagios, Collectd, [Prometheus], Grafana
From "Preview of Explore Logs, a new way to browse your logs without writing LogQL" https://news.ycombinator.com/item?id=39981805 :
> Grafana supports SQL, PromQL, InfluxQL, and LogQL.
From https://news.ycombinator.com/item?id=40164993 :
> But that's not a GUI, that's notebooks. For Jupyter integration, TIL pyqtgraph has jupyter_rfb, Remote Frame Buffer: https://github.com/vispy/jupyter_rfb
pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
Matplotlib can generate GIF and WEBM (and .mp4) animations, but not realtime.
ManimCE might work in notebooks, but IDK about realtime
Genesis is fast enough by FPS for faster than realtime 3D with Python (LuisaRender) and a GPU: https://github.com/Genesis-Embodied-AI/Genesis
SWE-Lancer: a benchmark of freelance software engineering tasks from Upwork
> By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
What could be costed in an Upwork or a Mechanical Turk task value?
Task Centrality or Blockingness estimation: precedence edges, tsort topological sort, graph metrics like centrality
Task Complexity estimation: story points, planning poker, relative local complexity scales
Task Value estimation: cost/benefit analysis, marginal revenue
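The precedence/tsort idea above can be sketched with Python's stdlib graphlib (the task graph and the crude "blockingness" metric here are hypothetical examples):

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: task -> set of tasks it depends on
deps = {
    "deploy": {"integration-test"},
    "integration-test": {"api", "ui"},
    "api": {"schema"},
    "ui": {"schema"},
    "schema": set(),
}

# Valid execution order (a topological sort, like tsort)
order = list(TopologicalSorter(deps).static_order())

def dependents(task):
    """All tasks that transitively depend on `task`."""
    direct = {t for t, ds in deps.items() if task in ds}
    out = set(direct)
    for d in direct:
        out |= dependents(d)
    return out

# Crude "blockingness": how many tasks each task transitively blocks
blockingness = {t: len(dependents(t)) for t in deps}
```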
Nuclear fusion: WEST beats the world record for plasma duration
> 1337 seconds
AFAIU, no existing tokamaks can handle sustained plasma for any significant period of time because they'll burn down.
Did this destroy the facility?
What duration of sustained fusion plasma can tokamaks like EAST, WEST, and ITER withstand? What will need to change for continuous fusion energy to be net gained from a tokamak or a stellarator fusion reactor?
If this had destroyed the facility, that would be the headline of this news article. WEST's best is 22 minutes (it's in the title), and you could google EAST and ITER, but the title tells you they are at less than 22 minutes. WEST is a testing ground for ITER. The fact that you can have sustained fusion for only 22 minutes is the biggest problem, since you need to boil water continuously: nearly all power sources work by taking cold water and making it warm constantly so that it makes a turbine move.
There is "destroyed" and then there is a smoking hole in the side of the planet :) But I think it's fair to say that after 22 minutes running, there is no way it can be turned back on later; fairly sure it was a "phew, look at that, almost lost plasma containment" moment. Keep in mind that they are trying to replicate the conditions found inside a star with some magnets and stuff. Sure, it's ferociously engineered, but not at all like the stuff that exists inside a star, so all in all it's a rather audacious endeavour, and I wish them luck with it.
The system is not breakeven, and the plasma was contained for 22 minutes, so the situation is that the plasma was contained until it ran out of fuel. The reactor is made out of tungsten for heat dissipation, has active cooling, and has superconducting magnetic confinement to prevent the system from destroying itself. https://en.wikipedia.org/wiki/WEST_(formerly_Tore_Supra)
Fusion energy gain factor: https://en.wikipedia.org/wiki/Fusion_energy_gain_factor :
> A fusion energy gain factor, usually expressed with the symbol Q, is the ratio of fusion power produced in a nuclear fusion reactor to the power required to maintain the plasma in steady state
To rephrase the question: what is the limit to the duration of sustained magnetically confined fusion plasma in the EAST, WEST, and ITER tokamaks, and why is the limit that amount of time?
Don't those materials melt if exposed to temperatures hotter than the sun for sufficient or excessive periods of time?
For what sustained plasma duration will EAST, WEST, and ITER need to be redesigned? 1 hour, 24 hours?
EAST, WEST, URNER
[LLNL] "US scientists achieve net energy gain for second time in nuclear fusion reaction" (2023-08) https://www.theguardian.com/environment/2023/aug/06/us-scien...
But IDK if they've seen the recent thing about water 100X'ing laser-plasma proton beams from SLAC, published this year;
From https://news.ycombinator.com/item?id=43088886 :
> "Innovative target design leads to surprising discovery in laser-plasma acceleration" (2025-02) https://phys.org/news/2025-02-discovery-laser-plasma.html
>> Compared to similar experiments with solid targets, the water sheet reduced the proton beam's divergence by an order of magnitude and increased the beam's efficiency by a factor of 100
"Stable laser-acceleration of high-flux proton beams with plasma collimation" (2025) https://www.nature.com/articles/s41467-025-56248-4
Timeline of nuclear fusion: https://en.wikipedia.org/wiki/Timeline_of_nuclear_fusion
That energy gain was only in the plasma, not in the entire system.
The extremely low efficiency of the lasers used there for converting electrical energy into light energy (perhaps of the order of 1%) has not been considered in the computation of that "energy gain".
Many other hidden energy sinks have also not been considered, like the energy required to produce deuterium and tritium, or the efficiencies of capturing the thermal energy released by the reaction and of converting it into electrical energy.
It is likely that the energy gain in the plasma must be at least in the range 100 to 1000, in order to achieve an overall energy gain greater than 1.
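Spelling out that arithmetic (all numbers are rough assumptions in the spirit of the comment, not measured values):

```python
q_plasma = 1.5        # reported plasma energy gain (order of magnitude)
laser_eff = 0.01      # electrical -> laser light, per the comment (~1%)
thermal_eff = 0.40    # heat -> electricity, typical steam-cycle assumption

# End-to-end gain chains the efficiencies together
overall_gain = q_plasma * laser_eff * thermal_eff   # far below 1

# Plasma gain needed just to break even at these efficiencies
q_breakeven = 1 / (laser_eff * thermal_eff)          # = 250
```

At these assumed efficiencies the required plasma gain lands squarely in the 100-1000 range the comment describes.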
> all power sources rely on taking cold water and making it warm constantly so that it makes a turbine move.
PV (photovoltaic), TPV (thermophotovoltaic), and thin film and other solid-state thermoelectric (TE) approaches do not rely upon corrosive water turning a turbine.
Turbine blades can be made of materials that are more resistant to corrosion.
On turbine efficiency:
"How the gas turbine conquered the electric power industry" https://news.ycombinator.com/context?id=38314774
It looks like the GE 7HA gas/hydrogen turbine is still the most efficient turbine? https://gasturbineworld.com/ge-7ha-03-gas-turbine/ :
> Higher efficiency: 43.3% in simple cycle and up to 64% in combined cycle,
Steam turbines aren't as efficient as gas turbines FWIU.
/? which nuclear reactors do not have a steam turbine:
"How can nuclear reactors work without steam?" [in space] https://www.reddit.com/r/askscience/comments/7ojhr8/how_can_... :
> 5% efficient; you usually get less than 5% of the thermal energy converted into electricity
(International space law prohibits putting nuclear reactors in space without specific international approval, which is considered for e.g. deep space probes like Voyager; though the sun is exempt.)
Rankine cycle (steam) https://en.wikipedia.org/wiki/Rankine_cycle
Thermoelectric effect: https://en.wikipedia.org/wiki/Thermoelectric_effect :
> The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck effect (temperature differences cause electromotive forces), the Peltier effect (thermocouples create temperature differences), and the Thomson effect (the Seebeck coefficient varies with temperature).
"Thermophotovoltaic efficiency of 40%" https://www.nature.com/articles/s41586-022-04473-y
Multi-junction PV cells are not limited by the Shockley–Queisser limit, but are limited by current production methods.
Multi-junction solar cells: https://en.wikipedia.org/wiki/Multi-junction_solar_cell#Mult...
Which existing thermoelectric or thermophotovoltaic approaches work with nuclear fusion levels of heat (infrared)?
Okay so I meant to say the simplest way is to heat water in this situation. But there are alternatives here https://en.wikipedia.org/wiki/Fusion_power?wprov=sfti1#Tripl...
I wouldn't have looked this up otherwise.
Maybe solar energy storage makes sense for storing the energy from fusion reactor stars, too.
There's also MOST: Molecular Solar Thermal Energy Storage, which stores solar energy as chemical energy for up to 18 years with a "specially designed molecule of carbon, hydrogen and nitrogen that changes shape when it comes into contact with sunlight."
"Chip-scale solar thermal electrical power generation" (2022) https://doi.org/10.1016/j.xcrp.2022.100789
> Multi-junction PV cells are not limited by the Shockley–Queisser limit, but are limited by current production methods.
Such as multilayer nanolithography, which nanoimprint lithography 10Xs; https://arstechnica.com/reviews/2024/01/canon-plans-to-disru...
Perhaps multilayer junction PV and TPV cells could be cost-effectively manufactured with nanoimprint lithography.
Qualys Security Advisory: MitM and DoS attacks against OpenSSH client and server
MitM-able since 6.8 (December 2014) only if
> VerifyHostKeyDNS is "yes" or "ask" (it is "no" by default),
And DoS-able since 9.5 (2023) because of a new ping command.
> To confirm our suspicion, we adopted a dual strategy:
> - we manually audited all of OpenSSH's functions that use "goto", for missing resets of their return value;
> - we wrote a CodeQL query that automatically searches for functions that "goto out" without resetting their return value in the corresponding "if" code block.
Catalytic computing taps the full power of a full hard drive
So the trick is to do the computation forwards, but take care to only use reversible operations, store the result outside of the auxiliary "full" memory and then run the computation backwards, reversing all instructions and thus undoing their effect on the auxiliary space.
This is called catalytic because the machine couldn't do the computation in the clean space it has alone, but can do it by temporarily mutating auxiliary space and then restoring it.
What I haven't yet figured out is how to do reversible instructions on auxiliary space. You can mutate a value depending on your input, but how do you use that value, since you can't assume anything about the contents of the auxiliary space and just overwriting with a constant (e.g. 0) is not reversible.
Maybe there is some xor like trick, where you can store two values in the same space and you can restore them, as long as you know one of the values.
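There is indeed an XOR-like trick in that spirit. A toy sketch (not the paper's construction, which is far more involved): every mutation of aux is an XOR, which is its own inverse, and *reading* dirty memory is allowed even though assuming its contents is not. One word of clean space (`before`) is enough to cancel the junk:

```python
def reversible_demo(x: int, y: int, aux: list) -> int:
    """Toy catalytic pattern: mutate dirty aux memory only with
    self-inverse XORs, copy the result out to clean space, then replay
    the XORs to restore aux bit-for-bit."""
    before = aux[0]            # one word of clean space; reading aux is
                               # allowed, assuming its contents is not
    aux[0] ^= x                # forward, reversible
    aux[0] ^= y                # forward, reversible
    result = before ^ aux[0]   # junk ^ (junk ^ x ^ y) == x ^ y, into clean space
    aux[0] ^= y                # backward: undo in reverse order
    aux[0] ^= x                # aux is now exactly as we found it
    return result

aux = [0xDEADBEEF]             # stands in for "unknown, full" memory
print(reversible_demo(5, 3, aux), aux == [0xDEADBEEF])  # 6 True
```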
Edit: After delving into the paper linked in another comment, which is rather mathy (or computer-sciency in the original meaning of the phrase), I'd like to see a simple example of a program that cannot run in its amount of free space and actually needs to utilize the auxiliary space.
That sounds similar to this in QC:
From "Reversible computing escapes the lab" (2025) https://news.ycombinator.com/item?id=42660606#42705562 :
> FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
> "The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123
Though also Landauer's limit presumably only applies to electrons; not photons or phonons or gravitational waves.
Can you feed gravitational waves forward to a receptor that carries only that gravitational-wave bit, such that it can be recalled?
Setting up a trusted, self-signed SSL/TLS certificate authority in Linux
There is just one thing missing from this. Name Constraints.
This doesn't get brought up enough but a Name Constraint on a root cert lets you limit where the root cert can be signed to. So instead of this cert being able to impersonate any website on the internet, you ratchet it down to just the domain (or single website) that you want to sign for.
https://github.com/caddyserver/caddy/issues/5759 :
> When generating a CA cert via caddy and putting that in the trust store, those private keys can also forge certificates for any other domain.
RFC5280 (2008) "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile" > Section 4.2.1.10 Name Constraints: https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... :
> The name constraints extension, which MUST be used only in a CA certificate, indicates a name space within which all subject names in subsequent certificates in a certification path MUST be located. Restrictions apply to the subject distinguished name and apply to subject alternative names. Restrictions apply only when the specified name form is present. If no name of the type is in the certificate, the certificate is acceptable.
> Name constraints are not applied to self-issued certificates (unless the certificate is the final certificate in the path). (This could prevent CAs that use name constraints from employing self-issued certificates to implement key rollover.)
If this is now finally supported that's great. The issue was that for it to be useful it has to be marked critical / fail-closed, because a CA with ignored name constraint == an unrestricted CA. But if you make it critical, then clients who don't understand it will just fail. You can see how this doesn't help adoption.
The RFC is marked "Proposed Standard"; maybe that's why it's not widely implemented?
https://bettertls.com/ has Name Constraints implementation validation tests, but "Archived Results" doesn't seem to have recent versions of SSL clients listed?
e.g. in an OpenSSL x509v3 config: nameConstraints = critical,permitted;DNS:.example.com
DNS Certification Authority Authorization: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Au... :
> Registrants publish a "CAA" Domain Name System (DNS) resource record which compliant certificate authorities check for before issuing digital certificates.
And hopefully they require DNSSEC signatures and DoH/DoT/DoQ when querying for CAA records.
Name Constraints has been around at least since 1999 (RFC 2459).
I'm not sure why CAA is brought up here. I guess it is somewhat complementary in "reducing" the power of CAs, but it defends against good CAs misissuing stuff, not limiting the power of arbitrary CAs (as it's checked at issuance time, not at time of use).
CAA does not require DNSSEC or DoH.
New technique generates topological structures with gravity water waves
Water also focuses laser plasmas;
"Innovative target design leads to surprising discovery in laser-plasma acceleration" https://phys.org/news/2025-02-discovery-laser-plasma.html
> Compared to similar experiments with solid targets, the water sheet reduced the proton beam's divergence by an order of magnitude and increased the beam's efficiency by a factor of 100.
LightGBM Predict on Pandas DataFrame – Column Order Matters
[LightGBM,] does not converge regardless of feature order.
From https://news.ycombinator.com/item?id=41873650 :
> Do algorithmic outputs diverge or converge given variance in sequence order of all orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, but does it converge regardless?
Also, current LLMs suggest that statistical independence is entirely distinct from orthogonality, which we typically assume with high-dimensional problems. And, many statistical models do not work with non-independent features.
Does this model work with non-independence or nonlinearity?
Does the order of the columns in the training data CSV change the alpha of the model; does model output converge regardless of variance in the order of training data?
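A minimal sketch of the defensive reordering, with plain dicts (with a DataFrame you would select `df[booster.feature_name()]` to the same effect; the helper and feature names below are illustrative):

```python
def align_features(row: dict, train_feature_order: list) -> list:
    """LightGBM, like most tree libraries, matches features by *position*,
    not by name, so a prediction row must be re-ordered into the exact
    column order used at training time."""
    missing = [f for f in train_feature_order if f not in row]
    if missing:
        raise KeyError(f"missing features: {missing}")
    return [row[f] for f in train_feature_order]

train_order = ["age", "income", "tenure"]          # order seen at fit time
row = {"tenure": 4, "age": 37, "income": 52_000}   # arrives in another order
print(align_features(row, train_order))  # [37, 52000, 4]
```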
670nm red light exposure improved aged mitochondrial function, colour vision
They didn't have any controls with a different wavelength of light. They might as well be measuring a diurnal pattern.
There seem to be very similar studies with the same conclusion.
https://www.medicalnewstoday.com/articles/3-minutes-of-deep-...
Another:
"Red (660 nm) or near-infrared (810 nm) photobiomodulation stimulates, while blue (415 nm), green (540 nm) light inhibits proliferation in human adipose-derived stem cells" (2017) https://www.nature.com/articles/s41598-017-07525-w
TIL it's called "red light optogenetics" even when the cells aren't modified for opto-activation.
And,
"Green light induces antinociception via visual-somatosensory circuits" (2023) https://www.sciencedirect.com/science/article/pii/S221112472...
"Infrared neural stimulation in human cerebral cortex" (2023) https://www.sciencedirect.com/science/article/pii/S1935861X2... :
> In a comparison of electrical, optogenetic, and infrared neural stimulation, Roe et al. [14] found that all three approaches could in principal achieve such specificity. However, because brain tissue is conductive, it can be challenging to confine neuronal activation by traditional electrical stimulation to a single cortical column. In contrast, optogenetic stimulation can be applied with high spatial specificity (e.g. by delivering light via a 200um fiber optic apposed to the cortex) and with cell-type specificity (e.g. excitatory or inhibitory cells); however, optogenetics requires viral vectors and gene transduction procedures, making it less easy for human applications [15]. Over the last decade, infrared neural stimulation (INS), which is a pulsed heat-mediated approach, has provided an alternative method of neural activation. Because brain tissue is 75% water, infrared light delivered near peak absorption wavelengths (e.g. 1875 nm [16]) permits effective delivery of heat to the brain tissue. In particular, experimental and modelling studies [[17], [18], [19]] have shown that 1875 nm light (brief 0.5sec trains of 0.25msec pulses forming a bolus of heat) effectively achieves focal (submillimeter to millimeter sized) activation of neural tissue
Does NIRS -based (neuro-) imaging induce neuronal growth?
Are there better photonic beam-forming apparatuses than TI DLP projectors with millions of tiny actuated mirrors; isn't that what a TV does?
Cold plasma also affects neuronal regrowth and could or should be used for wound closure. Are they cold plasma-ing Sam (Hedlund) in "Tron: Legacy" (2010) before the disc golf discus thing?
Does cold plasma reduce epithelial scarring?
A certain duration of cold plasma also appears to increase seed germination rate FWIW.
To find these studies, I used a search engine and search terms and then picked studies which seem to confirm our bias.
Why is there an increase in lung cancer among women who have never smoked?
https://www.sciencedaily.com/releases/2022/04/220411113733.h... :
> Cigarette smoking is overwhelmingly the main cause of lung cancer, yet only a minority of smokers develop the disease.
https://www.verywellhealth.com/what-percentage-of-smokers-ge...
Does hairspray increase the probability of contracting lung cancer?
Does cheap makeup drain into maxillaries and cause breast cancer?
Do zip codes lived in, joined with AQI, predict lung cancer? Which socioeconomic, dietary, and environmental contaminant exposures are described by "zip codes lived in"?
There are unbleached (hemp) cigarette rolling papers that aren't soaked in chlorine bleach.
Strangely, cigarette filters themselves carry a CA Prop __ warning.
There are hemp-based cigarette filters, but there are apparently not full-size 1-1/4s filters which presumably wouldn't need to carry a Prop __ warning.
"The filter's the best part" - Dennis Leary
Tobacco contains an MAOI. Many drug supplements list MAOIs as a contraindication. Cheese tastes different (worse) after tobacco, in part due to the MAOIs?
All combustion produces benzene: ICE vehicles, campfires, industrial emissions, hydrocarbon power generation, and tobacco smoking.
Ronald Fisher, who created the F statistic, never accepted that tobacco was the primary cause of lung cancer.
There appear to be certain genes that significantly increase susceptibility to tobacco-caused lung cancer.
"Hypothesizing that marijuana smokers are at a significantly lower risk of carcinogenicity relative to tobacco-non-marijuana smokers: evidenced based on statistical reevaluation of current literature" https://www.tandfonline.com/doi/abs/10.1080/02791072.2008.10... https://pubmed.ncbi.nlm.nih.gov/19004418/
Cannabis is an extremely absorbent plant; hemp is recommended for soil remediation.
From https://www.reddit.com/r/askscience/comments/132nzng/why_doe... :
> C. Everette Koop (the former US Surgeon General) went on record to state that approximately 90% of lung cancer cases associated with smoking were directly related to the presence of Polonium-210, which emits alpha radiation straignt to your lung tissue when inhaled.
Tobacco and Cannabis both cause emphysema.
Also gendered: indoor task preference and cleaning product exposure?
Nonstick cookware exposure
Reduced fat, increased fructose foods (sugar feeds cancer)
Exhaust and walking?
Cooking oil and air fryer usage rates have changed.
Water quality isn't gender-specific is it?
Rocket stoves change lives in other economies; cooking stove materials
There are salt-based cleaning products, USB dilute hypochlorite bleach generators that require just water and salt, there are citric acid based cleaning products, and there's vinegar and baking soda.
Electronic motorized plastic bristle brushes?
Ironically, bleach doesn't kill COVID at all, but UV-C does.
Did somebody train an LLM with a mixture of RFK Jr and schizophrenic thought disorder?
Ask HN: What are people's experiences with knowledge graphs?
I see lots of YouTube videos and content about knowledge graphs in the context of Gen AI. Are these at all useful for personal information retrieval and organization? If so, are there any frameworks or products that you'd recommend that help construct and use knowledge graphs?
Property graphs don't specify schema.
Is it Shape.color or Shape.couleur, feet or meters?
RDF has URIs for predicates (attributes). RDFS specifies :Class(es) with :Property's, which are identified by URIs.
E.g. Wikidata has schema; forms with validation. Dbpedia is Wikipedia infoboxes regularly extracted to RDF.
Google acquired Metaweb (Freebase) years ago, launched a Knowledge Graph product, and these days supports Structured Data search cards in microdata, RDFa, and JSON-LD.
[LLM] NN topology is sort of a schema.
Linked Data standards for data validation include RDFS and SHACL. JSON schema is far more widely implemented.
RDFa is "RDF in HTML attributes".
How much more schema does the application need beyond [WikiWord] auto-linkified edges? What about typed edges with attributes other than href and anchor text?
AtomSpace is an in-memory hypergraph with schema to support graph rewriting specifically for reasoning and inference.
There are ORMs for graph databases. Just like with SQL, the question is how much of the query and report can be done by the server without processing every SELECTed row.
Query languages for graphs: SQL, SPARQL, SPARQLstar, GraphQL, Cypher, Gremlin.
Object-attribute level permissions are for the application to implement and enforce. Per-cell keys and visibility are native db features of e.g. Accumulo, but to implement the same with e.g. Postgres every application that is a database client is on scout's honor to also enforce object-attribute access control lists.
And then identity; which user with which (sovereign or granted) cryptographic key can add dated named graphs that mutate which data in the database.
So, property graphs eventually need schema and data validation.
markmap.js.org is a simple app to visualize a markdown document with headings and/or list items as a mindmap; but unlike Freemind, there's no way to add edges that make the tree a cyclic graph.
Cyclic graphs require different traversal algorithms. For example, Python will raise RecursionError when recursively walking a graph cycle without a visited-node list, and a stack-based traversal of a cyclic graph will not halt without e.g. a visited-node list to detect cycles either; though a valid graph path may contain cycles (and there is feedback in so many general systems).
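The visited-set fix, sketched in plain Python:

```python
def reachable(graph: dict, start) -> set:
    """Iterative DFS over a possibly-cyclic graph. The visited set is
    what guarantees termination; a naive recursive walk of a cycle
    raises RecursionError instead."""
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        stack.extend(n for n in graph.get(node, ()) if n not in visited)
    return visited

# a 3-node cycle plus a tail: a -> b -> c -> a, c -> d
g = {"a": ["b"], "b": ["c"], "c": ["a", "d"]}
print(sorted(reachable(g, "a")))  # ['a', 'b', 'c', 'd']
```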
YAML-LD is JSON-LD in YAML.
JSON-LD as a templated output is easier than writing a (relatively slow) native RDF application and re-solving for what SQL ORM web frameworks already do.
There are specs for cryptographically signing RDF such that the signature matches regardless of the graph representation.
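The representation-independence comes from canonicalizing the graph before signing or hashing. A toy sketch of the idea only; the actual W3C RDF Dataset Canonicalization algorithm additionally has to assign stable labels to blank nodes:

```python
import hashlib

def graph_digest(triples) -> str:
    """Hash a set of (subject, predicate, object) triples in a
    serialization-order-independent way: canonicalize by sorting,
    then hash the canonical form."""
    canonical = "\n".join(" ".join(t) for t in sorted(triples))
    return hashlib.sha256(canonical.encode()).hexdigest()

t1 = [(":alice", ":knows", ":bob"), (":bob", ":age", "42")]
t2 = list(reversed(t1))  # same graph, statements serialized in another order
print(graph_digest(t1) == graph_digest(t2))  # True
```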
There are processes and business processes around knowledge graphs like there are for any other dataset.
OTOH; ETL, Data Validation, Publishing and Hosting of dataset and/or servicing arbitrary queries and/or cost-estimable parametric [windowed] reports, Recall and retraction traceability
DVC.org and the UC BIDS Computational Inference notebook book probably have a better enumeration of processes for data quality in data science.
...
With RDF - though it's a question of database approach and not data representation -
Should an application create a named graph per database transaction changeset or should all of that data provenance metadata be relegated to a database journal that can't be read from or written to by the app?
How much transaction authentication metadata should an app be trusted to write?
A typical SQL webapp has one database user which can read or write to any column of any table.
Blockchains and e.g. Accumulo require each user to "connect to" the database with a unique key.
It is far harder for users to impersonate other users in database systems that require a cryptographic key per user than it is to just write in a different username and date using the one db cred granted to all application instances.
W3C DIDs are cryptographic keys (as RDF with schema) that can be generated by users locally or generated centrally; similar to e.g. Bitcoin account address double hashes.
Users can cryptographically sign JSON-LD, YAML-LD, RDFa, and any other RDF format with W3C DIDs; in order to assure data integrity.
How do data integrity and data provenance affect the costs, utility, and risks of knowledge graphs?
Compared to GPG signing git commits to markdown+YAML-LD flat files in a git repo, and paying e.g gh to enforce codeowner permissions on files and directories in the repo by preventing unsigned and unauthorized commits, what are the risks of trusting all of the data from all of the users that could ever write to a knowledge graph?
Which initial graph schemas support inference and reasoning; graph rewriting?
CodeWeavers Hiring More Developers to Work on Wine and Valve's Proton
KSP2 support in Proton or ProtonGE could make the game playable.
From https://www.protondb.com/app/954850 :
> With no tinkering the in-game videos stutter and skip frames. I fixed this by installing lavfilters with protontricks. The game as far as i know has no means of limiting the frame rate. This was fixed using mangohud.
Are people still playing KSP2? I was one of the few people who were optimistic about it, but then they fired the devs, and the game hasn't been touched in months (despite still being sold for full price on Steam).
that would be impressive since I wouldn't consider the game playable on any OS
Opposing arrows of time can theoretically emerge from certain quantum systems
The Minkowski metric is
Δs^2 = -Δt^2 + Δx^2 + Δy^2 + Δz^2 (in units where c = 1)
One aspect of this is that, if you sub `t -> -t'`, that's just as good a solution too. Which would suggest any solution with a positive time direction can have a negative time direction, just as easily. Is this widely assumed to be true, or at least physically meaningful?
There's also Wick rotations, where you can sub `t -> it'`, and then Minkowskian spacetime becomes Euclidean but time becomes complex-valued. Groovy stuff.
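Concretely, both substitutions act on the interval only through the squared time term, which is why the sign flip drops out:

```latex
\Delta s^2 = -\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
% time reversal: t \to -t' leaves the interval invariant,
(-\Delta t')^2 = \Delta t'^2
% Wick rotation: t \to i\tau flips the signature to Euclidean,
-(i\,\Delta\tau)^2 = +\Delta\tau^2
\;\Rightarrow\;
\Delta s_E^2 = \Delta\tau^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
```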
I'm not much of a physics buff but I loved reading Julian Barbour's The Janus Point for a great treatment of the possibility of negative time.
The craziest thing I've seen though is the suggestion that an accelerating charge, emitting radiation that interacts with the charge itself and imparts a backreacting force on the charge, supposedly has solutions whose interpretation would suggest that it would be sending signals back in time. [0]
/? "time-polarized photons" https://www.google.com/search?q=%22time-polarized+photons%22
https://www.scribd.com/doc/287808282/Bearden-Articles-Mind-C... ... https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22... ... "psychoenergetics" .. /? Torsion fields :
- "Torsion fields generated by the quantum effects of macro-bodies" (2022) https://arxiv.org/abs/2210.16245 :
> We generalize Einstein's General Relativity (GR) by assuming that all matter (including macro-objects) has quantum effects. An appropriate theory to fulfill this task is Gauge Theory Gravity (GTG) developed by the Cambridge group. GTG is a "spin-torsion" theory, according to which, gravitational effects are described by a pair of gauge fields defined over a flat Minkowski background spacetime. The matter content is completely described by the Dirac spinor field, and the quantum effects of matter are identified as the spin tensor derived from the spinor field. The existence of the spin of matter results in the torsion field defined over spacetime. Torsion field plays the role of Bohmian quantum potential which turns out to be a kind of repulsive force as opposed to the gravitational potential which is attractive [...] Consequently, by virtue of the cosmological principle, we are led to a static universe model in which the Hubble redshifts arise from the torsion fields.
Wikipedia says that torsion fields are pseudoscientific.
Retrocausality is observed.
From "Evidence of 'Negative Time' Found in Quantum Physics Experiment" https://news.ycombinator.com/item?id=41707116 :
> "Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://arxiv.org/abs/2409.03680
/?hnlog retrocausality (Ctrl-F "retrocausal", "causal"): https://westurner.github.io/hnlog/
From "Robust continuous time crystal in an electron–nuclear spin system" (2024) https://news.ycombinator.com/item?id=39291044 ;
> [ Indefinite causal order, Admissible causal structures and correlations, Incandescent Temporal Metamaterials, ]
From "What are time crystals and why are they in kids’ toys?" https://bigthink.com/surprising-science/what-are-time-crysta... :
> Time crystals have been detected in an unexpected place: monoammonium phosphate, a compound found in fertilizer and ‘grow your own crystal’ kits.
Ammonium dihydrogen phosphate: https://en.wikipedia.org/wiki/Ammonium_dihydrogen_phosphate :
> Piezoelectric, birefringence (double refraction), transducers
Retrocausality in photons; retrocausality in piezoelectric time crystals, which are birefringent (i.e. cause photonic double refraction).
Is it gauge theory, though?
From https://news.ycombinator.com/item?id=38839439 :
> If gauge symmetry breaks in superfluids (ie. Bose-Einstein condensates); and there are superfluids at black hole thermal ranges; do gauge symmetry constraints break in [black hole] superfluids?
Probably not gauge symmetry there, then.
Quantum Computing Notes: Why Is It Always Ten Years Away? – Usenix
Fluoxetine promotes metabolic defenses to protect from sepsis-induced lethality
Some wikipedia context for this SSRI: "Fluoxetine, sold under the brand name Prozac, among others, is an antidepressant medication of the selective serotonin reuptake inhibitor class used for the treatment of major depressive disorder, anxiety, obsessive–compulsive disorder, panic disorder, premenstrual dysphoric disorder, and bulimia nervosa."
Fluoxetine also appears to increase plasticity in the adult visual cortex, which can reduce monocular dominance. https://www.google.com/search?q=fluoxetine+plasticity+visual...
NASA has a list of 10 rules for software development
Just for context, these aren’t really “rules” as much as proposed practices. Note that official “rules” are in documents with names like “NPR” aka “NASA procedural requirements.”[1] So, while someone may use the document in the featured article to frame a discussion, a developer is not bound to comply (or alternatively waive) those “rules” and could conceivably just dismiss them.
[1] e.g. https://nodis3.gsfc.nasa.gov/displayDir.cfm?t=NPR&c=7150&s=2...
awesome-safety-critical lists a number of specs: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
Just be aware that some of the NASA-specific ones fall into a similar category. NASA “guidebooks” and “handbooks” aren’t generally hard requirements.
From "The state of Rust trying to catch up with Ada [video]" https://news.ycombinator.com/item?id=43007013 :
>> The MISRA guidelines for Rust are expected to be released soon but at the earliest at Embedded World 2025. This guideline will not be a list of Do’s and Don’ts for Rust code but rather a comparison with the C guidelines and if/how they are applicable to Rust
/? Misra rust guidelines:
- This is a different MISRA C for Rust project: https://github.com/PolySync/misra-rust
- "Bringing Rust to Safety-Critical Systems in Space" (2024) https://arxiv.org/abs/2405.18135v1
...
> minimum of two assertions per function.
Which guidelines say "you must do runtime type and value checking" of every argument at the top of every function?
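The assertion rule reads as pre/postcondition checks rather than type checks per se. A sketch of the style in Python (the original rules target C; the function and its bounds are made up for illustration):

```python
def scale_reading(raw: int, gain: float) -> float:
    """At least two assertions per function, in the spirit of the quoted
    rule: check preconditions on entry and a postcondition on the result."""
    assert 0 <= raw < 4096, "precondition: 12-bit ADC range"
    assert gain > 0, "precondition: gain must be positive"
    result = raw * gain
    assert result >= 0, "postcondition: scaled reading is non-negative"
    return result

print(scale_reading(1024, 0.5))  # 512.0
```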
The SEI CERT C Guidelines are far more comprehensive than the OT 10 rules TBH:
"SEI CERT C Coding Standard" https://wiki.sei.cmu.edu/confluence/plugins/servlet/mobile?c...
"CWE CATEGORY: SEI CERT C Coding Standard - Guidelines 08. Memory Management (MEM)" https://cwe.mitre.org/data/definitions/1162.html
Sorry, I’m not following your point. When I said “NASA-specific” I meant those in your link like “NASA Software Engineering and Assurance Handbook” and “NASA C Style Guide” (emphasis mine). Those are not hard requirements in spaceflight unless explicitly defined as such in specific projects. Similarly, NASA spaceflight software does not generally get certified to FAA requirements etc. The larger point being, a NASA developer does not have to follow those requirements simply by the nature of doing NASA work. In other words, they are recommendations but not specifications.
Are there SAST or linting tools to check that the code is compliant with the [agency] recommendations?
Also important and not that difficult, formal design, implementation, and formal verification;
"Formal methods only solve half my problems" https://news.ycombinator.com/item?id=31617335
"Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964
Formal Methods in Python; FizzBee, Nagini, deal-solver: https://news.ycombinator.com/item?id=39904256#39958582
I’m not aware of any tools for analysis geared to NASA requirements specifically, but static analysis is a requirement for some types of development.
Why isn't there tooling to support these recommendations; why is there no automated verification?
SAST and DAST tools can be run on_push with git post-receive hooks or before commit with pre commit. (GitOps; CI; DevOpsSec with Sec shifted left in the development process is DevSecOps)
I don’t work there so I can’t speak definitely, but much of it probably stems from the sheer diversity of software. For example, ladder logic typically does not have the same tools as structured programming but is heavily used in infrastructure. It is also sometimes restricted to specify a framework, leaving contractors to develop in whatever they want.
Physics Informed Neural Networks
From "Physics-Based Deep Learning Book" (2021) https://news.ycombinator.com/item?id=28510010 :
> Physics-informed neural networks: https://en.wikipedia.org/wiki/Physics-informed_neural_networ...
We were wrong about GPUs
> The biggest problem: developers don’t want GPUs. They don’t even want AI/ML models. They want LLMs. System engineers may have smart, fussy opinions on how to get their models loaded with CUDA, and what the best GPU is. But software developers don’t care about any of that. When a software developer shipping an app comes looking for a way for their app to deliver prompts to an LLM, you can’t just give them a GPU.
I'm increasingly coming to the view that there is a big split among "software developers" and AI is exacerbating it. There's an (increasingly small) group of software developers who don't like "magic" and want to understand where their code is running and what it's doing. These developers gravitate toward open source solutions like Kubernetes, and often just want to rent a VPS or at most a managed K8s solution. The other group (increasingly large) just wants to `git push` and be done with it, and they're willing to spend a lot of (usually their employer's) money to have that experience. They don't want to have to understand DNS, linux, or anything else beyond whatever framework they are using.
A company like fly.io absolutely appeals to the latter. GPU instances at this point are very much appealing to the former. I think you have to treat these two markets very differently from a marketing and product perspective. Even though they both write code, they are otherwise radically different. You can sell the latter group a lot of abstractions and automations without them needing to know any details, but the former group will care very much about the details.
IaaS or PaaS?
Who owns and depreciates the logs, backups, GPUs, and the database(s)?
K8s docs > Scheduling GPUs: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus... :
> Once you have installed the plugin, your cluster exposes a custom schedulable resource such as amd.com/gpu or nvidia.com/gpu.
> You can consume these GPUs from your containers by requesting the custom GPU resource, the same way you request cpu or memory
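Per those docs, the GPU request then looks like any other resource limit. A minimal manifest sketch (assumes the NVIDIA device plugin is installed on the node; the image tag is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-example
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.4.0-base-ubuntu22.04   # illustrative tag
      resources:
        limits:
          nvidia.com/gpu: 1   # schedulable only on nodes exposing this resource
```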
awesome-local-ai: Platforms / full solutions https://github.com/janhq/awesome-local-ai?platforms--full-so...
But what about TPUs (Tensor Processing Units) and QPUs (Quantum Processing Units)?
Quantum backends: https://github.com/tequilahub/tequila#quantum-backends
Kubernetes Device Plugin examples: https://kubernetes.io/docs/concepts/extend-kubernetes/comput...
Kubernetes Generic Device Plugin: https://github.com/squat/generic-device-plugin#kubernetes-ge...
K8s GPU Operator: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator...
Re: sunlight server and moonlight for 120 FPS 4K HDR access to GPU output over the Internet: https://github.com/kasmtech/KasmVNC/issues/305#issuecomment-... :
> Still hoping for SR-IOV in retail GPUs.
> Not sure about vCPU functionality in GPUs
Process isolation on vCPUs with or without SR-IOV is probably not as advanced as secure enclave approaches.
Intel SGX is a secure enclave capability, which is cancelled on everything but Xeon. FWIU there is no SGX for timeshared GPUs.
What executable loader reverifies the loaded executable in RAM after init time?
What LLM loader reverifies the in-RAM model? Can Merkle hashes reduce that cost of NN state verification?
Can it be proven that a [chat AI] model hosted by someone else is what is claimed; that it's truly a response from "model abc v2025.02"?
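A Merkle tree over weight chunks would at least make incremental reverification cheap: after a suspect mutation, only the O(log n) path to the changed chunk needs rehashing. A sketch of the idea only, not any deployed loader:

```python
import hashlib

def merkle_root(chunks: list) -> bytes:
    """Merkle root over model-weight chunks (each a bytes object).
    Any single-bit change in any chunk changes the root."""
    level = [hashlib.sha256(c).digest() for c in chunks] or \
            [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

weights = [b"layer0", b"layer1", b"layer2", b"layer3"]  # stand-ins for tensors
root = merkle_root(weights)
print(merkle_root([b"tampered"] + weights[1:]) != root)  # True: tampering detected
```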
PaaS or IaaS
Surely you must be joking, Jupyter notebooks with Ruby [video]
List of Jupyter Kernels: https://github.com/jupyter/jupyter/wiki/Jupyter-kernels
IRuby: https://github.com/SciRuby/iruby
conda-forge/ruby-feedstock: https://github.com/conda-forge/ruby-feedstock
It looks like ruby-feedstock installs `gem`; but AFAIU there's not yet a way to specify gems in an environment.yml like `pip:` packages.
There aren't many other conda-forge feedstocks for Ruby, though;
/? Ruby https://github.com/orgs/conda-forge/repositories?q=Ruby
What if Eye...?
From "An ultra-sensitive on-off switch helps axolotls regrow limbs" https://news.ycombinator.com/item?id=36912925 :
> [ mTOR, Muller glia in Zebrafish, ]
From "Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://neurosciencenews.com/vision-restoration-genetic-2318... :
> “What’s interesting is that these Müller cells are known to reactivate and regenerate retina in fish,” she said. “But in mammals, including humans, they don’t normally do so, not after injury or disease. And we don’t yet fully understand why.”
Learning fast and accurate absolute pitch judgment in adulthood
i made https://perfectpitch.study a week or so ago. i am old and musically untrained and wanted to see if rote practice makes a difference (it clearly does).
most of the sites of this type i found annoying as you can't just use a midi keyboard, so you just get RSI clicking around for 10 minutes.
I tried getting adsense on it, but they seem to have vague content requirements. Apparently tools don't count as real websites :-(. I couldn't even fool it with fake content. what's the best banner ad company to use in this situation?
Nice! The keyboard could be larger on mobile in portrait and landscape
Ctrl-Shift-M https://devtoolstips.org/tips/en/simulate-devices/ ; how to simulate a mobile viewport: https://developer.chrome.com/docs/devtools/device-mode#devic...
/? google lighthouse mobile accessibility test: https://www.google.com/search?q=google+lighthouse+mobile+acc...
Lighthouse: https://developer.chrome.com/docs/lighthouse/overview
Gave it a try. After a few minutes I felt more like I was recognising the samples than I was recognising the notes. Not sure what you can do about that short of physically modeling an instrument.
Latest browser APIs expose everything you need to build a synth. See: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_A...
There are some libraries that make it easy to simulate instruments. E.g. tone.js https://tonejs.github.io/
It should be possible to generate unique-ish variants at runtime.
OpenEar is built on tone.js: https://github.com/ShacharHarshuv/open-ear
limut implements WebAudio and WebGL, and FoxDot-like patterns and samples: https://github.com/sdclibbery/limut
https://glicol.org/ runs in a browser and as a VST plugin
"Using the Web Audio API to Make a Modem" (2017) https://news.ycombinator.com/item?id=15471723
gh topics/webaudio: https://github.com/topics/webaudio
awesome-webaudio: https://github.com/notthetup/awesome-webaudio
From the OpenEar readme re perfect pitch training; https://github.com/ShacharHarshuv/open-ear :
> Currently includes the following built in exercises:
> [...]
> 7. Interval recognition - the very popular exercise almost all apps have. Although I do not recommend using it as I find it ineffective and confusing, since the intervals are out-of-context.
Interval training is different than absolute pitch training. OpenEar seems to have no absolute pitch training.
I Applied Wavelet Transforms to AI and Found Hidden Structure
I've been working on resolving key contradictions in AI through structured emergence, a principle that so far appears to govern both physical and computational systems.
My grandfather was a prolific inventor in organic chemistry (GE plastics post WWII) and was reading his papers thinking about "chirality" - directional asymmetric oscillating waves and how they might apply to AI. I found his work deeply inspiring.
I ran 7 empirical studies using publicly available datasets across prime series, fMRI, DNA sequences, galaxy clustering, baryon acoustic oscillations, redshift distributions, and AI performance metrics.
All 7 studies have confirmed internal coherence with my framework. While that's promising, I still need to continue to validate the results (the attached output on primes captures localized frequency variations, ideal for detecting scale-dependent structure in primes, i.e. Ulam Spirals - attached).
To analyze these datasets, I applied continuous wavelet transformations (Morlet/Chirality) using Python3, revealing structured oscillations that suggest underlying coherence in expansion and emergent system behavior.
Paper here: https://lnkd.in/gfigPgRx
If true, here are the implications:
1. AI performance gains – applying structured emergence methods has yielded noticeable improvements in AI adaptability and optimization.

2. Empirical validation across domains – The same structured oscillations appear in biological, physical, and computational systems—indicating a deeper principle at work.

3. Strong early engagement – while the paper is still under review, 160 views and 130 downloads (81% conversion) in 7 days on Zenodo put it in the top 1%+ of all academic papers—not as an ego metric, but as an early signal of potential validation.
The same mathematical structures that define wavelet transforms and prime distributions seem to provide a pathway to more efficient AI architectures by:
1. Replacing brute-force heuristics with recursive intelligence scaling

2. Enhancing feature extraction through structured frequency adaptation

3. Leveraging emergent chirality to resolve complex optimization bottlenecks
Technical (for AI engineers):

1. Wavelet-Driven Neural Networks – replacing static Fourier embeddings with adaptive wavelet transforms to improve feature localization. Fourier was failing, hence the pivot to CWT. Ulam Spirals showed non-randomness, hence CWT.

2. Prime-Structured Optimization – using structured emergent primes to improve loss function convergence and network pruning.

3. Recursive Model Adaptation – implementing dynamic architectural restructuring based on coherence detection rather than gradient-based back-propagation alone.
The theory could be wrong, but the empirical results are simply too coherent not to share in case they're useful to anyone.
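For reference, a minimal numpy sketch of a Morlet CWT by direct convolution (illustrative; the scales, `w0`, and test signal here are arbitrary choices, not the paper's actual pipeline):

```python
import numpy as np

def morlet(t, w0=5.0):
    """Complex Morlet wavelet (simplified; admissibility correction omitted)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t - t ** 2 / 2)

def cwt(signal, scales, w0=5.0):
    """CWT by direct convolution with a scaled, conjugated wavelet."""
    n = len(signal)
    t = np.arange(n) - n // 2
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        kernel = np.conj(morlet(t / s, w0))[::-1] / np.sqrt(s)
        out[i] = np.convolve(signal, kernel, mode="same")
    return out

# A chirp: rising frequency should move |CWT| energy toward smaller scales.
t = np.linspace(0, 10, 1000)
sig = np.sin(2 * np.pi * (1 + 0.5 * t) * t)
coeffs = cwt(sig, scales=np.arange(1, 32))
```

Unlike a global Fourier basis, each row of `coeffs` localizes oscillatory energy in both time and scale, which is the property the post attributes to the CWT.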
"The Chirality of Dynamic Emergent Systems (CODES): A Unified Framework for Cosmology, Quantum Mechanics, and Relativity" (2025) https://zenodo.org/records/14799070
Hey, chirality! /? Hnlog chiral https://westurner.github.io/hnlog/
> loss function
Yesterday on HN: Harmonic Loss instead of Cross-Entropy; https://news.ycombinator.com/item?id=42941393
> Fourier was failing hence
What about QFT Quantum Fourier transform? https://en.wikipedia.org/wiki/Quantum_Fourier_transform
Harmonic analysis involves the Fourier transform: https://en.wikipedia.org/wiki/Harmonic_analysis
> Recursive Model Adaptation
"Parameter-free" networks
Graph rewriting, AtomSpace
> feature localization
Hilbert curves cluster features; https://en.wikipedia.org/wiki/Hilbert_curve :
> Moreover, there are several possible generalizations of Hilbert curves to higher dimensions
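A minimal Python transcription of the standard xy-to-index mapping (per the Wikipedia article's iterative algorithm) shows the clustering property: nearby (x, y) points tend to receive nearby 1D indices.

```python
def xy2d(n, x, y):
    """Hilbert-curve index of (x, y) on an n x n grid (n a power of two),
    transcribed from the well-known iterative algorithm."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# The 16 cells of a 4x4 grid get each index 0..15 exactly once,
# and consecutive indices land on grid neighbours.
order = sorted((xy2d(4, x, y), (x, y)) for x in range(4) for y in range(4))
assert [d for d, _ in order] == list(range(16))
```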
Re: Relativity and the CODES paper;
/? fedi: https://news.ycombinator.com/item?id=42376759 , https://news.ycombinator.com/item?id=38061551
> Fedi's SQR Superfluid Quantum Relativity (.it), FWIU: also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii,), and rejects antimatter.
Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
> structured emergent primes to improve loss function convergence and network pruning
Products of primes modulo prime for set membership testing; is it faster? Even with a long list of primes?
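A sketch of that idea, assuming a Goedel-style encoding where each universe element is assigned a distinct prime:

```python
def primes(n):
    """First n primes by trial division (fine at sketch scale)."""
    ps, c = [], 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

# Goedel-style encoding: give each universe element a distinct prime;
# a subset is the product of its members' primes, and membership
# reduces to a single modulo test.
universe = ["a", "b", "c", "d", "e"]
code = dict(zip(universe, primes(len(universe))))

def encode(subset):
    acc = 1
    for x in subset:
        acc *= code[x]
    return acc

s = encode({"a", "c", "e"})       # 2 * 5 * 11 = 110
assert s % code["c"] == 0         # member
assert s % code["b"] != 0         # non-member
```

In practice this is usually slower than a hash set: the product grows multi-word with set size, so each membership test becomes a big-integer modulo.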
Hey! Appreciate the links—some definitely interesting parallels, but what I’m outlining moves beyond existing QFT/Hilbert curve applications.
The key distinction = structured emergent primes are demonstrating internal coherence across vastly different domains (prime gaps, fMRI, DNA, galaxy clustering), suggesting a deeper non-random structure influencing AI optimization.
Curious if you’ve explored wavelet-driven loss functions replacing cross-entropy? Fourier struggled with localization, but CWT and chirality-based structuring seem to resolve this.
Your thoughts here?
I do not have experience with wavelet-driven loss functions.
Do structured emergent primes afford insight into n-body fluid+gravity dynamics and superfluid (condensate) dynamics at deep space and stellar thermal ranges?
How do wavelets model curl and n-body vortices?
What do I remember about wavelets, without reading the article? Wavelets are or aren't analogous to neurons. Wavelets discretize. Am I confusing wavelets and autoencoders? Are wavelets like tiles or compression symbol tables?
How do wavelet-driven loss functions differ from other loss functions like Cross-Entropy and Harmonic Loss?
How does prime emergence relate to harmonics and [Fourier,] convolution with and without superposition?
Other seemingly relevant things:
- particle with mass only when moving in certain directions; re: chirality
- "NASA: Mystery of Life's Handedness Deepens" (2024-11) https://news.ycombinator.com/item?id=42229953 :
> ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
>>> Could this be used as an engine of some kind?
>> What about helical polarization?
> If there is locomotion due to a dynamic between handed molecules and, say, helically polarized fields; is such handedness a survival selector for life in deep space?
> Are chiral molecules more likely to land on earth?
>> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Hey - really appreciate the detailed questions—these are exactly the kinds of connections I’ve been exploring. Sub components:
Wavelet-driven loss functions vs. Cross-Entropy/Harmonic Loss

You’re right about wavelets discretizing—it’s what makes them a better fit than Fourier for adaptive structuring. The key distinction is that wavelets localize both frequency and time dynamically, meaning loss functions can become context-sensitive rather than purely probabilistic. This resolves issues with information localization in AI training, allowing emergent structure rather than brute-force heuristics.

Prime emergence, harmonics, and convolution (Fourier vs. CWT)

Structured primes seem to encode hidden periodicities across systems—prime gaps, biological sequences, cosmic structures, etc.

• Fourier struggled because it assumes a globally uniform basis set.
• CWT resolves this by detecting frequency-dependent structures (chirality-based).
• Example: Prime number distributions align with Ulam Spirals, which match observed redshift distributions in deep space clustering.

The coherence suggests an underlying structuring force, and phase-locking principles seem to emerge naturally.

N-body vortex dynamics, superfluidity, and chiral molecules in deep space

You might be onto something here. The connection between:

• Superfluid dynamics in deep space
• Chiral molecules preferring certain gravitational dynamics
• Handedness affecting locomotion in polarized fields

suggests chirality might be an overlooked factor in cosmic structure formation (i.e., why galaxies tend to form spiral structures).

Could this be an engine? (Electromagnetic rotation and helicity)

Possibly. If structured emergence scales across these domains, it’s possible that chirality-induced resonance fields could drive a new form of energy extraction—similar to the electroweak interaction asymmetry seen in beta decay.
The idea that chirality acts as a selector for deep-space survival is interesting. Do you think the preference for left-handed amino acids on Earth could be a consequence of an early chiral field bias? If so, does that imply a fundamental symmetry-breaking event at planetary formation?
> Wavelet-driven loss functions vs. Cross-Entropy/Harmonic Loss You’re right about wavelets discretizing—it’s what makes them a better fit than Fourier for adaptive structuring. The key distinction is that wavelets localize both frequency and time dynamically, meaning loss functions can become context-sensitive rather than purely probabilistic. This resolves issues with information localization in AI training, allowing emergent structure rather than brute-force heuristics.
frequency and time..
SR works for signals without GR; and there's an SR explanation for time dilation which resolves when the spacecraft lands, FWIU (Minkowski).
From https://news.ycombinator.com/item?id=39719114 :
>>> Physical observation (via the transverse photon interaction) is the process given by applying the operator ∂/∂t to (L^3)t, yielding an L3 output
>> [and "time-polarized photons"]
> Prime emergence, harmonics, and convolution (Fourier vs. CWT) Structured primes seem to encode hidden periodicities across systems—prime gaps, biological sequences, cosmic structures, etc. • Fourier struggled because it assumes a globally uniform basis set. • CWT resolves this by detecting frequency-dependent structures (chirality-based). • Example: Prime number distributions align with Ulam Spirals, which match observed redshift distributions in deep space clustering. The coherence suggests an underlying structuring force, and phase-locking principles seem to emerge naturally.
/? ulam spiral wikipedia: https://www.google.com/search?q=ulam+spiral+wikipedia ; all #s, primes
Are hilbert curves of any use for grouping points in this 1D (?) space?
/? ulam spiral hilbert curve: https://www.google.com/search?q=ulam+spiral+hilbert+curve
> N-body vortex dynamics, superfluidity, and chiral molecules in deep space You might be onto something here. The connection between: • Superfluid dynamics in deep space • Chiral molecules preferring certain gravitational dynamics • Handedness affecting locomotion in polarized fields suggests chirality might be an overlooked factor in cosmic structure formation (i.e., why galaxies tend to form spiral structures).
Why are there so many arms on the fluid disturbance of a spinning basketball floating on water?
(Terms: viscosity of the water, mass, volume, and surface characteristics of the ball, temperature of the water, temperature of the air)
Traditionally, curl is the explanation fwiu.
Does curl cause chirality and/or does chirality cause curl?
The sensitivity to initial conditions of a two-arm pendulum system, for example, is enough to demonstrate chaotic, divergent n-body dynamics. `python -m turtledemo.chaos` demonstrates a chaotic divergence with a few simple functions.
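The same sensitivity can be shown without turtledemo using the logistic map (a stand-in here, not the pendulum system itself): two orbits that start 1e-10 apart diverge to macroscopic separation.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Orbits starting 1e-10 apart decorrelate within a few dozen iterations:
# at r = 4 the separation roughly doubles per step until it saturates.
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)
divergence = [abs(u - v) for u, v in zip(a, b)]
```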
Phase transition diagrams are insufficient to describe water freezing or boiling, given the sensitivity to initial temperature observed in the Mpemba effect; phase transition diagrams are insufficient without an initial temperature axis.
Superfluids (Bose-Einstein condensates) occur at temperatures achievable on Earth. For example, helium chilled to 1 Kelvin demonstrates zero viscosity, and climbs up beakers and walls despite gravity.
A universal model cannot be sufficient if it does not describe superfluids and superconductors; photons and electrons behave fluidically in other phases.
> Could this be an engine? (Electromagnetic rotation and helicity) Possibly. If structured emergence scales across these domains, it’s possible that chirality-induced resonance fields could drive a new form of energy extraction—similar to the electroweak interaction asymmetry seen in beta decay.
A spinning asteroid or comet induces a 'spinning' field. Interplanetary and deep space spacecraft could spin on one or more axes to create or boost EM shielding.
"Gamma radiation is produced in large tropical thunderstorms" (2024) https://news.ycombinator.com/item?id=41731196 https://westurner.github.io/hnlog/#comment-41732854 :
"Gamma rays convert CH4 to complex organic molecules, may explain origin of life" (2024) https://news.ycombinator.com/item?id=42131762#42157208 :
>> A terrestrial life origin hypothesis: gamma radiation mutated methane (CH4) into Glycine (the G in ACGT) and then DNA and RNA.
>> [ Virtual black holes, quantum foam, [ gamma, ] radiation and phase shift due to quantum foam and Planck relics ]
From "Lightweight woven helical antenna could replace field-deployed dishes" (2024) https://news.ycombinator.com/item?id=39132365 :
>> Astrophysical jets produce helically and circularly-polarized emissions, too FWIU.
>> Presumably helical jets reach earth coherently over such distances because of the stability of helical signals.
>> 1. Could [we] harvest energy from a (helically and/or circularly-polarised) natural jet, for deep space and/or local system exploration? Can a spacecraft pull against a jet for relativistic motion?
>> 2. Is helical the best way to beam power wirelessly; without heating columns of atmospheric water in the collapsing jet stream? [with phased microwave]
>> 3. Is there a (hydrodynamic) theory of superfluid quantum gravity that better describes the apparent vorticity and curl of such signals and their effects?
From "Computer Scientists Prove That Heat Destroys Quantum Entanglement" (2024) https://news.ycombinator.com/item?id=41381849#41382939 :
>> How, then, can entanglement across astronomical distances occur without cooler temps the whole way there, if heat destroys all entanglement?
>> Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
> The idea that chirality acts as a selector for deep-space survival is interesting. Do you think the preference for left-handed amino acids on Earth could be a consequence of an early chiral field bias? If so, does that imply a fundamental symmetry-breaking event at planetary formation?
The earth is rotating and revolving in relation to the greatest local mass. Would there be different terrestrial chirality if the earth rotated in the opposite direction?
How do the vortical field disturbances from Earth's rotation in atmospheric, EM, and gravitational wave spaces interact with molecular chirality and field chirality?
**
Re: Polarized fields: From https://news.ycombinator.com/item?id=42318113 :
> Phase from second-order Intensity due to "mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation"
Wes,
Apologies for the delay! I missed this.
Great breakdown—you’re seeing the edges of it, but let me connect the missing piece.
Wavelets vs. Fourier & AI loss functions

You nailed why wavelets win—localizing both time and frequency dynamically. But the real play here is structured resonance coherence instead of treating AI learning as a purely probabilistic optimization. Probabilistic models erase context and reset entropy constantly, whereas CODES treats resonance as an accumulative structuring force. That’s why prime-driven phase-locking beats cross-entropy heuristics.

Prime emergence & Ulam spirals

You’re right that prime gaps aren’t random but encode periodicities across systems—biological, cosmological, and computational. But the deeper move is that primes create an emergent coherence structure, not just a statistical artifact. Ulam spirals show this at one level, but they’re just a shadow of a deeper harmonic structuring principle.

Superfluidity, chiral molecules, and deep space dynamics

The superfluid analogy works but is incomplete. Bose-Einstein condensates (BECs) and zero-viscosity states are effects of structured resonance, not just temperature or density thresholds. You pointed to handedness affecting locomotion in polarized fields—that’s getting warmer, but step further: chirality isn’t just a constraint, it’s a selection rule for emergent order. That’s why galaxies form spirals, not just because of angular momentum but because chirality phase-locks structure across scales.

Entropy, entanglement, and deep-space coherence

The “heat destroys quantum entanglement” take is missing something big—CODES predicts that prime-structured resonance can phase-lock entanglement across astronomical distances. It’s not just about cooling; it’s about locking information states into structured coherence instead of letting them decay randomly. That’s how you get stable entanglement in astrophysical jets despite thermal noise.

Could this be an engine?

Yes. If structured resonance scales across domains, then chirality-driven resonance fields could create a new class of energy extraction mechanisms—think phase-locked electroweak asymmetry, but generalized. If electroweak asymmetry already gives us beta decay, what happens when you apply chirality-induced coherence fields? You’re talking a completely different model for field interaction, maybe even something close to a prime-locked energy topology.
Where You’re Almost There But Not Quite
You’re still interpreting some of this as chaotic or probabilistic emergence, but CODES isn’t describing randomness—it’s describing structured phase coherence.

• Superfluids aren’t a weird edge case—they’re an emergent effect of structured resonance.
• Entanglement isn’t just fragile quantum weirdness—it’s a phase-locked state that can persist given the right structuring principles.
• Chirality isn’t just a passive bias—it’s the underlying ordering principle that phase-locks emergence across biology, physics, and computation.
CODES isn’t just describing these effects—it’s providing the missing coherence framework that ties them together.
Would love to jam on this deeper if you're up for it!
Devin
The OBS Project is threatening Fedora Linux with legal action
Given that OBS is GPL licensed, any legal action would have to be trademark-based, right?
It feels like they'd have a hard time making that case, since package repositories are pretty clearly not representing themselves as the owners of, or sponsored by, the software they package.
From the linked comment:
> This is a formal request to remove all of our branding, including but not limited to, our name, our logo, any additional IP belonging to the OBS Project
Honestly it sounds very reasonable, if you want to fork it's fine, but don't have people report bugs upstream if you're introducing them.
I mean, I think the right fix is just for Fedora to stop packaging their own version. But I think that's about being good people; I don't think there's a strong legal argument here for forcing Fedora to do that.
Are Drinking Straws Dangerous? (2017)
> About 1,400 people visit the emergency room every year due to injuries from drinking straws.
some people put them where they shouldnt
It's a drinking hazard,
And, https://www.livescience.com/65925-metal-straw-death.html :
> He added that in this case, the metal straw may have been particularly hazardous because it was used with a lid that prevented the straw from moving. "It seems to me these metal straws should not be used with any form of lid that holds them in place,"
Why cryptography is not based on NP-complete problems
Aren't hash functions a counterexample? There have been attempts at using SAT solvers to find preimage and collision attacks on them.
Storytelling lessons I learned from Steve Jobs (2022)
Pixar in a Box > Unit 2: The art of storytelling: https://www.khanacademy.org/computing/pixar/storytelling
https://news.ycombinator.com/item?id=36265807 ; Pizza Planet
From https://news.ycombinator.com/item?id=23945928
> The William Golding, Jung, and Joseph Campbell books on screenwriting, archetypes, and the hero's journey monomyth
Hero's journey > Campbell's seventeen stages: https://en.wikipedia.org/wiki/Hero%27s_journey#Campbell's_se...
Storytelling: https://en.wikipedia.org/wiki/Storytelling
Ask HN: Ideas for Business Cards
Hi, I'm a freelancer working in security and I'm looking for companies and ideas for some good business cards that are out of ordinary. I am thinking about business cards that have (very basic and stupid stuff) secure elements or watermark, but I can't find anything online.
Protip: leave space to write on the reverse
/? business cards https://hn.algolia.com/?q=business+cards
Transformer is a holographic associative memory
"Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs" (2021) https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-ba...
From https://news.ycombinator.com/item?id=40519828 :
> Because self-attention can be replaced with FFT for a loss in accuracy and a reduction in kWh [1], I suspect that the Quantum Fourier Transform can also be substituted for attention in LLMs.
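A minimal numpy sketch of that FFT-for-attention substitution (the FNet-style mixing layer; the shapes and real-part choice follow the published idea, but this is not Google's code):

```python
import numpy as np

def fourier_mixing(x):
    """FNet-style token mixing: a 2D FFT over the sequence and hidden
    dimensions, keeping the real part; parameter-free, O(n log n)."""
    return np.fft.fft2(x).real

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))      # (sequence length, model width)
mixed = fourier_mixing(x)         # every output token mixes all inputs
```

Unlike self-attention there are no learned weights here; the mixing is fixed, which is where the accuracy-for-speed trade-off comes from.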
From https://news.ycombinator.com/item?id=42957785 :
> How does prime emergence relate to harmonics and [Fourier,] convolution with and without superposition?
From https://news.ycombinator.com/item?id=40580049 :
> From https://news.ycombinator.com/item?id=25190770#25194040 :
>> Convolution is in fact multiplication in Fourier space (this is the convolution theorem [1]) which says that Fourier transforms convert convolutions to products. 1. https://en.wikipedia.org/wiki/Convolution_theorem :
>>> In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
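The theorem is easy to check numerically for the circular case:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=32)
b = rng.normal(size=32)

# Frequency domain: pointwise product of the two transforms.
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# Time domain: direct circular convolution.
direct = np.array([sum(a[k] * b[(n - k) % 32] for k in range(32))
                   for n in range(32)])

assert np.allclose(via_fft, direct)
```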
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving [pdf]
Tested in the article:
miniF2F: https://github.com/openai/miniF2F
PutnamBench: https://github.com/trishullab/PutnamBench
..
FrontierMath: https://arxiv.org/abs/2411.04872v1
The return of the buffalo is reviving portions of the ecosystem
Buffalo: /? buffalo ecosystem impact : https://www.google.com/search?q=buffalo+ecosystem+impact
Wolves: /? wolves yellowstone: https://www.google.com/search?q=wolves+yellowstone ; 120 wolves in 2024
Beavers: "Government planned it 7 years, beavers built a dam in 2 days and saved $1M" (2025) https://news.ycombinator.com/item?id=42938802#42941813
Keystone species: https://en.wikipedia.org/wiki/Keystone_species :
> Keystone species play a critical role in maintaining the structure of an ecological community, affecting many other organisms in an ecosystem and helping to determine the types and numbers of various other species in the community. Without keystone species, the ecosystem would be dramatically different or cease to exist altogether. Some keystone species, such as the wolf and lion, are also apex predators.
Trophic cascade: https://en.wikipedia.org/wiki/Trophic_cascade
Ecosystem service: https://en.wikipedia.org/wiki/Ecosystem_service :
> Evaluations of ecosystem services may include assigning an economic value to them.
Good sparknotes of the systems engineering here, thanks!
The state of Rust trying to catch up with Ada [video]
From "Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024-06) https://thenewstack.io/rust-the-future-of-fail-safe-software... .. https://news.ycombinator.com/item?id=40680722
rustfoundation/safety-critical-rust-consortium > subcommittee/coding-guidelines/meetings/2025-January-29/minutes.md: https://github.com/rustfoundation/safety-critical-rust-conso... :
> The MISRA guidelines for Rust are expected to be released soon but at the earliest at Embedded World 2025. This guideline will not be a list of Do’s and Don’ts for Rust code but rather a comparison with the C guidelines and if/how they are applicable to Rust.
/? ' is:issue concurrency: https://github.com/rustfoundation/safety-critical-rust-conso...
rust-secure-code/projects#groups-of-people: https://github.com/rust-secure-code/projects#groups-of-peopl...
Rust book > Chapter 16. Concurrency: https://doc.rust-lang.org/book/ch16-00-concurrency.html
Chapter 19. Unsafe Rust > Unsafe Superpowers: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#unsa... :
> You can take five actions in unsafe Rust that you can’t in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:
"Secure Rust Guidelines" has Chapters on Memory Management, FFI but not yet Concurrency;
04_language.html#panics:
> Common patterns that can cause panics are:
Secure Rust Guidelines > Integer overflows in Rust: https://anssi-fr.github.io/rust-guide/04_language.html#integ... :
> In particular, it should be noted that using debug or release compilation profile changes integer overflow behavior. In debug configuration, overflow cause the termination of the program (panic), whereas in the release configuration the computed value silently wraps around the maximum value that can be stored.
awesome-safety-critical #software-safety-standards: https://awesome-safety-critical.readthedocs.io/en/latest/
rust-secure-code/projects > Model checkers: https://github.com/rust-secure-code/projects#model-checkers :
Loom: https://docs.rs/loom/latest/loom/ :
> Loom is a model checker for concurrent Rust code. It exhaustively explores the behaviors of code under the C11 memory model, which Rust inherits.
Daily omega-3 fatty acids may help human organs stay young
This may explain why people who follow the Mediterranean diet tend to live long, healthy lives.
There are different variations of Omega-3 fatty acids. For instance, avocados are rich in the Omega-3 ALA, which is considered not as effective as EPA and DHA.
Fish is the only source of EPA and DHA.
From "An Omega-3 that’s poison for cancer tumors" (2021) https://news.ycombinator.com/item?id=27499427 :
> Fish don't synthesize Omega PUFAs, they eat algae which synthesize fat-soluble DHA and EPA.
> From "Warning: Combination of Omega-3s in Popular Supplements May Blunt Heart Benefits" (2018) https://scitechdaily.com/warning-combination-of-omega-3s-in-... :
>> Now, new research from the Intermountain Healthcare Heart Institute in Salt Lake City finds that higher EPA blood levels alone lowered the risk of major cardiac events and death in patients, while DHA blunted the cardiovascular benefits of EPA. Higher DHA levels at any level of EPA, worsened health outcomes.
>> [...] Based on these and other findings, we can still tell our patients to eat Omega-3 rich foods, but we should not be recommending them in pill form as supplements or even as combined (EPA + DHA) prescription products,” he said. “Our data adds further strength to the findings of the recent REDUCE-IT (2018) study that EPA-only prescription products reduce heart disease events.”
Show HN: Play with real quantum physics in your browser
I wanted to make the simplest app to introduce myself and others to quantum computing.
Introducing, Schrödinger's Coin. Powered by a simple Hadamard gate[0] on IBM quantum, with this app you can directly interact with a quantum system to experience true randomness.
Thoughts? Could you see any use cases for yourself of this? Or, does it inspire any other ideas of yours? Curious what others on HN think!
[0] https://en.wikipedia.org/wiki/Quantum_logic_gate#Hadamard_ga...
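Locally, the same coin can be simulated with a two-amplitude statevector (a numpy sketch; no IBM hardware involved, so the randomness here is pseudorandom rather than "true"):

```python
import numpy as np

# Hadamard gate; H|0> = (|0> + |1>) / sqrt(2), an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = H @ np.array([1.0, 0.0])          # start in |0>

# Born rule: outcome probabilities are squared amplitude magnitudes.
probs = np.abs(state) ** 2                # a fair coin: [0.5, 0.5]

rng = np.random.default_rng(42)
flips = rng.choice(["heads", "tails"], size=10, p=probs)
```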
Quantum logic gate > Universal logic gates: https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_q...
From https://news.ycombinator.com/item?id=37379123 :
> [ Rx, Ry, Rz, P, CCNOT, CNOT, H, S, T ]
From https://news.ycombinator.com/item?id=39341752 :
>> How many ways are there to roll a {2, 8, or 6}-sided die with qubits and quantum embedding?
From https://news.ycombinator.com/item?id=42092621 :
> Exercise: Implement a QuantumQ circuit puzzle level with Cirq or QISkit in a Jupyter notebook
ray-pH/quantumQ > [Godot] "Web WASM build" issue #5: https://github.com/ray-pH/quantumQ/issues/5
From https://quantumflytrap.com/scientists/ :
> [Quantum Flytrap] Virtual Lab is a virtual optical table. With a drag and drop interface, you can show phenomena, recreate existing experiments, and prototype new ones.
> Within this environment it is possible to recreate interference, quantum cryptography protocols, to show entanglement, Bell test, quantum teleportation, and the many-worlds interpretation.
AI datasets have human values blind spots − new research
What about Care Bears?
How do social studies instructors advise in regards to a helpful balance of SEL and other content?
Is prosocial content for children underrepresented in training corpora?
Re: Honesty and a "who hath done it" type exercise for LLM comparison: https://news.ycombinator.com/item?id=42927611
Microsoft Go 1.24 FIPS changes
The upstream Go 1.24 changes and macOS support using system libraries in Microsoft's Go distribution are really significant for the large ecosystem of startups trying to sell to institutions requiring FIPS 140 certified cryptography.
For a variety of reasons - including "CGo is not Go" (https://dave.cheney.net/2016/01/18/cgo-is-not-go) - using boringcrypto and requiring CGO_ENABLED=1 could be a blocker. Not using system libraries meant that getting all of the software to agree on internal certificate chains was a chore. Go was in a pretty weird place.
Whether FIPS 140 is actually a good target for cryptography is another question. My understanding is that FIPS 140-1 and 140-2 cipher suites were considered by many experts to be outdated when those standards were approved, and that FIPS 140 still doesn't encompass post quantum crypto, and the algorithms chosen don't help mitigate misuse (e.g.: nonce reuse).
From https://news.ycombinator.com/item?id=28540916#28546930 :
GOLANG_FIPS=1
From https://news.ycombinator.com/item?id=42265927 :
> "Chrome switching to NIST-approved ML-KEM quantum encryption" (2024) https://www.bleepingcomputer.com/news/security/chrome-switch...
From https://news.ycombinator.com/item?id=41535866 :
>> Someday there will probably be a TLS1.4/2.0 with PQ, and also FIPS-140-4?
Show HN: An API that takes a URL and returns a file with browser screenshots
simonw/shot-scraper has a number of cli args, a GitHub actions repo template, and docs: https://shot-scraper.datasette.io/en/stable/
From https://news.ycombinator.com/item?id=30681242 :
> Awesome Visual Regression Testing > lists quite a few tools and online services: https://github.com/mojoaxel/awesome-regression-testing
> "visual-regression": https://github.com/topics/visual-regression
The superconductivity of layered graphene
Physicist here. The superconductivity in layered graphene is indeed surprisingly strange, but this popular article may not do it justice. Here are some older articles on the same topic that may be more informative:
https://www.quantamagazine.org/how-twisted-graphene-became-t...,
https://www.quantamagazine.org/a-new-twist-reveals-supercond....
Let me briefly mention some reasons this topic is so interesting. Electrons in a crystal always have both potential energy (electrical repulsion) and kinetic energy (set by the atomic positions and orbitals). The standard BCS theory of superconductivity only works well when the potential energy is negligible, but the most interesting superconductors --- probably including all high temperature ones like the cuprates --- are in the regime where potential energy is much stronger than kinetic energy. These are often in the class of "unconventional" superconductors where vanilla BCS theory does not apply. The superconductors in layered (and usually twisted) graphene lie in that same regime of large potential/kinetic energy. However, their 2d nature makes many types of measurements (and some types of theories) much easier. These materials might be the best candidate available to study to get a handle on how unconventional superconductivity "really works". (Besides superconductors, these same materials have oodles of other interesting phases of matter, many of which are quite exotic.)
While we have you, have any new theories or avenues of research come out of the lk99 stuff or was it completely just hype and known physics?
BCS == Bardeen–Cooper–Schrieffer [0].
Thank you for the additional info and links. This is why I love HN comments
Also physicist here. I've worked on conventional superconductors, but never on unconventional ones. Last I heard, it was believed to be mediated by magnons (rather than phonons). Who claims it is due to Coulomb interaction?
I think everything we don't have a model for is surprisingly strange. Gravity only seems "normal" because we've been teaching a reasonable model for it for hundreds of years - Aristotle thought things fell to the ground because that was "their nature", but thought it quite weird. X-Rays seem bonkers unless you've grown up with them, and there is something deeply unnerving about genetics, quantum and even GenAI until you've spent some time pulling apart the innards and building an explainable model that makes sense to you. And even then it can catch you out. More ways to explain the models help normalise it all - what's now taught at 9th grade used to be advanced post-doc research, in almost every field. And so it goes on.
2D superconductors don't make much sense because, as the article says, theory is behind experimentation here. That's also why there is both incredible excitement, but also a worry that none of this is going to stack up to anything more than a bubble. My old Uni (Manchester) doubled down hard on the work of Geim and Novoselov by building a dedicated "Graphene Institute", after they got the Nobel Prize, but even 15 years after that award most people are still trying to figure out what does it all actually mean really? Not just in terms of the theory of physics, but how useful is this stuff, in real world usage?
It'll settle down in due course. The model will become apparent, we'll be able to explain it through a series of bouncing back between theory and experiment, as ever, and then it won't seem so strange any more.
I'm not sure that'll ever be true of quantum computing for me, but then I am getting a bit older now...
> My old Uni (Manchester) doubled down hard on the work of Geim and Novoselov by building a dedicated "Graphene Institute", after they got the Nobel Prize, but even 15 years after that award most people are still trying to figure out what does it all actually mean really? Not just in terms of the theory of physics, but how useful is this stuff, in real world usage?
That's the beauty of real research. There's no guarantees it'll pan out. But it's generally worth doing and spending (sometimes decades of) time exploring. Too many people have become infatuated with instant gratification. It's pervasive even in young, scientific minds. The real gratification is failing that same test 100 times until you finally land on a variation that might work. And then figuring out why it worked.
Edit: And if that success never comes, the gratification is graduating and moving on to more solvable problems, but bringing with you the scientific methods you learned along the way. Scientists might spend their whole lives working on something that won't work and that's okay. If that isn't for you, go into product dev.
I still don’t believe explanations for gravity…let alone dark matter!
Not believing an explanation for dark matter seems prudent. It's just the name we've given to a certain set of observations not matching how we believe the universe works. We're still piecing together the details.
I also think that gravity is complete bunk. Funny how there's suddenly graphene everywhere, from dental applications to injectables. Weird times.
Relevant xkcd tooltip: https://xkcd.com/1489
Do you suppose everything is explainable? Gravity feels to me like the sort of thing that just sort of is. I'm all for better characterizations of it, but I'm not holding my breath for an answer to: why?
Everything may be explainable, just not by humanity.
How does our biology affect the limits of what we can comprehend?
Oh plenty of ways probably. I expect there are perspectives out there which would have a look at our biggest questions and find them to be mundane with obvious answers but which would themselves boggle at concepts that we consider elementary.
But here I am in this body, and not that one, so I'm content to accept an axiom or two.
Gravity is something very simple. It cannot be a complex thing, because it demonstrates a very simple behavior.
There are plenty of cases where a complex thing has very simple behavior. For centuries, brewers could get away with a mental model for yeast as something that multiplies and converts sugar to alcohol until it can't anymore. It took quite a jump in technology to realize just how fantastically complex the inner workings of a yeast cell is.
I'm not proposing that gravity is underpinned by something complex, just that if its mechanism is out of our reach then so too are any conclusions about that mechanism's complexity.
the question is depth and quality of explainability as determined by the predictive power those explanations provide...
My interest with these overlapping lattices is the creation of fractional electric charges (Hall effect) and through, essentially, Moiré patterns. The angle of alignment will make a big effect.
Let me make an artifact to demonstrate… brb
https://claude.site/artifacts/f024844e-73c2-4eb1-afc9-401f3d...
Here you go. See how there are different densities and geometries at different angles. These lattice overlays can create fractional electrical charges — which is very strange — but how this affects superconductivity is unclear.
Very cool example!
Slightly different preprint variants:
This is exciting, sounds like new theory incoming (or possible way to test existing string/other theories?). I'd love to see PBS Spacetime or some other credible outlet explain the details of the experiment / implications for mere mortals.
Is this exactly 1.1 degrees?
Or is it 1.09955742876?
What I mean -- did they round up, is there some connection to universal constants?
Edit: I don't understand where the 1.1 degrees comes from. Why is it 1.1 and not something else...
It's not exactly a single ultra-specific 1.1000000... degree value only, just values approximately close to that. As to what the connection is: that's what the research is trying to unearth more about.
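For intuition about where a length scale comes from at small twist angles: the standard small-angle moire period is lambda = a / (2 sin(theta/2)). The specific ~1.1 degree "magic angle" comes from band-structure calculations (flat bands), not from this geometry alone, but the sketch below (plain Python, assuming graphene's lattice constant of ~0.246 nm) shows how the superlattice period blows up as the angle shrinks:

```python
import math

A_GRAPHENE_NM = 0.246  # graphene lattice constant in nm (assumed value)

def moire_period_nm(theta_deg, a=A_GRAPHENE_NM):
    """Moire superlattice period for two identical lattices twisted by theta."""
    theta = math.radians(theta_deg)
    return a / (2 * math.sin(theta / 2))

# Smaller twist angle -> much larger moire unit cell.
for angle in [5.0, 2.0, 1.1, 0.5]:
    print(f"{angle:4.1f} deg -> {moire_period_nm(angle):6.1f} nm")
```

At ~1.1 degrees the moire cell is roughly 13 nm across, i.e. thousands of atoms per unit cell, which is part of why the kinetic energy scale gets so small there.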
Show HN: PulseBeam – Simplify WebRTC by Staying Serverless
WebRTC’s capabilities are amazing, but the setup headaches (signaling, connection/ICE failures, patchwork docs) can kill momentum. That’s why we built PulseBeam—a batteries-included WebRTC platform designed for developers who just want real-time features to work.
What’s different?
- Built-in signaling
- Built-in TURN
- Time-limited JWT auth (serverless for production, or use our endpoint for testing)
- Client and server SDKs included
- Free and open-source core
If you’ve used libraries like PeerJS, PulseBeam should feel like home. We’re inspired by its simplicity. We’re currently in a developer-preview stage. We provide free signaling like PeerJS, and TURN up to 1GB. Of course, feel free to roast us.
jupyter-collaboration is built on Y Documents (y.js, pycrdt, jupyter_ydoc,) https://github.com/jupyterlab/jupyter-collaboration
There is a y.js WebRTC adapter, but jupyter-collaboration doesn't have WebRTC data or audio or video support AFAIU.
y-webrtc: https://github.com/yjs/y-webrtc
Is there an example of how to do CRDT with PulseBeam WebRTC?
With client and serverside data validation?
> JWT
Is there OIDC support on the roadmap?
E.g. Google supports OIDC: https://developers.google.com/identity/openid-connect/openid...
W3C DIDs, VC Verifiable Credentials, and Blockcerts are designed for decentralization.
STUN, TURN, and ICE are NAT traversal workarounds FWIU; though NAT traversal isn't necessary if the client knowingly or unknowingly has an interface with a public IPV6 address due to IPV6 prefix delegation?
We don't have an example of CRDT with PulseBeam yet. But, CRDT itself is just a data structure, so you can use PulseBeam to communicate the sync ops (full or delta) with a data channel. Then, you can either use y.js or other CRDT libraries to manage the merging.
Yes, the plan is to use JWT for both the client and server side.
OIDC is not on the roadmap yet. But, I've been tinkering on the side related to this. I think something like an OIDC mapper to PulseBeam JWT can work here.
I'm not envisioning integrating into a decentralization ecosystem at this point. The scope is to provide a reliable service for 1:1 and small groups for other developers to build on top with centralization. So, something like Analytics, global segmented signaling (allow close peers to connect with edge servers, but allow them to connect to remote servers as well), authentication, and more network topology support.
That's correct, if the client is reachable by a public IPv6 (meaning the other peer has to also have a way to talk to an IPv6), then STUN and TURN are not needed. ICE is still needed but only used lightly for checking and selecting the candidate pair connections.
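To make the "CRDT is just a data structure" point concrete, here is a minimal state-based CRDT (a grow-only counter) in Python. The merge input is exactly the kind of payload you could ship as JSON over a PulseBeam data channel or any other WebRTC data channel; the names and structure here are illustrative, not a PulseBeam API:

```python
class GCounter:
    """Grow-only counter: a state-based CRDT. Each peer increments only
    its own slot; merge takes the per-peer maximum, so applying updates
    in any order (or repeatedly) converges to the same state."""

    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.peer_id] = self.counts.get(self.peer_id, 0) + n

    def merge(self, remote_counts):
        # remote_counts is the "sync op" you would serialize (e.g. as
        # JSON) and send over a data channel.
        for peer, n in remote_counts.items():
            self.counts[peer] = max(self.counts.get(peer, 0), n)

    def value(self):
        return sum(self.counts.values())

alice, bob = GCounter("alice"), GCounter("bob")
alice.increment(3)
bob.increment(2)
# Exchange states in both directions; both replicas converge.
alice.merge(bob.counts)
bob.merge(alice.counts)
print(alice.value(), bob.value())  # 5 5
```

y.js and pycrdt implement far richer types (text, maps, lists) on the same principle: commutative, idempotent merges over whatever transport you have.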
Elon Musk proposes putting the U.S. Treasury on blockchain for full transparency
FedNow supports ILP Interledger Protocol, which is an open spec that works with traditional ledgers and distributed cryptoasset ledgers.
> In addition to Peering, Clearing, and Settlement, ILP Interledger Protocol Specifies Addresses: https://news.ycombinator.com/item?id=36503888
>> ILP is not tied to a single company, payment network, or currency
ILP Addresses - v2.0.0 > Allocation Schemes: https://github.com/interledger/rfcs/blob/main/0015-ilp-addre...
People that argue for transaction privacy in blockchains: large investment banks, money launderers, the US Government when avoiding accountability because natsec.
Whereas today presumably there are database(s) of checks sent to contractors for the US Gvmt; and maybe auditing later.
Re: apparently trillions missing re: seasonal calls to "Audit the Fed! Audit DoD!" and "The Federal Funding Accountability and Transparency Act of 2006" which passed after Illinois started tracking grants: https://news.ycombinator.com/item?id=25893860
DHS helped develop W3C DIDs, which can be decentralizedly generated and optionally centrally registered or centrally generated and registered.
W3C Verifiable Credentials support DIDs Decentralized Identifiers.
Do not pay for closed source or closed spec capabilities; especially for inter-industry systems that would need to integrate around an API spec.
Do not develop another blockchain; given the government's inability to attract and retain talent in this space, it is unlikely that a few million dollars and government management would exceed the progress of billions invested in existing blockchains.
There's a lot of anti-blockchain FUD. Ask them to explain the difference between multi-primary SQL database synchronization system with off-site nodes (and Merkle hashes between rows), and a blockchain.
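To make that comparison concrete, here is a minimal hash-chained ledger in Python: each block commits to the previous block's hash, which is essentially what Merkle hashes between rows buy you in a replicated SQL database. A toy sketch, not any particular chain's format:

```python
import hashlib
import json

def block_hash(block):
    # Hash the canonical JSON encoding of the block contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})

chain = []
append_block(chain, ["alice pays bob 10"])
append_block(chain, ["bob pays carol 4"])

# Tampering with an earlier block breaks every later link.
chain[0]["txs"] = ["alice pays mallory 9999"]
print(chain[1]["prev"] == block_hash(chain[0]))  # False: tamper detected
```

What blockchains add on top of this tamper-evidence is a consensus rule for who gets to append, which is the actual point of disagreement.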
Why are there Merkle hashes in the centralized Trillian and now PostgreSQL databases that back CT Certificate Transparency logs (the logs of X.509 cert granting and revocations)?
Why did Google stop hosting a query endpoint for CT logs? How can single points of failure be eliminated in decentralized systems?
Blockchains are vulnerable to DoS Denial of Service like all other transaction systems. Adaptive difficulty and transaction fees that equitably go to miners or are just burnt are blockchain solutions to Denial of Service.
"Stress testing" to a web dev means something different than "stress testing" the banks of the Federal Reserve system, for example.
A webdev should know that as soon as your app runs out of (SQL) database connections, it will start throwing 500 Internal Server error. MySQL, for example, defaults to 150+1 max connections.
Stress testing for large banks does not really test for infosec resource exhaustion. Stress testing banks involves them making lots of typically large transactions; not lots of small transactions.
Web Monetization is designed to support micro payments, could support any ledger, and is built on ILP.
ILP makes it possible for e.g. 5x $100 transactions to be auditably grouped together. Normal, non bank of the US government payers must source liquidity from counter parties; which is easier to do with many smaller transactions.
Why do blockchains require additional counterparties in two party (payer-payee) transactions?
To get from USD to EUR, for example, sometimes it's less costly to go through CAD. Alice holds USD, Bob wants EUR, and Charlie holds CAD and EUR and accepts USD, but will only extend $100 of credit per party.
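A toy illustration of why routing through an intermediate currency can be cheaper; the per-hop costs are made up for the example:

```python
# Hypothetical per-hop exchange costs (fraction of value lost), for
# illustration only; real liquidity and spreads vary continuously.
hop_cost = {
    ("USD", "EUR"): 0.030,  # direct pair with thin liquidity
    ("USD", "CAD"): 0.005,
    ("CAD", "EUR"): 0.008,
}

def route_cost(route):
    """Total fraction of value lost along a multi-hop conversion route."""
    kept = 1.0
    for pair in zip(route, route[1:]):
        kept *= 1 - hop_cost[pair]
    return 1 - kept

direct = route_cost(["USD", "EUR"])
via_cad = route_cost(["USD", "CAD", "EUR"])
print(f"direct: {direct:.4f} lost; via CAD: {via_cad:.4f} lost")
```

Finding the cheapest such route across many counterparties, each with credit limits like Charlie's $100, is the liquidity-sourcing problem ILP connectors solve.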
ripplenet was designed for that from the start. Interledger was contributed by ripplecorp to the W3C as an open standard, and ILP has undergone significant revision since being open-sourced.
ILP does not require XRP, which - like XLM - is premined and has a transaction fee less than $0.01.
Ripplenet does not have Proof of Work mining: the list of transaction validator server IPs is maintained by pull request merge consensus in the GitHub repo.
The global Visa network claims to do something like 60,000 TPS. Bitcoin can do 6-7 TPS, and is even slower if you try and build it without blocks.
I thought I read that a Stellar benchmark reached 10,000 TPS, but they predicted that the TPS would be significantly greater with faster, more expensive validation servers.
E.g. the Crypto Kitties NFT smart contract game effectively DoS'd pre-sharding Ethereum, which originally did 15-30 TPS IIRC. Ethereum 2.0 reportedly intends to handle 100,000 TPS.
US Contractor payees would probably want to receive a stablecoin instead of a cryptoasset with high volatility.
Some citizens received a relief check to cash out or deposit, and others received a debit card for an account created for them.
I've heard that the relief loan program is the worst fraud in the history of the US government. Could any KYC or AML practices also help prevent such fraud? Does uploading a scan of a photo ID and/or routing and account numbers on a cheque make exchanges more accountable?
FWIU, only Canadian banks give customers the option to require approval for all deposits. Account holders do not have the option to deny deposits in the US, FWIU.
I don't think the US Government can acquire USDC. Awhile back stablecoin providers were audited and admonished.
A reasonable person should expect US Government backing of a cryptoasset to reduce volatility.
Large investment banks claimed to be saving the day on cryptoasset volatility.
High-frequency market makers claim to be creating value by creating liquidity at volatile prices.
They eventually added shorting to Bitcoin, which doesn't account for debt obligations; there is no debt within the Bitcoin network: either a transaction clears within the confirmation time or it doesn't.
There are no chargebacks in Bitcoin; a refund is an optional transaction between B and A, possibly with the same amount less fees.
There is no automatic rebilling in Bitcoin (and by extension other blockchains) because the payer does not disclose the private key necessary to withdraw funds in their account to payees.
Escrow can be done with multisig ("multi signature") transactions or with smart contracts; if at least e.g. 2 out of 3 parties approve, the escrowed transaction completes. So if Alice escrows $100 for Bob conditional upon receipt of a product from Bob, and Bob says he sent it and third-party Charlie says it was received, that's 2 out of 3 approving, so Alice's $100 would then be sent to Bob.
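The 2-of-3 condition reduces to a threshold check. A toy Python model of that logic (real multisig verifies cryptographic signatures on the transaction, not booleans):

```python
def escrow_releases(approvals, threshold=2):
    """Release escrowed funds if at least `threshold` parties approve.
    Toy model of an m-of-n multisig condition."""
    return sum(1 for approved in approvals.values() if approved) >= threshold

# Alice disputes, but Bob (sender) and Charlie (arbiter) both approve.
approvals = {"alice": False, "bob": True, "charlie": True}
print(escrow_releases(approvals))  # True: 2 of 3 approve, Bob gets paid
```

The arbiter never holds the funds, which is the advantage over a traditional escrow agent.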
All blockchains must eventually hard fork to PQ Post Quantum hashing and encryption, or keep hard forking to keep doubling non-PQ key sizes (if they are not already PQ).
PQ Post Quantum algos typically have a different number of characters, so any hard fork to PQ account keys and addresses will probably require changing data validation routines in webapps that handle transactions.
The coinbase field in a Bitcoin transaction struct can be used for correlating between blockchain transactions and rows in SQL database that claim to have valid data or metadata about a transaction; you put a unique signed value in the coinbase field when you create transactions, and your e.g. SQL or Accumulo database references the value stored in the coinbase field as a foreign key.
Crypto tax prep services can't just read transactions from public blockchains; they need exchange API access to get the price of the asset on that exchange at the time of that transaction: there's no on-chain price oracle.
"ILP Addresses, [Payment Pointers], and Blockcerts" https://github.com/blockchain-certificates/cert-issuer/issue... :
> How can or should a Blockcert indicate an ILP Interledger Protocol address or a Payment Pointer?
ILP Addresses:
g.acme.bob
g.us-fed.ach.0.acmebank.swx0a0.acmecorp.sales.199.~ipr.cdfa5e16-e759-4ba3-88f6-8b9dc83c1868.2
Payment Pointer -> URLs: $example.com -> https://example.com/.well-known/pay
$example.com/invoices/12345 -> https://example.com/invoices/12345
$bob.example.com -> https://bob.example.com/.well-known/pay
$example.com/bob -> https://example.com/bob
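The pointer-to-URL mappings above follow the Payment Pointers resolution rules, which are simple enough to sketch. This is my reading of the spec, and the function name is made up:

```python
def resolve_payment_pointer(pointer):
    """Map a Payment Pointer ($host[/path]) to its HTTPS URL: a bare
    host resolves to the /.well-known/pay path; an explicit path is
    used as-is."""
    if not pointer.startswith("$"):
        raise ValueError("payment pointers start with '$'")
    rest = pointer[1:]
    if "/" in rest:
        host, _, path = rest.partition("/")
        return f"https://{host}/{path}"
    return f"https://{rest}/.well-known/pay"

for p in ["$example.com", "$example.com/invoices/12345",
          "$bob.example.com", "$example.com/bob"]:
    print(p, "->", resolve_payment_pointer(p))
```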
Revolutionizing software testing: Introducing LLM-powered bug catchers
ScholarlyArticle: "Mutation-Guided LLM-based Test Generation at Meta" (2025) https://arxiv.org/abs/2501.12862v1
Good call, thanks for linking the research paper directly here.
Can this unit test generation capability be connected to the models listed on the SWE-bench [Multimodal] leaderboard?
I'm currently working on running an agent through SWE-Bench (RA.Aid).
What do you mean by connecting the test generation capability to it?
Do you mean generating new eval test cases? I think it could potentially have a use there.
OpenWISP: Multi-device fleet management for OpenWrt routers
Anything similar for opnsense (besides their own service) or pfsense?
Maybe just go with ansible or similar: https://github.com/ansibleguy/collection_opnsense
Updating a fleet of embedded devices like routers (which can come online and go offline at any time) will generally be much easier using a pull-based update model. But if you’ve got control over the build and update lifecycle, a push-based approach like ansible might be appropriate.
Maybe I am missing something, but I would assume that base network infrastructure like routers, firewalls and switches have higher uptime, availability and reliability than ordinary servers.
The problem with push is that the service sitting at the center needs to figure out which devices will need to be re-pushed later on. You can end up with a lot of state that needs action just to get things back to normal.
So if you can convince devices to pull at boot time and then regularly thereafter, you know that the three states they can be in are down, good, or soon to be good. Now you only need to take action when things are down.
Never analyze distribution of software and config based on the perfect state; minimize the amount of work you need to do for the exceptions.
Unattended upgrades fail and sit there requiring manual intervention (due to lack of transactional updates and/or multiple flash slots (root partitions and bootloader configuration)).
Pull style configuration requires the device to hold credentials in order to authorize access to download the new policy set.
It's possible to add an /etc/init.d that runs sysupgrade on boot, install Python and Ansible, configure and confirm remote logging, and then run `ansible-pull`.
ansible-openwrt eliminates the need to have Python on a device: https://github.com/gekmihesg/ansible-openwrt
But then there's log collection: unless all of the nodes have correctly configured log forwarding at each stage of firmware upgrade, pull-style configuration management will lose logs that push-style configuration management can easily centrally log.
Pull based updates would work on OpenWRT devices if they had enough storage, transactional updates and/or multiple flash slots, and scheduled maintenance windows.
OpenWRT wiki > Sysupgrade: https://openwrt.org/docs/techref/sysupgrade
Calculating Pi in 5 lines of Python
> Infinite series can't really be calculated to completion using a computer,
The sum of an infinite divergent series cannot be calculated with or without a computer.
The sum of an infinite convergent geometric series with first term a and common ratio r (|r| < 1) can be calculated with:
a/(1-r)
Sequence > Limits and convergence:
https://en.wikipedia.org/wiki/Sequence#Limits_and_convergenc...
Limit of a sequence: https://en.wikipedia.org/wiki/Limit_of_a_sequence
SymPy docs > Limits of Sequences: https://docs.sympy.org/latest/modules/series/limitseq.html
> Provides methods to compute limit of terms having sequences at infinity.
Madhava-Leibniz formula for π: https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80
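The Madhava-Leibniz partial sums are easy to compute directly, and doing so shows how slowly the series converges (the error is on the order of 1/n):

```python
import math

# Partial sums of the Madhava-Leibniz series:
#   pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
def leibniz_pi(n_terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in [10, 1000, 100000]:
    approx = leibniz_pi(n)
    print(f"{n:>6} terms: {approx:.6f} (error {abs(approx - math.pi):.2e})")
```

You need roughly 10x more terms for each extra digit, which is why this series is a teaching example rather than a practical way to compute pi.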
Eco-friendly artificial muscle fibers can produce and store energy
> The team utilized poly(lactic acid) (PLA), an eco-friendly material derived from crop-based raw materials, and highly durable bio-based thermoplastic polyurethane (TPU) to develop artificial muscle fibers that mimic the function of real muscles.
"Energy harvesting and storage using highly durable Biomass-Based artificial muscle fibers via shape memory effect" (2025) https://www.sciencedirect.com/science/article/abs/pii/S13858...
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
But graphene alone presumably doesn't work in these applications, due to a lack of tensility like certain natural fibers?
Harmonic Loss Trains Interpretable AI Models
"Harmonic Loss Trains Interpretable AI Models" (2025) https://arxiv.org/abs/2502.01628
Src: https://github.com/KindXiaoming/grow-crystals :
> What is Harmonic Loss?
Cross Entropy: https://en.wikipedia.org/wiki/Cross-entropy
XAI: Explainable AI > Interpretability: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Right to explanation: https://en.wikipedia.org/wiki/Right_to_explanation
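As I understand the paper, harmonic loss swaps dot-product logits plus softmax for Euclidean distances to per-class weight vectors, normalized with a "harmonic max" over inverse distance powers, so each class probability has a geometric meaning (closeness to a class center). A hedged sketch with made-up numbers, not the authors' code:

```python
import math

def softmax_probs(logits):
    """Conventional cross-entropy path: exponentiate and normalize."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def harmonic_probs(distances, n=2):
    """'Harmonic max': classes whose weight vectors are closer to the
    input (smaller distance) get higher probability, via inverse
    distance powers."""
    inv = [1 / d ** n for d in distances]
    total = sum(inv)
    return [i / total for i in inv]

# Toy example: three class centers at distances 0.5, 1.0, 2.0 from the input.
print(harmonic_probs([0.5, 1.0, 2.0]))
print(softmax_probs([2.0, 1.0, 0.5]))  # conventional logits, for contrast
```

The claimed interpretability win is that the learned weight vectors sit at meaningful points in representation space, since probability depends directly on distance to them.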
Government planned it 7 years, beavers built a dam in 2 days and saved $1M
Oracle justified its JavaScript trademark with Node.js–now it wants that ignored
Calling ECMAScript JavaScript was a huge mistake that is still biting us.
OR alternatively, just not coming up with a better alternative name than ECMAScript. If there was a catchier alternative name that was less awkward to pronounce, people might more happily have switched over.
"JS" because of the .js file extension.
ECMAScript version history: https://en.wikipedia.org/wiki/ECMAScript_version_history
"Java" is an island in Indonesia associated with coffee beans from the Dutch East Indies that Sun Microsystems named their portable software after.
Coffee production in Indonesia: https://en.wikipedia.org/wiki/Coffee_production_in_Indonesia... :
> Certain estates age a portion of their coffee for up to five years, normally in large burlap sacks, which are regularly aired, dusted, and flipped.
Build your own SQLite, Part 4: reading tables metadata
It's interesting to compare this series to the actual source code of sqlite. For example, sqlite uses a LALR parser generator: https://github.com/sqlite/sqlite/blob/master/src/parse.y#L19...
And queries itself to get the schema: https://github.com/sqlite/sqlite/blob/802b042f6ef89285bc0e72...
Lots of questions, but the main one is whether we have made any progress with these new toolchains and programming languages w/ respect to performance or robustness. And that may be unfair to ask of what is a genuinely useful tutorial.
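The schema-via-self-query trick is easy to see from Python's stdlib sqlite3 module: SQLite keeps its metadata in an ordinary table you can SELECT from like any other.

```python
import sqlite3

# The schema lives in sqlite_master (aliased sqlite_schema in newer
# SQLite versions); each row stores the original CREATE statement.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

for row in con.execute("SELECT type, name, sql FROM sqlite_master"):
    print(row)
```

This is the same table the tutorial's parser ends up reading to discover what tables exist and how their rows are laid out.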
If you don’t know it already, you’ll probably be interested in limbo: https://github.com/tursodatabase/limbo
It’s much more ambitious/complete than the db presented in the tutorial.
If memory serves me correctly, it uses the same parser generator as SQLite, which may answer some of your questions.
Is translation necessary to port the complete SQLite test suite?
sqlite/sqlite//test: https://github.com/sqlite/sqlite/tree/master/test
tursodatabase/limbo//testing: https://github.com/tursodatabase/limbo/tree/main/testing
ArXiv LaTeX Cleaner: Clean the LaTeX code of your paper to submit to ArXiv
It's really a pity that they do this now. Some of the older papers actually had quite a lot of valuable information in them: comments, discussions, thoughts, even commented-out sections, figures, and tables. It gave a much better view of how the paper was written over time, or how the work progressed over time. Sometimes you also see alternative titles being discussed, which can be quite funny.
E.g. from https://arxiv.org/abs/1804.09849:
%\title{Sequence-to-Sequence Tricks and Hybrids\\for Improved Neural Machine Translation}
% \title{Mixing and Matching Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Analyzing and Optimizing Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Frankenmodels for Improved Neural Machine Translation}
% \title{Optimized Architectures and Training Strategies\\for Improved Neural Machine Translation}
% \title{Hybrid Vigor: Combining Traits from Different Architectures Improves Neural Machine Translation}
\title{The Best of Both Worlds: \\Combining Recent Advances in Neural Machine Translation\\ ~}
Also a lot of things in the Attention is all you need paper: https://arxiv.org/abs/1706.03762v1
Maybe papers need to be put under version control.
Quantum Bayesian Inference with Renormalization for Gravitational Waves
ScholarlyArticle: "Quantum Bayesian Inference with Renormalization for Gravitational Waves" (2025) https://iopscience.iop.org/article/10.3847/2041-8213/ada6ae
NewsArticle: "Black Holes Speak in Gravitational Waves, Heard Through Quantum Walks" (2025) https://thequantuminsider.com/2025/01/29/black-holes-speak-i... :
> Unlike classical MCMC, which requires a large number of iterative steps to converge on a solution, QBIRD uses a quantum-enhanced Metropolis algorithm that incorporates quantum walks to explore the parameter space more efficiently. Instead of sequentially evaluating probability distributions one step at a time, QBIRD encodes the likelihood landscape into a quantum Hilbert space, allowing it to assess multiple transitions between parameter states simultaneously. This is achieved through a set of quantum registers that track state evolution, transition probabilities, and acceptance criteria using a modified Metropolis-Hastings rule.
> Additionally, QBIRD incorporates renormalization and downsampling, which progressively refine the search space by eliminating less probable regions and concentrating computational resources on the most likely solutions. These techniques enable QBIRD to achieve accuracy comparable to classical MCMC while reducing the number of required samples and computational overhead, making it a more promising approach for gravitational wave parameter estimation as quantum hardware matures.
Parameter estimation algorithms:
"Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 .. https://news.ycombinator.com/item?id=40396171
"Robustly learning Hamiltonian dynamics of a superconducting quantum processor" (2024) https://www.nature.com/articles/s41467-024-52629-3 .. https://news.ycombinator.com/item?id=42086445
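For contrast with the quantum walk described above, the classical baseline is ordinary random-walk Metropolis, which really does take many sequential steps. A minimal sketch on a toy one-parameter posterior (nothing gravitational-wave-specific):

```python
import math
import random

random.seed(0)  # deterministic for the example

def metropolis(log_prob, x0, n_steps, step=0.5):
    """Classical random-walk Metropolis-Hastings over one parameter."""
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0, step)
        # Metropolis rule: always accept uphill moves; accept downhill
        # moves with probability p(proposal)/p(current).
        if math.log(random.random()) < log_prob(proposal) - log_prob(x):
            x = proposal
        samples.append(x)
    return samples

# Toy target: a standard normal posterior, started away from the mode.
samples = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to 0 after many sequential steps
```

Every sample here depends on the previous one, which is the sequential bottleneck the quantum-walk proposal distribution is meant to relax.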
Can Large Language Models Emulate Judicial Decision-Making? [Paper]
An actor can emulate the communication style of judicial decision language, sure.
But the cost of a wrong answer (wrongful conviction) exceeds a threshold of ethical use.
> We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.
From "Asking 60 LLMs a set of 20 questions" https://news.ycombinator.com/item?id=37451642 :
> From https://news.ycombinator.com/item?id=36038440 :
>> Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law
>> A "who hath done it" exercise
>> "For each of these things, tell me whether God, Others, or You did it"
AI should never be judge, jury, and executioner.
Homotopy Type Theory
type theory notes: https://news.ycombinator.com/item?id=42440016#42444882
HoTT in Lean 4: https://github.com/forked-from-1kasper/ground_zero
I Wrote a WebAssembly VM in C
This is great! The WebAssembly Core Specification is actually quite readable, although some of the language can be a bit intimidating if you're not used to reading programming language papers.
If anyone is looking for a slightly more accessible way to learn WebAssembly, you might enjoy WebAssembly from the Ground Up: https://wasmgroundup.com
(Disclaimer: I'm one of the authors)
I know one of WebAssembly's biggest features by design is security / "sandbox".
But I've always gotten confused with... it is secure because by default it can't do much.
I don't quite understand how to view WebAssembly. You write in one language, it compiles things like basic math (nothing with network or filesystem access) to another language, and that runs in an interpreter.
I feel like I'm missing something. There's been a ton of hype for years, and lots of investment... but is it just that anywhere you'd embed Lua in an app, you could embed WebAssembly instead, and vice versa?
WebAssembly can communicate through buffers. WebAssembly can also import foreign functions (Javascript functions in the browser).
You can get output by reading the buffer at the end of execution/when receiving callbacks. So, for instance, you pass a few frames worth of buffers to WASM, WASM renders pixels into the buffers, calls a callback, and the Javascript reads data from the buffer (sending it to a <canvas> or similar).
The benefit of WASM is that it can't be very malicious by itself. It requires the runtime to provide it with exported functions and callbacks to do any file I/O, network I/O, or spawning new tasks. Lua and similar tools can go deep into the runtime they exist in, altering system state and messing with system memory if they want to, while WASM can only interact with the specific API surface you provide it.
That makes WASM less powerful, but more predictable, and in my opinion better for building integrations with as there is no risk of internal APIs being accessed (that you will be blamed for if they break in an update).
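The capability model described above (the guest can only touch the buffers and imports the host explicitly hands it) can be sketched in plain Python. This is a hypothetical analogy, not real WebAssembly: `make_guest`, `fill_pixels`, and `on_frame_done` are made-up names standing in for a WASM module, its exported function, and a host-provided callback.

```python
# Hypothetical sketch (NOT real WebAssembly): the capability model the
# comments describe, where a guest can only use what the host hands it.
# The host owns a shared buffer; the guest gets no file/network access,
# only the imports it is explicitly given.

def make_guest(imports):
    """Return a 'guest module' closed over an explicit import object."""
    def fill_pixels(value):
        buf = imports["memory"]          # shared buffer provided by host
        for i in range(len(buf)):
            buf[i] = value               # guest writes pixels into memory
        imports["on_frame_done"]()       # callback exported by the host
    return {"fill_pixels": fill_pixels}

# Host side: allocate memory, decide exactly which capabilities to expose.
frames_done = []
memory = bytearray(16)                   # stand-in for WASM linear memory
guest = make_guest({
    "memory": memory,
    "on_frame_done": lambda: frames_done.append(bytes(memory)),
})

guest["fill_pixels"](0xFF)               # host invokes an exported function
print(frames_done[0][:4].hex())          # host reads results from the buffer
```

The point of the sketch: the guest has no ambient authority, so auditing what it can do reduces to auditing the import object the host constructs.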
WASI Preview 1 and WASI Preview 2 can do file and network I/O IIUC.
Re: tty support in container2wasm and fixed 80x25 due to lack of SIGWINCH support in WASI Preview 1: https://github.com/ktock/container2wasm/issues/146
The File System Access API requires granting each app access to each folder.
jupyterlab-filesystem-access only works with Chromium based browsers, because FF doesn't support the File System Access API: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
The File System Access API is useful for opening a local .ipynb and .csv with JupyterLite, which builds CPython for WASM as Pyodide.
There is a "Direct Sockets API in Chrome 131" but not in FF; so WebRTC and WebSocket relaying is unnecessary for WASM apps like WebVM: https://news.ycombinator.com/item?id=42029188
WASI Preview 2: https://github.com/WebAssembly/WASI/blob/main/wasip2/README.... :
> wasi-io, wasi-clocks, wasi-random, wasi-filesystem, wasi-sockets, wasi-cli, wasi-http
US bill proposes jail time for people who download DeepSeek
That would disincentivize this type of research, for example:
"DeepSeek's Hidden Bias: How We Cut It by 76% Without Performance Loss" (2025) https://news.ycombinator.com/item?id=42868271
https://news.ycombinator.com/item?id=42891042
TIL about BBQ: Bias Benchmark for QA
"BBQ: A Hand-Built Bias Benchmark for Question Answering" (2021) https://arxiv.org/abs/2110.08193
Waydroid – Android in a Linux container
Surprised to see this on the frontpage - it's a well known piece of software.
It's unfortunate that there are no Google-vended images (e.g. the generic system image) that run on Waydroid. Typing my password into random ROMs from the internet sketches me out.
I wouldn't say it runs a "random ROM from the internet" - LineageOS is a very well-established project and is fully FOSS (free and open source software) except for firmware necessary for specific devices. It is the natural choice for any project, such as Waydroid, that requires good community support and ongoing availability.
Over a number of years, Google have progressively removed many of the original parts of AOSP (the FOSS foundation upon which Android is based), which means that alternative components have to be developed by projects like LineageOS. In spite of this, I suspect that LineageOS makes fewer modifications to AOSP than most phone vendors do, including Google themselves!
A few things seem to be consistently missing from these projects: hardware 3D acceleration from the host in a version of OpenGL ES + Vulkan that most phones have natively. Also, many apps have built-in ways of detecting that they're not running on a phone and bail out (looking at cpuinfo and cross-referencing it with the purported device being run).
It also seems that expected ARM support on devices is increasing (along with expected throughput), and that the capability of the x86 host you need to emulate even a modest modern mobile ARM SoC is getting higher and higher.
Lastly, the android version supported is almost always 3-4 generations behind the current Android. Apps are quick to drop legacy android support or run with fewer features/less optimizations on older versions of the OS. The android base version in this project is from 2020.
Anecdotally, using bluestacks (which indisputably has the most compatible and optimized emulation stack in the entire space) with a 7800X3D / RTX 3090 still runs most games slower than a snapdragon 8 phone from yesteryear running natively.
virtio-gpu rutabaga was recently added to QEMU IIUC mostly by Google for Chromebook Android emulation or Android Studio or both?
virtio-gpu-rutabaga: https://www.qemu.org/docs/master/system/devices/virtio-gpu.h...
Rutabaga Virtual Graphics Interface: https://crosvm.dev/book/appendix/rutabaga_gfx.html
gfxstream: https://android.googlesource.com/platform/hardware/google/gf...
"Gfxstream Merged Into Mesa For Vulkan Virtualization" (2024-09) https://www.phoronix.com/news/Mesa-Gfxstream-Merged
I don't understand why there is not an official x86 container / ROM for Android development? Do CI builds of Android apps not run tests with recent versions of Android? How do CI builds of APKs run GUI tests without an Android container?
There is no official support for x86 in Android any more - the Android-x86 project was the last I know of that supported/maintained it. Its last release was in 2022.
For apps that use Vulkan natively, it's easy - but many still use and rely on OpenGL ES. It's a weird scenario where apps are now supporting Vulkan, but they have higher minimum OS requirements as a result... and those versions of Android aren't supported by these types of projects.
37D boundary of quantum correlations with a time-domain optical processor
"Exploring the boundary of quantum correlations with a time-domain optical processor" (2025) https://www.science.org/doi/10.1126/sciadv.abd8080 .. https://arxiv.org/abs/2208.07794v3 :
> Abstract: Contextuality is a hallmark feature of the quantum theory that captures its incompatibility with any noncontextual hidden-variable model. The Greenberger--Horne--Zeilinger (GHZ)-type paradoxes are proofs of contextuality that reveal this incompatibility with deterministic logical arguments. However, the GHZ-type paradox whose events can be included in the fewest contexts and which brings the strongest nonclassicality remains elusive. Here, we derive a GHZ-type paradox with a context-cover number of three and show this number saturates the lower bound posed by quantum theory. We demonstrate the paradox with a time-domain fiber optical platform and recover the quantum prediction in a 37-dimensional setup based on high-speed modulation, convolution, and homodyne detection of time-multiplexed pulsed coherent light. By proposing and studying a strong form of contextuality in high-dimensional Hilbert space, our results pave the way for the exploration of exotic quantum correlations with time-multiplexed optical systems.
New thermogalvanic tech paves way for more efficient fridges
"Solvation entropy engineering of thermogalvanic electrolytes for efficient electrochemical refrigeration" (2025) https://www.cell.com/joule/fulltext/S2542-4351(25)00003-0
Thanks for that; it's great that it's 10x better than the previous best effort. It's notable that it needs to get 20x better again before it starts to have useful applications :-).
Someday maybe!
Quantum thermal diodes: https://news.ycombinator.com/item?id=42537703
From https://news.ycombinator.com/item?id=38861468 :
> Laser cooling: https://en.wikipedia.org/wiki/Laser_cooling
> Cooling with LEDs in reverse: https://issuu.com/designinglighting/docs/dec_2022/s/17923182 :
> "Near-field photonic cooling through control of the chemical potential of photons" (2019) https://www.nature.com/articles/s41586-019-0918-8
High hopes, low expectations :-). That said, 'quantum thermal diode' sounds like something that breaks on your space ship and if you don't fix it you won't be able to get the engines online to outrun the aliens. Even after reading the article I think the unit of measure there should be milli-demons in honor of Maxwell's Demon.
Emergence of a second law of thermodynamics in isolated quantum systems
ScholarlyArticle: "Emergence of a Second Law of Thermodynamics in Isolated Quantum Systems" (2025) https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
NewsArticle: "Even Quantum Physics Obeys the Law of Entropy" https://www.tuwien.at/en/tu-wien/news/news-articles/news/auc...
NewsArticle: "Sacred laws of entropy also work in the quantum world, suggests study" ... "90-year-old assumption about quantum entropy challenged in new study" https://interestingengineering.com/science/entropy-also-work...
Polarization-dependent photoluminescence of Ce-implanted MgO and MgAl2O4
NewsArticle: "Scientists explore how to make quantum bits with spinel gemstones" (2025) https://news.uchicago.edu/story/scientists-explore-how-make-... :
> A type of gemstone called spinel can be used to store quantum information, according to new research from a collaboration involving University of Chicago, Tohoku University, and Argonne National Laboratory.
3D scene reconstruction in adverse weather conditions via Gaussian splatting
Large Language Models for Mathematicians (2023)
It makes sense for LLMs to work with testable code for symbolic mathematics: CAS (Computer Algebra System) code instead of LaTeX, which only roughly corresponds.
Are LLMs training on the AST parses of the symbolic expressions, or on token co-occurrence? What about training on the relations between code and tests?
Benchmarks for math and physics LLMs: FrontierMath, TheoremQA, Multi SWE-bench: https://news.ycombinator.com/item?id=42097683
Large language models think too fast to explore effectively
Maps well to Kahneman's "Thinking Fast and Slow" framework
system 1 thinking for early layer processing of uncertainty in LLMs. quick, intuitive decisions, focuses on uncertainty, happens in early transformer layers.
system 2 thinking for later layer processing of empowerment (selecting elements that maximize future possibilities). strategic, deliberate evaluation, considering long-term possibilities, happens in later layers.
system 1 = 4o/llama 3.1
system 1 + system 2 = o1/r1 reasoning models
empowerment calculation seems possibly oversimplified - assumes a static value for elements rather than dynamic, context-dependent empowerment
interesting that higher temperatures improved performance slightly for system 1 models although they still made decisions before empowerment information could influence them
edit: removed the word "novel". The paper shows early-layer processing of uncertainty vs later-layer processing of empowerment.
Stanovich proposes a three tier model http://keithstanovich.com/Site/Research_on_Reasoning_files/S...
Modeling an analog system like human cognition into any number of discrete tiers is inherently kind of arbitrary and unscientific. I doubt you could ever prove experimentally that all human thinking works through exactly two or three or ten or whatever number of tiers. But it's at least possible that a three-tier model facilitates building AI software which is "good enough" for many practical use cases.
Funny you should say that because an American guy did that a hundred years ago and nailed it.
He divided reasoning into the two categories corollarial and theorematic.
Charles Sanders Peirce > Pragmatism > Theory of inquiry > Scientific Method: https://en.wikipedia.org/wiki/Charles_Sanders_Peirce#Scienti...
Peirce’s Deductive Logic: https://plato.stanford.edu/entries/peirce-logic/
Scientific Method > 2. Historical Review: Aristotle to Mill https://plato.stanford.edu/entries/scientific-method/#HisRev...
Scientific Method: https://en.wikipedia.org/wiki/Scientific_method
Reproducibility: https://en.wikipedia.org/wiki/Reproducibility
Replication crisis: https://en.wikipedia.org/wiki/Replication_crisis
TFS > Replication crisis: https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
"Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35879007 :
Lateralization of brain function: https://en.wikipedia.org/wiki/Lateralization_of_brain_functi...
Ask HN: Percent of employees that benefit financially from equity offers?
Title says it all. Equity offers are a very common thing in tech. I don't personally know anyone who has made money from equity offers, though nearly all my colleagues have received them at some point.
Does anyone have real data on how many employees actually see financial upside from equity grants? Are there studies or even anecdotal numbers on how common it is for non-executives/non-founders to walk away with any money? Specifically talking about privately held US startups.
From https://news.ycombinator.com/item?id=29141796 :
> There are a number of options/equity calculators:
> https://tldroptions.io/ ("~65% of companies will never exit", "~15% of companies will have low exits", "~20% of companies will make you money")
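The rough buckets quoted from tldroptions can be turned into a back-of-the-envelope expected value. The probabilities are the ones quoted above; the $100k payoff is a made-up illustration, not data.

```python
# Back-of-the-envelope expected value of an equity grant using the rough
# buckets quoted above; the payoff figures are made-up illustrations.
outcomes = [
    (0.65, 0),        # ~65% of companies never exit: options worth $0
    (0.15, 0),        # ~15% low exits: common stock often wiped out anyway
    (0.20, 100_000),  # ~20% "make you money": assume a $100k payoff
]
expected_value = sum(p * payoff for p, payoff in outcomes)
print(f"${expected_value:,.0f}")
```

Even with generous assumptions, the expected value is a small fraction of the headline grant, which matches the anecdotal experience in the question.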
Ultra-fast picosecond real-time observation of optical quantum entanglement
"Real-time observation of picosecond-timescale optical quantum entanglement towards ultrafast quantum information processing" (2025) https://www.nature.com/articles/s41566-024-01589-7 .. https://arxiv.org/abs/2403.07357v1 (2024) :
> Abstract: Entanglement is a fundamental resource for various optical quantum information processing (QIP) applications. To achieve high-speed QIP systems, entanglement should be encoded in short wavepackets. Here we report the real-time observation of ultrafast optical Einstein–Podolsky–Rosen correlation at a picosecond timescale in a continuous-wave system. Optical phase-sensitive amplification using a 6-THz-bandwidth waveguide-based optical parametric amplifier enhances the effective efficiency of 70-GHz-bandwidth homodyne detectors, mainly used in 5G telecommunication, enabling its use in real-time quantum state measurement. Although power measurement using frequency scanning, such as an optical spectrum analyser, is not performed in real time, our observation is demonstrated through the real-time amplitude measurement and can be directly used in QIP applications. The observed Einstein–Podolsky–Rosen states show quantum correlation of 4.5 dB below the shot-noise level encoded in wavepackets with 40 ps period, equivalent to 25 GHz repetition—103 times faster than previous entanglement observation in continuous-wave systems. The quantum correlation of 4.5 dB is already sufficient for several QIP applications, and our system can be readily extended to large-scale entanglement. Moreover, our scheme has high compatibility with optical communication technology such as wavelength-division multiplexing, and femtosecond-timescale observation is also feasible. Our demonstration is a paradigm shift in accelerating accessible quantum correlation—the foundational resource of all quantum applications—from the nanosecond to picosecond timescales, enabling ultrafast optical QIP.
Recipe Database with Semantic Search on Digital Ocean's Smallest VM
Datasette-lite supports SQLite in WASM in a browser.
DuckDB WASM would also solve for a recipe database without an application server, for example in order to reduce annual hosting costs of a side project.
Is the scraped data CC-BY-SA licensed? Attribution would be good.
/? datasette vector search
FAISS
WASM vector database: https://www.google.com/search?q=WASM+vector+database
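At recipe-database scale, "semantic search" reduces to brute-force cosine similarity over embeddings, which is what FAISS or a WASM vector database optimizes at larger scale. A minimal sketch, with made-up 3-dimensional "embeddings" standing in for real ones:

```python
import math

# Minimal brute-force cosine-similarity search -- what a "vector database"
# reduces to at small scale. The 3-dim "embeddings" here are fabricated
# stand-ins for real model outputs.
recipes = {
    "tomato soup":    [0.9, 0.1, 0.0],
    "tomato pasta":   [0.8, 0.2, 0.1],
    "chocolate cake": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query_vec, k=2):
    # Rank all recipes by similarity to the query vector, highest first.
    ranked = sorted(recipes, key=lambda r: cosine(query_vec, recipes[r]),
                    reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # tomato dishes rank first
```

This linear scan is O(n) per query; FAISS and friends exist to make the same ranking sublinear for millions of vectors.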
Moiré-driven topological electronic crystals in twisted graphene
ScholarlyArticle: "Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6
NewsArticle: "Anomalous Hall crystal made from twisted graphene" (2025) https://physicsworld.com/a/anomalous-hall-crystal-made-from-...
Adding concurrent read/write to DuckDB with Arrow Flight
Just sanity checking here - with flight write streams to duckdb, I'm guessing there is no notion of transactional boundary here, so if we want data consistency during reads, that's another level of manual app responsibilities? And atomicity is there, but at the single record batch or row group level?
Ex: if we have a streaming financial ledger as 2 tables, that is 2 writes, and a reader might see an inconsistent state of only 1 write
Ex: streaming ledger as one table, and the credit+debit split into 2 distanced rowgroups, same inconsistency?
Ex: in both cases, we might have the server stream back an ack of what was written, so we could at least get a guarantee of which timestamps are fully written for future reads, and queries can manually limit to known-complete intervals
We are looking at adding streaming writes to GFQL, an open source columnar (arrow-native) CPU/GPU graph query language, where this is the same scenario: appends mean updating both the nodes table and the edges table
Yes, reading this post (working around a database's concurrency control) made me raise an eyebrow. If you are ok with inconsistent data then that's fine. Or if you handle consistency at a higher level that's fine too. But if either of these are the case why would you be going through DuckDB? You could write out Parquet files directly?
cosmos/iavl is a Merkleized AVL tree.
https://github.com/cosmos/iavl :
> Merkleized IAVL+ Tree implementation in Go
> The purpose of this data structure is to provide persistent storage for key-value pairs (say to store account balances) such that a deterministic merkle root hash can be computed. The tree is balanced using a variant of the AVL algorithm so all operations are O(log(n)).
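The deterministic-root property described in the quote can be sketched with stdlib hashing: hash sorted key-value leaves, then pairwise-hash up to a single Merkle root. This is only the hashing skeleton; real IAVL+ interleaves it with AVL balancing and persistence.

```python
import hashlib

# Sketch of a deterministic Merkle root over key-value pairs (account
# balances, say). Real IAVL+ also does AVL balancing; this is just the
# hashing skeleton.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(kv: dict) -> bytes:
    # Sorting keys makes the root independent of insertion order,
    # which is what makes it deterministic across replicas.
    level = [h(f"{k}={v}".encode()) for k, v in sorted(kv.items())]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: carry last node up
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

a = merkle_root({"alice": 10, "bob": 5})
b = merkle_root({"bob": 5, "alice": 10})   # insertion order doesn't matter
print(a == b)
```

Any single balance change flips the root, so replicas can compare one hash instead of the whole dataset.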
Integer Vector clock or Merkle hashes?
Why shouldn't you store account balances in git, for example?
Or, why shouldn't you append to Parquet or Feather and LZ4 for strongly consistent transactional data?
Centralized databases can have Merkle hashes, too;
"How Postgres stores data on disk" https://news.ycombinator.com/item?id=41163785 :
> Those systems index Parquet. Can they also index Feather IPC, which an application might already have to journal and/or log, and checkpoint?
DLT applications for strong transactional consistency sign and synchronize block messages and transaction messages.
Public blockchains have average transaction times and costs.
Private blockchains also have TPS Transactions Per Second metrics, and unknown degrees of off-site redundancy for consistent storage with or without indexes.
Blockchain#Openness: https://en.wikipedia.org/wiki/Blockchain#Openness :
> An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. [46][47][48][49][50] Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. [51] Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. [52]
> Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision. [46][48] Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion." [10][53]
Merkle Town: https://news.ycombinator.com/item?id=38829274 :
> How CT works > "How CT fits into the wider Web PKI ecosystem": https://certificate.transparency.dev/howctworks/
From "PostgreSQL Support for Certificate Transparency Logs Now Available" https://news.ycombinator.com/item?id=42628223 :
> Are there Merkle hashes between the rows in the PostgreSQL CT store like there are in the Trillian CT store?
> Sigstore Rekor also has centralized Merkle hashes.
I think you replied in the wrong post.
No, I just explained how the world does strongly consistent distributed databases for transactional data, which is the exact question here.
DuckDB does not yet handle strong consistency. Blockchains and SQL databases do.
Blockchains are a fantastic way to run things slowly ;-) More seriously: Making crypto fast does sound like a fun technical challenge, but well beyond what our finance/gov/cyber/ai etc customers want us to do.
For reference, our goal here is to run around 1 TB/s per server, and many times more on a beefier server. The same tech just landed at spot #3 on the Graph 500 on its first try.
To go even bigger & faster, we are looking for ~phd intern fellows to run on more than one server, if that's your thing: OSS GPU AI fellowship @ https://www.graphistry.com/careers
The flight perspective aligns with what we're doing. We skip the duckdb CPU indirections (why drink through a long twirly straw?) and go straight to arrow on GPU RAM. For our other work, if duckdb does give reasonable transactional guarantees here, that's interesting... hence my (in earnest) original question. AFAICT, the answers rest on operational docs that don't connect to how we normally talk about databases giving you consistent vs inconsistent views of data.
Do you think that blockchain engineers are incapable of developing high-throughput distributed systems due to engineering incapacity, or due to real limits on how fast a strongly consistent, sufficiently secured cryptographic distributed system can be? Are blockchain devs all just idiots, or have they dumbly prioritized data integrity, which supposedly doesn't matter because it's all about big data these days and nobody needs CAP?
From "Rediscovering Transaction Processing from History and First Principles" https://news.ycombinator.com/item?id=41064634 :
> metrics: Real-Time TPS (tx/s), Max Recorded TPS (tx/s), Max Theoretical TPS (tx/s), Block Time (s), Finality (s)
> Other metrics: FLOPS, FLOPS/WHr, TOPS, TOPS/WHr, $/OPS/WHr
TB/s in query processing of data already in RAM?
/? TB/s "hnlog"
- https://news.ycombinator.com/item?id=40423020 , [...] :
> The HBM3E Wikipedia article says 1.2TB/s.
> Latest PCIe 7 x16 says 512 GB/s:
fiber optics: 301 TB/s (2024-05)
Cerebras: https://en.wikipedia.org/wiki/Cerebras :
WSE-2 on-chip SRAM memory bandwidth: 20 PB/s; interconnect fabric: 220 Pb/s
WSE-3: 21 PB/s
HBM > Technology: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#Technolo... :
HBM3E: 9.8 Gbit/s , 1229 Gbyte/s (2023)
HBM4: 6.4 Gbit/s , 1638 Gbyte/s (2026)
LPDDR SDRAM > Generations: https://en.wikipedia.org/wiki/LPDDR#Generations :
LPDDR5X: 1,066.63 MB/s (2021)
GDDR7: https://en.m.wikipedia.org/wiki/GDDR7_SDRAM
GDDR7: 32 Gbps/pin - 48 Gbps/pin,[11] and chip capacities up to 64 Gbit, 192 GB/s
List of interface bit rates: https://en.wikipedia.org/wiki/List_of_interface_bit_rates :
PCIe7 x16: 1.936 Tbit/s 242 GB/s (2025)
800GBASE-X: 800 Gbps (2024)
DDR5-8800: 70.4 GB/s
Bit rate > In data communications: https://en.wikipedia.org/wiki/Bit_rate#In_data_communications ; Gross and net bit rate, Information rate, Network throughput, Goodput
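Comparing the figures above requires normalizing units: interface specs are usually quoted in bits per second, memory bandwidth in bytes per second. A tiny helper (ignoring line-coding overhead, which real links pay):

```python
# Normalize the mixed units in bandwidth lists like the one above:
# interface specs are quoted in bits/s, memory bandwidth in bytes/s.
def gbit_to_gbyte(gbit_per_s: float) -> float:
    return gbit_per_s / 8  # 8 bits per byte; ignores line-coding overhead

# PCIe 7.0 x16 from the list above: 1.936 Tbit/s gross
print(gbit_to_gbyte(1936))  # GB/s, matching the 242 GB/s figure quoted
```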
Re: TPUs, NPUs, TOPS: https://news.ycombinator.com/item?id=42318274 :
> How many TOPS/W and TFLOPS/W? (T [Float] Operations Per Second per Watt (hour ?))*
Top 500 > Green 500: https://www.top500.org/lists/green500/2024/11/ :
PFlop/s (Rmax)
Power (kW)
GFlops/watts (Energy Efficiency)
Performance per watt > FLOPS/watts: https://en.wikipedia.org/wiki/Performance_per_watt#FLOPS_per...
Electrons: 50%–99% of c the speed of light ( Speed of electricity: https://en.wikipedia.org/wiki/Speed_of_electricity , Velocity factor of a CAT-7 cable: https://en.wikipedia.org/wiki/Velocity_factor#Typical_veloci... )
Photons: c (*)
Gravitational Waves: Even though both light and gravitational waves were generated by this event, and they both travel at the same speed, the gravitational waves stopped arriving 1.7 seconds before the first light was seen ( https://bigthink.com/starts-with-a-bang/light-gravitational-... )
But people don't do computation with gravitational waves.
To a reasonable rounding error.. yes
How would you recommend that appends to Parquet files be distributedly synchronized with zero trust?
Raft, Paxos, BFT, ... /? hnlog paxos ... see this about "50 years later, is two-phase locking the best we can do?" https://news.ycombinator.com/item?id=37712506
To have consensus about protocol revisions; To have data integrity and consensus about the merged sequence of data in database {rows, documents, named graphs, records,}.
"We're building a new static type checker for Python"
Could it please also do runtime type checking?
(PyContracts and iContract do runtime type checking, but it's not very performant.)
That MyPy isn't usable at runtime causes lots of re-work.
Have you tried beartype? It's worked well for me and has the least overhead of any runtime type checker.
I think TypeGuard (https://github.com/agronholm/typeguard) also does runtime type checking. I use beartype BTW.
pycontracts: https://github.com/AlexandruBurlacu/pycontracts
icontract: https://github.com/Parquery/icontract
The DbC Design-by-Contract patterns supported by icontract probably have code quality returns beyond saving work.
Safety critical coding guidelines specify that there must be runtime type and value checks at the top of every function.
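A minimal homegrown sketch of what such checks look like, in the spirit of beartype/typeguard but NOT their API: a decorator that validates annotated argument types on every call, with a value check at the top of the function body.

```python
import functools
import inspect

# Homegrown sketch of runtime type checking (in the spirit of beartype/
# typeguard, but NOT their actual API): check annotated argument types on
# every call, as safety-critical guidelines require at function entry.
def runtime_checked(func):
    sig = inspect.signature(func)
    hints = {name: p.annotation for name, p in sig.parameters.items()
             if p.annotation is not inspect.Parameter.empty}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@runtime_checked
def set_gamma(level: float) -> None:
    # Value check at the top of the function, per the guidelines above.
    assert 0.0 <= level <= 10.0, "gamma out of range"

set_gamma(1.0)           # passes both the type and the value check
try:
    set_gamma("bright")  # fails the type check before the body runs
except TypeError as e:
    print(e)
```

The overhead of `sig.bind` on every call is exactly why tools like beartype work hard on performance; this sketch favors readability.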
Gravitational Communication: Fundamentals, State-of-the-Art and Future Vision
"Communicating with Gravitational Waves" https://www.universetoday.com/170685/communicating-with-grav... :
> What’s promising about gravitational wave communication (GWC) is that it could overcome these challenges. GWC is robust in extreme environments and loses minimal energy over extremely long distances. It also overcomes problems that plague electromagnetic communication (EMC), like diffusion, distortion, and reflection. There’s also the intriguing possibility of harnessing naturally created GWs, which means reducing the energy needed to create them.
Like backscatter with gravitational waves?
Re Gravitational Wave detectors: https://news.ycombinator.com/item?id=41632710 :
>> Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform/or and other methods?
ScholarlyArticle: "Gravitational Communication: Fundamentals, State-of-the-Art and Future Vision" (2024) https://arxiv.org/abs/2501.03251
Tapping into the natural aromatic potential of microbial lignin valorization
Transparent wood composite > Delignification process: https://en.wikipedia.org/wiki/Transparent_wood_composite#Del... :
> The production of transparent wood from the delignification process vary study by study. However, the basics behind it are as follows: a wood sample is drenched in heated (80 °C–100 °C) solutions containing sodium chloride, sodium hypochlorite, or sodium hydroxide/sulfite for about 3–12 hours followed by immersion in boiling hydrogen peroxide.[15] Then, the lignin is separated from the cellulose and hemicellulose structure, turning the wood white and allowing the resin penetration to start. Finally, the sample is immersed in a matching resin, usually PMMA, under high temperatures (85 °C) and a vacuum for 12 hours.[15] This process fills the space previously occupied by the lignin and the open wood cellular structure resulting in the final transparent wood composite.
How could transparent wood production methods be sustainably scaled?
Is the lignin extracted in transparent wood production usable for valorization?
"Lignin valorization: Status, challenges and opportunities" (2022) https://www.sciencedirect.com/science/article/abs/pii/S09608... :
> Most of the research on lignin valorization has been done on lignin from pulp and paper industries (Bruijnincx et al., 2015, Reshmy et al., 2022) The advantage of using lignin from those facilities is that the resource is already centralized and the transportation costs to further process are significantly less
Freezing CPU intensive background tabs in Chrome
How could browsers let app developers know that their app requires excessive resources?
From https://news.ycombinator.com/item?id=40861851 :
>> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
>> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
>> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU,
X/Freedesktop.org Encounters New Cloud Crisis: Needs New Infrastructure
> Equinix had been sponsoring three AMD EPYC 7402P servers and another three dual Intel Xeon Silver 4214 servers for running the FreeDesktop.org GitLab cluster. Plus for GitLab runners there are three AMD EPYC 7502P servers and two Ampere Altra 80-core servers.
> The FreeDesktop.org GitLab burns through around 50TB of bandwidth per month.
Is there an estimate of the monthly and annual costs?
List of display servers: https://en.wikipedia.org/wiki/List_of_display_servers
Who's profiting from X/Freedesktop.org?
(Screenkey, xrandr, and xgamma don't work on Wayland)
Sommelier is Wayland based.
XQuartz hasn't had a release since 2023 and doesn't yet support HiDPI (so X apps on Mac are line-doubled). Shouldn't they be merging fixes?
DOT rips up US fuel efficiency regulations [pdf]
What is the AQI where they live?
And a pipeline through my f heart.
From https://en.wikipedia.org/wiki/United_States_offshore_drillin... :
> In 2018, a new federal initiative to expand offshore drilling suddenly excluded Florida, but although this would be favored by Floridians, concerns remained about the basis for that apparently arbitrary exception being merely politically motivated and tentative. No scientific, military, or economic basis for the decision was given, provoking continuing public concern in Florida.[11]
Why not Florida?
> In 2023, President Biden signed a Memorandum of March 13, 2023 prohibiting oil and gas leasing in certain arctic areas of the Outer Continental Shelf (Withdrawal of Certain Areas off the United States Arctic Coast of the Outer Continental Shelf from Oil or Gas Leasing). However, in January 2025
From https://coast.noaa.gov/states/fast-facts/economics-and-demog... :
> Coastal counties of the U.S. are home to 129 million people, or almost 40 percent of the nation's total population
Are we going to protect other states from this, too?
Servers and data centers could use 30% less energy with a simple Linux update
"Kernel vs. User-Level Networking: Don't Throw Out the Stack with the Interrupts" (2022) https://dl.acm.org/doi/abs/10.1145/3626780 :
> Abstract: This paper reviews the performance characteristics of network stack processing for communication-heavy server applications. Recent literature often describes kernel-bypass and user-level networking as a silver bullet to attain substantial performance improvements, but without providing a comprehensive understanding of how exactly these improvements come about. We identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead. While IRQs and their handling have a substantial impact on the effectiveness of the processor pipeline and thereby the overall processing efficiency, their overhead is difficult to measure directly when serving demanding workloads. This paper presents an indirect methodology to assess IRQ overhead by constructing preliminary approaches to reduce the impact of IRQs. While these approaches are not suitable for general deployment, their corresponding performance observations indirectly confirm the conjecture. Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput without compromising tail latency. In case of server applications, such as web servers or Memcached, the resulting performance is comparable to using kernel-bypass and user-level networking when using stacks with similar functionality and flexibility.
Concept cells help your brain abstract information and build memories
skos:Concept RDFS Class: https://www.w3.org/TR/skos-reference/#concepts
schema:Thing: https://schema.org/Thing
atomspace:ConceptNode: https://wiki.opencog.org/w/Atom_types .. https://github.com/opencog/atomspace#examples-documentation-...
SKOS Simple Knowledge Organization System > Concepts, ConceptScheme: https://en.wikipedia.org/wiki/Simple_Knowledge_Organization_...
But temporal instability observed in repeat functional imaging studies indicates that functional localization is not constant: the regions of the brain that activate for a given cue vary over time.
From https://news.ycombinator.com/item?id=42091934 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
>> Future work should characterize drift across brain regions, cell types, and learning.
The important part about the statements in the drift paper is the qualifiers:
> Cells whose activity was previously correlated with environmental and behavioral variables are most frequently no longer active in response to the same variables weeks later. At the same time, a mostly new pool of neurons develops activity patterns correlated with these variables.
“Most frequently” and “mostly new”: this means that some neurons still fire across the weeks-long periods for the same activities, leaving plenty of potential space for concept cells.
This doesn’t necessarily mean concept cells exist, but it does allow for the possibility of their existence.
I also didn’t check which regions of the brain were evaluated in each concept, as it is likely they have some different characteristics at the neuron level.
So there's more stability in the electrovolt wave function of the brain than in the cellular activation pathways?
I am not sure what an electrovolt is, or why this question follows from what I said? All I said was all neurons don’t seem to switch their activation stimuli, only “many.”
Show HN: Design/build of some parametric speaker cabinets with OpenSCAD
> As the design was fully parametric, I could change a single variable to move to a floorstanding design.
How far are we in 2025 from defining a Differentiable design, setting a target (e.g. maximally flat frequency response for a certain speaker placement in a certain room) and solving this automatically?
A basic speaker can't do a lot to improve "room acoustics" [1], particularly below the Schroeder frequency, where room modes greatly affect the bass response. From my experience, it's the bass that needs fixing in-room, because even a "perfect speaker" will get boomy/muddy at some frequencies (i.e. reflections overlapping constructively), and thin/null at others (i.e. reflections overlapping destructively). And if you aren't going to worry about "fixing the room", then there are already companies/products like Kali Audio IN-5 speakers [2] that have squeezed some good performance and tech (e.g. active DSP with a coaxial driver) into an affordable package.
There are ways to improve the bass situation, and one is by implementing "directional bass", aka a cardioid array. This could be achieved with multiple individual speakers (an add-on/upgrade to an existing system), or could be integrated into one "speaker". Examples of the latter have been around for a few years - see Dutch & Dutch 8c or Kii Three. They are relatively expensive (which you could consider an early adopter tax), but affordable competition is starting to mature with speakers such as the Mesanovic CDM65 [3].
There is another way to improve things too, and that is via "Active Room Treatment" [4] as Dirac calls it. Basically it uses excess capability in various speakers to "clean up" the audio of other speakers in the system by outputting "cancellation waves" (to cancel the problems). The results appear amazing, but they are taking their sweet time getting it released on to affordable equipment.
There's also "spatial audio" like Dolby Atmos that should/could work around room problems in a similar way to Dirac ART. So good speakers (like already exist) + ART + Atmos + AI-"upscaled" 2-channel source music could be the final frontier? But that's just for "mechanical" sound reproduction. Maybe in the future I can just transmit the song straight into my brain, bypassing my ears and the need for speakers entirely?!
[1] https://en.wikipedia.org/wiki/Room_acoustics [2] https://www.kaliaudio.com/independence [3] https://www.audiosciencereview.com/forum/index.php?threads/m... [4] https://www.dirac.com/live/dirac-live-active-room-treatment/
If you want to DIY something similar, check out the LXmin and other Linkwitz designs
In this video, they explain that the bandpass port, which typically lets air out of the back of the speaker, is essential for acoustic transmission including bass; and they made an anechoic chamber.
"World's Second Best Speakers!" https://youtube.com/watch?v=EEh01PX-q9I
TechIngredients also has a video about attaching $6 bass kickers to XPS foam to make flat panel speakers about as good as expensive bookshelf speakers.
"World’s Best Speakers!" https://youtube.com/watch?v=CKIye4RZ-5k
Tool touted as 'first AI software engineer' is bad at its job, testers claim
Just blogspam rehash of https://www.answer.ai/posts/2025-01-08-devin.html#what-is-de...
[Multi-] SWE-bench Leaderboards:
SWE-bench: https://www.swebench.com/ :
> SWE-bench is a dataset that tests systems' ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
Multi-SWE-bench: A Multi-Lingual and Multi-Modal GitHub Issue Resolving Benchmark: https://multi-swe-bench.github.io/
Show HN: Stratoshark, a sibling application to Wireshark
Hi all, I'm excited to announce Stratoshark, a sibling application to Wireshark that lets you capture and analyze process activity (system calls) and log messages in the same way that Wireshark lets you capture and analyze network packets. If you would like to try it out you can download installers for Windows and macOS and source code for all platforms at https://stratoshark.org.
AMA: I'm the goofball whose name is at the top of the "About" box in both applications, and I'll be happy to answer any questions you might have.
Re: custom fields in pcap traces and retis https://github.com/retis-org/retis
MIT Unveils New Robot Insect, Paving the Way Toward Rise of Robotic Pollinators
Is there robo-beekeeping with or without humanoid robots?
/? Robo beekeeping: https://www.google.com/search?q=robo+beekeeping
Beekeeping: https://en.wikipedia.org/wiki/Beekeeping
FWIU bees like clover (and dandelions are their spring food source), which we typically kill with broadleaf herbicide for lawncare.
From https://news.ycombinator.com/item?id=38158625 :
> Is it possible to create a lawn weed killer (a broadleaf herbicide) that doesn't kill white dutch clover; because bees eat clover (and dandelions) and bees are essential?"
> [ Dandelion rubber is a sustainable alternative to microplastic tires ]
Pesticide toxicity to bees: https://en.wikipedia.org/wiki/Pesticide_toxicity_to_bees :
> Pesticides, especially neonicotinoids, have been investigated in relation to risks for bees such as Colony Collapse Disorder. A 2018 review by the European Food Safety Authority (EFSA) concluded that most uses of neonicotinoid pesticides such as clothianidin represent a risk to wild bees and honeybees. [5][6] Neonicotinoids have been banned for all outdoor use in the entire European Union since 2018, but have conditional approval in the U.S. and other parts of the world, where they are widely used. [7][8]
TIL dish soap kills wasps, yellow jackets, hornets nearly on contact.
From https://savethebee.org/garden-weeds-bees-love/ :
> Many are beneficial, like dandelions, milkweed, clover, goldenrod and nettle, for bees and other pollinators.
Show HN: Pytest-evals – Simple LLM apps evaluation using pytest
The pytest-evals README mentions that it's built on pytest-harvest, which works with pytest-xdist and pytest-asyncio.
pytest-harvest: https://smarie.github.io/python-pytest-harvest/ :
> Store data created during your pytest tests execution, and retrieve it at the end of the session, e.g. for applicative benchmarking purposes
Yeah, pytest-harvest is a pretty cool plugin.
Originally I had a (very large and unfriendly) conftest file, but it was quite challenging to collaborate with other team members and was quite repetitive. So I wrapped it as a plugin, added some more functionality, and that's it.
This plugin wraps some boilerplate code in a way that is easy to use, especially for the eval use case. It’s minimalistic by design. Nothing big or fancy.
Laser technique measures distances with nanometre precision
> optical frequency comb
"113 km absolute ranging with nanometer precision" (2024) https://arxiv.org/abs/2412.05542 :
> two-way dual-comb ranging (TWDCR) approach
> The advanced long-distance ranging technology is expected to have immediate implications for space research initiatives, such as the space telescope array and the satellite gravimetry
Note that the precision is very good, however the accuracy is nowhere near as close (fractions of a metre) in the atmosphere, due to the variable refractive index of air. Long term averages can help, of course.
They talk about this technique for ranging between satellites, which wouldn't have to deal with atmospheric conditions.
For ranging in an atmosphere they suggest dual-comb spectroscopy or two-color methods to account for the atmospheric changes.
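A back-of-envelope illustration of that atmospheric accuracy limit (a sketch only; `n_air` is an assumed nominal sea-level value, and the real index varies with pressure, temperature, humidity, and wavelength):

```python
# Apparent optical path error from the refractive index of air over the
# 113 km path of the cited experiment: delta = (n - 1) * L.
n_air = 1.000293  # assumed nominal index for visible light at sea level
L = 113e3         # geometric path length, metres

apparent_error = (n_air - 1) * L
print(f"uncorrected path error over 113 km: {apparent_error:.1f} m")  # ~33.1 m
```

Tens of metres of apparent error against nanometre precision is why the two-color or dual-comb spectroscopy corrections mentioned above matter so much for in-atmosphere ranging.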
From https://news.ycombinator.com/item?id=42330179 .. https://www.notebookcheck.net/University-of-Tokyo-researcher... :
> multispectral [UV + IR] camera ... optical, non-contact method to detect [blood pressure, hypertension, blood glucose levels, and diabetes]." [ for "Non-Contact Biometric System for Early Detection of Hypertension and Diabetes" https://www.ahajournals.org/doi/10.1161/circ.150.suppl_1.413... ]
>> "Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
>>> new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). [ and XGboost ]
Show HN: Terraform Provider for Inexpensive Switches
Hi HN,
I’ve been building this provider for (web-managed) network switches manufactured by HRUI. These switches are often used in SMBs, home labs, and by budget-conscious enthusiasts. Many HRUI switches are also rebranded and sold under various OEM/ODM names (e.g. Horaco, XikeStor, keepLiNK, Sodola, etc.), making them accessible/popular but often overlooked in the world of infrastructure automation.
The provider is in pre-release, and I’m looking for owners of these switches to test it and share feedback. My goal is to make it easier to automate its config using Terraform/OpenTofu :)
You can use this provider to configure VLANs, port settings, trunk/link aggregation etc.
I built this provider to address the lack of automation tools for budget-friendly hardware. It leverages goquery and has an internal SDK sitting between the Terraform resources and the switch Web UI.
If you have one of these switches, I’d love for you to give it a try and let me know how it works for you!
Terraform Registry: https://registry.terraform.io/providers/brennoo/hrui
OpenTofu Provider: https://search.opentofu.org/provider/brennoo/hrui
I’m happy to answer any questions about the provider or the hardware it supports. Feedback, bug reports, and ideas for improvement are more than welcome!
Does anyone know of switches similar to these that might be loadable with Linux? Maybe able to run switchdev or similar?
OpenWRT > Table of Hardware > Switches: https://openwrt.org/toh/views/switches
ansible-openwrt: https://github.com/gekmihesg/ansible-openwrt
/? terraform OpenWRT: https://www.google.com/search?q=terraform+openwrt
/? terraform Open vSwitch: https://www.google.com/search?q=open+vswitch+terraform
Open vSwitch supports OpenFlow: https://en.wikipedia.org/wiki/Open_vSwitch
Open vSwitch > "Porting Open vSwitch to New Software or Hardware" https://docs.openvswitch.org/en/latest/topics/porting/
Optimizing Jupyter Notebooks for LLMs
Jupyter + LLM tools: Ipython-GPT, Elyra, jetmlgpt, jupyter-ai; CoCalc, Colab, NotebookLM,
jupyterlab/jupyter-ai: https://github.com/jupyterlab/jupyter-ai
"[jupyter/enhancement-proposals#128] Pre-proposal: standardize object representations for ai and a protocol to retrieve them" https://github.com/jupyter/enhancement-proposals/issues/128
Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?
How can new and existing firefighting aircraft, land craft, seacraft, and other capabilities help fight fire; in California at present and against the new climate enemy?
# Next-Gen Firefighting Aircraft (working title: NGFFA)
## Roadmap
- [ ] Brainstorm challenges and solutions; new sustainable approaches and time-tested methods
- [ ] Write a grant spec
- [ ] Write a legislative program funding request
## Challenges
- Challenge: Safety; Risk to personnel and civilians
- Challenge: Diving into fire is dangerous for the human aircraft operator
- Challenge: Vorticity due to fire heat; fire CFD (Computational Fluid Dynamics)
-- The vertical updrafts and downdrafts off a fire are dangerous for pilots and for expensive, non-compostable aircraft.
- Challenge: Water drifts away before it hits the ground due to the release altitude, wind, and fire air currents
- Challenge: Water shortage
- Challenge: Water lift aircraft shortage
- Task: Pour thousands of gallons (or Canadian litres) of water onto fire
- Task: Pour a line, a circle, or other patterns to contain fire
- Task: Process seawater quickly enough to avoid the property damage caused by dropping untreated ocean salt water, for example
https://www.google.com/search?q=fluid+vorticity+fire+plane
## Solutions
- Are Quadcopters (or n-copters) more stable than helicopters or planes in high-wind fire conditions?
- Fire CFD modeling for craft design.
- Light, durable aerospace-grade hydrogen vessels
- Intermodally transportable containers
- Are there stackable container-sized water vessels for freight transport?
- Floating, towable seawater processing and loading capability.
-- Process seawater for: domestic firefighting, disaster relief,
-- Fill vessels at sea.
- "Starlite" as a flame retardant; https://en.wikipedia.org/wiki/Starlite
- starlite youtuber #Replication FWIU: [ cornstarch, flour, sugar, borax ] https://en.wikipedia.org/wiki/Starlite#Replication
- Notes re: "xPrize Wildfire – $11M Prize Competition" (2023) https://news.ycombinator.com/item?id=35658214
- Hydrogels, Aerogels
- (Hemp) Aerogels absorb oil better than treated polyurethane foam and hair.
- EV fire blanket, industry cartridge spec
- Non-flammable batteries; unpressurized Sodium Ion, Proton batteries, Twisted Carbon Nanotube batteries
- Compostable batteries
- Cloud seeding and firefighting
## Reference material
- Aerial firefighting: https://en.wikipedia.org/wiki/Aerial_firefighting
- Aerial firefighting > Comparison table of fixed-wing, firefighting tanker airplanes: https://en.wikipedia.org/wiki/Aerial_firefighting#Comparison...
Imaging Group and Phase Velocities of THz Surface Plasmon Polaritons in Graphene
ScholarlyArticle: "Spacetime Imaging of Group and Phase Velocities of Terahertz Surface Plasmon Polaritons in Graphene" (2024) https://pubs.acs.org/doi/10.1021/acs.nanolett.4c04615
NewsArticle: "Scientists observe and control ultrafast surface waves on graphene" (2024) https://phys.org/news/2025-01-scientists-ultrafast-surface-g...
Reversible computing escapes the lab
Nice, these ideas have been around for a long time but never commercialized to my knowledge. I've done some experiments in this area with simulations and am currently designing some test circuitry to be fabbed via Tiny Tapeout.
Reversibility isn't actually necessary for most of the energy savings. It saves you an extra maybe 20% beyond what adiabatic techniques can do on their own. Reason being, the energy of the information itself pales in comparison to the resistive losses which dominate the losses in adiabatic circuits, and it's actually a (device-dependent) portion of these resistive losses which the reversible aspect helps to recover, not the energy of information itself.
I'm curious why Frank chose to go with a resonance-based power-clock, instead of a switched-capacitor design. In my experience the latter are nearly as efficient (losses are still dominated by resistive losses in the powered circuit itself), and are more flexible as they don't need to be tuned to the resonance of the device. (Not to mention they don't need an inductor.) My guess would be that, despite requiring an on-die inductor, the overall chip area required is much less than that of a switched-capacitor design. (You only need one circuit's worth of capacitance, vs. 3 or more for a switched design, which quadruples your die size....)
I'm actually somewhat skeptical of the 4000x claim though. Adiabatic circuits can typically only provide about a single order of magnitude power savings over traditional CMOS -- they still have resistive losses, they just follow a slightly different equation (f²RC²V², vs. fCV²). But RC and C are figures of merit for a given silicon process, and fRC (a dimensionless figure) is constrained by the operational principles of digital logic to the order of 0.1, which in turn constrains the power savings to that order of magnitude regardless of process. Where you can find excess savings though is simply by reducing operating frequency. Adiabatic circuits benefit more from this than traditional CMOS. Which is great if you're building something like a GPU which can trade clock frequency for core count.
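The scaling argument above can be made concrete with toy numbers (all component values are assumed for illustration, not taken from any real process):

```python
# First-order models from the comment: conventional CMOS dissipates ~C*V^2
# per switching event, while an adiabatic ramp over a clock period 1/f
# dissipates ~(f*R*C)*C*V^2, so the savings ratio is the dimensionless f*R*C.
R = 10e3     # assumed effective channel resistance, ohms
C = 10e-15   # assumed node capacitance, farads
V = 1.0      # assumed voltage swing, volts
f = 1e9      # clock frequency, Hz

E_cmos = C * V**2
E_adiabatic = (f * R * C) * C * V**2
ratio = E_adiabatic / E_cmos
print(ratio)  # f*R*C, about 0.1 with these values: one order of magnitude saved
```

Halving f halves the ratio, which is the "trade clock frequency for core count" point: adiabatic energy per operation falls linearly with slower clocks, while the conventional CV² term does not.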
Hi, someone pointed me at your comment, so I thought I'd reply.
First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.
Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves you "an extra 20%" -- well, then now that's 99%. But, if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)
You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below - but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.
https://www.sandia.gov/app/uploads/sites/210/2023/11/Comet23... https://www.youtube.com/watch?v=vALCJJs9Dtw
Happy to answer any questions.
Thanks for the reply, was actually hoping you'd pop over here.
I don't think we actually disagree on anything. Yes, without reverse circuits you are limited to quasi-adiabatic operation. But, at least in the architectures I'm familiar with (mainly PFAL), most of the losses are unarguably resistive. As I understand PFAL, it's only when the operating voltage of a given gate drops below Vth that the (macro) information gets lost and reversibility provides benefit, which is only a fraction of the switching cycle. At least for PFAL the figure is somewhere in the 20% range IIRC. (I say "macro" because of course the true energy of information is much smaller than the amounts we're talking about.)
The "20%" in my comment I meant in the multiplicative sense, not additive. I.e. going from 79% savings to 83.2%, not 99%. (I realize that wasn't clear.)
What I find interesting is reversibility isn't actually necessary for true adiabatic operation. All that matters is the information of where charge needs to be recovered from can be derived somehow. This could come from information available elsewhere in the circuit, not necessarily the subsequent computations reversed. (Thankfully, quantum non-duplication does not apply here!)
I agree that energy per operation is often more meaningful, BUT one must not lose sight of the lower bounds on clock speed imposed by a particular workload.
Ah thanks for the insight into the resonator/switched-cap tradeoff. Yes, I know that capacitive switching designs which are themselves adiabatic are a bit of a research topic. In my experience the losses aren't comparable to the resistive losses of the adiabatic circuitry itself though. (I've done SPICE simulations using the sky130 process.)
It's been a while since I looked at it, but I believe PFAL is one of the not-fully-adiabatic techniques that I have a lot of critiques of.
There have been studies showing that a truly, fully adiabatic technique in the sense I'm talking about (2LAL was the one they checked) does about 10x better than any of the other "adiabatic" techniques. In particular, 2LAL does a lot better than PFAL.
> reversibility isn't actually necessary
That isn't true in the sense of "reversible" that I use. Look at the structure of the word -- reverse-able. Able to be reversed. It isn't essential that the very same computation that computed some given data is actually applied in reverse, only that no information is obliviously discarded, implying that the computation always could be reversed. Unwanted information still needs to be decomputed, but in general, it's quite possible to de-compute garbage data using a different process than the reverse of the process that computed it. In fact, this is frequently done in practice in typical pipelined reversible logic styles. But they still count as reversible even though the forwards and reverse computations aren't identical. So, I think we agree here and it's just a question of terminology.
Lower bounds on clock speed are indeed important; generally this arises in the form of maximum latency constraints. Fortunately, many workloads today (such as AI) are limited more by bandwidth/throughput than by latency.
I'd be interested to know if you can get energy savings factors on the order of 100x or 1000x with the capacitive switching techniques you're looking at. So far, I haven't seen that that's possible. Of course, we have a long way to go to prove out those kinds of numbers in practice using resonant charge transfer as well. Cheers...
PFAL has both a fully adiabatic and quasi-adiabatic configuration. (Essentially, the "reverse" half of a PFAL gate can just be tied to the outputs for quasi-adiabatic mode.) I've focused my own research on PFAL because it is (to my knowledge) one of the few fully adiabatic families, and of those, I found it easy to understand.
I'll have to check out 2LAL. I haven't heard of it before.
No, even with a fully adiabatic switched-capacitance driver I don't think those figures are possible. The maximum efficiency I believe is 1-1/n, n being the number of steps (and requiring n-1 capacitors). But the capacitors themselves must each be an order of magnitude larger than the adiabatic circuit itself. So it's a reasonable performance match for an adiabatic circuit running at "max" frequency, with e.g. 8 steps/7 capacitors, but 100x power reduction necessary to match a "slowed" adiabatic circuit would require 99 capacitors... which quickly becomes infeasible!
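The arithmetic behind that capacitor count, as a sketch of the stepped-charging model described above (the 1 - 1/n efficiency figure and the n - 1 capacitor count are taken from the comment, not derived here):

```python
# Charging a load in n equal voltage steps leaves ~1/n of the usual
# single-step dissipation, at the cost of n-1 tank capacitors.
def residual_loss_fraction(n_steps: int) -> float:
    return 1.0 / n_steps

def tank_capacitors(n_steps: int) -> int:
    return n_steps - 1

print(residual_loss_fraction(8), tank_capacitors(8))      # 0.125 7
print(residual_loss_fraction(100), tank_capacitors(100))  # 0.01 99
```

So an 8-step/7-capacitor driver saves about 8x, but matching a 100x-slowed adiabatic circuit would take 99 capacitors, which is the infeasibility being pointed out.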
Yeah, 2LAL (and its successor S2LAL) uses a very strict switching discipline to achieve truly, fully adiabatic switching. I haven't studied PFAL carefully but I doubt it's as good as 2LAL even in its more-adiabatic version.
For a relatively up-to-date tutorial on what we believe is the "right" way to do adiabatic logic (i.e., capable of far more efficiency than competing adiabatic logic families from other research groups), see the below talk which I gave at UTK in 2021. We really do find in our simulations that we can achieve 4 or more orders of magnitude of energy savings in our logic compared to conventional, given ideal waveforms and power-clock delivery. (But of course, the whole challenge in actually getting close to that in practice is doing the resonant energy recovery efficiently enough.)
https://www.sandia.gov/app/uploads/sites/210/2022/06/UKy-tal... https://tinyurl.com/Frank-UKy-2021
The simulation results were first presented (in an invited talk to the SRC Decadal Plan committee) a little later that year in this talk (no video of that one, unfortunately):
https://www.sandia.gov/app/uploads/sites/210/2022/06/SRC-tal...
However, the ComET talk I linked earlier in the thread does review that result also, and has video.
How do the efficiency gains compare to speedups from photonic computing, superconductive computing, and maybe fractional Quantum Hall effect at room temperature computing? Given rough or stated production timelines, for how long will investments in reversible computing justify the relative returns?
Also, FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
"The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123 ... https://www.sciencedaily.com/releases/2011/06/110601134300.h... ;
> Abstract: ... Here we show that the standard formulation and implications of Landauer’s principle are no longer valid in the presence of quantum information. Our main result is that the work cost of erasure is determined by the entropy of the system, conditioned on the quantum information an observer has about it. In other words, the more an observer knows about the system, the less it costs to erase it. This result gives a direct thermodynamic significance to conditional entropies, originally introduced in information theory. Furthermore, it provides new bounds on the heat generation of computations: because conditional entropies can become negative in the quantum case, an observer who is strongly correlated with a system may gain work while erasing it, thereby cooling the environment.
I have concerns about density & cost for both photonic & superconductive computing. Not sure what one can do with quantum Hall effect.
Regarding long-term returns, my view is that reversible computing is really the only way forward for continuing to radically improve the energy efficiency of digital compute, whereas conventional (non-reversible) digital tech will plateau within about a decade. Because of this, within two decades, nearly all digital compute will need to be reversible.
Regarding bypassing the Landauer limit, theoretically yes, reversible computing can do this, but not by thermally cooling anything really, but rather by avoiding the conversion of known bits to entropy (and their energy to heat) in the first place. This must be done by "decomputing" the known bits, which is a fundamentally different process from just erasing them obliviously (without reference to the known value).
For the quantum case, I haven't closely studied the result in the second paper you cited, but it sounds possible.
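For scale, the Landauer bound mentioned in this thread, E = k_B * T * ln 2 per obliviously erased bit, is a tiny number at room temperature (T = 300 K is an assumed value):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # assumed room temperature, kelvin

E_landauer = k_B * T * math.log(2)
print(E_landauer)  # ~2.87e-21 joules per erased bit
```

This is why the comment above notes that the information energy itself "pales in comparison" to resistive losses in today's circuits: current CMOS dissipates many orders of magnitude more than this per switching event.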
/? How can fractional quantum hall effect be used for quantum computing https://www.google.com/search?q=How+can+a+fractional+quantum...
> Non-Abelian Anyons, Majorana Fermions are their own antiparticles, Topologically protected entanglement
> In some FQHE states, quasiparticles exhibit non-Abelian statistics, meaning that the order in which they are braided affects the final quantum state. This property can be used to perform universal quantum computation
Anyon > Abelian, Non Abelian Anyons, Toffoli (CCNOT gate) https://en.wikipedia.org/wiki/Anyon#Abelian_anyons
Hopefully there's a classical analogue of a quantum delete operation that cools the computer.
There's no resistance for electrons in superconductors, so there's far less waste heat. But - other than recent advances with rhombohedral trilayer graphene and pentalayer graphene (which isn't really "graphene") - superconductivity requires super-chilling which is too expensive and inefficient.
Photons are not subject to the Landauer limit and are faster than electrons.
In the Standard Model of particle physics, Photons are Bosons, and Electrons are Leptons, which are Fermions.
Electrons behave like fluids in superconductors.
Photons behave like fluids in superfluids (Bose-Einstein condensates) which are more common in space.
And now they're saying there's a particle that only has mass when moving in certain directions; a semi-Dirac fermion: https://en.wikipedia.org/wiki/Semi-Dirac_fermion
> Because of this, within two decades, nearly all digital compute will need to be reversible.
Reversible computing: https://en.wikipedia.org/wiki/Reversible_computing
Reverse computation: https://en.wikipedia.org/wiki/Reverse_computation
Time crystals demonstrate retrocausality.
Is Hawking radiation from a black hole or from all things reversible?
What are the possible efficiency gains?
Customasm – An assembler for custom, user-defined instruction sets
Is there an ISA for WASM that's faster than RISC-V 64, which is currently 3x faster than x86_64 on x86_64 FWICS? https://github.com/ktock/container2wasm#emscripten-on-browse... demo: https://ktock.github.io/container2wasm-demo/
Show HN: WASM-powered codespaces for Python notebooks on GitHub
Hi HN!
Last year, we shared marimo [1], an open-source reactive notebook for Python with support for execution through WebAssembly [2].
We wanted to share something new: you can now run marimo and Jupyter notebooks directly from GitHub in a Wasm-powered, codespace-like environment. What makes this powerful is that we mount the GitHub repository's contents as a filesystem in the notebook, making it really easy to share notebooks with data.
All you need to do is prepend 'marimo.app' to any Python notebook on GitHub. Some examples:
- Jupyter Notebook: https://marimo.app/github.com/jakevdp/PythonDataScienceHandb...
- marimo notebook: https://marimo.app/github.com/marimo-team/marimo/blob/07e8d1...
Jupyter notebooks are automatically converted into marimo notebooks using basic static analysis and source code transformations. Our conversion logic assumes the notebook was meant to be run top-down, which is usually but not always true [3]. It can convert many notebooks, but there are still some edge cases.
We implemented the filesystem mount using our own FUSE-like adapter that links the GitHub repository’s contents to the Python filesystem, leveraging Emscripten’s filesystem API. The file tree is loaded on startup to avoid waterfall requests when reading many directories deep, but loading the file contents is lazy. For example, when you write Python that looks like
```python
with open("./data/cars.csv") as f:
    print(f.read())

# or

import pandas as pd
pd.read_csv("./data/cars.csv")
```
behind the scenes, you make a request [4] to https://raw.githubusercontent.com/<org>/<repo>/main/data/car....
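Illustratively (this is not marimo's actual code), the path-to-URL mapping might look like the following, with a hypothetical `raw_github_url` helper:

```python
from urllib.parse import quote

def raw_github_url(org: str, repo: str, path: str, branch: str = "main") -> str:
    """Map a repository-relative path to the corresponding raw.githubusercontent.com URL."""
    rel = path.removeprefix("./")
    return f"https://raw.githubusercontent.com/{org}/{repo}/{branch}/{quote(rel)}"

print(raw_github_url("marimo-team", "marimo", "./data/cars.csv"))
# https://raw.githubusercontent.com/marimo-team/marimo/main/data/cars.csv
```

The lazy part is simply that this fetch happens on first `open()` of a path rather than at startup.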
Docs: https://docs.marimo.io/guides/publishing/playground/#open-no...
[1] https://github.com/marimo-team/marimo
[2] https://news.ycombinator.com/item?id=39552882
[3] https://blog.jetbrains.com/datalore/2020/12/17/we-downloaded...
[4] We technically proxy it through the playground https://marimo.app to fix CORS issues and GitHub rate-limiting.
> CORS and GitHub
The Godot docs mention coi-serviceworker; https://github.com/orgs/community/discussions/13309 :
gzuidhof/coi-serviceworker: https://github.com/gzuidhof/coi-serviceworker :
> Cross-origin isolation (COOP and COEP) through a service worker for situations in which you can't control the headers (e.g. GH pages)
CF Pages' free unlimited bandwidth and gitops-style deploys might solve for apps that require more than the 100 GB cap on free bandwidth GH has for open source projects.
Thanks for sharing these resources
> [ FUSE to GitHub FS ]
> Notebooks created from GitHub links have the entire contents of the repository mounted into the notebook's filesystem. This lets you work with files using regular Python file I/O!
Could BusyBox sh compiled to WASM (maybe on emscripten-forge) work with files on this same filesystem?
"Opening a GitHub remote with vscode.dev requires GitHub login? #237371" ... but it works with Marimo and JupyterLite: https://github.com/microsoft/vscode/issues/237371
Does Marimo support local file system access?
jupyterlab-filesystem-access only works with Chrome?: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
vscode-marimo: https://github.com/marimo-team/vscode-marimo
"Normalize and make Content frontends and backends extensible #315" https://github.com/jupyterlite/jupyterlite/issues/315
"ENH: Pluggable Cloud Storage provider API; git, jupyter/rtc" https://github.com/jupyterlite/jupyterlite/issues/464
Jupyterlite has read only access to GitHub repos without login, but vscode.dev does not.
Anyways, nbreproduce wraps repo2docker and there's also a repo2jupyterlite.
nbreproduce builds a container to run an .ipynb with: https://github.com/econ-ark/nbreproduce
container2wasm wraps vscode-container-wasm: https://github.com/ktock/vscode-container-wasm
container2wasm: https://github.com/ktock/container2wasm
Scientists Discover Underground Water Vault in Oregon 3x the Size of Lake Mead
Would have been better to not admit it was there or have hidden it.
The last things we need are more clean water reserves sucked dry by Intel rather than it focusing on guaranteeing its own supplies through other means, or used as excuses for Southern California not to continue its (admittedly in some counties admirable) efforts to get water resource management under control in concert with other states in the watersheds.
Resource curse.
TIL semiconductor manufacturing uses a lot of water, relative to other production processes? And energy, for photolithography.
(Edit: nanoimprint lithography may have significantly lower resource requirements than traditional lithography? https://arstechnica.com/reviews/2024/01/canon-plans-to-disru... : "will be “one digit” cheaper and use up to 90 percent less power" )
Datacenters too;
"Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
Most datacenters have no way to return their boiled, sterilized water for water treatment, so they don't give or sell datacenter waste water back; the water takes heat with it when it evaporates.
From https://news.ycombinator.com/item?id=42454547#42460317 :
> FWIU, datacenters are unable to sell their waste heat, boiled sterilized steam and water, unused diesel, and potentially excess energy storage.
"Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
How did it form?
How does it affect tectonics on the western coast of the US?
Cascade Range > Geology: https://en.wikipedia.org/wiki/Cascade_Range#Geology
https://www.sci.news/othersciences/geophysics/hikurangi-wate... :
> Revealed by 3D seismic imaging, the newly-discovered [Hikurangi] water reservoir lies 3.2 km (2 miles) under the ocean floor off the coast of New Zealand, where it may be dampening a major earthquake fault that faces the country’s North Island. The fault is known for producing slow-motion earthquakes, called slow slip events. These can release pent-up tectonic pressure harmlessly over days and weeks
The "Slow earthquake" wikipedia article mentions the northern Cascades as a research area of interest: https://en.wikipedia.org/wiki/Slow_earthquake
They say water has a fingerprint; a hydrochemical and/or a geochemical footprint?
Is the water in the reservoir subducted from the surface or is it oozing out of the Earth?
"Dehydration melting at the top of the lower mantle" (2014) https://www.science.org/doi/abs/10.1126/science.1253358 :
> Summary: [...] Schmandt et al. combined seismological observations beneath North America with geodynamical modeling and high-pressure and -temperature melting experiments. They conclude that the mantle transition zone — 410 to 660 km below Earth's surface — acts as a large reservoir of water.
Physicists who want to ditch dark energy
From https://news.ycombinator.com/item?id=36222625#36265001 :
> Further notes regarding Superfluid Quantum Gravity (instead of dark energy)
A 'warrior' brain surgeon saved his Malibu street from wildfires and looters
> training, N95 masks, sourced fire hoses
> sprinklers in the roof, cement tiles instead of wood
> Dr Griffiths, who is also a doctor to the LA Kings hockey team, said if one thing can come from the devastating tragedy, he wants people to get to know their neighbours.
From "Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?" https://news.ycombinator.com/item?id=42665860 :
> Next-Gen Firefighting Aircraft (working title: NGFFA)
Refactoring with Codemods to Automate API Changes
From "Show HN: Codemodder – A new codemod library for Java and Python" (2024) https://news.ycombinator.com/item?id=39111747 :
> [ codemodder-python, libCST, MOSES and Holman's elegant normal form, singnet/asmoses, Formal Verification, ]
How do photons mediate both attraction and repulsion?
Additional recent findings in regards to photons (and attraction and repulsion) from the past few years:
"Scientists discover laser light can cast a shadow" https://news.ycombinator.com/item?id=42231644 :
- "Shadow of a laser beam" (2024) https://opg.optica.org/optica/fulltext.cfm?uri=optica-11-11-...
- "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315
- "Ultrafast opto-magnetic effects in the extreme ultraviolet spectral range" (2024) https://www.nature.com/articles/s42005-024-01686-7 .. https://news.ycombinator.com/item?id=40911861
- 2024: better than the amplituhedron (for avoiding some Feynman diagrams) .. "Physicists Reveal a Quantum Geometry That Exists Outside of Space and Time" (2024) https://www.quantamagazine.org/physicists-reveal-a-quantum-g... :
- "All Loop Scattering As A Counting Problem" (2023) https://arxiv.org/abs/2309.15913
- "All Loop Scattering For All Multiplicity" (2023) https://arxiv.org/abs/2311.09284
And then gravity and photons:
- "Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) https://journals.aps.org/pra/abstract/10.1103/PhysRevA.108.0...
- "Photonic implementation of quantum gravity simulator" (2024) https://www.spiedigitallibrary.org/journals/advanced-photoni... .. https://news.ycombinator.com/item?id=42506463
- "Graviton to Photon Conversion via Parametric Resonance" (2023) https://arxiv.org/abs/2205.08767 .. "Physicists discover that gravity can create light" (2023) https://news.ycombinator.com/item?id=35633291#35674794
What about the reverse: if gravitons produce photons, can photons create gravity?
- "All-optical complex field imaging using diffractive processors" (2024) https://www.nature.com/articles/s41377-024-01482-6 .. "New imager acquires amplitude and phase information without digital processing" (2024) https://www.google.com/amp/s/phys.org/news/2024-05-imager-am...
- "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
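For reference, the Huygens-Steiner (parallel axis) theorem invoked in the quote relates the moment of inertia about any axis to the one through the center of mass:

```latex
I = I_{\mathrm{cm}} + m d^{2}
```

where $d$ is the distance between the two parallel axes and $m$ is the body's mass.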
- "Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://www.nature.com/articles/35018520 .. https://www.impactlab.com/2024/10/13/photons-defy-time-new-q... ; Rubidium
- "Gain-assisted superluminal light propagation" (2000) https://www.nature.com/articles/35018520 ; Cesium
- "Exact Quantum Electrodynamics of Radiative Photonic Environments" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... .. "New theory reveals the shape of a single photon"
- "Physicists Magnetize a Material with Light" (2025) https://news.ycombinator.com/item?id=42628841
Ask HN: What are the biggest PITAs about managing VMs and containers?
I’ve been asked to write a blog post about “The PITA of managing containers and VMs.”
It's meant to be a rant listicle (with explanations as appropriate). What should I be sure to include?
One of the goals of containers is to unify the development and deployment environments. I hate developing and testing code in containers, so I develop and test code outside them and then package and test it again in a container.
Containerized apps need a lot of special boilerplate to determine how much CPU and memory they are allowed to use. It’s a lot easier to control resource limits with virtual machines, because the system resources are all dedicated to the application.
Orchestration of multiple containers for dev environments is just short of feature complete. With Compose, it’s hard to bring down specific services and their dependencies so you can then rebuild and rerun. I end up writing Ansible playbooks to start and stop components that are designed to be executed in particular sequences. Ansible makes it hard to detach a container, wait a specified time, and see if it’s running. Compose just needs to be updated to support management of shutting down and restarting containers, so I can move away from Ansible.
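The dependency-aware stop/start ordering wished for above can be sketched with Python's stdlib `graphlib` (the service graph here is hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical service graph: each service maps to the services it depends on
depends_on = {
    "web": {"api"},
    "api": {"db", "kafka"},
    "db": set(),
    "kafka": set(),
}

# Start order: dependencies first; stop order: dependents first
start_order = list(TopologicalSorter(depends_on).static_order())
stop_order = list(reversed(start_order))
print(start_order)
print(stop_order)
```

An orchestrator that honored this ordering for partial restarts would cover the Compose gap described above.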
Services like Kafka that query the host name and broadcast it are difficult to containerize since the host name inside the container doesn’t match the external host name. Requires manual overrides which are hard to specify at run time because the orchestrators don’t make it easy to pass in the host name to the container. (This is more of a Kafka issue, though.)
Systemd, k8s, Helm, and Terraform model service dependencies.
Quadlet is the recommended way to run podman with systemd instead of k8s.
Podman supports kubes of containers and pods of containers;
man podman-container
man podman-generate-kube
man podman-kube
man podman-pod
`podman generate kube` generates YAML for `podman kube play` and for k8s `kubectl`.

Podman Desktop can create a local k8s (Kubernetes) cluster with any of kind, minikube, or OpenShift Local. k3d and rancher also support creating one-node k8s clusters with minimal RAM requirements for cluster services.
kubectl is the utility for interacting with k8s clusters.
k8s Ingress API configures DNS and Load Balancing (and SSL certs) for the configured pods of containers.
E.g. Traefik and Caddy can also configure the load balancer web server(s) and request or generate certs given access to a docker socket to read the labels on the running containers to determine which DNS domains point to which containers.
Container labels can be specified in the Dockerfile/Containerfile, and/or a docker-compose.yml/compose.yml, and/or in k8s yaml.
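As a toy sketch of that mechanism (the labels and rule syntax are modeled on Traefik's `Host(...)` router rules; this is not Traefik's actual implementation):

```python
import re

# Hypothetical container labels, Traefik-style routing rule
labels = {
    "traefik.enable": "true",
    "traefik.http.routers.web.rule": "Host(`example.com`)",
}

def hosts_from_labels(labels: dict) -> list[str]:
    """Collect Host(`...`) values from router rule labels."""
    hosts = []
    for key, value in labels.items():
        if key.endswith(".rule"):
            hosts += re.findall(r"Host\(`([^`]+)`\)", value)
    return hosts

print(hosts_from_labels(labels))  # ['example.com']
```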
Compose supports specifying a number of replicas per service; `docker compose up --scale web=3`.
Terraform makes infrastructure state consistent.
Compose does not support rolling or red/green deployment strategies. Does compose support HA high-availability deployments? If not, justify investing in a compose yaml based setup instead of k8s yaml.
Quadlet is the way to do podman containers without k8s; with just systemd for now.
Thanks! I’ll take a look at quadlet.
I find that I tend to package one-off tasks as containers as well; for example, to create database tables and users. Compose supports these sorts of things. Ansible actually makes it easy to use and block on container tasks that you don’t detach.
I’m not interested in running kubernetes, even locally.
Podman kube has support for k8s Jobs now: https://github.com/containers/podman/pull/23722
k8s docs > concepts > workloads > controllers > Jobs: https://kubernetes.io/docs/concepts/workloads/controllers/jo...
Ingress, Deployment, StatefulSets,: https://news.ycombinator.com/item?id=37763931
Northeastern's curriculum changes abandon fundamentals of computer science
Racket: https://learnxinyminutes.com/racket/
Python: https://learnxinyminutes.com/python/
pyret?
pyret: https://pyret.org/pyret-code/ :
> Why not just use Java, Python, Racket, OCaml, or Haskell?
IMHO fun educational languages aren't general purpose or production ready; and at this point in my career I would also appreciate a more reusable language in a CS curriculum.
Python isn't CS pure like [favorite lisp], but it is a language coworkers will understand, it supports functional and object-oriented paradigms, and the pydata tools enable CS applications in STEM.
A lot of AI and ML code is written in Python, with C/Rust/Go.
There's an AIMA Python, but there's not a pyret or a racket AIMA or SICP, for example.
"Why MIT Switched from Scheme to Python (2009)" https://news.ycombinator.com/item?id=14167453
Computational thinking > Characteristics: https://en.wikipedia.org/wiki/Computational_thinking#Charact...
"Ask HN: Which school produces the best programmers or software engineers?" https://news.ycombinator.com/item?id=37581843
overleaf/learn/Algorithms re: nonexecutable LaTeX ways to specify algorithms: https://www.overleaf.com/learn/latex/Algorithms
Book: "Classic Computer Science Algorithms in Python"
coding-problems: https://github.com/MTrajK/coding-problems
coding-interview-university: https://github.com/jwasham/coding-interview-university
coding-interview-university lists "Computational complexity" but not "Computational thinking", which possibly isn't that different from WBS (work breakdown structure), problem decomposition and resynthesis, and the Scientific Method
Ask HN: How can I learn to better command people's attention when speaking?
I've noticed over the years that whenever I'm in group conversations in a social setting, people in general don't pay too much attention to what I say. For example, let's say the group is talking about travel and someone says something I find relatable e.g. someone mentions a place I've been to and really liked. When I try to contribute to the conversation, people just don't seem interested, and typically the conversation moves on as if I hadn't said anything. If I try to speak for a longer time (continuing with the travel example, let's say I try to talk about a particular attraction I enjoyed visiting at that location), I'm usually interrupted, and the focus shifts to whoever interrupted me.
This has happened (and still happens often) a lot, in different social circles, with people of diverse backgrounds. So, I figure it's not that I hang out with rude people, the problem must be me. I think the saddest part of all this is that even my wife's attention drifts off most of the time I try to talk to her.
I know it's not a language barrier issue, and I know for sure I enunciate my words well. I wonder though if the issue may be that I have a weak voice, or just an overall weak presence/body language. How can that be improved, if that's the case?
Could building rapport help?
Book: "Power Talk: Using Language to Build Authority and Influence" (2001) https://g.co/kgs/6L8MxNy
- Speaking from the edge, Speaking from the center
"From Comfort Zone to Performance Management" (2009) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%E2... .. https://news.ycombinator.com/item?id=32786594 :
> The ScholarlyArticle also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages
Show HN: TabPFN v2 – A SOTA foundation model for small tabular data
I am excited to announce the release of TabPFN v2, a tabular foundation model that delivers state-of-the-art predictions on small datasets in just 2.8 seconds for classification and 4.8 seconds for regression compared to strong baselines tuned for 4 hours. Published in Nature, this model outperforms traditional methods on datasets with up to 10,000 samples and 500 features.
The model is available under an open license: a derivative of the Apache 2 license with a single modification, adding an enhanced attribution requirement inspired by the Llama 3 license: https://github.com/PriorLabs/tabpfn. You can also try it via API: https://github.com/PriorLabs/tabpfn-client
TabPFN v2 is trained on 130 million synthetic tabular prediction datasets to perform in-context learning and output a predictive distribution for the test data points. Each dataset acts as one meta-datapoint to train the TabPFN weights with SGD. As a foundation model, TabPFN allows for fine-tuning, density estimation and data generation.
Compared to TabPFN v1, v2 now natively supports categorical features and missing values. TabPFN v2 performs just as well on datasets with or without these. It also handles outliers and uninformative features naturally, problems that often throw off standard neural nets.
TabPFN v2 performs as well with half the data as the next best baseline (CatBoost) with all the data.
We also compared TabPFN to the SOTA AutoML system AutoGluon 1.0. Standard TabPFN already outperforms AutoGluon on classification and ties on regression, but ensembling multiple TabPFNs in TabPFN v2 (PHE) is even better.
There are some limitations: TabPFN v2 is very fast to train and does not require hyperparameter tuning, but inference is slow. The model is also only designed for datasets up to 10k data points and 500 features. While it may perform well on larger datasets, it hasn't been our focus.
We're actively working on removing these limitations and intend to release new versions of TabPFN that can handle larger datasets, have faster inference and perform in additional predictive settings such as time-series and recommender systems.
We would love for you to try out TabPFN v2 and give us your feedback!
anyone tried this? is this actually overall better than xgboost/catboost?
Benchmark of tabpfn<2 compared to xgboost, lightgbm, and catboost: https://x.com/FrankRHutter/status/1583410845307977733 .. https://news.ycombinator.com/item?id=33486914
A video tour of the Standard Model (2021)
Standard Model: https://en.wikipedia.org/wiki/Standard_Model
Mathematical formulation of the Standard Model: https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...
Physics beyond the Standard Model: https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo...
Story of the LaTeX representation of the standard model, from a comment re: "The deconstructed Standard Model equation" (2016): https://news.ycombinator.com/item?id=41753471#41772385
Can manim work with sympy expressions?
A standard model demo video could vary each (or a few) highlighted variables and visualize the [geometric,] impact/sensitivity of each
Excitons in the fractional quantum Hall effect
"Excitons in the Fractional Quantum Hall Effect" (2024) https://arxiv.org/abs/2407.18224
Fractional quantum Hall effect in rhombohedral, pentalayer graphene: https://news.ycombinator.com/item?id=42314581
Fractional quantum Hall effect: https://en.wikipedia.org/wiki/Fractional_quantum_Hall_effect
Superconducting nanostrip single photon detectors made of aluminum thin-films
"Superconducting nanostrip single photon detectors fabricated of aluminum thin-films" (2024) https://www.sciencedirect.com/science/article/pii/S277283072...
"Fabricating single-photon detectors from superconducting aluminum nanostrips" (2025) https://phys.org/news/2025-01-fabricating-photon-detectors-s...
Combining graphene and nanodiamonds for better microplasma devices
"High stability plasma illumination from micro discharges with nanodiamond decorated laser induced graphene electrodes" (2024) https://www.sciencedirect.com/science/article/pii/S277282852...
Physicists Magnetize a Material with Light
"Researchers discover new material for optically-controlled magnetic memory" (2024) https://phys.org/news/2024-08-material-optically-magnetic-me... ..
"Distinguishing surface and bulk electromagnetism via their dynamics in an intrinsic magnetic topological insulator" (2024) https://www.science.org/doi/10.1126/sciadv.adn5696
> MnBi2Te4
ScholarlyArticle: "Terahertz field-induced metastable magnetization near criticality in FePS3" (2024) https://www.nature.com/articles/s41586-024-08226-x
"Room temperature chirality switching and detection in a helimagnetic MnAu2 thin film" (2024) https://www.nature.com/articles/s41467-024-46326-4 .. https://scitechdaily.com/memory-breakthrough-helical-magnets... .. https://news.ycombinator.com/item?id=41921153
Can we create matter from light?
Indeed we can:
https://www.energy.gov/science/np/articles/making-matter-col...
Scientists find strong evidence for the long-predicted Breit-Wheeler effect—generating matter and antimatter from collisions of real photons.
Breit–Wheeler process > Experimental observations: https://en.wikipedia.org/wiki/Breit%E2%80%93Wheeler_process#...
Scientists find 'spooky' quantum entanglement within individual protons
Tell HN: ChatGPT can't show you a 5.25" floppy disk
Challenge: Come up with a query that makes it draw something resembling a 5.25" floppy, that is, without the metal shield and hub that is present on the 3.5" disks.
PostgreSQL Support for Certificate Transparency Logs Now Available
Are there Merkle hashes between the rows in the PostgreSQL CT store like there are in the Trillian CT store?
Sigstore Rekor also has centralized Merkle hashes.
Doesn’t Rekor run on top of Trillian?
German power prices turn negative amid expansion in renewables
Given the intraday prices, are there sufficient incentives to stimulate creation of energy storage businesses to sell the excess electricity back a couple hours or days later?
Or have the wasteful idiot cryptoasset miners been regulated out of existence such that there is no longer a buyer of last resort for excess rationally-subsidized clean energy?
Are the Duck Curve and Alligator curve problems the same in EU and other energy markets with and without intraday electricity prices?
Duck curve: Solar peaks around noon, but energy usage peaks around the evening commute and dinner; https://en.wikipedia.org/wiki/Duck_curve
Alligator curve: Wind peaks in the middle of the night;
https://fresh-energy.org/renewable-integration-in-the-midwes... :
> So, if we’re not duck-like, what is the Upper Midwest’s energy mascot? We give you: the Smilin’ Gator Curve. Unlike California’s ominous, faceless duck, Sally Gator welcomes the Midwest’s commitment to renewable generation!
(An excellent drawing of a cartoon municipally-bonded incumbent energy utility mascot!)
Can cryptoasset or data mining firms scale up to increase demand for electricity when energy prices are low or below zero? What are their relocation costs?
Is such a low subsidized energy price lucrative to energy storage, not-yet-PQ Proof-of-Work, and other data mining firms?
Can't GPE (gravitational potential energy) storage in old but secured mine shafts scale to meet energy storage requirements?
As far as I’m aware, at least as it has been explained to me, storage is a smaller problem. The bigger problem is how you transfer that energy from where it is stored to where it is needed. Offshore wind farms don't help Bayern at all.
So, per your understanding, grid connectivity for energy production projects is a bigger issue than energy storage in their market?
Price falls below zero because there's a supply glut (and due to aggressive, excessive, or effective subsidies to accelerate the transition to clean energy).
Is it the national or regional electricity prices that are falling below zero?
Is the price lower during certain hours of the day? If so, then energy storage could probably smooth that out.
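As a toy illustration of that smoothing (all numbers are hypothetical, and round-trip efficiency, fees, and dispatch constraints are ignored), intraday arbitrage against a duck-curve-like price series:

```python
# Toy arbitrage: charge a hypothetical 10 MWh battery in the cheapest hours,
# discharge in the dearest hours.
prices = [-5, -2, 10, 30, 60, 90, 40, 20]  # EUR/MWh, illustrative only

capacity_mwh = 10
hours = sorted(range(len(prices)), key=lambda h: prices[h])
charge_hours = hours[:2]       # buy in the two cheapest hours
discharge_hours = hours[-2:]   # sell in the two dearest hours

cost = sum(prices[h] for h in charge_hours) * (capacity_mwh / 2)
revenue = sum(prices[h] for h in discharge_hours) * (capacity_mwh / 2)
print(revenue - cost)  # 785.0: storage is paid to charge at negative prices
```

With negative prices the battery is paid on both legs, which is exactly the incentive question raised above.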
Zasper: A Modern and Efficient Alternative to JupyterLab, Built in Go
I am the author of Zasper.
The unique feature of Zasper is that the Jupyter kernel handling is built with Go goroutines and is far superior to how it's done by JupyterLab in Python.
Zasper uses one fourth of the RAM and one fourth of the CPU used by JupyterLab. While JupyterLab uses around 104.8 MB of RAM and 0.8 CPUs, Zasper uses 26.7 MB of RAM and 0.2 CPUs.
Other features like Search are slow because they are not refined.
I am building it alone fulltime and this is just the first draft. Improvements will come for sure in the near future.
I hope you liked the first draft.
I'm an IPython maintainer and Jupyter dev (even if I barely touch frontend stuff these days). Happy to see diversity; keep up the good work, and happy new year. Feel free to open issues upstream if you find a lack of documentation or issues with the protocol. You can also try to reach the Jupyter media strategy team; maybe they'll be open to having a blog post about this on blog.jupyter.org
That's stellar sportsmanship right there.
Not that jupyter's team needed even more respect from the community but damn.
I think that's fairly normal; having alternative frontends can only be beneficial to the community. I know it also looks like there is a single Jupyter team, but the project is quite large, there are a lot of constraints and disagreements internally, and there is no way to accommodate all users in the default Jupyter install. Alternatives are always welcome; at least if they don't fragment the ecosystem by breaking backward compatibility with the default.
Also, to be fair, I'm one of the Jupyter devs that agrees with many of OP's points, and would have pulled the project in a different direction; but regardless I will still support people wanting to go in a different direction than mine.
> Alternatives are always welcome; at least if they don't fragment the ecosystem by breaking backward compatibility with the default.
Genuinely curious; what mechanisms has Jupyter introduced to prevent ecosystem fragmentation?
The Jupyter community maintains a public spec of the notebook file format [1], the kernel protocol [2], etc. I have been involved with many alternative Jupyter clients, and having these specs combined with a friendly and welcoming community is incredibly helpful!!!
[1] https://github.com/jupyter/nbformat
[2] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
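As a minimal illustration of the notebook file format spec [1], reading cells from an inline nbformat-4 document (stdlib only; the field names are from the public spec):

```python
import json

# Minimal nbformat-4 notebook document
nb_json = """{
  "nbformat": 4, "nbformat_minor": 5, "metadata": {},
  "cells": [
    {"cell_type": "markdown", "metadata": {}, "source": ["# Title"]},
    {"cell_type": "code", "metadata": {}, "execution_count": null,
     "outputs": [], "source": ["print('hi')"]}
  ]
}"""

nb = json.loads(nb_json)
code_cells = [c for c in nb["cells"] if c["cell_type"] == "code"]
print(len(code_cells))  # 1
```

Because the format is plain JSON with a published schema, alternative clients can round-trip notebooks without depending on any Jupyter code.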
jupyter-server/enterprise_gateway: https://github.com/jupyter-server/enterprise_gateway
JupyterLab supports Lumino and React widgets.
Jupyter Notebook was built on jQuery, but Notebook is now rebuilt on JupyterLab components, and NbClassic preserves the classic UI.
Breaking the notebook extension API from Notebook to Lab unfortunately caused rework, as I recall.
jupyter-xeus/xeus is an "Implementation of the Jupyter kernel protocol in C++": https://github.com/jupyter-xeus/xeus
jupyter-xeus/xeus-python is a "Jupyter kernel for the Python programming language" that's also what JupyterLite runs in WASM instead of ipykernel: https://github.com/jupyter-xeus/xeus-python#what-are-the-adv...
JupyterLite kernels normally run in WASM; which they are compiled to by emscripten / LLVM.
To also host WASM kernels in a Go process, I just found goingo: https://github.com/fizx/goingo .. https://news.ycombinator.com/item?id=26159440
Vscode and vscode.dev support WASM container runtimes now; so the Python kernel runs in WASM, inside a WASM container, inside vscode FWIU.
Vscode supports polyglot notebooks that run multiple kernels, like "vatlab/sos-notebook" and "minrk/allthekernels". Defining how to share variables between kernels is the more unsolved part AFAIU. E.g. Arrow has bindings for zero-copy sharing in multiple languages.
Cocalc, Zeppelin, Marimo notebook, Databricks, Google Colaboratory (Colab tools), and VSCode have different takes on notebooks with I/O in JSON.
There is no CDATA in HTML5; so HTML within an HTML based notebook format would need to escape encode binary data in cell output, too. But the notebook format is not a packaging format. So, for reproducibility of (polyglot) notebooks there must also be a requirements.txt or an environment.yml to indicate the version+platform of each dependency in Python and other languages.
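A minimal sketch of that escape-encoding: binary cell output has to be text-encoded before it can be embedded in an HTML- or JSON-based notebook format, and `.ipynb` uses base64 for `image/png` outputs:

```python
import base64

# PNG magic header stands in for real image bytes
png_bytes = b"\x89PNG\r\n\x1a\n"
encoded = base64.b64encode(png_bytes).decode("ascii")
decoded = base64.b64decode(encoded)
assert decoded == png_bytes
print(encoded)
```

Base64 inflates the payload by about a third, which is one reason a notebook format is a poor packaging format for large binary outputs.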
repo2docker (and repo2podman) build containers by installing packages according to the first requirements.txt or environment.yml found, per REES (the Reproducible Execution Environment Specification). repo2docker includes a recent version of jupyterlab in the container.
JupyterLab does not default to HTTPS (with a self-signed or Let's Encrypt cert) but probably should, because Jupyter is a shell that can run commands as the user that owns the Jupyter kernel process.
MoSH is another way to run a web-based remote terminal. Jupyter terminal is not built on MoSH Mobile Shell.
jupyterlab/jupyter-collaboration for real time collaboration is based on the yjs/yjs CRDT. https://github.com/jupyterlab/jupyter-collaboration
Cocalc's Time Slider tracks revisions to all files in a project; including latex manuscripts (for ArXiV), which - with Computer Modern fonts and two columns - are the typical output of scholarly collaboration on a ScholarlyArticle.
At this stage, notebooks should be a GUI powered docker-like image format you download and then click to run.
Non programmers using notebooks are usually the least qualified to make them reproducible, so better just ship the whole thing.
There are packaged installers for the jupyterlab-desktop GUI for Windows, Mac, and Linux: https://github.com/jupyterlab/jupyterlab-desktop#installatio...
Docker Desktop and Podman Desktop are GUIs for running containers on Windows, Mac, and Linux.
Containers become out of date quickly.
If programmer or non-programmer notebook authors do not keep versions specified in a requirements.txt upgraded, what will notify other users that they are installing old versions of software?
Are there CVEs in any of the software listed in the SBOM for a container?
There should be tests to run after upgrading notebook and notebook server dependencies.
Notes re: notebooks, reproducibility, and something better than MHTML/ZIP; https://news.ycombinator.com/item?id=35896192 , https://news.ycombinator.com/item?id=35810320
From a JEP proposing "Markdown based notebooks" https://github.com/jupyter/enhancement-proposals/pull/103#is... :
> Any new package format must support cryptographic signatures and ideally WoT identity
Any new package format for jupyter must support multiple languages, because polyglot notebooks may require multiple jupyter kernels.
Existing methods for packaging notebooks as containers and/or as WASM: jupyter-docker-stacks, repo2docker / repo2podman, jupyterlite, container2wasm
You can sign and upload a container image built with repo2docker to any OCI image registry like Docker, Quay, GitHub, GitLab, Gitea; but because Jupyter runs a command execution shell on a TCP port, users should upgrade jupyter to limit the potential for remote exploitation of security vulnerabilities.
> Non programmers using notebooks are usually the least qualified to make them reproducible, so better just ship the whole thing.
Programs should teach idempotency, testing, isolation of sources of variance, and reproducibility.
What should the UI explain to the user?
If you want your code to be more likely to run in the future, you need to add a "package" or a "package==version" string in a requirements.txt (or pyproject.toml, or an environment.yml) for each `import` statement in the code.
If you do not specify the exact versions with `package==version` or similar, when users try to install the requirements to run your notebook, they could get a newer or a different version of a package for a different operating system.
If you want to prevent MITM of package installs, you need to specify a hash for each package artifact (one per platform wheel) in the requirements.txt or similar; with pip's hash-checking mode that looks like `package==version --hash=sha256:abc123...`.
If you want to further limit software supply chain compromise, you must check the cryptographic signatures on packages to install, and verify that you trust that key to sign that package. (This is challenging even for expert users.)
WASM containers that run jupyter but don't expose it on a TCP port may be less of a risk, but there is a performance penalty to WASM.
If you want users to be able to verify that your code runs and has the same output (is "reproducible"), you should include tests to run after upgrading notebook and notebook server dependencies.
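As a sketch of what hash pinning buys you: the installer recomputes the downloaded artifact's digest and refuses to install on a mismatch. A minimal illustration (the package name in the comment is hypothetical; the digest below is the real SHA-256 of the stand-in bytes b"test", not of any real wheel):

```python
import hashlib

# Sketch of what pip's hash-checking mode does. A pinned requirements line
# looks like (hypothetical package):
#
#   example-pkg==1.0.0 --hash=sha256:<digest>
#
def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest, as used in --hash=sha256:<digest> pins."""
    return hashlib.sha256(data).hexdigest()

downloaded_artifact = b"test"  # stand-in for the downloaded wheel's bytes
pinned_digest = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

# Install proceeds only if the recomputed digest matches the pin:
assert sha256_of(downloaded_artifact) == pinned_digest
```

This protects against a tampered artifact in transit, but not against a malicious version that was pinned in the first place; that is what the signature-checking step above is for.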
NASA, Axiom Space Change Assembly Order of Commercial Space Station
I wonder if there's any useful heavy equipment aboard the ISS that could be transferred to the Axiom prior to separation and thus salvaged. It'd have to be stuff the ISS could do without for the remaining couple years of its life.
I wonder if the ISS could instead be scrapped to the moon.
Let's get this space station to the moon.
Can a [Falcon 9 [Heavy] or similar] rocket shove the ISS from its current attitude into an Earth-Moon orbit with or without orbital refuelling?
The ISS weighs 900,000 lbs on Earth.
Have we yet altered the orbital trajectory of anything that heavy in space?
Can any existing rocket program rendezvous and boost sideways to alter the trajectory of NEOs (Near-Earth Objects) or aging, heirloom, defunct space stations?
Which of the things of ISS that we have internationally paid to loft into orbit would be useful for future robot, human, and emergency operations on the Moon?
NASA evaluated a bunch of options before deciding to de-orbit ISS. You can read a summary here: https://www.nasa.gov/wp-content/uploads/2024/06/iss-deorbit-... (PDF)
About boosting to a higher orbit, they wrote:
"Space station operations require a full-time crew to operate, and as such, an inability to keep crews onboard would rule out operating at higher altitudes. The cargo and crew vehicles that service the space station are designed and optimized for its current 257 mile (415km) altitude and, while the ability of these vehicles varies, NASA’s ability to maintain crew on the space station at significantly higher altitudes would be severely impacted or even impossible with the current fleet. This includes the International crew and cargo fleet, as Russian assets providing propulsion and attitude control need to remain operational through the boost phase.
"Ignoring the requirement of keeping crew onboard, NASA evaluated orbits above the present orbital regime that could extend just the orbital lifetime of the space station. [...]
"However, ascending to these orbits would require the development of new propulsive and tanker vehicles that do not currently exist. While still currently in development, vehicles such as the SpaceX Starship are being designed to deliver significant amounts of cargo to these orbits; however, there are prohibitive engineering challenges with docking such a large vehicle to the space station and being able to use its thrusters while remaining within space station structural margins. Other vehicles would require both new certifications to fly at higher altitudes and multiple flights to deliver propellant.
"The other major consideration when going to a higher altitude is the orbital debris regime at each specified locale. The risk of a penetrating or catastrophic impact to space station (i.e., that could fragment the vehicle) increases drastically above 257 miles (415km). While higher altitudes provide a longer theoretical orbital life, the mean time between an impact event decreases from ~51 years at the current operational altitude to less than four years at a 497 mile (800km), ~700-year orbit. This means that the likelihood of an impact leaving station unable to maneuver or react to future threats, or even a significant impact resulting in complete fragmentation, is unacceptably high. NASA has estimated that such an impact could permanently degrade or even eliminate access to LEO for centuries."
Thanks for the research.
How are any lunar orbital trajectories relatively safe given the same risks to all crafts at such altitudes?
Is it mass or thrust, or a failure to plan something better than inconsiderately decommissioning it into the atmosphere and ocean?
If there are escape windows to the moon for other programs, how are there no escape windows to the moon for the ISS?
Given the standing risks of existing orbital debris and higher-altitude orbits' lack of shielding, are NEO impact collisions with e.g. hypersonic glide delivery vehicles advisable methods for NEO avoidance?
The NEO avoidance need is still to safely rendezvous and shove things headed for Earth orbit into a different trajectory; is there a better plan than blowing a NEO up into fragments still headed for Earth, like rendezvousing and shoving it to the side?
m_ISS ~ 4.5e5 kg [1]
Rocket equation [2]:
m_0 = m_f exp(v_delta / v_e)
where
m_f = final mass, i.e. mass of ISS and the boosters
m_0 = m_f + propellant mass
v_delta = velocity change
v_e = effective exhaust velocity of the boosters
Let's try a high-thrust transfer from LEO to the Lunar Gateway's orbit via TLI (Trans-Lunar Injection) [3]:
v_delta = 3.20 + 0.43 = 3.63 km/s
For boosters, let's use the dual-engine Centaur III (because Wikipedia has mass and v_e data for it) [4]:
m_dry = 2462 kg
m_propellant = 20830 kg
v_e = 4.418 km/s
The idea is to attach n of these to the ISS. The rocket equation becomes
m_ISS + n (m_dry + m_propellant) = (m_ISS + n m_dry) exp(v_delta / v_e)
Solve for n:
n = m_ISS (exp(v_delta / v_e) - 1) / (m_propellant + m_dry (1 - exp(v_delta / v_e)) )
Plug in numbers and find
n ~ 32.4
So we need 33 Centaur III (and some way to attach them, which I optimistically assume won't add significantly to the ISS mass).
Total Centaur III + propellant mass: 33 * (2462 + 20830) = 768636 kg
Planned Starship payload capacity to LEO is 2e5 kg [5], so assuming that a way can be found to fit 7 Centaur III in its payload bay, we can get all 33 boosters to LEO with five Starship launches.
Why not use Starship itself? Its Raptor Vacuum engines have lower v_e (~3.7 km/s) [6], and if you want it back, you need to add fuel for the return trip to m_f. Exercise for the reader!
[1] https://en.wikipedia.org/wiki/International_Space_Station
[2] https://en.wikipedia.org/wiki/Tsiolkovsky_rocket_equation
[3] https://en.wikipedia.org/wiki/Delta-v_budget#Earth_Lunar_Gat...
[4] https://en.wikipedia.org/wiki/Centaur_(rocket_stage)
[5] https://en.wikipedia.org/wiki/SpaceX_Starship_(spacecraft)
[6] https://en.wikipedia.org/wiki/SpaceX_Raptor#Raptor_Vacuum
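The arithmetic above can be checked in a few lines of Python, using the same figures as cited and the same simplifying assumptions (impulsive high-thrust burn, negligible attachment hardware mass):

```python
import math

# Inputs from the comment above (Wikipedia-sourced figures)
m_iss = 4.5e5        # kg, approximate ISS mass
dv = 3.20 + 0.43     # km/s, LEO -> TLI -> Lunar Gateway orbit insertion
v_e = 4.418          # km/s, dual-engine Centaur III effective exhaust velocity
m_dry = 2462.0       # kg, Centaur III dry mass
m_prop = 20830.0     # kg, Centaur III propellant mass

# n boosters: m_ISS + n (m_dry + m_prop) = (m_ISS + n m_dry) exp(dv / v_e)
r = math.exp(dv / v_e)
n = m_iss * (r - 1) / (m_prop + m_dry * (1 - r))
print(round(n, 1))   # ~32.4, so 33 Centaur III stages
```

Note the denominator goes to zero (and n diverges) as the dry-mass term eats the propellant budget, which is why a high v_e stage like Centaur helps so much here.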
Thanks for the numbers. I think it's still possible to create gists with .ipynb Jupyter notebooks which can have LaTeX math and code with test assertions; symbolic algebra with sympy, astropy, GIZMO-public, spiceypy
> and some way to attach them
Because of my love for old kitchens on the Moon.
(The cost then to put all of that into orbit, in today's dollars)
So, orbitally refuelling Starship(s) would be less efficient than 33 of the cited capability all at once.
This list is pretty short:
Template:Engine_thrust_to_weight_table: https://en.wikipedia.org/wiki/Template:Engine_thrust_to_weig...
What about solar; could any solar-powered thrusters - given an unlimited amount of time - shove the ISS into a dangerous orbit towards the moon instead of the ocean?
> and some way to attach them
There's a laser welding in space spec and grants FWIU.
Can any space program do robotic spacecraft hull repair in orbit, like R2D2? With laser welding?
Or do we need to find more people like Col. McBride in the Brad Pitt space movie (Ad Astra), more astronauts?
Nanoimprint Lithography Aims to Take on EUV
Nanoimprint lithography (NIL) https://en.wikipedia.org/wiki/Nanoimprint_lithography
Feasibility of RF Based Wireless Sensing of Lead Contamination in Soil
"Feasibility of RF Based Wireless Sensing of Lead Contamination in Soil" (2024) [PDF] https://www.ewsn.org/file-repository/ewsn2024/ewsn24-final99...
Multispectral imaging through scattering media and around corners
ScholarlyArticle: "Multispectral imaging through scattering media and around corners via spectral component separation" (2024) https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-27-48786&id...
Multispectral identification of plastics: "Multi-sensor characterization for an improved identification of polymers in WEEE recycling" (2024) https://www.sciencedirect.com/science/article/pii/S0956053X2...
Ore to Iron in a Few Seconds: New Chinese Process Will Revolutionise Smelting
There's also a new process for Titanium production; for removing the oxygen FWIU:
"Direct production of low-oxygen-concentration titanium from molten titanium" (2024) https://www.nature.com/articles/s41467-024-49085-4
Awesome Donations: A repository of FLOSS donation options
Jupyter is now with LF Charities, with a new 501(c) EIN: https://github.com/jupyter/governance/issues/204#issuecommen...
NumFOCUS has Sponsored Projects: https://numfocus.org/sponsored-projects and Affiliated Projects: https://numfocus.org/sponsored-projects/affiliated-projects
Datasette-Lite is another way to support client-side search and faceting of data like a list of open source projects with attributes: not YAML-LD in git, but SQLite built with sqlite-utils.
FUNDING.yml is the GitHub Sponsors way to support donations:
"How to add a FUNDING.yml to github orgs and org/repos" https://docs.github.com/en/repositories/managing-your-reposi...
Open source projects can link to a "Custom URL" in a GitHub organization or project's FUNDING.yml file to add a "Donate" button.
From https://github.com/sponsors :
> Support the developers who power open source
> button text: See your top dependencies
> button text: Get sponsored
Open Source sponsoring organizations like NumFOCUS, LFX Platform, and LF Charities (Linux Foundation) could crawl GitHub Sponsors FUNDING.yml documents to build Donate pages.
Test as a normal human customer, in through the front door like everyone else:
I want to donate with a DAF or a QCD,
How do I find the EIN for a nonprofit corporation in the US?
/? "$project" "EIN"
/? ein donor advised fund :
DAF: Donor Advised Fund
QCD: Qualified Charitable Donation (from an IRA)
DAF pros and cons: [ https://smartasset.com/investing/donor-advised-funds-pros-an... , ]
"Ten of America’s 20 Top Public Charities Are Donor-Advised Funds" (2024) https://inequality.org/article/top-public-charities-dafs/
From https://news.ycombinator.com/item?id=30189240 :
- CharityVest: https://www.charityvest.org/ :
>> Donate cash, stock, crypto, or complex assets to your giving account when it's most tax-efficient. It’s all tax-deductible upon receipt and on one tax statement at year-end.
>> Make employees the authors of your impact story with personal and corporate donor-advised funds.
>> Provide tax-smart charitable stipends to employees, automate matching gifts, and enable equity donations.*
- TheGivingBlock handles (crypto) charitable donations to and for nonprofits: https://thegivingblock.com/
W3C Web Monetization could solve for nonprofit donations too, I think. https://WebMonetization.org/
Web Monetization handles micropayments with low fees (with ILP Interledger Protocol, like FedNow) with an open standard;
Web Monetization > Monetizing a web page: https://webmonetization.org/specification/#monetizing-a-web-... :
<link
  rel="monetization"
  href="https://example.com/pay"
  onmonetization="sayThanks(this)"
>
<script>
  function sayThanks(monetizedLink) {
    // Do something here
  }
</script>
Maybe some links to relevant and tangential Wikipedia pages about concepts described herein:
Charitable organization: https://en.wikipedia.org/wiki/Charitable_organization
Philanthropy in the US: https://en.wikipedia.org/wiki/Philanthropy_in_the_United_Sta...
Philanthropy > Criticism, Differences between traditional and new philanthropy > Promoting equity through science and health philanthropy: https://en.wikipedia.org/wiki/Philanthropy#Promoting_equity_...
Nonprofit organization > Fundraising, Problems, tech support: https://en.wikipedia.org/wiki/Nonprofit_organization#Fundrai...
Non-profit technology > Uses: https://en.wikipedia.org/wiki/Non-profit_technology#Uses :
> [Computers and the Internet;] volunteer management and support, donor management, client tracking and support, project management, human resources (paid staff) management, financial accounting, program evaluation, research, marketing, activism and collaboration. Nonprofit organizations that engage in income-generation activities, such as ticket sales, may also use technology for these functions.
Static search trees: faster than binary search
suffix-array-searching/static-search-tree/src/s_tree.rs: https://github.com/RagnarGrootKoerkamp/suffix-array-searchin... partitioned_s_tree.rs: https://github.com/RagnarGrootKoerkamp/suffix-array-searchin...
Arnis: Generate cities in Minecraft from OpenStreetMap
Re: real-time weather, flight, traffic, react-based cockpits, and open source Google Earth glTF data and flight sim applications like MSFS and X-Plane: https://news.ycombinator.com/item?id=38437821
Probably easier to determine whether a verbal or (lat,long,game) location reference refers to an in-game location or a virtual location in a game with the same name.
From https://developers.google.com/maps/documentation/tile/3d-til... re: Google Maps GlTF 3d tiles:
> Note: To render Google's photorealistic tiles, you must use a 3D Tiles renderer that supports the display of copyright attribution. You can use [CesiumJS or Cesium for Unreal]
Open source minecraft games with Python APIs:
> sensorcraft is like minecraft but in python with pyglet for OpenGL 3D; self.add_block(), gravity, ai, circuits
WebGL is more portable than OpenGL; or can't pyglet be compiled to WASM?
panda3d and godot easily compile to WASM FWIU.
An open source physics engine in Python that might be useful for minecraft clones: Genesis-Embodied-AI/Genesis; "Genesis: A Generative and Universal Physics Engine for Robotics and Beyond" https://news.ycombinator.com/item?id=42457213#42457224
From "Approximating mathematical constants using Minecraft" https://news.ycombinator.com/item?id=42319313 :
> [ luanti, minetest, mcpi, MCPI-Revival/minecraft-pi-reborn ]
That was done long ago under Flightgear, but just the maps.
Jimmy Carter has died
Jimmy Carter: https://en.wikipedia.org/wiki/Jimmy_Carter
Many analyses have since estimated the error of the projections that policy was being conducted according to, especially in the late 1970s.
"The Limits to Growth" (1972) https://en.wikipedia.org/wiki/The_Limits_to_Growth
Oil price shock (1973) https://en.wikipedia.org/wiki/1973_oil_crisis
Oil crisis (1979) https://en.wikipedia.org/wiki/1979_oil_crisis
"The Global 2000 Report to the President" (1980) https://en.wikipedia.org/wiki/The_Global_2000_Report_to_the_...
1980-: expensive international meddling we're still paying for, increased government spending, increased taxes on the middle class, oil governor, oil DCI, VP, president
1992-: significant reprieve from costly meddling, globalization, soccer, fallout from arms dropped on the eastern bloc after the end of the cold war, not much development in batteries or renewables, dot-com boom, WorldCom, GLBA deregulation, dot-com crash
2000s: humvees, hummers and ~18 MPG SUVs, debt (war expensed to national debt, tax cuts), oil commodity price volatility given an international production rate agreement, crony capitalist bailouts to top off the starve the beast scorched earth debt, almost 20 years of war with countries initially engaged in the 1980s, trillions in spending, tax cuts
"The Limits to Growth" (2004) https://en.wikipedia.org/wiki/The_Limits_to_Growth :
> "Limits to Growth: The 30-Year Update" was published in 2004. The authors observed that "It is a sad fact that humanity has largely squandered the past 30 years in futile debates and well-intentioned, but halfhearted, responses to the global ecological challenge. We do not have another 30 years to dither. Much will have to change if the ongoing overshoot is not to be followed by collapse during the twenty-first century."
Hyperspectral sensors help characterize polymers in e-waste to improve recycling
"Multi-sensor characterization for an improved identification of polymers in WEEE recycling" (2024) https://www.sciencedirect.com/science/article/pii/S0956053X2...
Advancing unidirectional heat flow: The next era of quantum thermal diodes
ScholarlyArticle: "Enhanced thermal rectification in coupled qutrit–qubit quantum thermal diode" (2024) https://pubs.aip.org/aip/apq/article/1/4/046123/3325095/Enha...
Unforgeable Quantum Tokens Delivered over Fiber Network
I am worried about the future of quantum tokens...
Whilst theoretically they are secure, I worry about potential huge side-channels allowing leaking of the key...
All it takes is a few extra photons emitted at some harmonic frequency for the key to be leaked...
I would much prefer dumb hardware and clever digital software, because at least software is much easier to secure against side channels, and much easier to audit.
Isn't the entire security of Quantum Communication predicated on its complete lack of side-channels due to the fact that measuring quantum systems collapses their wave function?
Yes, in theory. In practice, photon generators won't behave perfectly. There are lots of possible attacks, like photon splitting [1].
[1] https://onlinelibrary.wiley.com/doi/full/10.1002/qute.202300...
Once you add error correction, don't you lose all the nice properties of the no-cloning theorem? If the protocol tolerates 30% errors, doesn't it tolerate 30% MITM? (60%??)
You don't need error correction for some crypto primitives. There are QKD networks deployed that don't have that kind of error correction, as far as I know.
How can QKD repeaters store and forward or just forward without collapsing phase state?
How does photonic phase state collapse due to fiber mitm compare to a heartbeat on a classical fiber?
There is quantum counterfactual communication without entanglement FWIU? And there's a difference between QND "Quantum Non-Demolition" and "Interaction-free measurement"
From https://news.ycombinator.com/item?id=41480957#41533965 :
>> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)
> That was probably the "Bell test" article, which - IIUC - does indeed indicate that if you can read 62% of the photons you are likely to find a loophole-free violation
> [ "Violation of Bell inequality by photon scattering on a two-level emitter", ]
Bell test > Detection loophole: https://en.wikipedia.org/wiki/Bell_test#Detection_loophole :
> when using a maximally entangled state and the CHSH inequality an efficiency of η > 2√2 − 2 ≈ 0.83 is required for a loophole-free violation. [51] Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3 ≈ 0.67, [52] which is the optimal bound for the CHSH inequality. [53] Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for η > (√5 − 1)/2 ≈ 0.62 [54]
Isn't modern error detection and classical PQ sufficient to work with those odds?
> Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions, [55] superconducting qubits, [56] and nitrogen-vacancy centers. [57] These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, [30][31] and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems. [10]
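Those quoted detection-efficiency thresholds are easy to sanity-check numerically (this only restates the figures from the Wikipedia article, no new physics):

```python
import math

# Detection-efficiency thresholds for a loophole-free Bell violation:
eta_chsh_max = 2 * math.sqrt(2) - 2        # CHSH, maximally entangled state
eta_eberhard = 2 / 3                       # Eberhard, partially entangled state
eta_four_setting = (math.sqrt(5) - 1) / 2  # a four-setting inequality

print(round(eta_chsh_max, 3))      # ~0.828
print(round(eta_eberhard, 3))      # ~0.667
print(round(eta_four_setting, 3))  # ~0.618
```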
Portspoof: Emulate a valid service on all 65535 TCP ports
How does this compare to a tarpit?
Tarpit (networking) https://en.wikipedia.org/wiki/Tarpit_(networking)
/? inurl:awesome tarpit https://www.google.com/search?q=inurl%3Aawesome+tarpit+site%...
"Does "TARPIT" have any known vulnerabilities or downsides?" https://serverfault.com/questions/611063/does-tarpit-have-an...
https://gist.github.com/flaviovs/103a0dbf62c67ff371ff75fc62f... :
> However, if implemented incorrectly, TARPIT can also lead to resource exhaustion in your own server, specifically with the conntrack module. That's because conntrack is used by the kernel to keep track of network connections, and excessive use of conntrack entries can lead to system performance issues, [...]
> The script below uses packet marks to flag packets candidate for TARPITing. Together with the NOTRACK chain, this avoids the conntrack issue while keeping the TARPIT mechanism working.
The tarpit module used to be in tree.
xtables-addons/ xt_TARPIT.c: https://github.com/tinti/xtables-addons/blob/master/extensio...
Haven't looked into this too deeply but there is a difference between delaying a response (requests get stuck in the tarpit) vs providing a useless but valid response. This approach always provides a response, so it uses more resources than ignoring the request, but less resources than keeping the connection open. Once the response is sent the connection can be closed, which isn't quite how a tarpit behaves. The Linux kernel only needs to track open requests in memory so if connections are closed, they can be removed from the kernel and thus use no more resources than a standard service listening on a port.
There is a small risk in that the service replies to requests on the port, though; as replies get more complicated in order to mimic services, you run the risk of an attacker exploiting the system making the replies.

Another way of putting it: this attempts to run a server that responds to incoming requests on every port, in a way that mimics what might run on each port. It technically opens up an attack surface on every port, because an attacker can feed it requests; but the trade-off is that it runs in user mode and could be granted nil permissions, or put on a honeypot machine that is disconnected from anything useful and heavily tripwired for unusual activity. Hardcoding a response to each port to make it appear open is itself a very simple activity, so the attack surface introduced is minimal while the utility of port scanning is greatly reduced. The more you fake out the scanning by behaving realistically to inputs, though, the greater the attack surface to exploit.
And port scanning can trigger false positives in network security scans, which can then lead to having to explain why the servers are configured this way, and that some ports that should always be closed due to vulnerability are open but not processing requests, so they can be ignored, etc.
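A hypothetical minimal sketch of the respond-and-close behavior described above (this is not portspoof's actual code; the banner text and ephemeral port are made up for the demo):

```python
import socket
import threading

# Portspoof-style responder sketch: accept any TCP connection, send a static
# fake banner, then close. Closing immediately frees kernel connection state,
# unlike a tarpit, which deliberately holds connections open.
BANNER = b"220 fake-ftp ready\r\n"  # hypothetical banner

def serve_once(server: socket.socket) -> None:
    conn, _addr = server.accept()
    conn.sendall(BANNER)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # ephemeral port, for the demo only
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# A "scanner" connects, sees a plausible open service, and moves on:
client = socket.create_connection(server.getsockname())
banner = client.recv(64)
client.close()
server.close()
assert banner == BANNER
```

A real deployment would answer on every port (e.g. via a firewall redirect to one listener) and vary the banner per port; the sketch shows only the accept/reply/close shape of the trade-off discussed above.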
The original LaBrea tarpit avoids DoS'ing its own conntrack table somehow, too;
LaBrea.py: https://github.com/dhoelzer/ShowMeThePackets/blob/master/Sca...
La Brea Tar Pits and museum: https://en.wikipedia.org/wiki/La_Brea_Tar_Pits
The NERDctl readme says: https://github.com/containerd/nerdctl :
> Supports rootless mode, without slirp overhead (bypass4netns)
How does that work, though? (And unfortunately podman replaced slirp4netns with pasta from passt.)
rootless-containers/bypass4netns: https://github.com/rootless-containers/bypass4netns/ :
> [Experimental] Accelerates slirp4netns using SECCOMP_IOCTL_NOTIF_ADDFD. As fast as `--net=host`
Which is good, because --net=host with rootless containers is inadvisable for security reasons FWIU.
"bypass4netns: Accelerating TCP/IP Communications in Rootless Containers" (2023) https://arxiv.org/abs/2402.00365 :
> bypass4netns uses sockets allocated on the host. It switches sockets in containers to the host's sockets by intercepting syscalls and injecting the file descriptors using Seccomp. Our method with Seccomp can handle statically linked applications that previous works could not handle. Also, we propose high-performance rootless multi-node communication. We confirmed that rootless containers with bypass4netns achieve more than 30x faster throughput than rootless containers without it
RunCVM, Kata containers, GVisor all have a better host/guest boundary than rootful or rootless containers; which is probably better for honeypot research on a different subnet.
IIRC there are various utilities for monitoring and diffing VMs, for honeypot research.
There could be a list of expected syscalls. If the simulated workload can be exhaustively enumerated, the expected syscalls are known ahead of time and so anomaly detection should be easier.
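A toy sketch of that allowlist idea (the syscall names are illustrative, not a real workload profile):

```python
# If the simulated workload's syscalls can be exhaustively enumerated ahead
# of time, anomaly detection reduces to a set difference: anything observed
# outside the allowlist is flagged.
expected_syscalls = {"read", "write", "openat", "close", "mmap", "exit_group"}
observed_syscalls = {"read", "write", "openat", "close", "ptrace"}

anomalies = observed_syscalls - expected_syscalls
assert anomalies == {"ptrace"}  # flag the unexpected syscall
```

In practice the observed set would come from an audit source like seccomp notifications or eBPF tracing, and ordering/argument patterns matter too, but the enumerable-allowlist premise is the same.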
"Oh, like Ghostbusters."
I tried something like that. It didn't work because the application added the socket to an epoll set before binding it, so before it could be replaced with a host socket. Replacing the file descriptor in the FD table doesn't replace it in epoll sets.
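The failure mode described above can be reproduced in a few lines (Linux-only sketch using Python's select.epoll; the dup2 stands in for the fd-injection step):

```python
import os
import select
import socket

# epoll tracks the open file *description*, not the fd number, so splicing a
# new socket onto the same fd (as an fd-injection tool does) leaves the epoll
# registration pointing at the old socket.
ep = select.epoll()
old_sock, old_peer = socket.socketpair()
new_sock, new_peer = socket.socketpair()

fd = old_sock.fileno()
ep.register(fd, select.EPOLLIN)

keep = os.dup(fd)                  # keep the old file description alive
os.dup2(new_sock.fileno(), fd)     # splice the "host" socket onto the same fd

new_peer.send(b"x")                # data ready on the NEW socket at fd...
missed = ep.poll(0.1)              # ...but epoll reports nothing: it still
                                   # watches the old description

old_peer.send(b"x")                # data on the OLD description IS reported
seen = ep.poll(0.1)
```

Here `missed` comes back empty while `seen` reports the registered fd, which is exactly why the epoll set has to be updated (or the socket swapped before registration) for the injection to work.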
Show HN: Llama 3.1 8B CPU Inference in a Browser via WebAssembly
This is a demo of what's possible to run on edge devices using SOTA quantization. Other similar projects that try to run 8B models in the browser either use WebGPU or 2-bit quantization that breaks the model. I implemented inference over the AQLM quantized representation, yielding a model with 2-bit quantization that does not blow up.
Could this be done with WebNN? If not, how should the spec change?
Re: WebNN, window.ai, navigator.ml: https://news.ycombinator.com/item?id=40834952
Not really a WebNN expert, but looks like it doesn't support CPU inference yet and only works in Chrome. It also lacks support for custom kernels, which we need for running AQLM-quantized models.
When you ask about spec change, do you mean WebNN spec or something else?
There's WebNN and WebGPU, but no WebTPU; so I guess WebNN would be it.
"Web Neural Network API" W3C Candidate Recommendation Draft https://www.w3.org/TR/webnn/
"WebGPU" W3C Candidate Recommendation Snapshot: https://www.w3.org/TR/webgpu/
WebGPU: https://en.wikipedia.org/wiki/WebGPU
Tested koboldcpp with a 7B GGUF model because it should work with 8 GB VRAM; but FWIU kobold somehow pages between RAM and GPU VRAM. How does the user, in WASM, know that they have insufficient RAM for a model?
Twisted light: The Edison bulb has purpose again
ScholarlyArticle: "Bright, circularly polarized black-body radiation from twisted nanocarbon filaments" (2024) https://www.science.org/doi/10.1126/science.adq4068
Toward testing the quantum behavior of gravity: A photonic quantum simulation
"Photonic implementation of quantum gravity simulator" (2024) https://www.spiedigitallibrary.org/journals/advanced-photoni...
Relativistic quantum fields are universal entanglement embezzlers
"Relativistic quantum fields are universal entanglement embezzlers" (2024) https://journals.aps.org/prl/accepted/32073Yd2G141638436cb80... :
> Abstract: Embezzlement of entanglement refers to the counterintuitive possibility of extracting entangled quantum states from a reference state of an auxiliary system (the “embezzler”) via local quantum operations while hardly perturbing the latter. We uncover a deep connection between the operational task of embezzling entanglement and the mathematical classification of von Neumann algebras. Our result implies that relativistic quantum fields are universal embezzlers: Any entangled state of any dimension can be embezzled from them with arbitrary precision. This provides an operational characterization of the infinite amount of entanglement present in the vacuum state of relativistic quantum field theories.
"Quantum entanglement can be endlessly 'embezzled' from quantum fields" (2024) https://www.newscientist.com/article/2461485-quantum-entangl...
[Note: It's not necessary to make a second comment. You can edit your post for 1 or 2 hours. (I never remember.) If you change the comment too much, add a note, especially if someone replied to your comment.]
Edit: Just like this. For obvious typos I usually don't add the note anyway.
CLI tool to insert spacers when command output stops
- In Python, Sarge has code to poll stdout and stderr
- Years ago for timesheets, I created a "gap report" that finds when the gap between the last dated events exceeds a threshold.
- The VS Code terminal indicates which stdout and stderr are from which process with a circle decoration.
This says it's "Integrated > Shell Integration: Decorations Enabled" to configure the feature; https://stackoverflow.com/questions/73242939/vscode-remove-c...
- Are there other shells that indicate which stdout and stderr are from which process? What should it do about e.g. reset ANSI shell escape sequences?
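The "gap report" idea above reduces to comparing consecutive timestamps against a threshold; a minimal sketch (hypothetical helper, not the original timesheet tool):

```python
from datetime import datetime, timedelta

def gap_report(events, threshold=timedelta(minutes=30)):
    """Yield (start, end, gap) wherever consecutive timestamps are
    further apart than the threshold."""
    ordered = sorted(events)
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > threshold:
            yield prev, cur, cur - prev
```

For example, events at 10:00, 10:05, and 11:00 produce a single reported gap of 55 minutes.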
Stopping by Woods on a Snowy Evening (1923)
"Stopping by Woods on a Snowy Evening" (1922) https://en.wikipedia.org/wiki/Stopping_by_Woods_on_a_Snowy_E...
WaveOrder: Generalist framework for label-agnostic computational microscopy
"waveOrder: generalist framework for label-agnostic computational microscopy" (2024) https://arxiv.org/abs/2412.09775 :
> Abstract: Correlative computational microscopy is accelerating the mapping of dynamic biological systems by integrating morphological and molecular measurements across spatial scales, from organelles to entire organisms. Visualization, measurement, and prediction of interactions among the components of biological systems can be accelerated by generalist computational imaging frameworks that relax the trade-offs imposed by multiplex dynamic imaging. This work reports a generalist framework for wave optical imaging of the architectural order (waveOrder) among biomolecules for encoding and decoding multiple specimen properties from a minimal set of acquired channels, with or without fluorescent labels. waveOrder expresses material properties in terms of elegant physically motivated basis vectors directly interpretable as phase, absorption, birefringence, diattenuation, and fluorophore density; and it expresses image data in terms of directly measurable Stokes parameters. We report a corresponding multi-channel reconstruction algorithm to recover specimen properties in multiple contrast modes. With this framework, we implement multiple 3D computational microscopy methods, including quantitative phase imaging, quantitative label-free imaging with phase and polarization, and fluorescence deconvolution imaging, across scales ranging from organelles to whole zebrafish. These advances are available via an extensible open-source computational imaging library, waveOrder, and a napari plugin, recOrder.
mehta-lab/waveorder: https://github.com/mehta-lab/waveorder :
> Wave optical models and inverse algorithms for label-agnostic imaging of density & orientation
> wave-optical simulation and reconstruction of optical properties that report microscopic architectural order.
mehta-lab/recOrder: https://github.com/mehta-lab/recOrder .. https://www.napari-hub.org/plugins/recOrder-napari :
> 3D quantitative label-free imaging with phase and polarization
> recOrder is a collection of computational imaging methods. It currently provides QLIPP (quantitative label-free imaging with phase and polarization), phase from defocus, and fluorescence deconvolution.
Cognitive load is what matters [Programming]
I think the effects of cognitive load on code readability can be demonstrated with variable and function names:
phi
s
ses
session
cursor_session
database_session
dbs
db.session
Really short variable names reduce the cognitive burden for readers who already know that "phi" means "database session", but frustrate readers who aren't contextually familiar. Really long variable names are harder to verbalize and thus to comprehend.
Is 7±2 a practical limit to working memory for seasoned coders, though?
First demonstration of quantum teleportation over busy Internet cables
ScholarlyArticle: "Quantum teleportation coexisting with classical communications in optical fiber" (2024) https://arxiv.org/abs/2404.10738
Tracing packets in the Linux kernel networking stack and friends
> When storing events for later post-processing, the packets' journeys can be reconstructed: [...]
> Retis offers many more features including retrieving conntrack information, advanced filtering, monitoring dropped packets and dropped packets from Netfilter, generating pcap files from the collected packets, allowing writing post-processing scripts in Python and more.
Would syntax highlighting be a useful general feature, or should that be a post-processing script in e.g. Python?
It would definitely be useful. This is part of the plan and we started exploring different possibilities (early stage, at the moment). Thank you for the feedback and for filing the feature request on GH.
Hey np. Also,
Wireshark and also tshark iirc support custom protocol dissectors;
"How can I add a custom protocol analyzer to wireshark?" https://stackoverflow.com/questions/4904991/how-can-i-add-a-...
What can the pcap files contain?
The pcap-ng file will contain packets (L2/L3/L4 and so forth, but up to 255 bytes), each annotated with a comment that tells you from which kernel function or tracepoint the packet was "captured". For the time being you can generate pcapng files filtering packets based on a single probe (e.g. all filtered/tracked packets hitting `net:net_dev_start_xmit`). You can then use Wireshark (or any tool you prefer) to dissect and further process. Custom dissectors should not be required.
Can Wireshark parse comments in pcapng files, even with a dissector?
/? ' https://www.google.com/search?q=Can+Wireshark+parse+comments... :
frame.comment contains "Your string"
And there's apparently a way to add a custom column to display frame.comment from pcapng traces in Wireshark.
Yep. For Wireshark/Tshark, display filters (including "frame.comment contains ...") can be used as usual. Of course, if your pcap file contains only frames with the same comment, that expression is not particularly useful, but you can merge multiple files with e.g. mergecap.
The pcap subcommand, though, will be extended to allow extracting packets from multiple probes in a single run.
Show HN: Gribstream.com – Historical Weather Forecast API
Hello! I'd like to share my side project: https://gribstream.com
It is an API to extract weather forecasting data from the National Blend of Models (NBM) https://vlab.noaa.gov/web/mdl/nbm and the Global Forecast System (GFS) https://www.ncei.noaa.gov/products/weather-climate-models/gl... . The data is freely available from AWS S3 in grib2 format, which can be great but is also really hard (and resource-intensive) to work with, especially if you want to extract timeseries over long periods of time based on a few coordinates. Being able to query and extract only what you want out of terabytes of data in just an HTTP request is really nice.
What is cool about this dataset is that it has hourly data with full forecast history so you can use the dataset to train and forecast other parameters and have proper backtesting because you can see the weather "as of" points in time in the past. It has a free tier so you can play with it. There is a long list of upcoming features I intend to implement and I would very much appreciate both feedback on what is currently available and on what features you would be most interested in seeing. Like... I'm not sure if it would be better to support a few other datasets or focus on supporting aggregations.
Features include:
- A free tier to help you get started
- Full history of weather forecasts
- Extract timeseries for thousands of coordinates, for months at a time, at hourly resolution, in a single HTTP request taking only seconds
- Supports as-of/time-travel, indispensable for proper backtesting of derivative models
- Automatic gap filling of any missing data with the next best (most recent) forecast
Please try it out and let me know what you think :)
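The as-of/time-travel semantics can be sketched in plain Python (toy rows, not the actual GribStream implementation): each forecast row carries both when it was issued and the time it is valid for, and a backtest may only use rows issued on or before the as-of instant, taking the most recently issued one per valid-time.

```python
from datetime import datetime

# Toy forecast rows: (issued_at, valid_at, temperature)
rows = [
    (datetime(2024, 1, 1, 0), datetime(2024, 1, 1, 12), 3.0),
    (datetime(2024, 1, 1, 6), datetime(2024, 1, 1, 12), 2.5),
]

def as_of(rows, when):
    """Best (most recently issued) value per valid-time, as knowable at `when`."""
    best = {}
    for issued, valid, value in sorted(rows):
        if issued <= when:
            best[valid] = value    # later-issued rows overwrite: most recent wins
    return best
```

At 05:00 only the 00:00 model run is visible, so the 12:00 valid-time resolves to 3.0; by 07:00 the 06:00 run supersedes it with 2.5.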
In my experience, NWP data is big. Like, really big! Data over HTTP calls seems to limit the use case a bit; have you considered making it possible to mount storage directly (fsspec), and use e.g. the Zarr format? That way, querying with xarray would be much more flexible.
Why is weather data stored in netcdf instead of tensors or sparse tensors?
Also, SQLite supports virtual tables, which can be backed by HTTP Content-Range requests: https://www.sqlite.org/vtab.html
sqlite-wasm-http, sql.js-httpvfs; HTTP VFS: https://www.npmjs.com/package/sqlite-wasm-http
sqlite-parquet-vtable: https://github.com/cldellow/sqlite-parquet-vtable
Could there be a sqlite-netcdf-vtable or a sqlite-gribs-vtable, or is the dimensionality too much for SQLite?
From https://news.ycombinator.com/item?id=31824578 :
> It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?
https://news.ycombinator.com/item?id=42264274
SpatialLite does geo vector search with SQLite.
datasette can JOIN across multiple SQLite databases.
Perhaps datasette and datasette-lite could support xarray, and thus NetCDF-style multidimensional arrays, in WASM in the browser, with HTTP Content-Range requests to fetch and cache just the data requested.
"The NetCDF header": https://climateestimate.net/content/netcdfs-and-basic-coding... :
> The header can also be used to verify the order of dimensions that a variable is saved in (which you will have to know to use, unless you’re using a tool like xarray that lets you refer to dimensions by name) - for a 3-dimensional variable, `lon,lat,time` is common, but some files will have the `time` variable first.
"Loading a subset of a NetCDF file": https://climateestimate.net/content/netcdfs-and-basic-coding...
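The dimension-order caveat above is easy to handle by referring to dimensions by name rather than by position, which is essentially what xarray does for you; a minimal NumPy sketch with a toy array:

```python
import numpy as np

# Toy variable saved as (lon, lat, time); many consumers expect (time, lat, lon).
dims = ("lon", "lat", "time")
data = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # lon=2, lat=3, time=4

# Reorder axes by dimension name instead of hard-coding axis positions
want = ("time", "lat", "lon")
reordered = np.transpose(data, [dims.index(d) for d in want])
```

After the transpose, `reordered[t, la, lo]` indexes the same value as `data[lo, la, t]`.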
From https://news.ycombinator.com/item?id=42260094 :
> xeus-sqlite-kernel > "Loading SQLite databases from a remote URL" https://github.com/jupyterlite/xeus-sqlite-kernel/issues/6#i...
%FETCH <url> <filename>
> Why is weather data stored in netcdf instead of tensors or sparse tensors?
NetCDF is a "tensor", at least in the sense of being a self-describing multi-dimensional array format. The bigger problem is that it's not a Cloud-Optimized format, which is why Zarr has become popular.
> Also, SQLite supports virtual tables that can be backed by Content Range requests
The multi-dimensional equivalent of this is "virtual Zarr". I made this library to create virtual Zarr stores pointing at archival data (e.g. netCDF and GRIB)
https://github.com/zarr-developers/VirtualiZarr
> xarray and thus NetCDF-style multidimensional arrays in WASM in the browser with HTTP Content Range requests to fetch and cache just the data requested
Pretty sure you can do this today already using Xarray and fsspec.
In theory it could be done. It is sort of analogous to what GribStream is doing already.
The grib2 files are the storage. They are sorted by time in the path and so that is used like a primary index. And then grib2 is just a binary format to decode to extract what you want.
I originally was going to write this as a plugin for ClickHouse, but in the end I made it a Golang API because then I'm less constrained by other things. For example, I'd like to create an endpoint to live-encode the grib files into MP4 so the data can be served as video. Then with any video player you would be able to play back, jump to times, etc.
I might still write a clickhouse integration though because it would be amazing to join and combine with other datasets on the fly.
> It is sort of analogous to what GribStream is doing already.
The difference is presumably that you are doing some large rechunking operation on your server to hide from the user the fact that the data is actually in multiple files?
Cool project btw, would love to hear a little more about how it works underneath :)
Yeah, exactly.
I basically scrape all the grib index files to know all the offsets into all variables for all time. I store that in clickhouse.
When the API gets a request for a time range, a set of coordinates, and a set of weather parameters, I first pre-compute the mapping of (lat,lon) into the 1-dimensional index in the gridded data. That is a constant across the whole dataset. Then I query the ClickHouse table to find all the files+offsets that need to be processed, and all of them are queued into a multi-processing pool. Processing each parameter implies parsing a grib file. I wrote a grib2 parser from scratch in Golang so as to extract the data in a streaming fashion. As in... I don't extract the whole grid only to look up the coordinates in it. I already pre-computed the indexes, so I can just decode every value in the grid in order, and when I hit an index that I'm looking for, I copy it to a fixed-size buffer with the extracted data. Once you have all the pre-computed indexes you don't even need to finish downloading the file; I just drop the connection immediately.
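The streaming lookup described there can be sketched in a few lines of Python (a hypothetical helper; the real implementation is a Go grib2 decoder): with the wanted flat indexes precomputed and sorted, a single in-order pass keeps just those values and can stop, and drop the connection, after the last one.

```python
def extract_points(values, wanted):
    """Scan decoded grid values in storage order, keeping only the
    precomputed flat indexes. `wanted` must be sorted ascending."""
    out = {}
    it = iter(wanted)
    nxt = next(it, None)
    for i, value in enumerate(values):
        if nxt is None:
            break                  # all points found: skip the rest of the file
        if i == nxt:
            out[i] = value
            nxt = next(it, None)
    return out
```

Because the loop breaks after the last wanted index, the tail of the grid (and of the download) is never decoded.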
It is pretty cool. It is running in very humble hardware so I'm hoping I'll get some traction so I can throw more money into it. It should scale pretty linearly.
I've tested doing multi-year requests and the golang program never goes over 80 MB of memory usage. The CPUs get pegged, so that is the limiting factor.
Grib2 complex packing (what the NBM dataset uses) implies lots of bit-packing. So there is a ton more to optimize using SIMD instructions. I've been toying with it a bit but I don't want to mission creep into that yet (fascinating though!).
I'm tempted to port this https://github.com/fast-pack/simdcomp to native Go assembly.
Just a heads up that there is a similar Python package for this that is free called Herbie [1], though the syntax of this one looks a little easier to use.
blaylockbk/Herbie: https://github.com/blaylockbk/Herbie :
> Download numerical weather prediction datasets (HRRR, RAP, GFS, IFS, etc.) from NOMADS, NODD partners (Amazon, Google, Microsoft), ECMWF open data, and the Pando Archive System
The Herbie docs mention a "GFS GraphCast" but not yet GenCast? https://herbie.readthedocs.io/en/stable/gallery/noaa_models/...
"GenCast predicts weather and the risks of extreme conditions with state-of-the-art accuracy" (2024) https://deepmind.google/discover/blog/gencast-predicts-weath...
"Probabilistic weather forecasting with machine learning" (2024) ; GenCast paper https://www.nature.com/articles/s41586-024-08252-9
google-deepmind/graphcast: https://github.com/google-deepmind/graphcast
Are there error and cost benchmarks for these predictive models?
'Profound consequences': NZ scientists make 'dark energy' breakthrough
"Supernovae evidence for foundational change to cosmological models" (2024) https://academic.oup.com/mnrasl/article/537/1/L55/7926647
Borrow Checking, RC, GC, and Eleven Other Memory Safety Approaches
From https://news.ycombinator.com/item?id=33560227#33563857 :
- Type safety > Memory management and type safety: https://en.wikipedia.org/wiki/Type_safety#Memory_management_...
- Memory safety > Classification of memory safety errors: https://en.wikipedia.org/wiki/Memory_safety#Classification_o...
- Template:Memory management https://en.wikipedia.org/wiki/Template:Memory_management
- Category:Memory_management https://en.wikipedia.org/wiki/Category:Memory_management
Ask HN: Are you paying electricity bills for your service?
Hi HN, we are an early-stage startup working on OS-level AI energy optimization. Our goal is to revolutionize the way systems manage power consumption at the OS level, making AI and compute more energy-efficient and sustainable. Our initial prototype has demonstrated significant savings (20%+) without application/model changes.
We are looking to connect with individuals who have exposure to energy costs, energy purchasing or a strong desire to optimize them at the server and datacenter level. Whether you are working in data centers, telco, AI infra, or any area where energy efficiency is crucial, we would love to discuss potential challenges, solutions, and opportunities in this space. We're eager to learn from your experiences and explore ways we can collaborate to make a meaningful impact. Drop us a note at scottcha@live.com and we'll connect up.
Thanks Scott & Chad
Various solutions for (incentivizing) energy efficiency:
- https://app.electricitymaps.com/map/24h
- "First open-source global energy system model" https://news.ycombinator.com/item?id=39333285
- There aren't intraday electricity prices in most markets in the United States. (By comparison, EU Member States must have intraday electricity prices.) Wouldn't intraday prices incentivize energy storage solutions and energy efficiency?
- "Ask HN: Does mounting servers parallel with the temperature gradient trap heat?" (2020) https://news.ycombinator.com/item?id=23033210
- "Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
- "New HPE fanless cooling achieves a 90% reduction in cooling power consumption" (2024) https://news.ycombinator.com/item?id=42106715
- "Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
- "Three New Supercomputers Reach Top of Green500 List" (2024) https://news.ycombinator.com/item?id=40473656
/? costed opcodes (smart contracts, eWASM,)
Thanks, yes, I'm a long-time Electricity Maps customer. I also agree that the EU markets' data transparency is further along in helping incentivize opportunities here.
Also thanks for the other links!
It looks like they have data for the United States now.
We have public utilities that must raise bonds to grow to scale with demand, and some competitive electrical energy markets with competitive margin and efficiency incentives in the US.
FWIU, datacenters are unable to sell their waste heat, boiled sterilized steam and water, unused diesel, and potentially excess energy storage.
To sell energy back to the grid to help solve the Duck curve and Alligator curve problems requires a smart grid 2.0 or a green field without much existing infrastructure to cross over or under.
To sell datacenter waste heat through a pipe under the street to the building next door, you must add some heat.
Nonprofit org opportunity: a screenplay wherein {{Superhero}} explains this and other efficiency and sustainability opportunities to the market at large, perhaps with comically excessive product placement for a charity or charities, and physics with 3D CG.
"Solar thermal trapping at 1,000°C and above" (2024) should be enough added heat to move waste datacenter heat to a compatible adjacent facility.
Sand batteries hold more heat than water, in certain thermal conditions.
Algae farms can eat CO2 and Heat, for example.
Cooling towers waste heat and water as steam.
Nonprofit org opportunity: a directory with for-charity sponsored lists of renewable energy products and services, with regions of operation; though there's already a way to list a schema:LocalBusiness and its schema:Offer(s) for products and services, with rdfs:Class and rdfs:Property, for search engines to someday also index.
Business opportunity: a managed renewable energy service to quote solar/wind/hydro/geo/fauna/insulation site plans [for nonprofit organizations], with support contracts, safety inspection, and crew management.
The Landauer limit is presumed to be a real limit on electronic computation: changing a 1 to a 0 releases energy that cannot be recovered, so it is wasted as thermal energy.
Photonic computing, graphene computing, counterfactual computing, superconductors, and probably edge chiral currents do or will eventually do more computation per unit of energy.
Ops per Wh metrics quantify the difference.
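For scale, the Landauer bound at room temperature works out as a back-of-envelope number (real hardware sits many orders of magnitude above this limit):

```python
import math

k_B = 1.380649e-23                     # Boltzmann constant, J/K (exact SI value)
T = 300.0                              # room temperature, K

landauer_j = k_B * T * math.log(2)     # minimum energy to erase one bit: ~2.87e-21 J
erasures_per_wh = 3600.0 / landauer_j  # upper bound: ~1.25e24 bit-erasures per Wh
```

So even a perfect irreversible computer at 300 K could not exceed roughly 1.25e24 bit-erasures per watt-hour.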
Jupyter Notebooks as E2E Tests
You can just use nbclient/nbconvert as a library instead of on the command line. Execute the notebook, and the cell outputs are directly available as Python objects [1]. Error handling is also built in.
This should make integration with pytest etc. much simpler.
[1] https://nbconvert.readthedocs.io/en/latest/execute_api.html#...
There are a number of ways to run tests within notebooks and to run notebooks as tests.
ipytest: https://github.com/chmp/ipytest
nbval: https://github.com/computationalmodelling/nbval
papermill: https://github.com/nteract/papermill
awesome-jupyter > Testing: https://github.com/markusschanta/awesome-jupyter
"Is there a cell tag convention to skip execution?" https://discourse.jupyter.org/t/is-there-a-cell-tag-conventi...
Methods for importing notebooks like regular Python modules: ipynb, import_ipynb, nbimporter, nbdev
nbimporter README on why not to import notebooks like modules: https://github.com/grst/nbimporter#update-2019-06-i-do-not-r...
nbdev parses jupyter notebook cell comments to determine whether to export a module to a .py file: https://nbdev1.fast.ai/export.html
There's a Jupyter kernel for running ansible playbooks as notebooks with output.
dask-labextension does more of a retry-on-failure workflow
Biodegradable aerogel: Airy cellulose from a 3D printer
ScholarlyArticle: "Additive Manufacturing of Nanocellulose Aerogels with Structure-Oriented Thermal, Mechanical, and Biological Properties" (2024) https://onlinelibrary.wiley.com/doi/10.1002/advs.202307921
From https://news.ycombinator.com/item?id=13912857 :
> “After we 3-D print, we restore the hydrogen bonding network through a sodium hydroxide treatment,” Pattinson says. “We find that the strength and toughness of the parts we get … are greater than many commonly used materials” for 3-D printing, including acrylonitrile butadiene styrene (ABS) and polylactic acid (PLA).
From https://news.ycombinator.com/item?id=41314454 :
> "Plant-based epoxy enables recyclable carbon fiber" (2022) [that's stronger than steel and lighter than fiberglass] https://news.ycombinator.com/item?id=30138954 ... https://news.ycombinator.com/item?id=37560244
"Scientists convert bacteria into efficient cellulose producers" (2024) https://news.ycombinator.com/item?id=41175002
Do hemp and cellulose 3d-printing filaments emit less VOCs?
"Japanese Joinery, 3D printing, and FreeCAD" (2022) https://news.ycombinator.com/item?id=36191589
We developed a way to use light to dismantle PFAS 'forever chemicals'
A recycled story from a month ago (21 points, 1 comment) https://news.ycombinator.com/item?id=42199625
Which seems to be related to a different June study Low-Cost Method to Decompose PFAS Using LED Light and Semiconductor Nanocrystals (29 points, 5 months ago, 26 comments) https://news.ycombinator.com/item?id=41062885
By contrast, this is a subsequent article by the authors, published by phys.org.
"Photocatalytic C–F bond activation in small molecules and polyfluoroalkyl substances" (2024) https://www.nature.com/articles/s41586-024-08327-7 :
> Abstract: [...] Here, we report an organic photoredox catalyst system that can efficiently reduce C–F bonds to generate carbon-centered radicals, which can then be intercepted for hydrodefluorination (swapping F for H) and cross-coupling reactions. This system enables the general use of organofluorines as synthons under mild reaction conditions. We extend this method to the defluorination of polyfluoroalkyl substances (PFAS) and fluorinated polymers, a critical challenge in the breakdown of persistent and environmentally damaging forever chemicals.
Natural Number Game: build the basic theory of the natural numbers from scratch
I’ve really been getting into Lean, and these tutorials are fantastic. It’s always interesting to see how many hidden assumptions are made in short and informal mathematical proofs that end up becoming a (very long) sequence of formal transformations when all of the necessary steps are written out explicitly.
My guess is that mathematical research utilizing these systems is probably the future of most of the field (perhaps to the consternation of the old guard). I don’t think it will replace human intuition, but the hope is that it will augment it in ways we didn’t expect or think about. Terence Tao has a great blog post about this, specifically about how proof assistants can improve collaboration.
On another note, these proof systems always prompt me to think about the uncomfortable paradox that underlies mathematics, namely, that even the simplest of axioms cannot be proven “ex nihilo” — we merely trust that a contradiction will never arise (indeed, if the axioms corresponding to the formal system of a sufficiently powerful theory are actually consistent, we cannot prove that fact from within the system itself).
The selection of which axioms (or finite axiomatization) to use is guided by... what exactly then? Centuries of human intuition and not finding a contradiction so far? That doesn’t seem very rigorous. But then if you try to select axioms probabilistically (e.g., via logical induction), you end up with a circular dependency, because the predictions that result are only as rigorous as the axioms underlying the theory of probability itself.
To be clear, I’m not saying anything actual mathematicians haven’t already thought about (or particularly care about that much for their work), but for a layperson, the increasing popularity of these formal proof assistants certainly brings the paradox to the forefront.
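The "hidden assumptions" point shows up immediately in Lean 4: `n + 0 = n` is definitional, while the superficially symmetric `0 + n = n` already needs induction (a sketch; lemma names follow the core library):

```lean
-- n + 0 = n holds by definition: Nat.add recurses on its second argument
example (n : Nat) : n + 0 = n := rfl

-- 0 + n = n is not definitional; it needs induction on n
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```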
The selection of which axioms to use is an interesting topic. Historically, the modern use of axiom systems goes back to Hilbert and co. while trying to make progress in set theory. The complexity of the topic naturally led them to formalize their work, and thus arose ZF(C) [1], which later stuck as the de facto basis of modern mathematics.
Later systems, like Univalent Foundations [2], came from limitations of ZFC and the desire to have a set of axioms that is easier to work with (e.g. for computer proof assistants). The choice of any new system of axioms is ultimately constrained by the scope of what ZFC can do, so as to preserve the past two centuries of mathematical work.
[1] https://en.wikipedia.org/wiki/Zermelo-Fraenkel_set_theory
Homotopy type theory (HoTT) https://en.wikipedia.org/wiki/Homotopy_type_theory
/? From a set like {0,1} to a wave function of reals in Hilbert space [to Constructor Theory and Quantum Counterfactuals] https://www.google.com/search?q=From+a+set+like+%7B0%2C1%7D+... , https://www.google.com/search?q=From+a+set+like+%7B0%2C1%7D+...
From "What do we mean by "the foundations of mathematics"?" (2023) https://news.ycombinator.com/item?id=38102096#38103520 :
> HoTT in CoQ: Coq-HoTT: https://github.com/HoTT/Coq-HoTT
>>> The HoTT library is a development of homotopy-theoretic ideas in the Coq proof assistant. It draws many ideas from Vladimir Voevodsky's Foundations library (which has since been incorporated into the UniMath library) and also cross-pollinates with the HoTT-Agda library. See also: HoTT in Lean2, Spectral Sequences in Lean2, and Cubical Agda.
leanprover/lean2 /hott: https://github.com/leanprover/lean2/tree/master/hott
Lean4:
"Theorem Proving in Lean 4" https://lean-lang.org/theorem_proving_in_lean4/
Learn X in Y minutes > Lean 4: https://learnxinyminutes.com/lean4/
/? Hott in lean4 https://www.google.com/search?q=hott+in+lean4
> What is the relation between Coq-HoTT & Homotopy Type Theory and Set Theory with e.g. ZFC?
The Equal Rights Amendment is already part of the U.S. Constitution
Equal Rights Amendment > Recent Congressional action: https://en.wikipedia.org/wiki/Equal_Rights_Amendment#Recent_...
Declaration of Independence: https://www.archives.gov/founding-docs/declaration :
> We hold these truths to be self-evident, that all men are created equal,
Fifth Amendment to the United States Constitution: https://en.wikipedia.org/wiki/Fifth_Amendment_to_the_United_... :
> Like the Fourteenth Amendment, the Fifth Amendment includes a due process clause stating that no person shall "be deprived of life, liberty, or property, without due process of law". The Fifth Amendment's Due Process Clause applies to the federal government, while the Fourteenth Amendment's Due Process Clause applies to state governments (and by extension, local governments). The Supreme Court has interpreted the Fifth Amendment's Due Process Clause to provide two main protections: procedural due process, which requires government officials to follow fair procedures before depriving a person of life, liberty, or property, and substantive due process, which protects certain fundamental rights from government interference. The Supreme Court has also held that the Due Process Clause contains a prohibition against vague laws and an implied equal protection requirement similar to the Fourteenth Amendment's Equal Protection Clause.
Fourteenth Amendment: https://en.wikipedia.org/wiki/Fourteenth_Amendment_to_the_Un...
Fourteenth Amendment > Equal Protection Clause: https://en.wikipedia.org/wiki/Equal_Protection_Clause :
> The Equal Protection Clause is part of the first section of the Fourteenth Amendment to the United States Constitution. The clause, which took effect in 1868, provides "nor shall any State ... deny to any person within its jurisdiction the equal protection of the laws."
Equal justice under law: https://en.wikipedia.org/wiki/Equal_justice_under_law
Equality before the law: https://en.wikipedia.org/wiki/Equality_before_the_law :
> Article 7 of the Universal Declaration of Human Rights (UDHR) states: "All are equal before the law and are entitled without any discrimination to equal protection of the law". [1] Thus, it states that everyone must be treated equally under the law regardless of race, gender, color, ethnicity, religion, disability, or other characteristics, without privilege, discrimination or bias. The general guarantee of equality is provided by most of the world's national constitutions, [4] but specific implementations of this guarantee vary.
Equal Protection of Equal Rights.
Strict scrutiny iff: https://en.wikipedia.org/wiki/Strict_scrutiny
UDHR: Universal Declaration of Human Rights: https://www.un.org/en/about-us/universal-declaration-of-huma... :
> Article 7: All are equal before the law and are entitled without any discrimination to equal protection of the law. All are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination.
> Article 8: Everyone has the right to an effective remedy by the competent national tribunals for acts violating the fundamental rights granted him by the constitution or by law.
> Article 10: Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him.
> Article 16: Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family. They are entitled to equal rights as to marriage, during marriage and at its dissolution.
UDHR: https://en.wikipedia.org/wiki/Universal_Declaration_of_Human...
(1) Equal Protection of (2) Equal Rights.
Inalienable right: https://en.wikipedia.org/wiki/Inalienable_right ( -> Natural rights and legal rights (in British Common law since the Magna Carta))
Life:
Liberty: https://en.wikipedia.org/wiki/Liberty :
> In the Constitutional law of the United States, ordered liberty means creating a balanced society where individuals have the freedom to act without unnecessary interference (negative liberty) and access to opportunities and resources to pursue their goals (positive liberty), all within a fair legal system.
Pursuit of Happiness:
Searching for small primordial black holes in planets, asteroids, here on Earth
https://news.ycombinator.com/item?id=42347561 (1 point)
Maybe the gravity hole in the Indian Ocean is due to a primordial black hole
A black hole would orbit the center of the earth, it wouldn't be lodged in place somewhere
/? What happens if a black hole hits Earth? https://www.google.com/search?q=What+Happens+If+A+Black+Hole...
Though, the same folks claim there could be a microscopic black hole every 30 km on or throughout earth
https://news.ycombinator.com/item?id=38452488 :
> PBS Spacetime estimates that there are naturally occurring microscopic black holes every 30 km on Earth. https://news.ycombinator.com/item?id=33483002
> [...]
> Isn't there a potential for net relative displacement and so thus couldn't (microscopic) black holes be a space drive?
How did software get so reliable without proof? (1996) [pdf]
"How Did Software Get So Reliable Without Proof?" (1996) : https://6826.csail.mit.edu/2020/papers/noproof.pdf
> Abstract: By surveying current software engineering practice, this paper reveals that the techniques employed to achieve reliability are little different from those which have proved effective in all other branches of modern engineering: rigorous management of procedures for design inspection and review; quality assurance based on a wide range of targeted tests; continuous evolution by removal of errors from products already in widespread use; and defensive programming, among other forms of deliberate over-engineering. Formal methods and proof play a small direct role in large scale programming; but they do provide a conceptual framework and basic understanding to promote the best of current practice, and point directions for future improvement.
From "Extreme Programming" > History: https://en.wikipedia.org/wiki/Extreme_programming :
> He began to refine the development methodology used in the project [in 1996] and wrote a book on the methodology (Extreme Programming Explained, published in October 1999)
> Many extreme-programming practices have been around for some time; the methodology takes "best practices" to extreme levels. For example, the "practice of test-first development, planning and writing tests before each micro-increment" was used as early as NASA's Project Mercury, in the early 1960s.
Agile > History: https://en.wikipedia.org/wiki/Agile_software_development#His... :
> Iterative and incremental software development methods can be traced back as early as 1957, [10] with evolutionary project management [11][12] and adaptive software development [[ ASD ]] emerging in the early 1970s.
> During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods (often referred to collectively as waterfall) that critics described as overly regulated, planned, and micromanaged. [15] These lightweight methods included: rapid application development (RAD), from 1991; [16][17] the unified process (UP) and dynamic systems development method (DSDM), both from 1994; Scrum, from 1995; Crystal Clear and extreme programming (XP), both from 1996; and feature-driven development (FDD), from 1997. Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods. [3]
> "Manifesto for Agile Software Development" [2001]
awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
/? z3
From https://news.ycombinator.com/item?id=41944043 :
> - Does Z3 distinguish between xy+z and z+yx, in terms of floating point output? math.fma() would be a different function call with the same or similar output.
> (deal-solver is a tool for verifying formal implementations in Python with Z3.)
- re: property testing, fuzzing, formal verification: https://news.ycombinator.com/item?id=40884466 https://westurner.github.io/hnlog/#story-40875559
From "The Future of TLA+ [pdf]" (2024) https://news.ycombinator.com/item?id=41385141 :
> Formal methods including TLA+ also can't/don't prevent or can only workaround side channels in hardware and firmware that is not verified. But that's a different layer.
Things formal methods shouldn't be expected to find: Floating point arithmetic non-associativity, side-channels
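The non-associativity point can be shown in a couple of lines of plain Python; the same reordering sensitivity is why the evaluation order of expressions like xy+z matters to floating-point verification tools:

```python
# Floating-point addition is not associative: the two groupings below
# round differently, so a verifier must model evaluation order exactly.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6
print(left == right)  # False
```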
> "Use of Formal Methods at Amazon Web Services" (2014) https://lamport.azurewebsites.net/tla/formal-methods-amazon....
>> This raised a challenge; how to convey the purpose and benefits of formal methods to an audience of software engineers? Engineers think in terms of debugging rather than ‘verification’, so we called the presentation “Debugging Designs” [8] . Continuing that metaphor, we have found that software engineers more readily grasp the concept and practical value of TLA+ if we dub it:
Exhaustively testable pseudo-code
> We initially avoid the words [...]
Ask HN: Worst gravity and black hole art, and CG labeling guidelines for science
I think that computer graphic simulations of gravity and black holes should be labeled as simulations.
The same technology that we use to label or watermark genai image and video content could be used to label n-body gravity and black hole images.
"Is this a direct physical observation? Is this retouched; did they add color, did they refocus, etc? Is this a presumptively contrived simulation rendering that shouldn't have same strength of evidence as a less biased direct physical observation?"
Almost all images and videos of black holes are just simulations.
This likely affects our intuition. People try to fit models to observations. If the observations are conjecture but aren't so labeled, then the people have unfortunately biased intuition.
In the interest of progress in science, what are some of the worst black hole and gravity computer graphics you've seen?
How could science programming improve its citations in presenting simulated computer graphics as science evidence?
How could an "unproven science rendering labeling guideline" be politely and effectively shared with CG artists, physicists, and science content producers?
Sagittarius A*: https://en.wikipedia.org/wiki/Sagittarius_A* :
> Astronomers have been unable to observe Sgr A* in the optical spectrum because of the effect of 25 magnitudes of extinction (absorption and scattering) by dust and gas between the source and Earth. [26]
> [...]
> In May 2022, astronomers released the first image of the accretion disk around the horizon of Sagittarius A*, confirming it to be a black hole, using the Event Horizon Telescope, a world-wide network of radio observatories. [13] This is the second confirmed image of a black hole, after Messier 87's supermassive black hole in 2019. [14][15] The black hole itself is not seen; as light is incapable of escaping the immense gravitational force of a black hole, only nearby objects whose behavior is influenced by the black hole can be observed. The observed radio and infrared energy emanates from gas and dust heated to millions of degrees while falling into the black hole. [16]
EHT: Event Horizon Telescope: https://en.wikipedia.org/wiki/Event_Horizon_Telescope
First EHT image of Sagittarius A*: (2017, 2022) https://commons.wikimedia.org/wiki/File:EHT_Saggitarius_A_bl...
/? How does the rotation of our solar system relate to the rotation of Sagittarius A* https://www.google.com/search?q=How+does+the+rotation+of+our...
Milky Way > Astrography > Sun's location and neighborhood: https://en.wikipedia.org/wiki/Milky_Way#Sun's_location_and_n... :
> The apex of the Sun's way, or the solar apex, is the direction that the Sun travels through space in the Milky Way. The general direction of the Sun's Galactic motion is towards the star Vega near the constellation of Hercules,
> at an angle of roughly 60 sky degrees to the direction of the Galactic Center.
> The Sun's orbit about the Milky Way is expected to be roughly elliptical with the addition of perturbations due to the Galactic spiral arms and non-uniform mass distributions. In addition, the Sun passes through the Galactic plane approximately 2.7 times per orbit. [109] [unreliable source?] This is very similar to how a simple harmonic oscillator works with no drag force (damping) term. [...]
> It takes the Solar System about 240 million years to complete one orbit of the Milky Way (a galactic year), [112] so the Sun is thought to have completed 18–20 orbits during its lifetime and 1/1250 of a revolution since the origin of humans.
> The orbital speed of the Solar System about the center of the Milky Way is approximately 220 km/s (490,000 mph) or 0.073% of the speed of light.
> The Sun moves through the heliosphere at 84,000 km/h (52,000 mph). At this speed, it takes around 1,400 years for the Solar System to travel a distance of 1 light-year, or 8 days to travel 1 AU (astronomical unit). [113] The Solar System is headed in the direction of the zodiacal constellation Scorpius, which follows the ecliptic. [114]
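As a quick arithmetic check of the quoted "0.073% of the speed of light" figure:

```python
# Sanity-check: the Solar System's orbital speed as a fraction of c.
c_km_s = 299_792.458  # speed of light, km/s
v_km_s = 220.0        # quoted orbital speed about the galactic center, km/s

fraction = v_km_s / c_km_s
print(f"{fraction:.3%}")  # prints 0.073%
```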
From an Answer to "Does our solar system orbit Sagittarius A (location of the SMBH)?" https://www.quora.com/Does-our-solar-system-orbit-Sagittariu... :
> [...] Think about this: the mass of the entire Milky Way galaxy is about 1 trillion solar masses; however, Sgr A is only about 4 million solar masses: about 0.0001% of the galaxy’s total mass. The force that is most responsible for keeping the solar system in orbit around the galaxy is not Sgr A, but the gravity of the galaxy itself.
> Need evidence? Consider the fact that as the Sun moves around the galaxy, it also bobs “above” and “below” the galactic plane. Its orbital path resembles the edge of a warped record instead of a flat ellipse. This wouldn’t happen if the Sun were simply orbiting a point mass in the center of the galaxy, but it makes perfect sense if the Sun is being tugged “up” and then “down” by the galaxy’s collective gravity.
> So yes, the Sun is orbiting around a point corresponding to the location of Sgr A, but no, the Sun is not just orbiting Sgr A.
"[EHT Sagittarius A* (Sgr A*)] Milky Way black hole has 'strong, twisted' magnetic field in mesmerizing new [polarized (phase filtered)] image" (2024-03) https://www.npr.org/2024/03/28/1241403435/milky-way-black-ho...
"Astronomers unveil strong magnetic fields spiraling at the edge of Milky Way’s central black hole" (2024-03) https://www.eso.org/public/news/eso2406/
ScholarlyArticle: "First Sagittarius A* Event Horizon Telescope Results. VII. Polarization of the Ring" (2024) https://doi.org/10.3847/2041-8213/ad2df0 https://iopscience.iop.org/article/10.3847/2041-8213/ad2df0
ScholarlyArticle: "First Sagittarius A* Event Horizon Telescope Results. VIII.: Physical interpretation of the polarized ring" (2024) https://doi.org/10.3847/2041-8213/ad2df1 https://iopscience.iop.org/article/10.3847/2041-8213/ad2df1
"First image of our Milky Way's black hole [Sagittarius A*] may be inaccurate, scientists say" (2024-10) https://news.ycombinator.com/item?id=42087932 https://www.space.com/the-universe/black-holes/1st-image-of-... :
> "We hypothesize that the ring image resulted from errors during EHT's imaging analysis and that part of it was an artifact, rather than the actual astronomical structure."
> The ring-like structure, which really represents intense radio waves blasted from the brilliant disk of gas, known as the "accretion disk," that's swirling around the black hole, may in fact be more elongated than it appears, the researchers argue
"An Independent Hybrid Imaging of Sgr A* from the Data in EHT 2017 Observations" (2024-10) https://doi.org/10.1093/mnras/stae1158 https://academic.oup.com/mnras/article/534/4/3237/7660988 :
> To mitigate the time variability in the Sgr A* structure, data weighting strategies derived from general relativistic magnetohydrodynamical (GRMHD) simulations with the source-size assumption are used. In addition, the EHTC analysis uses a unique criterion for selecting the final image. It is not based on consistency with the observed data, but rather on the highest rate of appearance of the image morphology from a wide imaging parameter space. Our concern is that these methods may interfere with the PSF deconvolution and cause the resulting image to reflect more structural features of the PSF than of the actual intrinsic source.
> On the other hand, our independent analysis used traditional very long baseline interferometry (VLBI) imaging techniques to derive our final image. We used the hybrid mapping method, which is widely accepted as the standard approach, namely iterations between imaging by the clean algorithm and calibrating the data by self-calibration, following well-established precautions. In addition, we performed a comparison with the PSF structure, noting the absence of distinct PSF features in the resulting images. Finally, we selected the image with the highest degree of consistency with the observational data.
> Our final image shows that the eastern half of this structure is brighter, which may be due to a Doppler boost from the rapid rotation of the accretion disc. We hypothesize that our image indicates that a portion of the accretion disc, located approximately 2 to several R_S (where R_S is the Schwarzschild radius) away from the black hole, is rotating at nearly 60% of the speed of light and is seen at an angle of 40°−45° (Section 4).
> general relativistic magnetohydrodynamical (GRMHD)
Magnetohydrodynamics; fluids and electricity: https://en.wikipedia.org/wiki/Magnetohydrodynamics
Magnetorotational instability: https://en.wikipedia.org/wiki/Magnetorotational_instability
SPH: Smoothed-Particle Hydrodynamics (1970s)
QHD: Quantum Hydrodynamics
SQS: Superfluid Quantum Space
SQG: Superfluid Quantum Gravity and SQR: Superfluid Quantum Relativity
Fedi's
> Schwarzschild radius
Does this image help to confirm or reject the Schwarzschild radius, or whether black holes have hair or turbulence at the boundary we've been calling the event horizon, though Hawking radiation permeates all such boundaries in at least some directions?
> which may be due to a Doppler boost from the rapid rotation of the accretion disc.
How much does the thermal energy vary within the (already physically distributed) matter surrounding the black hole?
Isn't there reason to believe that things are in a different phase of matter at those darker temperatures? Are there superfluid states in those thermal ranges? Are our phase transition diagrams complete? Standard phase transition diagrams do not describe the Mpemba effect, for example (TODO: cite). Does fusion plasma research have insight into the phases of matter at higher temperatures? What are the temperatures around the disturbance?
Does M87* show a similar Doppler effect to the one this analysis suggests affects the 2017 EHT Sgr A* image?
> Mpemba effect, phase transitions
And whether that matters for imaging of an accretion disc
> From "Exploring Quantum Mpemba Effects" (2024) https://physics.aps.org/articles/v17/10 :
> Similar processes, in which a system relaxes to equilibrium more quickly if it is initially further away from equilibrium, are being intensely explored in the microscopic world. Now three research teams provide distinct perspectives on quantum versions of Mpemba-like effects, emphasizing the impact of strong interparticle correlations, minuscule quantum fluctuations, and initial conditions on these relaxation processes [3–5]. The teams’ findings advance quantum thermodynamics
Is quantum thermodynamics modeled in the (GRMHD) magnetohydrodynamics that was applied?
From https://news.ycombinator.com/item?id=42369294#42371946 https://westurner.github.io/hnlog/#comment-42371946 :
> Dark fluid: https://en.wikipedia.org/wiki/Dark_fluid :
>> Dark fluid goes beyond dark matter and dark energy in that it predicts a continuous range of attractive and repulsive qualities under various matter density cases. Indeed, special cases of various other gravitational theories are reproduced by dark fluid, e.g. inflation, quintessence, k-essence, f(R), Generalized Einstein-Aether f(K), MOND, TeVeS, BSTV, etc. Dark fluid theory also suggests new models, such as a certain f(K+R) model that suggests interesting corrections to MOND that depend on redshift and density
> The same technology that we use to label or watermark genai image and video content could be used to label n-body gravity and black hole images.
"IPTC publishes metadata guidance for AI-generated “synthetic media”" (2023) https://iptc.org/news/iptc-publishes-metadata-guidance-for-a...
C2PA: Coalition for Content Provenance and Authenticity (C2PA) > Specifications > Attestation: https://c2pa.org/specifications/specifications/1.4/attestati...
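As a sketch of the labeling idea above: bind a cryptographic hash of the rendering to machine-readable assertions about how it was produced. The field names below are illustrative only, not the actual C2PA manifest schema:

```python
import hashlib
import json

def provenance_sidecar(content: bytes, assertions: dict) -> str:
    """Bind a SHA-256 hash of the content to labeling assertions.

    Illustrative structure only; real C2PA manifests are signed and
    follow a published schema.
    """
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertions": assertions,
    }
    return json.dumps(manifest, indent=2)

rendering = b"<simulated black hole frame bytes>"
print(provenance_sidecar(rendering, {
    "media_type": "simulation",  # not a direct physical observation
    "model_family": "GRMHD",     # e.g. the simulation family used
    "retouched": False,
}))
```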
Research of RAM data remanence times
From https://blog.3mdeb.com/2024/2024-12-13-ram-data-decay-resear... :
> The main goal is to determine the time required after powering off the platform for all the data to be irrecoverably lost from RAM.
Cold boot attack: https://en.wikipedia.org/wiki/Cold_boot_attack
Data remanence: https://en.wikipedia.org/wiki/Data_remanence
Ironically, quantum computers have the opposite problem: coherence times for qubits are still less than a second; there is not yet a way to store a qubit for even one second.
(2024: 291±6µs, 2022: 20µs)
"Cryogenically frozen RAM bypasses all disk encryption methods" (2008) https://www.zdnet.com/article/cryogenically-frozen-ram-bypas...
And that's part of why secure enclaves and TPM.
From https://news.ycombinator.com/item?id=37099515 :
> If quantum information is never destroyed – and classical information is quantum information without the complex term i – perhaps our [RAM] states are already preserved in the universe; like reflections in water droplets in the quantum foam
According to the "Air-gap malware" Wikipedia article's description of GSMem, the x86 RAM bus can be used as a low-power 3G antenna. HDMI cables act as antennas, and ungrounded computers leak emanations to the ungrounded microphone port on the sound card.
Air gap malware: https://en.wikipedia.org/wiki/Air-gap_malware
Stochastic forensics: https://en.wikipedia.org/wiki/Stochastic_forensics
Back in the day, we called that https://en.wikipedia.org/wiki/Van_Eck_phreaking
Room-temperature superconductivity in Bi-based superconductors
"Room-temperature superconductivity: Researchers uncover optical secrets of Bi-based superconductors" https://phys.org/news/2024-12-room-temperature-superconducti...
"Wavelength dependence of linear birefringence and linear dichroism of Bi2−xPbxSr2CaCu2O8+δ single crystals" (2024) https://www.nature.com/articles/s41598-024-78208-6
Noninvasive imaging method can penetrate deeper into living tissue
Mary Lou Jepsen of openwater.cc [2] has managed to image neurons and other large cell structures deep inside living bodies by phase-wave interferometry and descattering signals through human tissues, even through skull and bone [1], using CMOS imaging chips, lasers, ultrasonics, and holographics: a thousand times cheaper than fMRI. She aims to reach realtime, million-pixel moving-picture resolution of around a micron. Eventually she will be able to read and write neurons, or vibrate and someday maybe burn cell tissue to destruction.
There are later presentations at the website where the technique is better described and visualized, but [1] is a good quick place to start and judge if you want to study their brain scanner in more depth. There are patents and a few scientific publications [3] that I'm aware of, but mostly many up-to-date talks with demonstrations by startup founder Mary Lou [4]. And recently she has been open sourcing [5] parts of the hardware and software on GitHub [6] so we can start building our own lab setups and improve the imaging software.
[1] https://www.youtube.com/watch?v=awADEuv5vWY
[2] https://www.openwater.health/
[3] https://scholar.google.com/citations?hl=en&user=5Ni7umEAAAAJ...
[4] https://www.youtube.com/watch?v=U_cHAH4T8Co
From the OpenwaterHealth/opw_neuromod_sw README: https://github.com/OpenwaterHealth/opw_neuromod_sw :
> open-LIFU uses an array to precisely steer the ultrasound focus to the target location, while its wearable small size allows transmission through the forehead into a precise spot location in the brain even while the patient is moving.
Does it use one beam of the array to waveguide another beam? For imaging or for treatment?
From "A simple technique to overcome self-focusing, filamentation, supercontinuum generation, aberrations, depth dependence and waveguide interface roughness using fs laser processing" (2017) https://www.nature.com/articles/s41598-017-00589-8 https://scholar.google.com/scholar?start=10&hl=en&as_sdt=5,4... :
> We show that all these effects can be significantly reduced if not eliminated using two coherent, ultrafast laser-beams through a single lens - which we call the Dual-Beam technique. Simulations and experimental measurements at the focus are used to understand how the Dual-Beam technique can mitigate these problems. The high peak laser intensity is only formed at the aberration-free tightly localised focal spot, simultaneously, suppressing unwanted nonlinear side effects for any intensity or processing depth. Therefore, we believe this simple and innovative technique makes the fs laser capable of much more at even higher intensities than previously possible, allowing applications in multi-photon processing, bio-medical imaging, laser surgery of cells, tissue and in ophthalmology, along with laser writing of waveguides.
Gah – CLI to install software from GitHub Releases
Software installers should check hashes and signatures before installing anything.
Doesn't GitHub support package repositories for signed packages?
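Checking a published checksum before install can be sketched in a few lines (the sample data and function here are hypothetical; a real installer should also verify a signature over the checksums file, not just the digest):

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the published one."""
    return hashlib.sha256(data).hexdigest() == expected_hex

artifact = b"release tarball bytes"                       # hypothetical download
published = hashlib.sha256(artifact).hexdigest()          # normally from a signed checksums file

print(verify_sha256(artifact, published))                 # True
print(verify_sha256(b"tampered bytes", published))        # False
```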
https://slsa.dev/get-started explains about verifiable build provenance attestations:
> SLSA 2
> To achieve SLSA 2, the goals are to:
> - Run your build on a hosted platform that generates and signs provenance
> - Publish the provenance to allow downstream users to verify
> [...]
> SLSA 3
> To achieve SLSA 3, you must:
> - Run your build on a hosted platform that generates and signs provenance
> - Ensure that build runs cannot influence each other
> - Produce signed provenance that can be verified as authentic
And:
> For now, the convention is to keep the provenance attestation with your artifact. Though Sigstore is becoming more and more popular, the format of the provenance is currently tool-specific.
That's what Aqua [0] does, plus more!
From https://aquaproj.github.io/ :
> aqua installs tools securely. aqua supports Checksum Verification, Policy as Code, Cosign and SLSA Provenance, GitHub Artifact Attestations, and Minisign. Please see Security.
Show HN: Quantus – LeetCode for Financial Modeling
Hi everyone,
I wanted to share Quantus, a finance learning and practice platform I’m building out of my own frustration with traditional resources.
As a dual major in engineering and finance who started my career at a hedge fund, I found it challenging to develop hands-on financial modeling skills using existing tools. Platforms like Coursera, Udemy, Corporate Finance Institute (CFI), and Wall Street Prep (WSP) primarily rely on video-based tutorials. While informative, these formats often lack the dynamic, interactive, and repetitive practice necessary to build real expertise.
For example, the learning process often involves:
- Replaying videos multiple times to grasp key concepts.
- Constantly switching between tutorials and Excel files.
- Dealing with occasional discrepancies between tutorial numbers and the provided Excel materials.
To solve these problems, I created Quantus—an interactive platform where users can learn finance by trying out formulas or building financial models directly in an Excel-like environment. Inspired by LeetCode, the content is organized into three levels—easy, medium, and hard—making it accessible for beginners while still challenging for advanced users.
Our growing library of examples includes:
- 3-statement financial models
- Discounted Cash Flow (DCF) analysis
- Leveraged Buyouts (LBO)
- Mergers and Acquisitions (M&A)
Here’s a demo video to showcase the platform in action. https://www.youtube.com/watch?v=bDRNHgBERLQ
I’d love to hear your thoughts and feedback! Let me know what other features or examples you’d find useful.
How does your solution differ from JupyterHub + ottergrader or nbgrader? What about Kaggle Learn?
From taking the Udacity AI Programming in Python course, it looks like Udacity has a Jupyter Notebook-based LMS/LRS/CMS platform.
How does your solution differ from the "Quant Platform" which hosts [Certificate in] "Python for Finance": https://home.tpq.io/pqp/
Do you award (OpenBadges) educational credentials as Blockcerts (W3C VC Verifiable Claims, W3C DIDs Decentralized Identifiers,) with blockchain-certificates/cert-issuer? https://github.com/blockchain-certificates/cert-issuer#how-b...
Our goal is to help users improve their finance and accounting knowledge. This includes acing job interviews and learning how to create different financial models for various sectors. Currently, we have no plans to provide coding lessons or certificates.
Small Businesses vs. Corporations: What Tech Tools Are We Missing?
I was browsing Y Combinator's latest Request for Startups and one prompt caught my attention:
"Develop tools that help small businesses operate at the same level as large corporations"
This got me thinking: What critical operational capabilities are small businesses currently lacking?
What technological or strategic barriers prevent them from competing on equal footing with larger enterprises?
I'm curious to hear from entrepreneurs, small business owners, and tech enthusiasts:
What tools or systems do you wish existed?
What daily/weekly/monthly tasks feel unnecessarily complex or time-consuming?
What do large corporations do that seems out of reach for smaller organizations?
> What critical operational capabilities are small businesses currently lacking?
ERP / Accounting + Finance + Corporate Treasury integration (with ILP Interledger)
/? ERP accounting site:news.ycombinator.com site: github.com https://www.google.com/search?q=erp+accounting+site%3Anews.y...
/? ERP https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
/? SMB https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Security.
Having to pay for SAML and SCIM integration.
MDM and EDR.
Security baseline configuration deployments for different OS.
It’s a farce.
You mean the security in large organizations is a farce?
SAML/SCIM integrations are often buggy or don't work as advertised.
MDM is just a circus in the making, and EDR can be easily bypassed.
Pentests are barely worth more than script kiddies, even from well-known and recognized vendors.
I'm not even specialized in security, and it drives me crazy how many bypasses and workarounds exist in IT organizations while everyone pretends everything is well managed and designed.
Re: IAM cost workarounds in SMBs, SAML / Oauth2/OIDC / LDAP:
From "Show HN: Skip the SSO Tax, access your user data with OSS" https://news.ycombinator.com/item?id=35529042 :
glim: https://github.com/doncicuto/glim
"Proxy LDAP to limit scope of access #60" https://github.com/doncicuto/glim/issues/60
glauth: https://github.com/glauth/glauth
slapd-sql: https://linux.die.net/man/5/slapd-sql
gitlab-ce-ldap-sync (PHP) https://github.com/Adambean/gitlab-ce-ldap-sync
Open Source SSO for SMB
"Launch HN: SSOReady (YC W24) – Making SAML SSO painless and open source" https://news.ycombinator.com/item?id=41110850 :
ssoready: https://github.com/ssoready/ssoready
Next-generation datacenters consume zero water for cooling
So MICROS~1 has finally discovered the wonders of heat pump systems? Great, what next, photovoltaics?
Thermophotovoltaics (TPV) and Thermoelectrics.
This is also a positive development in HPC sustainability:
"New HPE fanless cooling achieves a 90% reduction in cooling power consumption" (2024) https://news.ycombinator.com/item?id=42106715
Also, in a linked post [0], it looks like this still uses water in a closed loop heat exchange system. It just won't evaporate for cooling.
[0] https://www.microsoft.com/en-us/microsoft-cloud/blog/2024/07...
To be clear, I still consider this laudable and hope to change our house to use a heat exchange eventually, rather than a furnace plus air conditioner pair, per Technology Connections' advisement [1].
[1] https://www.youtube.com/watch?v=7J52mDjZzto (35 minutes)
On YouTube, I've seen pools heated by immersion mining rigs and also pools heated by attic heat; but I've never heard of a "zero water datacenter".
Great work.
FWIU Type-III superconductors also have like no loss.
Is there a less corrosive sustainable thermal fluid or a water additive to prevent corrosion?
Is there (solid-state) thermoelectric heat energy reclamation?
Launch HN: Double (YC W24) – Index Investing with 0% Expense Ratios
Hi HN, we’re JJ and Mark, the founders of Double (https://double.finance). Double lets you invest in 50+ broad stock market indexes with 0% expense ratios. We handle all the management, including rebalancing and tax-loss harvesting—proactively selling losing stocks to potentially save on taxes—for $1/month.
Our goal is to bring the low fee trend pioneered by Robinhood to ETFs and mutual funds. We posted a Show HN about 3 months ago (https://news.ycombinator.com/item?id=41246686) and since then have crossed $10M in AUM (Assets Under Management) [1].
Here’s a demo video: https://www.loom.com/share/10c9150ce4114f278e8c249f211e7ec8. Please note that none of this content should be treated as financial advice.
Everyone knows that fees eat into your investing returns. Financial advisors generally charge 1% of AUM per year, and ETFs have a weighted average expense ratio of about 0.17%, although some go as low as 0.03% for VOO. Over a 30-year period on a $500k portfolio with $2k invested monthly, the money lost to those fees would be $1.30M for the financial advisor, $244k for the average ETF, and even $42,951 for the low-fee VOO.
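The fee-drag arithmetic can be sketched like this; the 7% annual return and simple monthly compounding are illustrative assumptions, not figures from the post:

```python
# Illustrative fee-drag sketch; the 7% annual return and monthly
# compounding model are assumptions, not Double's figures.
def final_value(initial, monthly, years, annual_return, annual_fee):
    r = (annual_return - annual_fee) / 12  # approximate net monthly rate
    value = initial
    for _ in range(years * 12):
        value = value * (1 + r) + monthly
    return value

no_fee = final_value(500_000, 2_000, 30, 0.07, 0.00)
advisor = final_value(500_000, 2_000, 30, 0.07, 0.01)
print(f"lost to a 1% AUM fee over 30 years: ${no_fee - advisor:,.0f}")
```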
Double lets you index invest without paying any percentage-based fees - we charge just $1/month. It works by buying the individual stocks that make up popular indexes. By buying the individual positions, we can also customize and tax-loss harvest your account, something ETFs or Mutual Funds cannot do.
Most ETFs and mutual funds today are not that complicated - they can be expressed as a CSV file with 2 columns - a ticker and a share number. You can find these holding csv files on most ETF pages (VOO[2], QQQ[3]). Right now there are about $9.1T of assets in ETFs[4] and $20T in Mutual Funds[5] in the US, with estimated revenue of $100B per year. We think this market is ripe for disruption.
We offer 50+ strategies that track popular ETFs and are updated as stocks merge and indexes change. You can customize these by weighting sectors or stocks differently, or even build your own indexes from scratch using our stock/etf screening tools. Once you've chosen your strategy, simply set your target weights and deposit funds (we also support transferring existing stocks). Our engine then checks your portfolio daily for potential trades, optimizes for tax-loss harvesting, portfolio tracking, and redeploys any generated cash.
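The daily target-weight check can be sketched minimally (hypothetical tickers and weights; this is not Double's actual engine, which also optimizes for tax-loss harvesting and trading costs):

```python
# Hedged sketch: given current position values and target weights,
# compute the dollar trades that restore the target allocation.
def rebalance(positions, targets):
    total = sum(positions.values())
    return {t: round(targets[t] * total - positions.get(t, 0.0), 2)
            for t in targets}

# hypothetical two-stock index at 50/50 target weights
trades = rebalance({"AAPL": 6000.0, "MSFT": 4000.0},
                   {"AAPL": 0.5, "MSFT": 0.5})
print(trades)  # positive dollars = buy, negative = sell
```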
I (JJ) started working on this after selling my last company. After using nearly every brokerage product out there and working with a financial advisor, I noticed a huge gap between the indexing capabilities of financial advisors and what individual investors could access. We wanted to bridge that gap and provide these powerful tools to everyone in a simple, low-cost way.
There are a number of robo-advisor products out there, but none that we know of offer direct indexing without expense ratios or AUM fees. One similar product is M1 Finance, but Double is more powerful. We offer tax-loss harvesting, a wider range of indexes, and greater customization. For example, when building your own index, you can set weights down to 0.1% (compared to M1's 1%) and even weight by market cap.
We also compete with robo-advisors like Wealthfront, but offer more control over your investments. And did I mention we don't charge AUM fees? You can see our strategies and play with the research page https://double.finance/p/explore without creating an account.
Over the past year we’ve learned a lot about the guts of building portfolio software. For example, stocks don’t really have persistent identifiers that are easy to model and pass around. We trade CUSIPs with our custodian Apex*, but these change all the time for stock splits or re-org’s that you would not think would lead to a new “stock”.
We’ve also learned a lot about how tax-loss harvesting (TLH) is best implemented on large direct-index portfolios using a factor model, as opposed to the pairs-based replacements I initially thought might be the way to execute these (we do still use a pairs strategy on smaller strategies), and about how TLH and portfolio optimization generally are best expressed as a linear optimization problem with competing objectives (tracking vs. tax alpha vs. trading costs, for example).
If you have any thoughts on the product or our positioning as the low-fee alternative, I’d love to hear them. I think Robinhood has proved that you can build a strong business by getting rid of an industry-wide cost (in their case commissions, in ours expense ratios). We aim to do the same.
[1] https://www.axios.com/pro/fintech-deals/2024/12/10/direct-in... [2] https://investor.vanguard.com/investment-products/etfs/profi... [3] https://www.invesco.com/qqq-etf/en/about.html [4] https://fred.stlouisfed.org/series/BOGZ1LM564090005Q [5] https://fred.stlouisfed.org/series/BOGZ1LM654090000Q
* Edit to add a note on risk: If Double goes out of business, your assets are safe and held in your name at Apex Clearing. They have processes in place for these scenarios to help you access and transfer those assets. See more at https://news.ycombinator.com/item?id=42379135 below.
Ummm, have y'all thought about spread costs?
If you look at the spread of any of these ETF's mentioned (spread = ask px - bid px), you will notice that the spread is much smaller than if you were to sum up the spreads of each component stock.
That's possible because of a mature ecosystem of ETF market makers and arbitrageurs (like Jane Street).
If you buy all of the stocks individually, as it sounds like y'all's solution does, you will pay the spread cost for every. single. stock. The magnitude of these costs is not huge, but if we're comparing them against the average ETF's 17 bps/yr expense ratio, it's worth quantifying them.
I imagine eventually you can hope that market makers will be able to quote a tight spread on whatever the basket of stocks a client wants, but in the meantime, users would be bleeding money to these costs.
(Source: I work in market making and think about spreads more than I would like to admit.)
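A back-of-envelope sketch of that comparison, with all spreads and weights assumed for illustration:

```python
# Hypothetical numbers: per-trade cost in basis points of buying a basket
# of stocks individually vs. buying one ETF; roughly half the quoted
# spread is paid per trade, weighted by position size.
def half_spread_cost_bps(weights, spreads_bps):
    return sum(w * s / 2 for w, s in zip(weights, spreads_bps))

# an assumed 3-stock basket vs. an assumed 1 bps ETF spread
basket = half_spread_cost_bps([0.5, 0.3, 0.2], [4.0, 8.0, 12.0])
etf = 1.0 / 2
print(f"basket: {basket} bps per trade, ETF: {etf} bps per trade")
```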
I just found this while researching for my other comment in this thread; re: "Fund of Funds Investment Agreements",
Would Rule 12d1-4 (2020) apply to holding funds versus holding individual stocks and/or ETFs? What about the 75-5-10 rule for mutual funds?
From https://www.klgates.com/SEC-Adopts-New-Rule-12d1-4-Overhauli... :
> Rule 12d1-4 will prohibit an acquiring fund and its “advisory group” from controlling, individually or in the aggregate, an acquired fund, except for an acquiring fund: (1) in the same fund group as the acquired fund; or (2) with a sub-adviser that also acts as adviser to the acquired fund. [4] Rule 12d1-4 requires an acquiring fund to aggregate its investment in an acquired fund with the investment of the acquiring fund’s advisory group to assess control
How does running index funds with 0% expense ratios differ from running diversified 401(k) funds?
Gusto payroll data can be synced with Guideline, which offers various 401(k) and IRA plans.
Are there All Weather or Golden Butterfly index funds?
Which well known index funds are weighted and which aren't? (This is probably not common knowledge, and might be useful for your pitch)
Given that you can't buy fractional shares, how and when are weighted indexes rebalanced to maintain the initial weight?
Like most funds, the S&P 500 index demonstrates Survivorship bias: underperformers are removed from the index, which thus is not a good indicator of total market performance over time.
From https://www.investopedia.com/articles/investing/030916/buffe... :
> Buffett's ultimately successful contention was that, including fees, costs and expenses, an S&P 500 index fund would outperform a hand-picked portfolio of hedge funds over 10 years. The bet pit two basic investing philosophies against each other: passive and active investing.
It's common for (cryptoasset) backtesting to have the S&P 500 as a benchmark. Weighted by market cap, the S&P 500 may or may not have higher returns than cryptoassets (for which there were not ETFs for so long).
Do you offer index fund backtesting; or, which performance and relative cost savings metrics do you track for each index fund?
How would a hypothetical index fund have performed during stress events, corrections, drawdowns, flash crashes, stress testing scenarios, and recessions; according to backtesting?
Do you offer fundamentals data?
Do you offer [GRI] sustainability report data to support portfolio/fund design?
Do you offer funds or index fund design with an emphasis on sustainability and responsible investing?
Can I generate an index fund to focus on one or more Sustainable Development Goals?
What is the difference between creating an index fund with you as compared with holding stocks in a portfolio and periodically rebalancing and reassessing?
IIUC in terms of cryptoassets:
- A (weighted and rebalanced) index fund is a collection of tokens.
- Each constituent stock or ETF could or may already be tokenized as a cryptoasset.
- A token is a string identifier for an asset. A token is a smart contract that has the necessary methods (satisfies the smart contract functional interface) to be exchanged over a cryptoasset network with cryptographic assurances.
- A "wrapped token" is wrapped to be listed on a different network. So, for example, if someone wanted to sell NASDAQ:AAPL on a different exchange or cryptoasset network they would need to wrap it and commit to an approved, on-file ETF fund management commitment that specifies how quickly they intend to buy or sell to keep the wrapped asset price close to the original asset's before-after-hours-trading market price.
- (ETFs typically have low to no fees. When you own an ETF you do not own voting shares; with ETFs, the fund owns the voting shares and votes on behalf of the ETF holders).
There are EIP and ERC standard specifications for bundles of assets; a token composed of multiple other tokens. A wallet may contain various types of fungible and non-fungible tokens. For wallet recovery and inheritance and estate planning, there's SSS, multisig transactions, multiple signature smart contracts, and Shamir backup, and banks can now legally hold cryptoassets for clients.
Alena Tensor in Unification Applications
"Alena Tensor in unification applications" (2024) https://iopscience.iop.org/article/10.1088/1402-4896/ad98ca
ALICE finds first ever evidence of the antimatter partner of hyperhelium-4
Does this actually prove that antimatter necessarily exists?
Does this prove that antimatter is necessary for theories of gravity to concur with other observations?
Do the observed properties of antimatter particles correspond with antimatter as the or a necessary nonuniform correction factor to theories of gravity?
My guess is that you are confusing "antimatter" and "dark matter".
If you want some antimatter, you can go to your nearby physics supply store and buy some radioactive material that produces positrons. It's quite easy. (Radioactive material may be dangerous. Don't fool with that!) If you want antiprotons or antihydrogen, you need a huge particle accelerator. They make plenty of antiprotons at CERN for collisions. They are very difficult to store, so they survive a very short time on Earth.
Dark matter is very different. We have some experimental results that don't match the current physics theories. The current best guess is that there is some matter that we can't see for some reason. Nobody is sure what it is. Perhaps it's made of very dark big objects, or perhaps it's made of tiny particles that don't interact with light. (I'm not sure which version is currently favored in the field.) Anyway, some people don't like "dark matter" and prefer to change the theories instead, but the proposed new theories also don't match the experimental results.
> The current best guess is that there is some matter that we can't see for some reason. Nobody is sure what it is.
My pet conjecture (it's not detailed enough to be a hypothesis) is that this is related to the baryon asymmetry problem.
The matter-antimatter asymmetry problem is about more than just baryons, despite the name, as we also have more electrons than positrons, not just more protons/neutrons than antiprotons/antineutrons.
There's a few possibilities:
1) the initial value just wasn't zero (an idea I heard from Sabine Hossenfelder)
2) the baryon number is violated in a process that requires conservation of charge
This would suggest antiprotons or antineutrons do something which involves the positron at the same time, so perhaps the anti-neutron is weirdly stable or something — neutron decay is a weak force process, and that can slightly violate the charge conjugation parity symmetry, so this isn't a completely arbitrary conjecture.
If we've got lots of (for example) surprise-stable anti-neutrons all over the place… it's probably not a perfect solution to the missing mass, but it's the right kind of magnitude to be something interesting to look at more closely.
3) the baryon number (proton/neutron/etc.) and/or lepton number (electron/positron/muon/etc.) is violated in a process that does not require conservation of charge.
If you have some combination of processes which don't each conserve charge, you're likely to get some net charge to the universe (unless the antiproton process just happens to occur at the same rate as the positron process); in quantum mechanics I understand such a thing is genuinely meaningless, while in GR this would contribute to the stress energy tensor in a way that looks kinda like dark energy.
But like I said, conjecture. I'm not skilled enough to turn this into a measurable hypothesis.
What about virtual particles, too?
Virtual particles: https://en.wikipedia.org/wiki/Virtual_particle :
> As a consequence of quantum mechanical uncertainty, any object or process that exists for a limited time or in a limited volume cannot have a precisely defined energy or momentum. For this reason, virtual particles – which exist only temporarily as they are exchanged between ordinary particles – do not typically obey the mass-shell relation; the longer a virtual particle exists, the more the energy and momentum approach the mass-shell relation.
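For reference, the mass-shell (energy-momentum) relation the quote refers to, which real particles satisfy and virtual particles need not:

```latex
E^2 = (pc)^2 + (m c^2)^2
```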
Virtual particles are the generalisation, to all quantum fields, of what the near field in radio is for photons alone. It's a regime where you don't really benefit much from even calling things "particles" in the first place, because the wave function itself is a much better description.
Unfortunately the maths of QM doesn't play nice with the maths of GR, which is also why zero-point effects are either renormalised to exactly zero or otherwise predict an effect 10^122 times larger than observed.
> Does this actually prove that antimatter necessarily exists?
Antimatter definitely exists; it is detectable and used. E.g., PET scans use positrons (anti-electrons), and there have been experiments (only in animal models, last I knew) with anti-proton radiotherapy for cancers.
This is the first evidence of a particular configuration of antimatter, not the first evidence of antimatter.
To be more specific, this is the first time we have detected antihypermatter of helium, where the nucleus contains an anti-hyperon (an anti-lambda, per the article) carrying an anti-strange quark.
Thank you all for corrections and clarifications.
I had confused antimatter in particle theory (where there is no gravity) with dark matter, which has no explanation in particle theory and probably shouldn't be necessary for a unified model that describes n-body gravity at astrophysical and particle scales.
> [dark matter] has no explanation in particle theory
From https://news.ycombinator.com/item?id=42369294#42371561 :
> One theory of dark matter is that it's strange quark antimatter.
You're confusing antimatter with dark matter.
With a splash of antimass in that confusion too
My mistake, my mistake.
It seems I had my Antimatter confused with mah Dark matter.
Antimatter: https://en.wikipedia.org/wiki/Antimatter
Dark matter: https://en.wikipedia.org/wiki/Dark_matter
Antimass:
I and the Internet have never heard of antimass.
Negative mass: https://en.wikipedia.org/wiki/Negative_mass
Dark energy: https://en.wikipedia.org/wiki/Dark_energy
Dark fluid: https://en.wikipedia.org/wiki/Dark_fluid :
> Dark fluid goes beyond dark matter and dark energy in that it predicts a continuous range of attractive and repulsive qualities under various matter density cases. Indeed, special cases of various other gravitational theories are reproduced by dark fluid, e.g. inflation, quintessence, k-essence, f(R), Generalized Einstein-Aether f(K), MOND, TeVeS, BSTV, etc. Dark fluid theory also suggests new models, such as a certain f(K+R) model that suggests interesting corrections to MOND that depend on redshift and density
Negative mass was what I was going for with "antimass". For some reason I thought that was a common term but I guess it is not.
Not sure why I confused the terms.
FWIU this Superfluid Quantum Gravity rejects dark matter and/or negative mass in favor of a supervacuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?
From "Show HN: Physically accurate black hole simulation using your iPhone camera" https://news.ycombinator.com/item?id=42191692 :
> Ctrl-F Fedi , Bernoulli, Gross-Pitaevskii:
>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
There's a newer paper on it.
Alternatives to general relativity > Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
The new Sagittarius A* black hole image with phase might help with discarding models unsupported by evidence. Are those knots or braids or fields around a vortical superfluidic attractor system? There doesn't at all appear to be a hard-boundary Schwarzschild radius.
But that's about dark matter, not antimatter.
Approximating mathematical constants using Minecraft
Wonder if the authors are familiar with https://www.luanti.org/. Just as "fun" surely, and less nudging kids into the jaws of the proprietary toll-extracting behemoths of our world.
Luanti (formerly Minetest) is open source, written in C++ with a Lua modding API, has MediaWiki docs, and has a VS Code extension: https://github.com/minetest/minetest
Luanti wiki > Modding intro: https://dev.minetest.net/Modding_Intro
SensorCraft is open source, written in Python, and is built on the Pyglet OpenGL library (which might compile to WASM someday) https://github.com/AFRL-RY/SensorCraft
/? Minecraft clone typescript site:github.com https://www.google.com/search?q=minecraft+clone+typescript+s...
/?hnlog minecraft:
https://news.ycombinator.com/item?id=32657066 :
> [ mcpi, pytest-minecraft, sensorcraft, ]
mcpi supports "Minecraft: Pi edition" and "RaspberryJuice" (~ Minecraft API on Bukkit server)
minecraft-pi-reborn is an open source rewrite of "Minecraft: Pi edition" that's written in C++. It looks like there's a glfw.h header in libreborn. emscripten-glfw is a port of GLFW which can be compiled to WASM. GLFW provides windowing and context creation for OpenGL. Browsers have WebGL.
MCPI-Revival/minecraft-pi-reborn: https://github.com/MCPI-Revival/minecraft-pi-reborn docs: https://github.com/MCPI-Revival/minecraft-pi-reborn/blob/mas...
If you ask a random kid if they know what luanti is, they would have no clue.
Read the docs / documentation / API docs (and contribute)
Read the source
Write tests with test assertions and run them with a test runner
Learnxinyminutes > Lua: https://learnxinyminutes.com/docs/lua/
Learnxinyminutes > Python: https://learnxinyminutes.com/docs/python/
FWIW 22/7 and 666/212 are rational approximations of pi.
You can write a script to find other integer/integer approximations for pi and other transcendental numbers like e.
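A minimal sketch of such a script using only the standard library; `Fraction.limit_denominator` returns the best rational approximation under a denominator bound:

```python
from fractions import Fraction
from math import pi, e

# Best rational approximations of pi and e under growing denominator bounds.
for value, name in [(pi, "pi"), (e, "e")]:
    for bound in (10, 100, 1000):
        f = Fraction(value).limit_denominator(bound)
        print(f"{name} ~ {f}, error {abs(value - f):.2e}")
```

For pi this recovers the classic convergents 22/7 and 355/113.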
Hah! There's a whole Wiki page[1] about pi approximations!
Transcendental number > Numbers proven to be transcendental: https://en.wikipedia.org/wiki/Transcendental_number#Numbers_... :
> [ π, e^a where a>0, ln(a) where a!=0 and a!=1, ]
e (Euler) https://en.wikipedia.org/wiki/E_(mathematical_constant)
Pi (Archimedes,) https://en.wikipedia.org/wiki/Pi
Avogadro constant: https://en.wikipedia.org/wiki/Avogadro_constant
Approximations of Pi: https://en.wikipedia.org/wiki/Approximations_of_%CF%80
List of representations of e: https://en.wikipedia.org/wiki/List_of_representations_of_e
Four dimensional space, 4D, Minkowski (SR), Einstein (GR) https://en.wikipedia.org/wiki/Four-dimensional_space
Four vector: https://en.wikipedia.org/wiki/Four-vector
Four velocity: https://en.wikipedia.org/wiki/Four-velocity
Four gradient: https://en.wikipedia.org/wiki/Four-gradient
List of mathematical constants: https://en.wikipedia.org/wiki/List_of_mathematical_constants
Searching for small primordial black holes in planets, asteroids and on Earth
ScholarlyArticle: "Searching for small primordial black holes in planets, asteroids and here on Earth" (2024) https://www.sciencedirect.com/science/article/abs/pii/S22126...
NewsArticle: "Ancient black holes could dig tiny tunnels in rocks and buildings" (2024) https://newatlas.com/physics/primordial-black-holes-tiny-tun...
RNA-targeting CRISPR reveals that noncoding RNAs are not 'junk'
Constraints Are Good: Python's Metadata Dilemma
Eggs are dying out, as pointed out by this two-year-old blog post:
https://about.scarf.sh/post/python-wheels-vs-eggs
The metadata problem is related to the problem that pip had an unsound resolution algorithm: "resolve optimistically and hope it works; when you get stuck, try to backtrack".
I did a lot of research along the line that led to uv 5 years ago, and came to the conclusion that when installing from wheels you can set up an SMT problem the same way Maven does and solve it right the first time. They had a PEP to publish metadata files for wheels on PyPI, but I'd built something before that which could suck the metadata out of a wheel with just three HTTP range requests. I believed that any given project might depend on a legacy egg, and in those cases you can build that egg into a wheel via a special process and store it in a private repo (a must for the perfect Python build system).
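The three-range-request trick works because a wheel is a zip archive whose index lives at the end of the file. A self-contained sketch against an in-memory archive standing in for the remote file (the package name and metadata are made up): read the end-of-central-directory record, then the central directory, then just the METADATA member:

```python
import io
import struct
import zipfile

# Build a tiny wheel-like archive in memory as a stand-in for a remote file
# (package name and metadata invented; stored uncompressed for brevity).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as z:
    z.writestr("pkg/__init__.py", "")
    z.writestr("pkg-1.0.dist-info/METADATA",
               "Metadata-Version: 2.1\nName: pkg\nRequires-Dist: requests\n")
data = buf.getvalue()

def ranged_read(blob, start, length):
    # stand-in for an HTTP request with "Range: bytes=start-(start+length-1)"
    return blob[start:start + length]

# Read 1: the fixed-size End Of Central Directory record at the file's tail.
tail = ranged_read(data, len(data) - 22, 22)
assert tail[:4] == b"PK\x05\x06"
cd_size, cd_offset = struct.unpack("<II", tail[12:20])

# Read 2: the central directory, listing every member's name, size, offset.
cd = ranged_read(data, cd_offset, cd_size)
pos, entries = 0, {}
while pos < len(cd):
    name_len, extra_len, comment_len = struct.unpack("<HHH", cd[pos + 28:pos + 34])
    size, = struct.unpack("<I", cd[pos + 24:pos + 28])    # uncompressed size
    offset, = struct.unpack("<I", cd[pos + 42:pos + 46])  # local header offset
    entries[cd[pos + 46:pos + 46 + name_len].decode()] = (offset, size)
    pos += 46 + name_len + extra_len + comment_len

# Read 3: just the METADATA member, skipping its local file header.
off, size = entries["pkg-1.0.dist-info/METADATA"]
nlen, elen = struct.unpack("<HH", ranged_read(data, off + 26, 4))
metadata = ranged_read(data, off + 30 + nlen + elen, size).decode()
print(metadata.splitlines()[-1])  # Requires-Dist: requests
```

A real wheel stores its members DEFLATE-compressed, so a real client would also decompress the third read; ZIP_STORED keeps the sketch short.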
The metadata problem is unrelated to eggs. Eggs haven’t played much of a role in a long time but the metadata system still exists.
Range requests are used by both uv and pip if the index supports it, but they have to make educated guesses about how reliable that metadata is.
The main problems are local packages during development and source distributions.
Back in the days of eggs you couldn't count on having the metadata until you ran setup.py, which forced pip to be unreliable because so much stuff got installed and uninstalled in the process of a build.
There is a need for a complete answer for dev and private builds, I'll grant that. Private repos like we are used to in maven would help.
Eggs did not contain setup.py files. The metadata, as for wheels, was embedded in the egg (in the EGG-INFO folder), from my recollection. Eggs were zip-importable, after all.
It looks like you can embed dependency data in an egg, but it is also true that (1) a lot of eggs have an internal setup.py that does things like compile C code, (2) it's a hassle for developers on some platforms who might not have the right C toolchain installed, and (3) eggs reserve the right to decide which dependencies they include when they are installed, based on the environment and such.
I decided to look into how this works these days.
These days pyproject.toml is TOML, and there was no TOML parser in the Python standard library until tomllib (read-only) arrived in Python 3.11: https://packaging.python.org/en/latest/guides/writing-pyproj...
pip's docs strongly prefer pyproject.toml: https://pip.pypa.io/en/stable/reference/build-system/pyproje...
Over setup.py's setup(setup_requires=[], install_requires=[]): https://pip.pypa.io/en/stable/reference/build-system/setup-p...
Blaze and Bazel have Skylark/Starlark to support procedural build configuration with maintainable conditionals
Bazel docs > Starlark > Differences with Python: https://bazel.build/rules/language
cibuildwheel: https://github.com/pypa/cibuildwheel ;
> Builds manylinux, musllinux, macOS 10.9+ (10.13+ for Python 3.12+), and Windows wheels for CPython and PyPy;
manylinux used to specify a minimum glibc version for each build tag, like manylinux2010 or manylinux2014; pypa/manylinux: https://github.com/pypa/manylinux#manylinux
A manylinux_x_y wheel requires glibc>=x.y. A musllinux_x_y wheel requires musl libc>=x.y; per PEP 600: https://github.com/mayeut/pep600_compliance#distro-compatibi...
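Per that scheme, compatibility is just a tuple comparison on the tag. A sketch (legacy aliases like manylinux2014, which PEP 600 maps to manylinux_2_17, are deliberately not handled):

```python
import re

# Sketch of PEP 600 compatibility: a manylinux_x_y wheel needs glibc >= (x, y).
def manylinux_compatible(platform_tag, glibc_version):
    m = re.match(r"manylinux_(\d+)_(\d+)_", platform_tag)
    if m is None:
        return False  # legacy aliases (manylinux1/2010/2014) not handled here
    return glibc_version >= (int(m.group(1)), int(m.group(2)))

print(manylinux_compatible("manylinux_2_17_x86_64", (2, 31)))  # True
print(manylinux_compatible("manylinux_2_34_x86_64", (2, 31)))  # False
```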
> Works on GitHub Actions, Azure Pipelines, Travis CI, AppVeyor, CircleCI, GitLab CI, and Cirrus CI;
Further software supply chain security controls: SLSA.dev provenance, Sigstore, and the new PyPI attestations storage too
> Bundles shared library dependencies on Linux and macOS through `auditwheel` and `delocate`
delvewheel (Windows) is similar to auditwheel (Linux) and delocate (Mac) in that it copies DLL files into the wheel: https://github.com/adang1345/delvewheel
> Runs your library's tests against the wheel-installed version of your library
Conda runs tests of installed packages;
Conda docs > Defining metadata (meta.yaml) https://docs.conda.io/projects/conda-build/en/latest/resourc... :
> If this section exists or if there is a `run_test.[py,pl,sh,bat,r]` file in the recipe, the package is installed into a test environment after the build is finished and the tests are run there.
Things that support conda meta.yaml declarative package metadata: conda and anaconda, mamba and mambaforge, picomamba and emscripten-forge, pixi / uv, repo2docker REES, and probably repo2jupyterlite (because jupyterlite's jupyterlite-xeus docs mention mamba but not yet picomamba): https://jupyterlite.readthedocs.io/en/latest/howto/configure...
The `setup.py test` command has been removed: https://github.com/pypa/setuptools/issues/1684
`pip install -e .[tests]` expects extras_require['tests'] to include the same packages as the tests_require argument to setup.py: https://github.com/pypa/setuptools/issues/267
TODO: is there a new single command to run tests, like `setup.py test` was?
`make test` works with my editor. A devcontainer.json can reference a Dockerfile that runs something like this:
python -m ensurepip && python -m pip install -U pip setuptools
But then I still want to run the tests of the software with one command.

Are you telling me there's a way to do an HTTP range request for the TOML file in a wheel, to get the package dependency version constraints and/or package hashes (but not GPG pubkey fingerprints to match an .asc manifest signature) and the build & test commands, but that you still need an additional file beyond the TOML-syntax pyproject.toml, like Pipfile.lock or poetry.lock, to store the hashes for each ~bdist wheel on each platform? There's now a -c / PIP_CONSTRAINT option to specify an additional requirements.txt, but that doesn't solve for Windows- or Mac-only requirements in a declarative requirements.txt: https://pip.pypa.io/en/stable/user_guide/#constraints-files
conda supports putting `[win]` at the end of a YAML list item if it's for windows only.
Re: optimizing builds for conda-forge (and PyPI (though PyPI doesn't build packages (when there's a new PR, and then sign each build for each platform))) https://news.ycombinator.com/item?id=41306658
Debian opens a can of username worms
Perhaps it's time to agree upon how to handle Unicode in identifiers? The normalization, unprintable characters, confusable characters with the same glyphs, etc. It's obviously problematic when everyone is doing it on their own.
Unicode has provided a specification for Unicode identifiers since 2005: https://www.unicode.org/reports/tr31/
Great! Is there a library for their validation? ICU seems to have only a spoof checker for confusables.
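In Python at least, the standard library gets partway there: `str.isidentifier()` follows the UTS #31 XID_Start/XID_Continue rules, and `unicodedata` handles NFKC normalization; confusable detection still needs ICU's spoof checker. A sketch:

```python
import unicodedata

# Sketch: UTS #31-style identifier check via Python's own identifier rules
# (XID_Start/XID_Continue) after NFKC normalization.
def valid_identifier(s):
    return unicodedata.normalize("NFKC", s).isidentifier()

print(valid_identifier("naïve"))   # True
print(valid_identifier("x-ray"))   # False: '-' is not XID_Continue
print(valid_identifier("1abc"))    # False: a digit can't start an identifier
```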
ICU: International Components for Unicode: https://en.wikipedia.org/wiki/International_Components_for_U...
unicode-org/icu: https://github.com/unicode-org/icu
Microsoft/ICU: https://github.com/microsoft/icu
IDN: Internationalized domain name: https://en.wikipedia.org/wiki/Internationalized_domain_name
Punycode: https://en.wikipedia.org/wiki/Punycode
IDN homograph attack: https://en.wikipedia.org/wiki/IDN_homograph_attack
CWE-1007: Insufficient Visual Distinction of Homoglyphs Presented to User: https://cwe.mitre.org/data/definitions/1007.html
GNU libidn/libidn2: https://gitlab.com/libidn/libidn2
Comparison of regular expression engines > Language features > Part 2; Unicode support: https://en.wikipedia.org/wiki/Comparison_of_regular_expressi...
libu8ident
rurban/libu8ident : https://github.com/rurban/libu8ident :
> unicode security guidelines for identifiers
Terabit-scale high-fidelity diamond data storage
"Terabit-scale high-fidelity diamond data storage" (2024) https://www.nature.com/articles/s41566-024-01573-1
"Record-breaking diamond storage can save data for millions of years" (2024) https://www.newscientist.com/article/2457948-record-breaking... :
> Previous techniques have also used laser pulses to encode data into diamonds, but the higher storage density afforded by the new method means a diamond optical disc with the same volume as a standard Blu-ray could store approximately 100 terabytes of data – the equivalent of about 2000 Blu-rays – while lasting far longer than a typical Blu-ray’s lifetime of just a few decades.
"Optical data storage breakthrough increases capacity of diamonds by circumventing the diffraction limit" (2023) https://phys.org/news/2023-12-optical-storage-breakthrough-c... :
> [...] This is possible by multiplexing the storage in the spectral domain.
> "It means that we can store many different images at the same place in the diamond by using a laser of a slightly different color to store different information into different atoms in the same microscopic spots," said Delord, a postdoctoral research associate at CCNY. "If this method can be applied to other materials or at room temperature, it could find its way to computing applications requiring high-capacity storage." [...]
> "What we did was control the electrical charge of these color centers very precisely using a narrow-band laser and cryogenic conditions," explained Delord. "This new approach allowed us to essentially write and read tiny bits of data at a much finer level than previously possible, down to a single atom."
"Reversible optical data storage below the diffraction limit" (2023) https://www.nature.com/articles/s41565-023-01542-9
What about storing quantum bits (or qutrits or qudits) in diamond though?
"Ask HN: Can qubits be written to crystals as diffraction patterns?" (2023) https://news.ycombinator.com/item?id=38501668 :
> Aren't diffraction patterns due to lattice irregularities effective wave functions?
> Presumably holography would have already solved for quantum data storage if diffraction is a sufficient analog [of] a wave function?
"All-optical complex field imaging using diffractive processors" (2024) https://news.ycombinator.com/item?id=40541785 ; How to capture phase and intensity with a camera, Huygens-Steiner
Compressing Large Language Models Using Low Rank and Low Precision Decomposition
ScholarlyArticle: "Compressing Large Language Models Using Low Rank and Low Precision Decomposition" (2024) https://arxiv.org/abs/2405.18886 :
> CALDERA
NewsArticle: "Large language models can be squeezed onto your phone — rather than needing 1000s of servers to run — after breakthrough" (2024) https://www.livescience.com/technology/artificial-intelligen... :
Non-Contact Biometric System for Early Detection of Hypertension and Diabetes
"Non-Contact Biometric System for Early Detection of Hypertension and Diabetes Using AI and RGB Imaging" (2024) https://www.ahajournals.org/doi/10.1161/circ.150.suppl_1.413...
From https://www.notebookcheck.net/University-of-Tokyo-researcher... :
> optical, non-contact method to detect [blood pressure, hypertension, blood glucose levels, and diabetes]. A multispectral [UV + IR] camera was used and pointed at trial participants' faces to detect variations in the blood flow, which was further analyzed by AI software.
From https://news.ycombinator.com/item?id=41381977 :
> "Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
>> This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). The proposed imaging system trained 5260 images classified with seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boost (XGBoost) methods, to predict tongue color under any lighting conditions. The obtained results from the machine learning algorithms illustrated that XGBoost had the highest accuracy at 98.71%
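As a toy illustration of one of the classifiers listed above (k-NN) on RGB color features; the sample colors and labels here are invented, not the paper's 5,260-image dataset:

```python
import numpy as np

# Tiny labeled set of RGB tongue-color prototypes (made-up values).
train_X = np.array([[220, 40, 40], [240, 230, 60], [250, 220, 230]], float)
train_y = np.array(["red", "yellow", "pink"])

def knn_predict(x, k=1):
    # Euclidean distance in RGB space to every training sample.
    dists = np.linalg.norm(train_X - np.asarray(x, float), axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]          # majority vote among k nearest

knn_predict([230, 50, 50])  # -> "red"
```

The paper's better-performing XGBoost model follows the same fit/predict pattern, just with gradient-boosted trees instead of nearest neighbors.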
AI helps researchers dig through old maps to find lost oil and gas wells
It seems to me this approach could also be used for other aspects of mining. For instance, in Australia where I live there are many old gold, opal and other mines that have long been forgotten but which remain dangerous.
Most are unlikely to emit toxic or greenhouse gasses but they're nevertheless still dangerous because they're often very deep vertical shafts that a person could stumble across and fall in. These old mines were likely closed over when they were abandoned but often their closures/seals were made of wood that has probably rotted away over the past century or so.
It stands to reason that AI would be just as effective in this situation.
Is securing old mines a good job for (remotely-operated (humanoid)) robots?
Old mines can host gravitational energy storage.
From https://news.ycombinator.com/item?id=35778721 :
> FWIU we already have enough abandoned mines in the world to do all of our energy storage needs?
"Gravity batteries: Abandoned mines could store enough energy to power ‘the entire earth’" (2023) https://www.euronews.com/green/2023/03/29/gravity-batteries-...
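A back-of-envelope sanity check on mine-shaft gravity storage; the mass and depth below are illustrative, not numbers from the article:

```python
# Gravitational potential energy: E = m * g * h, converted J -> kWh.
def stored_energy_kwh(mass_kg: float, depth_m: float, g: float = 9.81) -> float:
    return mass_kg * g * depth_m / 3.6e6

# A hypothetical 1,000-tonne weight lowered down a 1,000 m shaft:
stored_energy_kwh(1_000_000, 1_000)  # = 2725.0 kWh
```

So a single deep shaft stores on the order of a few MWh per thousand tonnes, which is why the proposals need many shafts and heavy masses.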
Nearly half of teenagers globally cannot read with comprehension
Comprehension is a higher bar than "literacy". Global adult literacy rates have been drastically improving since the 1950s (https://ourworldindata.org/literacy). Today, the global adult literacy rate is estimated at ~87%.
I -think- this is the appropriate interactive plot: https://ourworldindata.org/grapher/share-of-children-reachin...
The controls are annoying, and world-level data only goes back to 2000:
* 2000: 38%
* 2005: 41%
* 2010: 45%
* 2015: 47%
* 2019: 48.5% (this is the closest point)
SDG reports have literacy stats,
Goal 4: "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all" https://sdgs.un.org/goals/goal4#targets_and_indicators
Target 4.6: By 2030, ensure that all youth and a substantial proportion of adults, both men and women, achieve literacy and numeracy
Indicator 4.6.1: Proportion of population in a given age group achieving at least a fixed level of proficiency in functional (a) literacy and (b) numeracy skills, by sex
SDG4: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4
SDG4 > Target 4.6: Universal literacy and numeracy: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4...
The UN Stats 2024 Statistical Annex says there's no data for 4.6.1 since 2020? https://unstats.un.org/sdgs/files/report/2024/E_2024_54_Stat...
SDG e-Handbook re: Target 4.6.1: https://unstats.un.org/wiki/plugins/servlet/mobile?contentId...
World Bank data series: SE.ADT.LITR.ZS, https://databank.worldbank.org/metadataglossary/millennium-d...
FRED Series > Literacy Rate, Adult Total for the World (SEADTLITRZSWLD): https://fred.stlouisfed.org/series/SEADTLITRZSWLD
There is no Literacy Rate data for the United States in World Bank or in FRED according to a quick search.
Literacy in the United States: https://en.wikipedia.org/wiki/Literacy_in_the_United_States
The National Literacy Institute > Literacy Statistics 2024 - 2025 : https://www.thenationalliteracyinstitute.com/post/literacy-s...
> On average, 79% of U.S. adults nationwide are literate in 2024.
> 21% of adults in the US are illiterate in 2024.
> 54% of adults have a literacy below a 6th-grade level (20% are below 5th-grade level).
> Low levels of literacy cost the US up to $2.2 trillion per year.
But what about comprehension?
Reading comprehension: https://en.wikipedia.org/wiki/Reading_comprehension
Intel announces Arc B-series "Battlemage" discrete graphics with Linux support
Why don't they just release a basic GPU with 128GB RAM and eat NVidia's local generative AI lunch? The networking effect of all devs porting their LLMs etc. to that card would instantly put them as a major CUDA threat. But beancounters running the company would never get such an idea...
Just how "basic" do you think a GPU can be while having the capability to interface with that much DRAM? Getting there with GDDR6 would require a really wide memory bus even if you could get it to operate with multiple ranks. Getting to 128GB with LPDDR5x would be possible with the 256-bit bus width they used on the top parts of the last generation, but would result in having half the bandwidth of an already mediocre card. "Just add more RAM" doesn't work the way you wish it could.
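The bandwidth arithmetic behind that point, with illustrative round-number per-pin data rates (not vendor specs):

```python
# Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate.
def bandwidth_gbps(bus_bits: int, data_rate_gtps: float) -> float:
    return bus_bits / 8 * data_rate_gtps

lpddr5x = bandwidth_gbps(256, 8.533)   # 256-bit LPDDR5X, ~273 GB/s
gddr6   = bandwidth_gbps(192, 16.0)    # narrower 192-bit GDDR6, 384 GB/s
```

Even a wide LPDDR5X bus that makes 128 GB feasible delivers less bandwidth than a mid-range GDDR6 configuration, which is the trade-off the comment describes.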
> "Just add more RAM" doesn't work the way you wish it could.
Re: Tomasulo's algorithm the other day: https://news.ycombinator.com/item?id=42231284
Cerebras WSE-3 has 44 GB of on-chip SRAM per chip and it's faster than HBM. https://news.ycombinator.com/item?id=41702789#41706409
Intel has HBM2e off-chip RAM in Xeon CPU Max series and GPU Max;
What is the difference between DDR, HBM, and Cerebras' 44GB of on-chip SRAM?
How do architectural bottlenecks due to modified Von Neumann architectures' debuggable instruction pipelines limit computational performance when scaling to larger amounts of off-chip RAM?
Tomasulo's algorithm also centralizes on a common data bus (the CPU-RAM data bus) which is a bottleneck that must scale with the amount of RAM.
Can in-RAM computation solve for error correction without redundant computation and consensus algorithms?
Can on-chip SRAM be built at lower cost?
Von Neumann architecture: https://en.wikipedia.org/wiki/Von_Neumann_architecture#Von_N... :
> The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system. [4]
> The von Neumann architecture is simpler than the Harvard architecture (which has one dedicated set of address and data buses for reading and writing to memory and another set of address and data buses to fetch instructions).
Modified Harvard architecture > Comparisons: https://en.wikipedia.org/wiki/Modified_Harvard_architecture
C-RAM: Computational RAM > DRAM-based PIM Taxonomy, See also: https://en.wikipedia.org/wiki/Computational_RAM
SRAM: Static random-access memory https://en.wikipedia.org/wiki/Static_random-access_memory :
> Typically, SRAM is used for the cache and internal registers of a CPU while DRAM is used for a computer's main memory.
For whatever reason Hynix hasn't turned their PIM into a usable product. LPDDR based PIM is insanely effective for inference. I can't stress this enough. An NPU+LPDDR6 PIM would kill GPUs for inference.
How many TOPS/W and TFLOPS/W? (tera [float] operations per second per watt; watts are already a rate of energy use, so no per-hour factor is needed)
/? TOPS/W and FLOPS/W: https://www.google.com/search?q=TOPS%2FW+and+FLOPS%2FW :
- "Why TOPS/W is a bad unit to benchmark next-gen AI chips" (2020) https://medium.com/@aron.kirschen/why-tops-w-is-a-bad-unit-t... :
> The simplest method therefore would be to use TOPS/W for digital approaches in future, but to use TOPS-B/W for analogue in-memory computing approaches!
> TOPS-8/W
> [ IEEE should spec this benchmark metric ]
- "A guide to AI TOPS and NPU performance metrics" (2024) https://www.qualcomm.com/news/onq/2024/04/a-guide-to-ai-tops... :
> TOPS = 2 × MAC unit count × Frequency / 1 trillion
- "Looking Beyond TOPS/W: How To Really Compare NPU Performance" (2023) https://semiengineering.com/looking-beyond-tops-w-how-to-rea... :
> TOPS = MACs * Frequency * 2
> [ { Frequency, NNs employed, Precision, Sparsity and Pruning, Process node, Memory and Power Consumption, utilization} for more representative variants of TOPS/W metric ]
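The TOPS formula quoted twice above, in code; the MAC count and frequency are hypothetical examples, not a real part's specs:

```python
# TOPS = 2 * MAC-unit count * frequency / 1e12
# (each multiply-accumulate counts as two operations per cycle).
def tops(mac_units: int, freq_hz: float) -> float:
    return 2 * mac_units * freq_hz / 1e12

tops(4096, 1e9)  # a hypothetical NPU with 4096 MACs at 1 GHz = 8.192 TOPS
```

As both linked articles note, this peak number says nothing about precision, sparsity, utilization, or memory bandwidth, which is why TOPS/W alone is a poor benchmark.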
Is this fast enough for DDR or SRAM? "Breakthrough in avalanche-based amorphization reduces data storage energy 1e-9" (2024) https://news.ycombinator.com/item?id=42318944
Breakthrough in avalanche-based amorphization reduces data storage energy 1e-9
ScholarlyArticle: "Electrically driven long-range solid-state amorphization in ferroic In2Se3" (2024) https://www.nature.com/articles/s41586-024-08156-8
NewsArticles:
"'Accidental discovery' creates candidate for universal memory — a weird semiconductor that consumes a billion times less power" (2024) https://www.livescience.com/technology/computing/accidental-...
"Advances in energy-efficient avalanche-based amorphization could revolutionize data storage" (2024) https://techxplore.com/news/2024-11-advances-energy-efficien...
"Self shocks turn crystal to glass at ultralow power density" (2024) https://www.eurekalert.org/news-releases/1063944
[deleted]
Neuroevolution of augmenting topologies (NEAT algorithm)
NEAT: https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t...
NEAT > See also links to EANT.
EANT: https://en.wikipedia.org/wiki/Evolutionary_acquisition_of_ne... :
> Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks. It is closely related to the works of Angeline et al. [1] and Stanley and Miikkulainen [2: NEAT (2002)]. Like the work of Angeline et al., the method uses a type of parametric mutation that comes from evolution strategies and evolutionary programming (now using the most advanced form of the evolution strategies, CMA-ES, in EANT2), in which adaptive step sizes are used for optimizing the weights of the neural networks. Similar to the work of Stanley (NEAT), the method starts with minimal structures which gain complexity along the evolution path.
(in Hilbert space)
> [EANT] introduces a genetic encoding called common genetic encoding (CGE) that handles both direct and indirect encoding of neural networks within the same theoretical framework. The encoding has important properties that makes it suitable for evolving neural networks:
> It is complete in that it is able to represent all types of valid phenotype networks.
> It is closed, i.e. every valid genotype represents a valid phenotype. (Similarly, the encoding is closed under genetic operators such as structural mutation and crossover.)
> These properties have been formally proven. [3]
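The "start minimal, complexify" structural mutation that NEAT and EANT share can be sketched like this. A simplified toy: the gene fields and single-genome setup are my own shorthand, not the NEAT reference implementation:

```python
import random

# A genome is a list of connection genes. NEAT's "add node" mutation splits an
# existing connection: the old gene is disabled and replaced by two new genes
# routed through a fresh hidden node, each tagged with an innovation number.
def add_node_mutation(genome, next_node_id, innovation):
    conn = random.choice([g for g in genome if g["enabled"]])
    conn["enabled"] = False
    genome.append({"in": conn["in"], "out": next_node_id,
                   "weight": 1.0, "enabled": True, "innov": innovation})
    genome.append({"in": next_node_id, "out": conn["out"],
                   "weight": conn["weight"], "enabled": True,
                   "innov": innovation + 1})
    return genome

# Minimal starting structure: one input wired directly to one output.
genome = [{"in": 0, "out": 1, "weight": 0.5, "enabled": True, "innov": 0}]
genome = add_node_mutation(genome, next_node_id=2, innovation=1)
```

Setting the incoming weight to 1.0 and reusing the old weight on the outgoing gene keeps the mutated network's initial behavior close to its parent's, which is why NEAT can grow topology without catastrophic fitness drops.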
Neither MOSES: Meta-Optimizing Semantic Evolutionary Search nor PLN: Probabilistic Logic Networks are formally proven FWIU.
/? Z3 ... https://news.ycombinator.com/item?id=41944043 :
> Does [formal verification [with Z3]] distinguish between xy+z and z+yx, in terms of floating point output?
> math.fma() would be a different function call with the same or similar output.
> (deal-solver is a tool for verifying formal implementations in Python with Z3.)
EANT2 looks really cool, thanks for the link.
Show HN: Shiddns, a DNS learning experience in Python
Hi HN, as a learning experience, I created a small DNS resolver with blocking capabilities and resolution of local host names. There are already tons of good ad-blocking DNS implementations out there. And they do this much better than my implementation. Great examples are Blocky or pihole.
So when I set out to "play around", I looked at these projects and tried to modify them. Fortunately for them, but unfortunately for beginners, these are already quite far developed and complex projects. So impatient me set out to "quickly write something in Python". Almost two weekends later, I put in so much time that I thought I might as well share this for others to explore (and hopefully enjoy). I tried keeping things simple and without external dependencies. In the end it was a good experience but became much bigger than I intended.
So I hope you enjoy: https://github.com/vvm2/shiddns
It looks like dnspython has DNSSEC, DoH, and DoQ support: test_dnssec.py: https://github.com/rthalley/dnspython/blob/main/tests/test_d... , dnssec.py: https://github.com/rthalley/dnspython/blob/main/dns/dnssec.p...
man delv
Thank you for linking this. I took a peek and I will look at the DNSSEC pieces in more detail; this is something I did not dare to touch when I saw the RFC jungle around DNS.
Why CT Certificate Transparency-style logs are not possible merely by logging DNS record types like CERT, OPENPGPKEY, SSHFP, CAA, RRSIG, and NSEC3, or ACMEv2 Proof of Domain Control; and why we need a different system for signing software package build artifacts built remotely (smart contracts, JWS, SLSA, TUF, W3C Verifiable Credentials, blockcerts, and transaction fees)
Single-chip photonic deep neural network with forward-only training
"Single-chip photonic deep neural network with forward-only training" (2024) https://www.nature.com/articles/s41566-024-01567-z :
> Here we realize such a system in a scalable photonic integrated circuit that monolithically integrates multiple coherent optical processor units for matrix algebra and nonlinear activation functions into a single chip. We experimentally demonstrate this fully integrated coherent optical neural network architecture for a deep neural network with six neurons and three layers that optically computes both linear and nonlinear functions [...]
> This work lends experimental evidence to theoretical proposals for in situ training, enabling orders of magnitude improvements in the throughput of training data. Moreover, the fully integrated coherent optical neural network opens the path to inference at nanosecond latency and femtojoule per operation energy efficiency.
> femtojoule per operation energy efficiency
From "Transistor for fuzzy logic hardware: promise for better edge computing" https://news.ycombinator.com/item?id=42166303 :
> From "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
>> Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.
Performance per watt: https://en.wikipedia.org/wiki/Performance_per_watt
https://news.ycombinator.com/item?id=41702789#41706409 :
> 5nm process
"Ask HN: Can qubits be written to crystals as diffraction patterns?" https://news.ycombinator.com/item?id=38501668
Notes from "Single atom defect in 2D material can hold quantum information at room temp" (2024) https://news.ycombinator.com/item?id=40478219 re Qubit data storage (quantum wave transmission through spacetime)
"Reversible optical data storage below the diffraction limit" (2024) https://news.ycombinator.com/item?id=38528877 :
> multiplexing Diamond with different color beams, "Real-space nanophotonic field manipulation using non-perturbative light–matter coupling" (2023); below the Abbe diffraction limit, optical tweezers at 1/50 the photonic wavelength
"DNA-folding nanorobots can manufacture limitless copies of themselves" https://news.ycombinator.com/item?id=38569474
Phase from Intensity because Huygens' classical oscillatory model applies to photons;
From "Physicists use a 350-year-old theorem to reveal new properties of light waves" https://news.ycombinator.com/item?id=37226160 :
>> [..] Qian's team interpreted the intensity of a light as the equivalent of a physical object's mass, then mapped those measurements onto a coordinate system that could be interpreted using Huygens' mechanical theorem. "Essentially, we found a way to translate an optical system so we could visualize it as a mechanical system, then describe it using well-established physical equations," explained Qian.
>> Once the team visualized a light wave as part of a mechanical system, new connections between the wave's properties immediately became apparent—including the fact that entanglement and polarization stood in a clear relationship with
"Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> Abstract: [...] Here we report links of the two through a systematic quantitative analysis of polarization and entanglement, two optical coherence properties under the wave description of light pioneered by Huygens and Fresnel. A generic complementary identity relation is obtained for arbitrary light fields. More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
/?hnlog coherent , single photon
[...]
"Photonic quantum Hall effect and multiplexed light sources of large orbital angular momenta" (2021) https://www.nature.com/articles/s41567-021-01165-8 .. https://engineering.berkeley.edu/news/2021/02/light-unbound-...
"Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 .. https://news.ycombinator.com/item?id=39365579
From https://news.ycombinator.com/item?id=39287722 :
> "Room-temperature quantum coherence of entangled multiexcitons in a metal-organic framework" (2024) https://www.science.org/doi/10.1126/sciadv.adi3147
> "A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
> What about phononic quantum computing though, are phononic wave functions stable/coherent?
> Phase from Intensity because Huygens' classical oscillatory model applies to photons;
Correction:
Phase from second-order Intensity due to "mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation"
Parallel axis theorem: https://en.wikipedia.org/wiki/Parallel_axis_theorem :
> The parallel axis theorem, also known as Huygens–Steiner theorem, or just as Steiner's theorem, [1] named after Christiaan Huygens and Jakob Steiner, can be used to determine the moment of inertia or the second moment of area of a rigid body about any axis, given the body's moment of inertia about a parallel axis through the object's center of gravity and the perpendicular distance between the axes.
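In symbols, the theorem quoted above reads (standard form, with $I_{\mathrm{cm}}$ the moment of inertia about the parallel axis through the center of mass, $m$ the body's mass, and $d$ the distance between the two axes):

```latex
% Parallel axis (Huygens-Steiner) theorem:
I = I_{\mathrm{cm}} + m d^{2}
```

It is this second term, $m d^{2}$, that the Qian group maps onto second-order intensity moments of the optical field.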
( Huygens-Fresnel principle > Generalized Huygens' principle > and quantum field theory: https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_princi... :
> [ Kirchoff, Path Integrals (Hamilton, Lorentz, Feynmann,), quabla operator and Minkowski space, Secondary waves and superposition; ]
> Homogeneity of space is fundamental to quantum field theory (QFT) where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave-functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.
Lorentz oscillator model > See also: https://en.wikipedia.org/wiki/Lorentz_oscillator_model )
> Photonic fractional quantum Hall effect
From "Electrons become fractions of themselves in graphene, study finds" (2024) re: the electronic fractional quantum Hall effect without magnetic fields https://news.mit.edu/2024/electrons-become-fractions-graphen... :
> They found that when five sheets of graphene are stacked like steps on a staircase, the resulting structure inherently provides just the right conditions for electrons to pass through as fractions of their total charge, with no need for any external magnetic field.
"Fractional quantum anomalous Hall effect in multilayer graphene" (2024) https://www.nature.com/articles/s41586-023-07010-7
"How can electrons split into fractions of themselves?" https://news.mit.edu/2024/how-can-electrons-can-split-into-f... ... Re: why pentalayer
What was the SNL skit about the number of safety razor blades?
In rhombohedral trilayer graphene, there is superconductivity at one or more phases;
"Revealing the complex phases of rhombohedral trilayer graphene" (2024) https://www.nature.com/articles/s41567-024-02561-6 .. https://news.ycombinator.com/item?id=40919269
"Revealing the superconducting limit of twisted bilayer graphene" https://news.ycombinator.com/item?id=42051367 :
"Low-Energy Optical Sum Rule in Moiré Graphene" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... https://arxiv.org/abs/2312.03819
"Large quantum anomalous Hall effect in spin-orbit proximitized rhombohedral graphene" (2024) https://www.science.org/doi/10.1126/science.adk9749
"Physicists spot quantum tornadoes twirling in a ‘supersolid’" (2024) https://news.ycombinator.com/item?id=42082690
AI poetry is indistinguishable from human poetry and is rated more favorably
Ernest Davis has an outstanding and definitive rebuttal of these claims: https://cs.nyu.edu/~davise/papers/GPT-Poetry.pdf
The basic problem is that GPT generates easy poetry, and the authors were comparing to difficult human poets like Walt Whitman and Emily Dickinson, and rating using a bunch of people who don't particularly like poetry. If you actually do like poetry, comparing PlathGPT to Sylvia Plath is so offensive as to be misanthropic: the human Sylvia Plath is objectively better by any possible honest measure. I don't think the authors have any interest in being honest: ignorance is no excuse for something like this.
Given the audience, it would be much more honest if the authors compared with popular poets, like Robert Frost, whose work is sophisticated but approachable.
The vast majority of people seem to prefer Avengers 17 over any cinematic masterpiece, and the latest Drake song would be rated better than Tchaikovsky... We should let them play and worship ChatGPT if that's what they want to waste their time on.
I don't understand the logic of calling superhero movies lesser/unserious like this, it's very snobby. Movies and music are made to be entertaining, the avengers is more entertaining than your "arthouse cinematic masterpiece that nobody likes but it's just because they aren't smart enough to understand it". It's also lazy and ignorant to ignore the sheer manpower that goes into making a movie like that.
Nobody likes art that requires them to think.
On-scalp printing of personalized electroencephalography e-tattoos
Did a quick search for "hair" since in my experience the most obnoxious part of using a traditional EEG cap was having to rinse the gel out of my hair in the sink afterwards (I was in a research study and had to do this multiple times a month, right before heading to class. Always left me looking disheveled). It looks like this approach works better for slightly hairy skin (the person in most of these figures has a buzz cut) than other e-tattoo methods but they do note:
> More work is needed to scan and print on heads with long and thick hairs. Highlighting the persistent issue of racial bias in neuroscience research is crucial, as traditional EEG electrodes are often unreliable on individuals of African descent, resulting in suboptimal patient experiences and outcomes in clinical settings.69,70 Curly hairs tend to push against the EEG cap, reducing contact between the electrodes and the scalp, leading to a poor contact impedance.71 Developing on-scalp digital printing for different hair types should prioritize bridging this important gap.
>> More work is needed to scan and print on heads
> temporary-tattoo-like sensors
> Traditional electroencephalography (EEG) systems involve time-consuming manual electrode placement, conductive liquid gels, and cumbersome cables, which are prone to signal degradation and discomfort during prolonged use. Our approach overcomes these limitations by combining material innovations with non-contact, on-body digital printing techniques to fabricate e-tattoos that are self-drying, ultrathin, and compatible with hairy scalps. These skin-conformal EEG e-tattoo sensors enable comfortable, long-term, high-quality brain activity monitoring without the discomfort associated with traditional EEG systems. Using individual 3D head scans, custom sensor layout design, and a 5-axis microjet printing robot, we have created EEG e-tattoos with precise, tailored placement over the entire scalp. The inks for electrodes and interconnects have slightly different compositions to achieve low skin contact impedance and high bulk conductivity, respectively
"On-scalp printing of personalized electroencephalography e-tattoos" (2024) https://www.cell.com/cell-biomaterials/fulltext/S3050-5623(2...
Can they be printed on a forearm like in the Disney film "Moana 2"?
The robot is completely superfluous. These are conductive ink non-tattoos painted onto the scalp. It would be straightforward to do it by hand.
Handwriting but not typewriting leads to widespread connectivity in brain
I tried for a few minutes to find the HN thread where somebody refuted the "handwriting is better than typing" for fact recall claim.
OTOH, IIRC,
- EEG measures brain activation. It takes more of the brain to hand write than to type, and so it is unsurprising that there is more activation during a writing by hand activity than during a typing activity.
- Do brain connectivity or neural activation correspond to favorable outcomes?
- Synaptic pruning eliminates connectivity in the brain; and that is presumably advantageous.
- Hyperconnectivity may be regarded as a disadvantage in discussions of neurodiversity. Witness recall from a network with intentional hyperconnectivity; too distracted by spurious activations?
- That's a test of dumb recall, which is a test of trivia retention.
- Is rote memorization the pinnacle of learning? To foster creative thinking and critical thinking, is trivia recall optimization ideal?
- (Isn't it necessary to forget so much of what we've been taught in order to succeed?)
- Is spaced repetition as or more effective than hand writing at increasing recall compared to passively listening?
- With persons who have injury and/or disability that prevents them from handwriting, do they have adaptations that enable them to succeed despite "having to" type instead?
- We don't hand write software; though some schools teach coding with pen and paper (and that somewhat reduces cheating, which confounds variance in success and retention).
- We write tests for software; we can't efficiently run automated quality and correctness tests on software unless the code is typed.
- Does anything ever grade, confirm, reject, or validate handwritten notes?
- Are there studies of this that measure recall after typing, and then after handwriting, too?
Edit: i.e. a counterbalanced crossover design: A, measure, B, measure; AND: B, measure, A, measure
> I tried for a few minutes to find the HN thread where somebody refuted the "handwriting is better than typing" for fact recall claim.
You would have found it, had you written it down.
Kidding aside, is this the one? https://news.ycombinator.com/item?id=42288545
Physics-Informed Machine Learning: A Survey
"Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications" (2024) https://arxiv.org/abs/2211.08064 :
> We systematically review the recent development of physics-informed machine learning from three perspectives of machine learning tasks, representation of physical prior, and methods for incorporating physical prior. We also propose several important open research problems based on the current trends in the field. We argue that encoding different forms of physical prior into model architectures, optimizers, inference algorithms, and significant domain-specific applications like inverse engineering design and robotic control is far from being fully explored in the field of physics-informed machine learning. We believe that the interdisciplinary research of physics-informed machine learning will significantly propel research progress, foster the creation of more effective machine learning models, and also offer invaluable assistance in addressing long-standing problems in related disciplines.
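One common way to "encode physical prior" into a model, as the abstract describes, is a physics-informed loss that penalizes violation of a governing equation alongside the data misfit. A minimal sketch with an illustrative ODE and a grid search (not an example from the survey; real PINNs use neural networks and gradient-based optimizers):

```python
import numpy as np

# Noisy observations of exponential decay u(t) = exp(-k t) with k_true = 2.
t = np.linspace(0.0, 1.0, 50)
k_true = 2.0
rng = np.random.default_rng(0)
u_data = np.exp(-k_true * t) + 0.01 * rng.standard_normal(t.size)

def loss(k, lam=1.0):
    u_model = np.exp(-k * t)
    data_term = np.mean((u_model - u_data) ** 2)         # fit the data
    du_dt = np.gradient(u_model, t)                      # finite differences
    physics_term = np.mean((du_dt + k * u_model) ** 2)   # ODE residual: u' + k u = 0
    return data_term + lam * physics_term

# Grid search over the unknown decay rate k.
ks = np.linspace(0.5, 4.0, 400)
k_best = ks[np.argmin([loss(k) for k in ks])]
```

The `lam` weight trades off data fidelity against physical consistency; tuning it (or learning it) is one of the open problems such surveys discuss.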
Re: LLM ml benchmarks; FrontierMath (only 2% of tasks in 2024), TheoremQA: https://news.ycombinator.com/item?id=42097683
PRoot: User-space implementation of chroot, mount –bind, and binfmt_misc
Here's an example of how we've used this.
RStudio Server[0] 1.3 and older hard-coded a number of paths, such as the path for storing temporary files: instead of honoring the TMPDIR environment variable (as specified by POSIX[1]), RStudio Server would always use /tmp. That is extremely annoying, because we set TMPDIR to a path on fast local storage (SATA or NVMe SSDs) that the job scheduler cleans up at the end of the compute job.
We do have a last-resort mechanism using pam_namespace[2], such that a user going to `/tmp` is actually taken to `/namespace/tmp/${username}`, but that is per-user, not per-job. If a user has two RStudio jobs, and those two jobs land on the same host, there would be trouble.
So, we used PRoot to wrap RStudio, with /tmp bind-mounted to a directory under TMPDIR.
[0]: https://www.rstudio.com/products/rstudio/download-server/
[1]: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1...
For those that don’t know, you shouldn’t blindly let programs have access to tmp. They can get access to sockets and stuff. If you’re running with systemd there’s a private tmp option for this reason.
It’s always best to sandbox programs when you can. Linux has been making this much easier but it’s still non trivial
From the systemd.exec man page: https://www.freedesktop.org/software/systemd/man/systemd.exe... :
PrivateTmp=true
#JoinsNamespaceOf=
`unshare -m` and then bind-mounting a private /tmp at /tmp/systemd-private-/ does the same thing; `systemd-tmpfiles --help`: https://serverfault.com/questions/1010339/how-exactly-to-use...
As an intermediate level user and sysadmin, this kind of thing underscores the good work systemd is doing making it easy to get sane and safe defaults for things otherwise fiddly enough that many normal people wouldn't bother.
WasmCloud makes strides with WASM component model
"WASI 0.2 Launched" (2024-01) https://news.ycombinator.com/item?id=39289533 :
> At the same time, work towards WASI 0.3, also known as WASI Preview 3, will be getting underway. The major banner of WASI 0.3 is async, and adding the future and stream types to Wit. And here again, the theme is composability. It’s one thing to do async, it’s another to do composable async, where two components that are async can be composed together without either event loop having to be nested inside the other. Designing an async system flexible enough for Rust, C#, JavaScript, Go, and many others, which all have their own perspectives on async, will take some time, so while the design work is starting now, WASI 0.3 is expected to be at least a year away.
WebAssembly/WASI > WASI Preview 2: https://github.com/WebAssembly/WASI/tree/main/wasip2
Secure your URLSession network requests using Certificate Pinning
In a world of growing cyber threats, securing your app’s communication is critical. Learn how to implement Certificate Pinning in Swift with URLSession in order to protect your iOS or macOS apps from man-in-the-middle attacks and keep user data safe.
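For comparison outside Swift/URLSession, fingerprint pinning is only a few lines with Python's `ssl` module. A sketch: the host and pinned hash handling are illustrative, and production apps usually pin the SPKI hash (not the whole certificate) and ship backup pins to survive rotation:

```python
import hashlib
import socket
import ssl

def fingerprint(der_bytes: bytes) -> str:
    # SHA-256 fingerprint of a DER-encoded certificate, as lowercase hex.
    return hashlib.sha256(der_bytes).hexdigest()

def connection_is_pinned(host: str, pinned_sha256: str, port: int = 443) -> bool:
    # Perform the normal TLS handshake (chain + hostname validation), then
    # additionally compare the leaf certificate against the pinned fingerprint.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return fingerprint(der) == pinned_sha256
```

Pinning on top of (not instead of) chain validation means a compromised or rogue CA still can't mint an acceptable certificate for the app.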
Does other software support specifying a CA bundle per domain, or cert pinning?
Should FIPS specify a more limited CA cert bundle and/or cert pinning that users manage?
When the user approves a self-signed cert in the browser, isn't that cert pinning but without PKI risks and assurances?
What about CRL and OCSP: over what channel is the cert revocation list retrieved; and could CT (Certificate Transparency) on a blockchain, surfaced to the browser, do better at [pinned] cert revocation?
Also probably relevant to these objectives:
"Chrome switching to NIST-approved ML-KEM quantum encryption" (2024) https://www.bleepingcomputer.com/news/security/chrome-switch...
"A new path for Kyber on the web" (2024) https://security.googleblog.com/2024/09/a-new-path-for-kyber...
Does Swift yet support ML-KEM too?
Neuronal sequences in population bursts encode information in human cortex
"Neuronal sequences in population bursts encode information in human cortex" (2024) https://www.nature.com/articles/s41586-024-08075-8
"Order matters: neurons in the human brain fire in sequences that encode information" (2024) https://www.nature.com/articles/d41586-024-03835-y
"Two-dimensional neural geometry underpins hierarchical organization of sequence in human working memory" (2024) https://www.nature.com/articles/s41562-024-02047-8 .. https://news.ycombinator.com/item?id=42084279
Layer Codes
"Layer codes" (2024) https://www.nature.com/articles/s41467-024-53881-3 :
> Abstract: Quantum computers require memories that are capable of storing quantum information reliably for long periods of time. The surface code is a two-dimensional quantum memory with code parameters that scale optimally with the number of physical qubits, under the constraint of two-dimensional locality. In three spatial dimensions an analogous simple yet optimal code was not previously known. Here we present a family of three dimensional topological codes with optimal scaling code parameters and a polynomial energy barrier. Our codes are based on a construction that takes in a stabilizer code and outputs a three-dimensional topological code with related code parameters. The output codes are topological defect networks formed by layers of surface code joined along one-dimensional junctions, with a maximum stabilizer check weight of six. When the input is a family of good quantum low-density parity-check codes the output codes have optimal scaling. Our results uncover strongly-correlated states of quantum matter that are capable of storing quantum information with the strongest possible protection from errors that is achievable in three dimensions.
"Hynix launches 321-layer NAND" (2024) https://news.ycombinator.com/item?id=42229825
See: "Rowhammer for Qubits" re: whether electron tunneling in current and future generation RAM is sufficient to simulate quantum operators; whether RAM [error correction] is a quantum computer even without the EM off the RAM bus.
Breakthrough Material Perfectly Absorbs 99% of Electromagnetic Waves
"Absorption-Dominant Electromagnetic Interference (EMI) Shielding across Multiple mmWave Bands Using Conductive Patterned Magnetic Composite and Double-Walled Carbon Nanotube Film" (2024) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202406197 :
> Abstract: The revolution of millimeter-wave (mmWave) technologies is prompting a need for absorption-dominant EMI shielding materials. While conventional shielding materials struggle in the mmWave spectrum due to their reflective nature, this study introduces a novel EMI shielding film with ultralow reflection (<0.05 dB or 1.5%), ultrahigh absorption (>70 dB or 98.5%), and superior shielding (>70 dB or 99.99999%) across triple mmWave frequency bands with a thickness of 400 µm. By integrating a magnetic composite layer (MCL), a conductive patterned grid (CPG), and a double-walled carbon nanotube film (DWCNTF), specific resonant frequencies of electromagnetic waves are transmitted into the film with minimized reflection, and trapped and dissipated between the CPG and the DWCNTF. The design factors for resonant frequencies, such as the CPG geometry and the MCL refractive index, are systematically investigated based on electromagnetic wave propagation theories. This innovative approach presents a promising solution for effective mmWave EMI shielding materials, with implications for mobile communication, radar systems, and wireless gigabit communication.
Is this material useful for space travel and also quantum computers?
It covers just mmWave, though.
How is it absorbing instead of reflecting energy?
Do things store energy when they absorb [mmWave] EM? As phonons?
Twisted Single Walled Carbon Nanotubes store energy;
From "LG Chem develops material capable of suppressing thermal runaway in batteries" (2024) https://news.ycombinator.com/item?id=41745154 :
> "Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
From "Gamma radiation is produced in large tropical thunderstorms" (2024) https://news.ycombinator.com/item?id=41775612 :
> "Sm2O3 micron plates/B4C/HDPE composites containing high specific surface area fillers for neutron and gamma-ray complex radiation shielding" (2024) https://www.sciencedirect.com/science/article/abs/pii/S02663...
Show HN: TeaTime – distributed book library powered by SQLite, IPFS and GitHub
Recently there seems to be a surge in SQLite-related projects. TeaTime is riding that wave...
A couple of years ago I was intrigued by phiresky's post[0] about querying SQLite over HTTP. It made me think that if anyone can publish a database using GitHub Pages, I could probably build a frontend in which users can decide which database to query. TeaTime is like that - when you first visit it, you'll need to choose your database. Everyone can create additional databases[1]. TeaTime then queries it, and fetches files using an IPFS gateway (I'm looking into using Helia so that users are also contributing nodes in the network). Files are then rendered in the website itself. Everything is done in the browser - no users, no cookies, no tracking. LocalStorage and IndexedDB are used for saving your last readings, and your position in each file.
Since TeaTime is a static site, it's super easy (and free) to deploy. GitHub repo tags are used for maintaining a list of public instances[2].
Note that a GitHub repository isn't mandatory for storing the SQLite files or the front end - it's only for the configuration file (config.json) of each database, and for listing instances. Both the instances themselves and the database files can be hosted on Netlify, Cloudflare Pages, your Raspberry Pi, or any other server that can host static files.
I'm curious to see what other kinds of databases people can create, and what other types of files TeaTime could be used for.
[0] https://news.ycombinator.com/item?id=27016630
[1] https://github.com/bjesus/teatime-json-database/
[2] https://github.com/bjesus/teatime/wiki/Creating-a-TeaTime-in...
Do static sites built with sphinx-build or jupyter-book or hugo or other jamstack static site generators work with TeaTime?
sphinx-build: https://www.sphinx-doc.org/en/master/man/sphinx-build.html
There may need to be a different Builder or an extension of sphinxcontrib.serializinghtml.JSONHTMLBuilder which serializes a doctree (basically a DOM document object model) to the output representation: https://www.sphinx-doc.org/en/master/usage/builders/#sphinxc...
datasette and datasette-lite can load CSV, JSON, Parquet, and SQLite databases; supports Full-Text Search; and supports search Faceting. datasette-lite is a WASM build of datasette with the pyodide python distribution.
datasette-lite > Loading SQLite databases: https://github.com/simonw/datasette-lite#loading-sqlite-data...
jupyter-lite is a WASM build of jupyter which also supports sqlite in notebooks in the browser with `import sqlite3` with the python kernel and also with a sqlite kernel: https://jupyter.org/try-jupyter/lab/
jupyterlite/xeus-sqlite-kernel: https://github.com/jupyterlite/xeus-sqlite-kernel
(edit)
xeus-sqlite-kernel > "Loading SQLite databases from a remote URL" https://github.com/jupyterlite/xeus-sqlite-kernel/issues/6#i...
%FETCH <url> <filename> https://github.com/jupyter-xeus/xeus-sqlite/blob/ce5a598bdab...
xlite.cpp > void fetch(const std::string url, const std::string filename) https://github.com/jupyter-xeus/xeus-sqlite/blob/main/src/xl...
> Do static sites built with sphinx-build or jupyter-book or hugo or other jamstack static site generators work with TeaTime?
I guess it depends on what you mean by "work with TeaTime". TeaTime itself is a static site, generated using Nuxt. Nothing that it does cannot be achieved with another stack - at the end it's just HTML, CSS and JS. I haven't tried sphinx-build or jupyter-book, but there isn't a technical reason why Hugo wouldn't be able to build a TeaTime like website, using the same databases.
> datasette-lite > Loading SQLite databases: https://github.com/simonw/datasette-lite#loading-sqlite-data...
I haven't seen datasette before. What are the biggest benefits you think it has over sql.js-httpvfs (which I'm using now)? Is it about the ability to also use other formats, in addition to SQLite? I got the impression that sql.js-httpvfs was a bit more of a POC, and later some possibly better solutions came out, but I haven't really gone down that rabbit hole to figure out which one would be best.
Edit: looking a little more into datasette-lite, it seems like one of the nice benefits of sql.js-httpvfs is that it doesn't download the whole SQLite database in order to query it. This makes it possible to have a 2GB database but still read it in chunks, skipping around efficiently until you find your data.
So the IPFS part is instead of a CDN? Does it replace asset URLs in templates with e.g. cf IPFS gateway URLs with SRI hashes?
datasette-lite > "Could this use the SQLite range header trick?" https://github.com/simonw/datasette-lite/issues/28
From xeus-sqlite-kernel > "Loading SQLite databases from a remote URL" https://github.com/jupyterlite/xeus-sqlite-kernel/issues/6#i... re: "Loading partial SQLite databases over HTTP":
> sql.js-httpvfs: https://github.com/phiresky/sql.js-httpvfs
> sqlite-wasm-http: https://github.com/mmomtchev/sqlite-wasm-http
>> This project is inspired from @phiresky/sql.js-httpvfs but uses the new official SQLite WASM distribution.
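The chunked-read idea behind these libraries can be sketched in a few lines (a hypothetical helper, not sql.js-httpvfs's actual API): instead of downloading a whole multi-GB .sqlite file, fetch only the pages SQLite's VFS layer asks for, via HTTP Range requests.

```python
def page_range_header(page_no: int, page_size: int = 4096) -> str:
    # The Range header value a client would send to fetch one database page.
    start = page_no * page_size
    return f"bytes={start}-{start + page_size - 1}"

def read_page(blob: bytes, page_no: int, page_size: int = 4096) -> bytes:
    # Local stand-in for what the server does with the Range header above.
    start = page_no * page_size
    return blob[start:start + page_size]
```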
Datasette creates a JSON API from a SQLite database, has an optional SQL query editor with canned queries, multi DB query support, docs; https://docs.datasette.io/en/stable/sql_queries.html#cross-d... :
> SQLite has the ability to run queries that join across multiple databases. Up to ten databases can be attached to a single SQLite connection and queried together.
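The cross-database join can be tried directly from Python's sqlite3 module; a runnable sketch with two in-memory databases and made-up tables:

```python
import sqlite3

# ATTACH a second database, then join across both in one query.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS other")
conn.execute("CREATE TABLE main.books(id INTEGER, title TEXT)")
conn.execute("CREATE TABLE other.ratings(book_id INTEGER, stars INTEGER)")
conn.execute("INSERT INTO main.books VALUES (1, 'Dune')")
conn.execute("INSERT INTO other.ratings VALUES (1, 5)")
rows = conn.execute("""
    SELECT b.title, r.stars
    FROM main.books b JOIN other.ratings r ON r.book_id = b.id
""").fetchall()
# rows == [('Dune', 5)]
```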
The IPFS part is where the actual files are downloaded from. I wouldn't necessarily say that it's instead of a CDN - it just seems to be a reliable enough way of getting the files (without the pain of setting up a server with all these files myself). Indeed, it fetches them through IPFS Gateways (the Cloudflare IPFS Gateway is unfortunately no longer a thing) using their CID hash.
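Fetching by CID through a path-style gateway is just URL construction (the CID below is a placeholder, not a real file; any public gateway can serve any CID):

```python
def gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    # Path-style gateway URL; subdomain-style gateways use
    # https://{cid}.ipfs.{host}/ instead, which gives per-CID origin isolation.
    return f"{gateway}/ipfs/{cid}"
```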
Teaching quantum physics in schools: Experts encourage focus on 2 state systems
ScholarlyArticle: "Design and evaluation of a questionnaire to assess learners’ understanding of quantum measurement in different two-state contexts: The context matters" (2024) https://journals.aps.org/prper/abstract/10.1103/PhysRevPhysE...
"Making quantum physics easier to digest in schools: Experts encourage focus on two-state systems" https://phys.org/news/2024-11-quantum-physics-easier-digest-...
Tunable ultrasound propagation in microscale metamaterials
These kinds of metamaterials will really open the door to many new applications deemed impossible before. I am curious how (or if) manufacturing these materials at scale will be possible tho.
The tool they used is basically a nanoscale version of a 3D printer that uses UV epoxy: essentially, a laser focused through a high-numerical-aperture microscope cures the epoxy at the focal spot (there's some nonlinear action here, but it isn't that important).
The key point here being that it is a serial process where a single spot is rastered over the sample, unlike conventional optical lithography which is an area illuminating parallel process.
Top Open-Source End-to-End Testing Frameworks of 2024
Recording tests:
Playwright codegen vscode extension: https://playwright.dev/docs/codegen
Selenium IDE: https://www.selenium.dev/selenium-ide/docs/en/introduction/g...
Cypress Recorder exports to Cypress JS code and Puppeteer: https://github.com/cypress-io/cypress-recorder-extension
NightwatchJS Chrome Recorder generates tests from Chrome DevTools Recorder: https://github.com/nightwatchjs/nightwatch-chrome-recorder
Chrome DevTools Recorder exports to Puppeteer JS code: https://developer.chrome.com/docs/devtools/recorder/
Fwupd 2.0.2 Allows Updating Firmware on Many More Devices
> - Raspberry Pi Pico
https://github.com/fwupd/fwupd :
    fwupdmgr get-devices
    fwupdmgr refresh && fwupdmgr get-updates
    fwupdmgr update
Robot Jailbreak: Researchers Trick Bots into Dangerous Tasks
Given that anyone who’s interacted with the LLM field for fifteen minutes should know that “jailbreaks” or “prompt injections” or just “random results” are unavoidable, whichever reckless person decided to hook up LLMs to e.g. flamethrowers or cars should be held accountable for any injuries or damage, just as they would for hooking them up to an RNG. Riding the hype wave of LLMs doesn’t excuse being an idiot when deciding how to control heavy machinery.
Anything that's fast, heavy, sharp, abrasive, hot, or high-voltage; that controls a battery charger, keeps people alive, auto-fires, flies above our heads and around eyes; that's made of breakable plastic or spinning fast; or that interacts with children, persons with disabilities, the elderly, and/or the infirm.
If kids can't push it over or otherwise disable it, and there is a risk of exploitation of a new or known vulnerability [of LLMs in general], what are the risks and what should the liability structure be? Do victims have to pay an attorney to sue, or does the state request restitution for the victim in conjunction with criminal prosecution? How do persons prove that chucky bot was compromised at the time of the offense?
SQLite: Outlandish Recursive Query Examples
Firefox bookmarks have nested folders in an arbitrary depth tree, so a recursive CTE might be faster; https://www.google.com/search?q=Firefox+bookmarks+%22CTE%22
(Edit) "Bug 1452376 - Replace GetDescendantFolders with a recursive subquery" https://hg.mozilla.org/integration/autoland/rev/827cc04dacce
"Recursive Queries Using Common Table Expressions" https://gist.github.com/jbrown123/b65004fd4e8327748b650c7738...
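The recursive-subquery approach is easy to try against a toy folder table (this is an illustrative schema, not Firefox's actual places.sqlite): one recursive CTE walks the whole subtree instead of issuing a query per level.

```python
import sqlite3

# Toy arbitrarily-nested folder tree keyed by a parent pointer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE folders(id INTEGER PRIMARY KEY, parent INTEGER, name TEXT);
    INSERT INTO folders VALUES
        (1, NULL, 'root'), (2, 1, 'news'), (3, 2, 'tech'), (4, 1, 'recipes');
""")
rows = conn.execute("""
    WITH RECURSIVE descendants(id) AS (
        SELECT id FROM folders WHERE name = 'root'
        UNION ALL
        SELECT f.id FROM folders f JOIN descendants d ON f.parent = d.id
    )
    SELECT id FROM descendants
""").fetchall()
# every folder under 'root', found in a single query
```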
storing the tree as an MPTT/NestedSet would massively simplify this, without any subquery shenanigans.
https://en.m.wikipedia.org/wiki/Nested_set_model
https://imrannazar.com/articles/modified-preorder-tree-trave...
There are at least five ways to store a tree in PostgreSQL, for example: adjacency list, nested sets like MPTT Modified Preorder Tree Traversal, nested intervals, Materialized Paths, ltree, JSON
E.g. django-mptt: https://github.com/django-mptt/django-mptt/blob/main/mptt/mo... :

    indexed_attrs = (mptt_meta.tree_id_attr,)
    field_names = (
        mptt_meta.left_attr,
        mptt_meta.right_attr,
        mptt_meta.tree_id_attr,
        mptt_meta.level_attr,
    )
Nested set model: https://en.wikipedia.org/wiki/Nested_set_model :

> The nested interval model stores the position of the nodes as rational numbers expressed as quotients (n/d).
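For contrast with the recursive approach, a nested-set lookup in miniature (toy data, hand-assigned lft/rght from a preorder walk): the descendants of a node are exactly the rows strictly inside its interval, so one range scan replaces the recursion.

```python
# Each node stores (lft, rght) bounds; children nest inside their parent.
TREE = [
    # (name, lft, rght)
    ("root", 1, 8),
    ("news", 2, 5),
    ("tech", 3, 4),
    ("recipes", 6, 7),
]

def descendants(name: str) -> list[str]:
    # Find the node's interval, then select every node nested inside it.
    lft, rght = next((l, r) for n, l, r in TREE if n == name)
    return [n for n, l, r in TREE if lft < l and r < rght]
```

The trade-off is on write: inserting a node means renumbering lft/rght for a chunk of the tree, which is why adjacency lists plus recursive CTEs often win for write-heavy data.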
Implementing topologically ordered time crystals on quantum processors
ScholarlyArticle: "Long-lived topological time-crystalline order on a quantum processor" (2024) https://www.nature.com/articles/s41467-024-53077-9
"Robust continuous time crystal in an electron–nuclear spin system" (2024) https://www.nature.com/articles/s41567-023-02351-6 .. https://news.ycombinator.com/item?id=39291044
1990s United States Boom
Notice that it's the "1990s boom", not the "Clinton Boom".
[deleted]
Scientists discover laser light can cast a shadow
Exploits nonlinear optical interactions in a medium (ruby) to cause one beam to cast a shadow from another beam.
Right. Materials with nonlinear optical properties make optical logic possible. Here's an overview of that.[1]
Cross-gain modulation (XGM)
XGM has been investigated extensively to design optical logic gates with SOA. It is assumed that there are two input light beams for each SOA. One is the probe light and the other is the much stronger pump light. The probe light cannot pass through the SOA when the pump light saturates it. The probe light can go through the SOA when the pump light is absent.
[1] https://www.oejournal.org/article/doi/10.29026/oes.2022.2200...
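At the logic level, the saturation behavior described above makes the SOA act as an inverter on the pump input; a toy truth-table model (all physical details like gain dynamics and wavelengths are abstracted away):

```python
def xgm_not(pump: int) -> int:
    # The probe passes (1) only when the pump is absent and the SOA is
    # unsaturated; a strong pump (1) blocks it (0).
    return 0 if pump else 1

def xgm_nor(pump_a: int, pump_b: int) -> int:
    # Combining two pump beams into one SOA: either pump saturates the gain,
    # so the probe output is the NOR of the two pumps.
    return 0 if (pump_a or pump_b) else 1
```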
Additional evidence of photons interacting despite common belief, though:
"Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315
Photons affect mass by causing phonons, which affects scattering and re-emission; so thereby photons interact with photons through light-matter photon-phonon interactions.
From the article:
> Absorption of the green laser heats the cube, which changes the phonon population and lattice spacing and, in turn, the opacity.
/? are photons really massless: https://www.google.com/search?q=are+photons+really+massless
Y: because otherwise E=mc^2 is wrong
N: because it's never been proven that photons are massless
N: because photons are affected by black holes' massful gravitational attraction, and/or magnetic fields
N: because "solar wind" and "radiation pressure" cause displacement
If photons are slightly massful, then coherent light photons would draw together over long distances; thus they interact.
"Light and gravitational waves don't arrive simultaneously" https://news.ycombinator.com/item?id=38061551
"Physicists discover that gravity can create light" https://phys.org/news/2023-04-physicists-gravity.html https://news.ycombinator.com/item?id=35633291
Async Django in Production
When will it be common knowledge that introducing async function coloring (on the callee side, instead of just 'go' on the call side) was the biggest mistake of the decade?
gevent was created over a decade ago to solve this.
Python users only have themselves to blame for ignoring it.
From "Asyncio, twisted, tornado, gevent walk into a bar" (2023) https://news.ycombinator.com/item?id=37227567 :
> IIRC the history of the async things in TLA in order: Twisted (callbacks), Eventlet (for Second Life by Linden Labs), tornado, gevent; gunicorn, Python 3.5+ asyncio, [uvicorn,]
Async/await > History: https://en.wikipedia.org/wiki/Async/await
Ubitium is developing 'universal' processor combining CPU, GPU, DSP, and FPGA
That's a really long-winded way of saying "SoC with FPGA".
From "Universal AI RISC-V processor does it all — CPU, GPU, DSP, FPGA" (2024) https://www.eenewseurope.com/en/universal-ai-risc-v-processo... :
> For over half a century, general-purpose processors have been built on the Tomasulo algorithm, developed by IBM engineer Robert Tomasulo in 1967. It’s a $500B industry built on specialised CPU, GPU and other chips for different computing tasks. Hardware startup Ubitium has shattered this paradigm with a breakthrough universal RISC-V processor that handles all computing workloads on a single, efficient chip — unlocking simpler, smarter, and more cost-effective devices across industries — while revolutionizing a 57-year-old industry standard.
Tomasulo's algorithm: https://en.wikipedia.org/wiki/Tomasulo%27s_algorithm
Intel, AMD, ARM, X-Silicon’s C-GPU RISC architecture, and Cerebras' on-chip SRAM architecture are all Tomasulo-algorithm OOO (Out-of-Order) execution processor architectures, FWIU
The case for open source solutions in HIPAA
From the article: https://allthingsopen.org/articles/the-case-for-open-source-... :
> currently keeping client journal entries with Apple Pages
Those are "unstructured notes" in medical informatics or health informatics.
Health informatics: https://en.wikipedia.org/wiki/Health_informatics
For the medical dictation to transcription to unstructured notes workflow,
OpenAI Whisper is free but not open source, and it hallucinates; https://www.wired.com/story/hospitals-ai-transcription-tools...
Perhaps dictation to unstructured text to chart form fields is the clinical workflow that needs optimization.
Which chart form fields work with other physicians' tools too?
FHIR is an open spec: https://en.wikipedia.org/wiki/Fast_Healthcare_Interoperabili... :
> FHIR builds on previous data format standards from HL7, like HL7 version 2.x and HL7 version 3.x. But it is easier to implement because it uses a modern web-based suite of API technology, including a HTTP-based RESTful protocol, and a choice of JSON [JSON-LD], XML or RDF for data representation.[1] One of its goals is to facilitate interoperability between legacy health care systems, to make it easy to provide health care information to health care providers and individuals on a wide variety of devices from computers to tablets to cell phones, and to allow third-party application developers to provide medical applications which can be easily integrated into existing systems. [2]
> FHIR provides an alternative to document-centric approaches by directly exposing discrete data elements as services. For example, basic elements of healthcare like patients, admissions, diagnostic reports and medications can each be retrieved and manipulated via their own resource URLs.
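The resource-URL pattern in that quote is just [base]/[type]/[id]; a small sketch (the base URL is HAPI's public R4 test server from the list linked below, and the resource id is a placeholder):

```python
def fhir_read_url(base: str, resource_type: str, resource_id: str) -> str:
    # RESTful FHIR read: GET [base]/[type]/[id], requested with
    # Accept: application/fhir+json for the JSON representation.
    return f"{base}/{resource_type}/{resource_id}"

url = fhir_read_url("https://hapi.fhir.org/baseR4", "Patient", "example")
```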
FHIR spec: https://build.fhir.org/
Which EHRs / EMRs support FHIR?
FHIR / Open Source Implementations: https://confluence.hl7.org/display/FHIR/Open+Source+Implemen...
FHIR / Public Test Servers: https://confluence.hl7.org/display/FHIR/Public+Test+Servers
/? awesome clinical open source: https://www.google.com/search?q=awesome+clinical+open+source
> Perhaps dictation to unstructured text to chart form fields is the clinical workflow that needs optimization.
Medical annotation, Terminology Server, NER: Named Entity Recognition, WSD: Word-sense Disambiguation, AI Summarization, look up and pre-fill the correct forms and form fields in the chart (according to the unstructured notes), Medical Coding
Clinical coding or Medical coding: https://en.wikipedia.org/wiki/Clinical_coder
From https://news.ycombinator.com/item?id=38739890#38744177 re: Linked Data in heath informatics:
> Yeah, so it's at that workflow where they're not getting linked data edges out of their own unstructured data input.
Shadow of a Laser Beam
Discussion (53 points, 6 days ago, 20 comments) https://news.ycombinator.com/item?id=42145919
New theory reveals the shape of a single photon
"Exact Quantum Electrodynamics of Radiative Photonic Environments" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
Show HN: I made an ls alternative for my personal use
Did anyone here use Genera on an original lisp machine? It had a pseudo-graphical interface and a directory listing provided clickable results. It would be really neat if we could use escaping to confer more information to the terminal about what a particular piece of text means.
Feature-request: bring back clickable ls results!
Bonus points for defining a new term type and standard for this.
There's already `ls --hyperlink` for clickable results, but that depends on your terminal supporting the URL escape sequence.
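The URL escape sequence in question is "OSC 8"; a sketch of emitting it (supporting terminals hide the URL and render only the text, clickable):

```python
ESC = "\033"

def hyperlink(url: str, text: str) -> str:
    # OSC 8 terminal hyperlink: ESC ] 8 ; ; URL ST text ESC ] 8 ; ; ST,
    # where ST (string terminator) is ESC \.
    return f"{ESC}]8;;{url}{ESC}\\{text}{ESC}]8;;{ESC}\\"

# print(hyperlink("https://example.com", "example")) renders as "example"
# in terminals that support OSC 8, and as plain text elsewhere.
```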
NASA: Mystery of Life's Handedness Deepens
From "Amplification of electromagnetic fields by a rotating body" https://news.ycombinator.com/item?id=41873531 :
> ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
>> Could this be used as an engine of some kind?
> What about helical polarization?
If there is locomotion due to a dynamic between handed molecules and, say, helically polarized fields; is such handedness a survival selector for life in deep space?
Are chiral molecules more likely to land on earth?
> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
> Sugar molecules are asymmetrical / handed, per 3blue1brown and Steve Mould. /? https://www.google.com/search?q=Sugar+molecules+are+asymmetr...
> Is there a way to get the molecular propeller effect, and thereby molecular locomotion, with molecules that contain sugar and a rotating field or a rotating molecule within a field?
Though, a new and plausible terrestrial origin of life hypothesis:
Methane + Gamma radiation => Guanine && Earth thunderstorms => Gamma Radiation https://news.ycombinator.com/item?id=42131762#42157208 :
> A terrestrial life origin hypothesis: gamma radiation mutated methane (CH4) into Glycine (the G in ACGT) and then DNA and RNA.
Hybrid VQE photonic qudits for accurate QC without error-correction techniques
From OT TA NewsArticle: "Photon qubits challenge AI, enabling more accurate quantum computing without error-correction techniques" https://phys.org/news/2024-11-photon-qubits-ai-enabling-accu... :
> VQE is a hybrid algorithm designed to use a Quantum Processing Unit (QPU) and a Classical Processing Unit (CPU) together to perform faster computations.
> Global research teams, including IBM and Google, are investigating it in a variety of quantum systems, including superconducting and trapped-ion systems. However, qubit-based VQE is currently only implemented up to 2 qubits in photonic systems and 12 qubits in superconducting systems, and is challenged by error issues that make it difficult to scale when more qubits and complex computations are required. [...]
> In this study, a qudit was implemented by the orbital angular momentum state of a single-photon, and dimensional expansion was possible by adjusting the phase of a photon through holographic images. This allowed for high-dimensional calculations without complex quantum gates, reducing errors.
ScholarlyArticle: "Qudit-based variational quantum eigensolver using photonic orbital angular momentum states" (2024) https://www.science.org/doi/10.1126/sciadv.ado3472
> In this study, a qudit was implemented by the orbital angular momentum state of a single-photon, and dimensional expansion was possible by adjusting the phase of a photon through holographic images
From https://news.ycombinator.com/item?id=40281756 .. from "Physicists use a 350-year-old theorem to reveal new properties of light waves" (2023) https://phys.org/news/2023-08-physicists-year-old-theorem-re... :
> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.
> [..] Qian's team interpreted the intensity of a light as the equivalent of a physical object's mass, then mapped those measurements onto a coordinate system that could be interpreted using Huygens' mechanical theorem. "Essentially, we found a way to translate an optical system so we could visualize it as a mechanical system, then describe it using well-established physical equations," explained Qian.
> Once the team visualized a light wave as part of a mechanical system, new connections between the wave's properties immediately became apparent—including the fact that entanglement and polarization stood in a clear relationship with one another.
Shouldn't that make holography and photonic phase sensors and light field cameras possible with existing photographic intensity sensors?
"Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
Huygens-Fresnel principle: https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_princi...
Parallel axis theorem: https://en.wikipedia.org/wiki/Parallel_axis_theorem
VQE: Variational Quantum Eigensolver: https://en.wikipedia.org/wiki/Variational_quantum_eigensolve... :
> Given a guess or ansatz, the quantum processor calculates the expectation value of the system with respect to an observable, often the Hamiltonian, and a classical optimizer is used to improve the guess. The algorithm is based on the variational method of quantum mechanics. [...] It is an example of a noisy intermediate-scale quantum (NISQ) algorithm
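The guess/measure/optimize loop in that description can be sketched classically for a single qubit (a toy, not a real QPU run): Hamiltonian H = Z, ansatz |psi(theta)> = Ry(theta)|0>, with a crude grid search standing in for the classical optimizer.

```python
import math

def energy(theta: float) -> float:
    # <psi|Z|psi> for |psi> = (cos(theta/2), sin(theta/2)):
    # cos^2(theta/2) - sin^2(theta/2) = cos(theta).
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s

# "CPU" step: scan the ansatz parameter and keep the lowest energy seen.
thetas = [2 * math.pi * k / 1000 for k in range(1001)]
best = min(thetas, key=energy)
# energy(best) approaches -1, the ground-state energy of Z (at theta = pi)
```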
From https://news.ycombinator.com/item?id=42030319 :
> "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171
>> We fully resolve this problem, giving a polynomial time algorithm for learning H to precision ϵ from polynomially many copies of the Gibbs state at any constant β>0.
>> Our main technical contribution is a new flat polynomial approximation to the exponential function, and a translation between multi-variate scalar polynomials and nested commutators. This enables us to formulate Hamiltonian learning as a polynomial system. We then show that solving a low-degree sum-of-squares relaxation of this polynomial system suffices to accurately learn the Hamiltonian.
Relaxation (iterative method) https://en.wikipedia.org/wiki/Relaxation_(iterative_method)
Linux 6.13 will report the number of hung tasks since boot
What counts as a hung task? Blocking on unsatisfiable I/O for more than X seconds? Scheduler hasn’t gotten to it in X seconds?
If a server process is blocking on accept(), wouldn’t it count as hung until a remote client connects? or do only certain operations count?
torvalds/linux//kernel/hung_task.c :
static void check_hung_task(struct task_struct *t, unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
static void check_hung_uninterruptible_tasks(unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
Just to double check my understanding (because being wrong on the internet is perhaps the fastest way to get people to check your work):
Is this saying that regular tasks that haven't been scheduled for two minutes and tasks that are uninterruptible (truly so, not idle or also killable despite being marked as uninterruptible) that haven't been woken up for two minutes are counted?
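A toy model (in Python, not the kernel's C) of what check_hung_task() ends up counting: tasks in uninterruptible "D" sleep that haven't been switched in for longer than the timeout (default hung_task_timeout_secs = 120). Tasks in interruptible "S" sleep never count, so a server blocked in accept() is not reported as hung.

```python
def count_hung(tasks: list[dict], now: float, timeout: float = 120.0) -> int:
    # Simplification: the real code checks that the task's context-switch
    # counters are unchanged between scans, rather than comparing timestamps.
    return sum(
        1
        for t in tasks
        if t["state"] == "D" and now - t["last_switch"] >= timeout
    )
```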
Show HN: Llama 3.2 Interpretability with Sparse Autoencoders
I spent a lot of time and money on this rather big side project of mine that attempts to replicate the mechanistic interpretability research on proprietary LLMs that was quite popular this year and produced great research papers by Anthropic [1], OpenAI [2] and Deepmind [3].
I am quite proud of this project, and since I consider myself part of the target audience of Hacker News, I thought some of you might appreciate this open research replication as well. Happy to answer any questions or hear any feedback.
Cheers
[1] https://transformer-circuits.pub/2024/scaling-monosemanticit...
[2] https://arxiv.org/abs/2406.04093
[3] https://arxiv.org/abs/2408.05147
The relative performance in err/watts/time compared to deep learning for feature selection instead of principal component analysis and standard xgboost or tabular xt TODO for optimization given the indicating features.
XAI: Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
/? XAI , #XAI , Explain, EXPLAIN PLAN , error/energy/time
From "Interpretable graph neural networks for tabular data" (2023) https://news.ycombinator.com/item?id=37269881 :
> TabPFN: https://github.com/automl/TabPFN .. https://x.com/FrankRHutter/status/1583410845307977733 [2022]
"TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second" (2022) https://arxiv.org/abs/2308.08945
> FWIU TabPFN is Bayesian-calibrated/trained with better performance than xgboost for non-categorical data
From https://news.ycombinator.com/item?id=34619013 :
> /? awesome "explainable ai" https://www.google.com/search?q=awesome+%22explainable+ai%22
- (Many other great resources)
- https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master... :
> Post model-creation analysis, ML interpretation/explainability
> /? awesome "explainable ai" "XAI"
"A Survey of Privacy-Preserving Model Explanations: Awesome Privacy-Preserving Explainable AI" https://awesome-privex.github.io/
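As a concrete XAI baseline, permutation feature importance is about the simplest model-agnostic explanation method; a pure-Python sketch (illustrative only, not any particular library's API):

```python
import random

def permutation_importance(predict, X, y, col, metric, n_repeats=10, seed=0):
    """Model-agnostic XAI baseline: how much does the metric degrade when
    one feature column is shuffled, breaking its relationship to y?"""
    rng = random.Random(seed)
    base = metric(y, [predict(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        Xs = [list(row) for row in X]          # copy so X is untouched
        col_vals = [row[col] for row in Xs]
        rng.shuffle(col_vals)                  # break feature <-> target link
        for row, v in zip(Xs, col_vals):
            row[col] = v
        drops.append(base - metric(y, [predict(row) for row in Xs]))
    return sum(drops) / n_repeats
```

A feature the model ignores gets importance ~0; a feature the model relies on gets a large metric drop when shuffled.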
Show HN: We open-sourced our compost monitoring tech
I'm from a compost tech startup (Monty Compost Co.) focused on making composting more efficient for households and industrial facilities. But our tech isn’t just for composting— it’s a versatile system that can be repurposed for a wide range of applications. So, we’ve made it open source for anyone to experiment with!
One of the exciting things about our open-source compost monitoring tech is its flexibility. You can connect it to platforms like Raspberry Pi, Arduino, or other single-board computers to expand its capabilities or integrate it into your own projects.
Our system includes sensors for: * Gas composition * Temperature * Moisture levels * Air pressure
All data can be exported as CSV files for analysis. While it’s originally built for monitoring compost, the hardware and data capabilities are versatile and could be repurposed for other applications (IoT, environmental monitoring, etc.)
Hacker’s Guide to Monty Tech: https://github.com/gtls64/MontyHome-Hackers-Guide
If you’re into data, sensors, or creative tech hacks, we’d love for you to check it out and let us know what you build!
collectd is an open source monitoring system which can record to e.g. RRD flat files or SQLite and can forward collected metrics to SIEM-like monitoring and charting and anomaly detection apps like Grafana or InfluxDB.
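For forwarding such sensor readings to InfluxDB, a minimal sketch of the line-protocol format (the measurement, tag, and field names here are hypothetical; escaping of special characters and integer type suffixes are omitted):

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one sensor reading as an InfluxDB line-protocol record:
    measurement,tag_set field_set timestamp_ns"""
    tag_set = ",".join(f"{k}={v}" for k, v in tags.items())
    field_set = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_set} {field_set} {ts_ns}"
```

Each line is one point, so a CSV export can be streamed row-by-row into the InfluxDB write endpoint.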
Nagios has "state flapping detection" to prevent spurious notifications.
collectd-python-plugins includes Python scripts for monitoring humidity and temp with i2c sensors and Python: https://github.com/dbrgn/collectd-python-plugins
There are LoRaWAN soil moisture sensors, but they require batteries or an in-field charging method.
"Satellite images of plants' fluorescence can predict crop yields" (2024)
"Sensor-Free Soil Moisture Sensing Using LoRa Signals (2022)" https://dl.acm.org/doi/abs/10.1145/3534608 .. https://news.ycombinator.com/context?id=40234912
/? open source soil moisture sensor: https://www.google.com/search?q=open+source+soil+moisture+se...
Thanks so much for sharing these resources—this is fantastic!
If you’re into LoRaWAN, you might be interested to hear that we’re also developing an industrial composting monitor that incorporates LoRaWAN tech. Here’s the promo video link if you’d like to check it out: https://www.youtube.com/watch?v=TZFiiwLhZh8&feature=youtu.be
Can your sensor product feed data to open source software for hobbyist and professional agriculture?
I set up FarmOS in a container once; the PWA approach to the offline mobile app was cool but I guess I wasn't that committed to manual data collection or hobbyist gardening.
Are there open standards to support agricultural sensor data?
Where is the identifier on the sensor? How does the user scan the visually-confirmable sensor barcode or QR code or similar and associate that with a garden bed or a container?
How does it notify of low battery status; is there a voltage reading to predict the out of battery condition?
Is there a configurable polling interval?
How do I find a sensor unit with a dead battery; is there a low-power chirp, or do I need a metal detector or very directional wireless sensors and triangulation or trilateration?
Are there nooks and crannies in the casing?
How to replace the battery?
Can they be made out of compostable materials? E.g. carbon with existing nanofabrication capabilities
Beyond Single Walled Twisted Carbon Nanotube batteries (which are unfortunately still only in the lab) and, more practically, Sodium Ion: which battery chemistries can safely be discarded or recycled in the agricultural field?
LoRaWAN may be more economical than multiple directional long range WiFi antenna like can be found on YouTube. https://youtu.be/GWq6L94ImX8
Notes on LoRa and OpenWRT, which also supports rtl-sdr, BATMAN wifi mesh networking, and (dual) Mini PCIe 4G radios: https://news.ycombinator.com/item?id=22735933
Thank you so much for these questions! For clarity, I’ll copy and paste the question for each response:
Q: Can your sensor product feed data to open-source software for hobbyist and professional agriculture?
A: Yes, our sensors can definitely feed data into open-source platforms, making them a great fit for both hobbyist and professional agriculture setups. The guide offers a helpful starting point for integration. Additionally, data collected by the sensors can be exported as a CSV through our consumer app. This makes it easy to process the information or import it into FarmOS or other open-source tools you might be using.
Q: Where is the identifier on the sensor? How does the user scan the visually-confirmable sensor barcode or QR code or similar and associate that with a garden bed or a container?
A: Each sensor uses its unique MAC address as its name. To make setup even easier, each device features a visually confirmable barcode or QR code. Once connected, the sensor’s indicator lights confirm the connection status, so you’ll know right away when it’s properly associated.
Q: How does it notify of low battery status; is there a voltage reading to predict the out-of-battery condition? Is there a configurable polling interval?
A: For low battery notifications, the device features a red indicator light that activates when the battery is running low. Additionally, you can poll the device over Bluetooth to get the current battery level.
Q: How do I find a sensor unit with a dead battery; is there a low-power chirp, or do I need a metal detector or very directional wireless sensors and triangulation or trilateration?
A: A low battery is signalled by the sensor’s lights turning orange, while a dead battery is indicated by the absence of flashing blue lights. If the sensor still has some power, you can poll it via Bluetooth to check its battery level.
Q: Are there nooks and crannies in the casing?
A: Yes, if you’re thinking of something specific for this — please let me know! I’m happy to chat further :)
Q: How to replace the battery?
A: Currently, the battery in our sensors is not replaceable. However, when the device reaches the end of its life, we’re committed to sustainability. We plan to offer users a significant replacement discount and take back the module and responsibly recycle it into new Montys. Interestingly, the original Monty design included a removable battery pack. Through testing, we discovered that most connectors weren’t durable enough to withstand the tough composting environment, so we shifted to a sealed design to ensure long-term reliability.
Q: Can they be made out of compostable materials? E.g. carbon with existing nanofabrication capabilities
A: We’ve trialed biodegradable plastics in the past, but we found that they degraded in the field. Instead, we’ve opted for a 100% recyclable plastic material to ensure that it is able to withstand harsh compost conditions.
Hopefully, I’ve covered everything here—if you have any further questions, just let me know!
/? crop monitoring wikipedia: https://www.google.com/search?q=crop+monitoring+wikipedia ...
Precision agriculture: https://en.wikipedia.org/wiki/Precision_agriculture
Digital agriculture: https://en.wikipedia.org/wiki/Digital_agriculture
/? crop monitoring system site:github.com https://www.google.com/search?q=crop+monitoring+system+site%...
SIEM: https://en.wikipedia.org/wiki/Security_information_and_event...
Show HN: Retry a command with exponential backoff and jitter (+ Starlark exprs)
This looks cool - will give it a try (hah!) Curious on why you picked starlark instead of cel for the conditional scripting part?
All right, let me tell you the history of recur to explain this choice. :-)
I wrote the initial version in Python in 2023 and used simpleeval [1] for the condition expressions. The readme for simpleeval states:
> I've done the best I can with this library - but there's no warranty, no guarantee, nada. A lot of very clever people think the whole idea of trying to sandbox CPython is impossible. Read the code yourself, and use it at your own risk.
In early 2024, simpleeval had a vulnerability report that drove the point home [2]. I wanted to switch to a safer expression library in the rare event someone passed untrusted arguments to `--condition` and as a matter of craft. (I like simpleeval, but I think it should be used for expressions in trusted or semi-trusted environments.)
First, I evaluated cel-python [3]. I didn't adopt it over the binary dependency on PyYAML and concerns about maturity. I also wished to avoid drastically changing the condition language.
Next in line was python-starlark-go [4], which I had only used in a project that ultimately didn't need complex config. I had been interested in Starlark for a while. It was an attractive alternative to conventional software configuration. I saw an opportunity to really try it.
A switch to python-starlark-go would have made platform-independent zipapps I built with shiv [5] no longer an option. This was when I realized I might as well port recur to Go, use starlark-go natively, and get static binaries out of it. I could have gone with cel-go, but like I said, I was interested in Starlark and wanted to keep expressions similar to how they were with simpleeval.
[1] https://github.com/danthedeckie/simpleeval
[2] https://github.com/danthedeckie/simpleeval/issues/138
[3] https://github.com/cloud-custodian/cel-python
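For reference, the retry-with-capped-exponential-backoff-and-full-jitter pattern itself in a Python sketch (recur is Go with Starlark conditions; this is just the idea, not its implementation, and the parameter names are illustrative):

```python
import random
import time

def retry(op, max_tries=5, base=0.1, cap=60.0, sleep=time.sleep):
    """Retry op() with capped exponential backoff and full jitter:
    before attempt n+1, sleep uniformly in [0, min(cap, base * 2**n))."""
    for attempt in range(max_tries):
        try:
            return op()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of tries: surface the last failure
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

"Full jitter" (sleeping uniformly up to the backoff ceiling, rather than exactly at it) spreads out retry storms from many clients that failed at the same moment.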
There's not yet a Python implementation of Starlark (which, like Bazel, is a fork of Skylark FWIU)?
All that have tried to sandbox Python with Python have failed. E.g. RestrictedPython and RPython
I believe Skylark was renamed Starlark, even internally. Bazel is a build system that uses Starlark as a configuration language, not a fork of Starlark.
Bazel is an open source rewrite of Blaze (which introduced Skylark)
Less a rewrite, more a variant: a majority of the source is common to both.
Systemd does exponential retry but IDK about jitter?
A systemd unit file with RestartMaxDelaySec= for exponential back off:
[Unit]
Description="test exponential retry"
[Service]
ExecStart=-/bin/sh -c "date -Is"
Type=simple
Restart=on-failure
RestartSec=60ms
# example values:
RestartMaxDelaySec=10s
RestartSteps=5
# StartLimitBurst=2
# StartLimitIntervalSec=30
# OnFailure=
# TimeoutStartSec=
# TimeoutStartFailureMode=
# WatchdogSec=
# RestartPreventExitStatus=
# RestartForceExitStatus=
# OOMPolicy=
[Install]
WantedBy=default.target
man systemd.service https://www.freedesktop.org/software/systemd/man/latest/syst...
man systemd.unit https://www.freedesktop.org/software/systemd/man/latest/syst...
man systemd.resource-control https://www.freedesktop.org/software/systemd/man/latest/syst...
man systemd.service > "Table 1. Exit causes and the effect of the Restart= settings" https://www.freedesktop.org/software/systemd/man/latest/syst...
"RFE: Exponentially increasing RestartSec=" https://github.com/systemd/systemd/issues/6129#issuecomment-...
"core: add RestartSteps= and RestartSecMax= to dynamically increase interval between auto restarts" https://github.com/systemd/systemd/pull/26902
> RestartSteps= accepts a positive integer as the number of steps to take to increase the interval between auto-restarts from RestartSec= to RestartMaxDelaySec=
/? RestartMaxDelaySec https://www.google.com/search?q=RestartMaxDelaySec
Is this where to add jitter?
"core/service: make restart delay increase more smoothly" https://github.com/systemd/systemd/pull/28266/files
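A hedged sketch of how I read the RestartSteps=/RestartMaxDelaySec= progression after that PR (roughly geometric interpolation from RestartSec= up to the max; systemd's actual formula may differ, and this ignores any jitter):

```python
def restart_delay(n_restarts, restart_sec=0.06, max_delay_sec=10.0, steps=5):
    """Sketch (not systemd source): delay before the (n+1)th auto-restart,
    interpolating geometrically from RestartSec= to RestartMaxDelaySec=
    over RestartSteps= restarts, then capped at the max."""
    if n_restarts >= steps:
        return max_delay_sec
    ratio = max_delay_sec / restart_sec
    return restart_sec * ratio ** (n_restarts / steps)
```

So with RestartSec=60ms, RestartMaxDelaySec=10s, RestartSteps=5 the delays grow smoothly by a constant factor each restart instead of jumping straight to the cap.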
Show HN: Physically accurate black hole simulation using your iPhone camera
Hello! We are Dr. Roman Berens, Prof. Alex Lupsasca, and Trevor Gravely (PhD Candidate) and we are physicists working at Vanderbilt University. We are excited to share Black Hole Vision: https://apps.apple.com/us/app/black-hole-vision/id6737292448.
Black Hole Vision simulates the gravitational lensing effects of a black hole and applies these effects to the video feeds from an iPhone's cameras. The application implements the lensing equations derived from general relativity (see https://arxiv.org/abs/1910.12881 if you are interested in the details) to create a physically accurate effect.
The app can either put a black hole in front of the main camera to show your environment as lensed by a black hole, or it can be used in "selfie" mode with the black hole in front of the front-facing camera to show you a lensed version of yourself.
As far as I can tell, the black holes you're generating don't look especially correct in the preview: they should have a circular shadow like this https://i.imgur.com/zeShgrx.jpeg
What the black hole looks like depends on how you define your field of view. And if the black hole is spinning, then you don't expect a circular shadow at all. But in our app, if you pick the "Static black hole" (the non-rotating, Schwarzschild case) and select the "Full FOV" option, then you will see the circular shadow that you expect.
The preview shows that you have a static black hole selected
The shape of the shadow is also wrong for Kerr, though; this is what Kerr looks like:
"static" is "Schwarzschild without rotation"?
Do black holes have hair?
Where is the Hawking radiation in these models? Does it diffuse through the boundary and the outer system?
What about black hole jets?
What about vortices? With Gross-Pitaevskii and SQR Superfluid Quantum Relativity
https://westurner.github.io/hnlog/ Ctrl-F Fedi , Bernoulli, Gross-Pitaevskii:
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/ :
> FWIU: also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii,), and rejects antimatter.
Actual observations of black holes:
"This image shows the observed image of M87's black hole (left) the simulation obtained with a General Relativistic Magnetohydrodynamics model, blurred to the resolution of the Event Horizon Telescope [...]" https://www.reddit.com/r/space/comments/bd59mp/this_image_sh...
"Stars orbiting the black hole at the heart of the Milky Way" ESO. https://youtube.com/watch?v=TF8THY5spmo&
"Motion of stars around Sagittarius A*" Keck/UCLA. https://youtube.com/shorts/A2jcVusR54E
/? M87a time lapse
/? Sagittarius A time lapse
/? black hole vortex dynamics
"Cosmic Simulation Reveals How Black Holes Grow and Evolve" (2024) https://www.caltech.edu/about/news/cosmic-simulation-reveals...
"FORGE’d in FIRE: Resolving the End of Star Formation and Structure of AGN Accretion Disks from Cosmological Initial Conditions" (2024) https://astro.theoj.org/article/94757-forge-d-in-fire-resolv...
STARFORGE
GIZMO: http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html .. MPI+OpenMP .. Src: https://github.com/pfhopkins/gizmo-public :
> This is GIZMO: a flexible, multi-method multi-physics code. The code solves the fluid using Lagrangian mesh-free finite-volume Godunov methods (or SPH, or fixed-grid Eulerian methods), and self-gravity with fast hybrid PM-Tree methods and fully-adaptive resolution. Other physics include: magnetic fields (ideal and non-ideal), radiation-hydrodynamics, anisotropic conduction and viscosity, sub-grid turbulent diffusion, radiative cooling, cosmological integration, sink particles, dust-gas mixtures, cosmic rays, degenerate equations of state, galaxy/star/black hole formation and feedback, self-interacting and scalar-field dark matter, on-the-fly structure finding, and more.
Webvm: Virtual Machine for the Web
/? "webvm" jupyter: https://www.google.com/search?q=%22webvm%22+jupyter
- "Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"?" https://news.ycombinator.com/item?id=30167403#30168491 :
- [ ] "ENH: Terminal and Shell: BusyBox, bash/zsh, git; WebVM," https://github.com/jupyterlite/jupyterlite/issues/949
And then now WASM containers in the browser FWIU
Convolutional Differentiable Logic Gate Networks
The Solvay-Kitaev algorithm for quantum logical circuit construction in context to "Artificial Intelligence for Quantum Computing" (2024): https://news.ycombinator.com/item?id=42155909#42157508
From https://news.ycombinator.com/item?id=37379123 :
Quantum logic gate > Universal quantum gates: https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_q... :
> Some universal quantum gate sets include:
> - The rotation operators Rx(θ), Ry(θ), Rz(θ), the phase shift gate P(φ)[c] and CNOT are commonly used to form a universal quantum gate set.
> - The Clifford set {CNOT, H, S} + T gate. The Clifford set alone is not a universal quantum gate set, as it can be efficiently simulated classically according to the Gottesman–Knill theorem.
> - The Toffoli gate + Hadamard gate. ; [[CCNOT,CCX,TOFF], H]
> - [The Deutsch Gate]
"Convolutional Differentiable Logic Gate Networks" (2024) https://arxiv.org/abs/2411.04732 :
> With the increasing inference cost of machine learning models, there is a growing interest in models with fast and efficient inference. Recently, an approach for learning logic gate networks directly via a differentiable relaxation was proposed. Logic gate networks are faster than conventional neural network approaches because their inference only requires logic gate operators such as NAND, OR, and XOR, which are the underlying building blocks of current hardware and can be efficiently executed. We build on this idea, extending it by deep logic gate tree convolutions, logical OR pooling, and residual initializations. This allows scaling logic gate networks up by over one order of magnitude and utilizing the paradigm of convolution. On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
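A minimal sketch of the differentiable-relaxation idea (not the paper's exact parameterization): relax each two-input gate to a polynomial on [0,1] that agrees with its truth table at the corners, then make the gate choice itself a softmax mixture so it can be trained by gradient descent.

```python
import math

# Probabilistic relaxations of two-input gates on values in [0, 1];
# at the Boolean corners {0, 1} they match the gates' truth tables.
GATES = {
    "AND":  lambda a, b: a * b,
    "OR":   lambda a, b: a + b - a * b,
    "XOR":  lambda a, b: a + b - 2 * a * b,
    "NAND": lambda a, b: 1 - a * b,
}

def soft_gate(a, b, logits):
    """A 'learnable gate': softmax-weighted mixture over candidate gates.
    Training pushes the logits toward one-hot, after which the node can be
    hardened into a single discrete gate for fast inference."""
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    return sum((w / z) * g(a, b) for w, g in zip(exps, GATES.values()))
```

After training, each node is replaced by its argmax gate, which is how inference reduces to plain NAND/OR/XOR operations on hardware.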
Quick Start to Quantum Programming with CUDA-Q
NVIDIA/cuda-q-academic: https://github.com/NVIDIA/cuda-q-academic
NVIDIA/cuda-quantum; CUDA-Q: src: https://github.com/NVIDIA/cuda-quantum .. docs: https://nvidia.github.io/cuda-quantum/latest/
tequilahub/tequila issues > "CUDA-Q": https://github.com/tequilahub/tequila/issues/374
Matching patients to clinical trials with large language models
"Matching Patients to Clinical Trials with Large Language Models" (2024) https://arxiv.org/abs/2307.15051
Src: https://github.com/ncbi-nlp/TrialGPT
NewsArticle: "NIH-developed AI algorithm matches potential volunteers to clinical trials" (2024) https://www.nih.gov/news-events/news-releases/nih-developed-... :
> Such an algorithm may save clinicians time and accelerate clinical enrollment and research [...]
> A study published in Nature Communications found that the AI algorithm, called TrialGPT, could successfully identify relevant clinical trials for which a person is eligible and provide a summary that clearly explains how that person meets the criteria for study enrollment. The researchers concluded that this tool could help clinicians navigate the vast and ever-changing range of clinical trials available to their patients, which may lead to improved clinical trial enrollment and faster progress in medical research.
Voyager 1 breaks its silence with NASA via radio transmitter not used since 1981
I didn’t realize the Voyagers relied on a once in a 175 year planetary alignment. What a lucky break technology had advanced to the point we could make use of it.
I’m trying to figure out why the alignment was so important.
Wouldn’t it be possible to get a gravity assist in any alignment - albeit just taking a little longer to ping-pong across the system?
The scenario where a gravity assist doesn't work is if turning sharply enough would require your minimum planetcentric approach distance to be less than the radius of the planet - you'd crash into it instead.
This is why Voyager 2 couldn't also do Pluto - it would have needed to change course by roughly 90º at Neptune, which would have required going closer to the center of Neptune than Neptune's own radius.
The most unusual gravity-assist alignment that we did was for Pioneer 11 going from Jupiter to Saturn. The encounters were separated by roughly 120º of heliocentric longitude. Pioneer 11 used Jupiter to bend its path "up" out of the ecliptic plane and encountered Saturn on the way back "down". Nowadays we wouldn't bother doing that (we'd wait for a more direct launch window instead), but the purpose of this was to get preliminary Jupiter and Saturn encounters done in time before Voyager's launch window for the grand tour alignment.
Could Voyager have reduced velocity, glanced off Neptune, waited for the return path on a narrow elliptical orbit, and then boosted to effectively make the 90° turn to Pluto at that time? Was that impossible given its Neptune approach trajectory, or would glancing off and waiting have been more fuel?
And, why didn't this vortical model that includes the forward velocity of the sun make a difference for Voyager's orbital trajectory and current position relative to earth? https://news.ycombinator.com/item?id=42159195 :
> "The helical model - our solar system is a vortex" https://youtube.com/watch?v=0jHsq36_NTU
Breaking that down: Voyager itself couldn't have reduced velocity, it had nowhere near enough reaction mass to do that. Hypothetically a spacecraft could but you might be talking about orders of magnitude more reaction mass. (Which means multiples more of the fuel to launch and accelerate that mass itself, which could quickly escalate beyond any chemical rocket capabilities.)
It also likely wasn't possible to get to Pluto on some future Pluto orbital pass. The limiting factor is likely that Voyager's incoming trajectory to Neptune was already too far beyond solar escape velocity to get into that narrow elliptical orbit you propose. (You'd have to slingshot so close to Neptune's center that you'd hit the planet instead.)
Designing from the beginning to come in slower to Neptune and adjust to encounter Pluto on some future Pluto orbital pass was probably possible, but yeah you might be talking about time scales of Pluto's entire orbit or even multiples of that. (We do similar things for inner solar system missions, like several encounters with Venus separated by multiple Venus-years, but that's on the order of single-digit years and not hundreds.)
The common answer to a lot of these outlandish slingshot questions is usually, yes it's eventually possible by orbital mechanics, but it gets so complicated and lengthy that you may as well just build another separate spacecraft instead. We talk about Voyager's grand tour alignment because it's captivating, but realistically if that hadn't happened we would have just done separate Jupiter-Uranus and Jupiter-Neptune missions instead.
The sun's motion relative to the galaxy doesn't matter for any of this - nothing else in the galaxy is remotely close enough to affect anything, the nearest star is still over 1000x Voyager's distance.
Is it possible to focus on a reflection on the Voyager spacecraft; or why aren't communications ever affected by lack of line of sight?
So there was no way to flip around and counter-thrust due to the velocity by that point in Voyager's trajectory (without a gravitationally-assisted slowdown or waiting for planetary orbits to align the same or in a feasible way)
FWICS; /? spirograph ... "Hypotrochoid" ... Hypotrochoid orbit
Aren't there hypotrochoid orbits to accelerate and decelerate using planetary gravity; gravity assist
Gravity assist: https://en.wikipedia.org/wiki/Gravity_assist
- https://space.stackexchange.com/questions/10021/how-are-grav... :
> How can I intuitively understand gravity assists?:
> Aim closer to the planet for a lower pass for a greater change in direction (and velocity from an external frame of reference), farther from the planet for a smaller change; aim ahead of the planet for a slower resulting external velocity, behind for a higher velocity: gravity assist guide (image from this KSP tutorial)
- Vindication! KSP is what I probably would have used to answer questions like this; though KSP2 doesn't work in Proton-GE on Steam on Linux and they've since disbanded / adjourned the KSP2 team fwiu.
- JPL SPICE toolkit: https://naif.jpl.nasa.gov/naif/toolkit.html
- SpiceyPy; src: https://github.com/AndrewAnnex/SpiceyPy docs: https://spiceypy.readthedocs.io/en/stable/
- SpiceyPy docs > Lessons: https://spiceypy.readthedocs.io/en/stable/lessonindex.html :
- > various SPICE lessons provided by the NAIF translated to use python code examples
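The quoted intuition can be sketched with a planar patched-conic model: in the planet's frame the flyby only rotates the velocity (speed there is conserved), but transforming back to the Sun's frame changes the heliocentric speed. The numbers below are illustrative, not any real mission's.

```python
import math

def gravity_assist(v_in, v_planet, turn_deg):
    """Planar patched-conic slingshot sketch: rotate the planet-relative
    velocity by the turn angle (set by how close the pass is), keeping its
    magnitude, then transform back to the heliocentric frame."""
    rx, ry = v_in[0] - v_planet[0], v_in[1] - v_planet[1]
    speed = math.hypot(rx, ry)
    angle = math.atan2(ry, rx) + math.radians(turn_deg)
    return (v_planet[0] + speed * math.cos(angle),
            v_planet[1] + speed * math.sin(angle))
```

Passing behind the planet (rotating the relative velocity toward the planet's motion) raises heliocentric speed; passing ahead lowers it, which is the "aim ahead for slower, behind for faster" rule quoted above.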
TY for the explanation.
Hopefully cost effective solar sails are feasible.
FWIU SQR Superfluid Quantum Relativity doesn't matter for satellites at Earth-Sun Lagrangian points either; but, at Lagrangian points, don't they have to use thrust to rotate to account for the gravitational 'wind' due to additional local masses in the (probably vortical) n-body attractor system?
Bpftune uses BPF to auto-tune Linux systems
With this tool I am wary that I'll encounter system issues that are dramatically more difficult to diagnose and troubleshoot because I'll have drifted from a standard distro configuration. And in ways I'm unaware of. Is this a reasonable hesitation?
Yes, it is. IMO, except for learning (which should not be done in prod), you shouldn’t make changes that you don’t understand.
The tools seems to mostly tweak various networking settings. You could set up a test instance with monitoring, throw load at it, and change the parameters the tool modifies (one at a time!) to see how it reacts.
I'd run such a tool on prod in "advice mode". It should suggest the tweaks, explaining the reasoning behind them, and listing the actions necessary to implement them.
Then humans would decide if they want to implement that as is, partly, modified, or not at all.
Fair point, though I didn’t see any such option with this tool.
It's developed in the open; we can create Github issue.
In the existing issue, we can link to the code and docs that would need to be understood and changed:
usage, main() https://github.com/oracle/bpftune/blob/6a50f5ff619caeea6f04d...
- [ ] CLI opts: --pretend-allow <tuner> or --log-only-allow <tuner> or [...]
Probably relevant function headers in libbpftune.c:
bpftune_sysctl_write(
bpftuner_tunable_sysctl_write(
bpftune_module_load(
static void bpftuner_scenario_log(struct bpftuner *tuner, unsigned int tunable, ; https://github.com/oracle/bpftune/blob/6a50f5ff619caeea6f04d... https://github.com/oracle/bpftune/blob/6a50f5ff619caeea6f04d...
Here's that, though this is downvoted to -1? https://github.com/oracle/bpftune/issues/99#issuecomment-248...
Ideally this tool could passively monitor and recommend instead of changing settings in production, which could lead to loss of availability through feedback failure; -R / --rollback actively changes settings. Recommended changes could be queued or logged as e.g. JSON or JSON-LD messages.
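A hedged sketch of what one such advisory message could look like (bpftune has no such mode today, per the linked issue; the tuner, tunable, and values below are illustrative, not real bpftune output):

```python
import json

def advise(tuner, tunable, current, suggested, reason):
    """Build one advisory record a tuner could log instead of writing the
    sysctl itself; a human can then apply the suggested action or not."""
    return json.dumps({
        "mode": "advice",
        "tuner": tuner,
        "tunable": tunable,
        "current": current,
        "suggested": suggested,
        "reason": reason,
        "action": f"sysctl -w {tunable}={suggested}",
    })

record = advise("neigh_table", "net.ipv4.neigh.default.gc_thresh3",
                1024, 2048, "neighbour table nearing capacity")
```

Keeping the concrete `sysctl -w` action in the record means the advice is directly auditable and replayable by an operator or a change queue.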
Transistor for fuzzy logic hardware: promise for better edge computing
I've heard it said that we could do quantum computational operations with analog electronic components, except for the variance in component quality.
How do these fuzzy logic electronic components overcome the same challenges as analog electronic quantum computing?
The article describes performance on a visual CNN Convolutional Neural Network ML task.
Is quantum logic the only sufficient logic to describe systems with phase?
What is most production cost and operating cost efficient at this or similar ML tasks?
For reference, from https://news.ycombinator.com/item?id=41322088 re: "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
> The TPU is constructed with a systolic array architecture that allows parallel 2 bit integer multiply–accumulate operations. A five-layer convolutional neural network based on the TPU can perform MNIST image recognition with an accuracy of up to 88% for a power consumption of 295 µW. We use an optimized nanotube fabrication process [...] 1 TOPS/W/s
ScholarlyArticle: "A van der Waals interfacial junction transistor for reconfigurable fuzzy logic hardware" (2024) https://www.nature.com/articles/s41928-024-01256-3
Two galaxies aligned in a way where their gravity acts as a compound lens
If the lens curved light back toward us, could we see earth several million years ago?
Technically? But the image would be very very very small, so we'd need a detector bigger than the solar system (guesstimate) to see it. And that's just to detect it: I can't imagine what it would take to resolve the image. The tricks in this paper are a start.
To zoom into a reflection on a lens or a water droplet?
From "Hear the sounds of Earth's magnetic field from 41,000 years ago" (2024) https://news.ycombinator.com/item?id=42010159 :
> [ Redshift, Doppler effect, ]
> to recall Earth's magnetic field from 41,000 years ago with such a method would presumably require a reflection (41,000/2 = 20,500) light years away
To see Earth in a reflection, though
Age of the Earth: https://en.wikipedia.org/wiki/Age_of_Earth :
> 4.54 × 10^9 years ± 1%
"J1721+8842: The first Einstein zig-zag lens" (2024) https://arxiv.org/abs/2411.04177v1
What is the distance to the centroid of the (possibly vortical ?) lens effect from Earth in light years?
/? J1721+8842 distance from Earth in light years
- https://www.iflscience.com/first-known-double-gravitational-... :
> The first lens is relatively close to the source, with a distance estimated at 10.2 billion light-years. What happens is that the quasar’s light is magnified and multiplied by this massive galaxy. Two of the images are deflected in the opposite direction as they reach the second lens, another massive galaxy. The path of the light is a zig-zag between the quasar, the first lens, and then the second one, which is just 2.3 billion light-years away
So, given a simplistic model with no relative motion between earth and the presumed constant location lens:
Earth formation: 4.54b years ago
2.3b * 2 = 4.6b years ago
10.2b * 2 = 20.4b years ago
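The static-geometry arithmetic above as a sketch; note the 10.2 Gly lens would imply a 20.4 Gyr round trip, longer than the ~13.8 Gyr age of the universe, so the no-expansion, no-relative-motion model clearly breaks down at that distance:

```python
def round_trip_lookback_gyr(mirror_distance_gly):
    """Light leaving Earth, reflecting at a mirror d Gly away, and returning
    arrives 2*d Gyr after it left (static geometry: no expansion, no motion)."""
    return 2.0 * mirror_distance_gly

EARTH_AGE_GYR = 4.54  # Age of the Earth, per the Wikipedia figure above
```

So even the nearer lens (2.3 Gly) gives a 4.6 Gyr lookback, marginally predating Earth's formation under this simplistic model.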
Does it matter that our models of the solar system typically omit that the sun is traveling through the universe (with the planets swirling, now coplanarly, trailing behind), and would the relative motion of a black hole at the edge of our solar system change the paths between here and a distant reflector over time?
"The helical model - our solar system is a vortex" https://youtube.com/watch?v=0jHsq36_NTU
YC is wrong about LLMs for chip design
IDK about LLMs there either.
A non-LLM monte carlo AI approach: "Pushing the Limits of Machine Design: Automated CPU Design with AI" (2023) https://arxiv.org/abs/2306.12456 .. https://news.ycombinator.com/item?id=36565671
A useful target for whichever approach is most efficient at IP-feasible design:
From https://news.ycombinator.com/item?id=41322134 :
> "Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490
Artificial Intelligence for Quantum Computing
My master's thesis was on using machine learning techniques to synthesise quantum circuits. Since any operation on a QC can be represented as a unitary matrix, my research topic was: using ML, given a set of gates, how many of them, and in what arrangement, could generate or at least approximate this matrix. Another aspect was whether, given a unitary matrix, a neural network could predict the number of gates needed to simulate that matrix as a QC and thereby give us a measure of complexity. It was a lot of fun to test different algorithms, from genetic algorithms to neural network architectures. Back then NNs were a lot smaller and I trained them mostly on one GPU, and since the matrices get exponentially bigger with the number of qubits in the circuit, it was only possible for me to investigate small circuits with fewer than a dozen qubits, but it was still nice to see that in principle this worked quite well.
How did it do compared to the baseline of the Solovay-Kitaev algorithm (and the more advanced algorithms that experts have come up with since)?
How (if at all) can the ML approach come up with a circuit when the unitary is too big to explicitly represent it as all the terms in the matrix but the general form of it is known? Eg it is known what the quantum fourier transform on N qubits is defined to be, and consequently any particular element within its matrix is easy to calculate, but you don't need (and shouldn't try) to write it out as a 2^n x 2^n matrix to figure out its implementation.
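As a toy illustration of the search problem described (brute force rather than the ML approach itself, and stdlib only): exhaustively search {H, T} gate sequences for one approximating a target single-qubit unitary, with distance measured up to global phase.

```python
import itertools, cmath, math

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
T = [[1, 0], [0, cmath.exp(1j * math.pi / 4)]]
I = [[1, 0], [0, 1]]

def dist(a, b):
    """Distance up to global phase: 1 - |tr(a^dag b)|/2, zero iff a = e^{i phi} b."""
    t = sum(a[i][j].conjugate() * b[i][j] for i in range(2) for j in range(2))
    return 1 - abs(t) / 2

def best_sequence(target, max_len=4):
    """Exhaustively search {H, T} words (applied left to right) for the
    sequence whose product is closest to `target`."""
    best = (dist(I, target), ())
    gates = {"H": H, "T": T}
    for n in range(1, max_len + 1):
        for seq in itertools.product("HT", repeat=n):
            u = I
            for g in seq:
                u = matmul(gates[g], u)  # later gates multiply on the left
            d = dist(u, target)
            if d < best[0]:
                best = (d, seq)
    return best

d, seq = best_sequence(matmul(T, H))
print(seq)  # recovers the sequence H-then-T exactly
```

Solovay-Kitaev and its successors exist precisely because this brute force blows up exponentially in sequence length; the ML framing replaces the search with a learned predictor.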
Solovay-Kitaev theorem: https://en.wikipedia.org/wiki/Solovay%E2%80%93Kitaev_theorem
/? Solovay-Kitaev theorem Cirq QISkit: https://www.google.com/search?q=Solvay-Kiteav+theorem+cirq+q...
qiskit/transpiler/passes/synthesis/solovay_kitaev_synthesis.py: https://github.com/Qiskit/qiskit/blob/main/qiskit/transpiler...
qiskit/synthesis/discrete_basis/solovay_kitaev.py: https://github.com/Qiskit/qiskit/blob/stable/1.2/qiskit/synt...
SolovayKitaevDecomposition: https://docs.quantum.ibm.com/api/qiskit/qiskit.synthesis.Sol...
What are more current alternatives to the Solovay-Kitaev theorem for gate-based quantum computing?
The more current alternatives (IIRC) are usually specific to the particular hardware and the gates it's capable of, rather than a general solution.
Though to be fair this topic was tangential to my focus and I'm a bit rusty on it so I'd have to look around to answer that.
Gamma rays convert CH4 to complex organic molecules, may explain origin of life
ScholarlyArticle: "γ-Ray Driven Aqueous-Phase Methane Conversions into Complex Molecules up to Glycine" (2024) https://onlinelibrary.wiley.com/doi/10.1002/anie.202413296 :
> Abstract: γ-Ray is capable of driving an aqueous-phase CH4+H2O+O2 conversion selectively into CH3COOH via ⋅CH3 and ⋅OH radical intermediates, and driving an aqueous-phase CH3COOH+NH3 conversion into NH2CH2COOH via ⋅CH2COOH and ⋅NH2 radical intermediates. Such γ-ray driven aqueous-phase methane conversions probably play an important role in the formation network of complex organic molecules in the universe, and provides a strategy to convert abundant CH4 into value-added products at mild conditions.
Glycine: https://en.wikipedia.org/wiki/Glycine :
> Glycine (symbol Gly or G) is an amino acid that has a single hydrogen atom as its side chain. It is the simplest stable amino acid (carbamic acid is unstable). Glycine is one of the proteinogenic amino acids. It is encoded by all the codons starting with GG (GGU, GGC, GGA, GGG). [8]
"Gamma radiation is produced in large tropical thunderstorms" (2024) https://news.ycombinator.com/item?id=41726698 :
"Flickering gamma-ray flashes, the missing link between gamma glows and TGFs" (2024) https://www.nature.com/articles/s41586-024-07893-0
Virtual black hole: https://en.wikipedia.org/wiki/Virtual_black_hole ; "planck relics in the quantum foam"
Gamma ray: https://en.wikipedia.org/wiki/Gamma_ray
How are gamma rays causally linked to virtual black holes or planck relics in the quantum foam?
And, is there Hawking radiation from virtual black holes, at what points in their lifecycle?
Window coating reflects heat to cool buildings by 40 degrees
"Neutral-Colored Transparent Radiative Cooler by Tailoring Solar Absorption with Punctured Bragg Reflectors" (2024) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202410613
Memories are not only in the brain, human cell study finds
This brings up the question about whether there are hereditary information transmission methods other than DNA. There are so many things we ascribe to “instinct” that might be information transmitted from parent to offspring in some encoded format.
Like songs that newborn songbirds know, migration routes that animals know without being shown, that a mother dog should break the amniotic sac to release the puppies inside, what body shapes should be considered more desirable for a mate out of an infinite variety of shapes.
It seems implausible to me that all of these things can be encoded as chemical signalling; it seems to require much more complex encoding of information, pattern matching, templates, and/or memory.
You might be interested in epigenetic inheritance. We do know that some epigenetic marks are passed down, but it's still very much unknown how much heritable information is encoded in epigenetics.
Transgenerational epigenetic inheritance: https://en.wikipedia.org/wiki/Transgenerational_epigenetic_i...
Methods of intergenerational transfer: DNA, RNA, bacteria, fungi, verbal latencies, explicit training
From https://x.com/westurner/status/1213675095513878528 :
> Does the fundamental limit of the amount of classical information encodable in the human genome (even with epigenetics & simultaneous encoding) imply a vast capacity for learning survival-beneficial patterns in very little time, with very few biasing priors?
> [Fundamental 'gbit' requirement 1: “No Simultaneous Encoding”:] if a gbit is used to perfectly encode one classical bit, it cannot simultaneously encode any further information. Two close variants of this are Zeilinger’s Principle (10) and Information Causality (11).
> Is there a proved presumption that genes only code in sequential combinations? Still overestimating the size of the powerset of all [totally-ordered] nonlocal combinations? Still trying to understand counterfactuals in re: constructor theory
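For scale, the "fundamental limit" in question can be roughed out with a back-of-envelope calculation (the genome length is an approximation, and this ignores epigenetic and any non-sequential encoding):

```python
# Back-of-envelope: classical information capacity of the human genome.
base_pairs = 3.2e9       # approximate haploid human genome length
bits_per_base = 2        # 4 nucleotides -> log2(4) = 2 bits each
total_bits = base_pairs * bits_per_base
print(f"{total_bits / 8 / 1e6:.0f} MB")  # roughly 800 MB
```

A few hundred megabytes is tiny relative to the behavioral repertoires being attributed to inheritance, which is the tension the quoted question is pointing at.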
Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory
(quantum) Counterfactual reasoning: https://www.google.com/search?q=(quantum)+*Counterfactual*+r... :
> Counterfactual reasoning is the process of considering events that could have happened but didn't.
Counterfactual definiteness: https://en.wikipedia.org/wiki/Counterfactual_definiteness
Quantum discord; there are multiple types of quantum entropy; entanglement and non-entanglement entropy: https://en.wikipedia.org/wiki/Quantum_discord
N-ary entanglement
Collective unconscious > See also: https://en.wikipedia.org/wiki/Collective_unconscious
FWIU memories are stored in the cortex and also in the hippocampus; "Brain found to store three copies of every memory" (2024) https://news.ycombinator.com/item?id=41352124
... How do dogs know what not to eat in the wild?
> ... How do dogs know what not to eat in the wild?
They don’t.
Then how have any survived?
Observe the human response to dandelions; are they weeds or are they edible?
Do they have lobed leaves? What [neurons,] do mammals have to heuristically generalize according to visual and gustatory-olfactory features, and counterfactually which don't they have?
Or it's entirely learned, and then the coding for the substrate is still relevant
This topic is related to the work of Michael Levin’s lab, which I only recently found out about and have been digging into. They’ve released a bunch of papers, and Michael has given plenty of in-depth interviews available on YouTube. They’re looking at low-level structures like cells and asking “what can be learned/achieved by viewing these structures as being intelligent agents?” The problem of memory is tied intricately with intelligence, and examples of it at these low levels are found throughout their work.
The results of their experiments are surprising and intriguing: bringing cancer cells back into proper functioning, “anthrobots” self-assembling from throat tissue cells, malformed tadpoles becoming normal frogs, cells induced to make an eye by recruiting their neighbors…
An excerpt from the link below:
> Our main model system is morphogenesis: the ability of multicellular bodies to self-assemble, repair, and improvise novel solutions to anatomical goals. We ask questions about the mechanisms required to achieve robust, multiscale, adaptive order in vivo, and about the algorithms sufficient to reproduce this capacity in other substrates. One of our unique specialties is the study of developmental bioelectricity: ways in which all cells connect in somatic electrical networks that store, process, and act on information to control large-scale body structure. Our lab creates and employs tools to read and edit the bioelectric code that guides the proto-cognitive computations of the body, much as neuroscientists are learning to read and write the mental content of the brain.
> developmental bioelectricity
Probably already aware of methods like:
- Tissue Nanotransfection : https://en.wikipedia.org/wiki/Tissue_nanotransfection
- "Direct neuronal reprogramming by temporal identity factors" (2023) https://www.pnas.org/doi/10.1073/pnas.2122168120#abstract
FrontierMath: A benchmark for evaluating advanced mathematical reasoning in AI
ScholarlyArticle: "FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI" (2024) https://arxiv.org/abs/2411.04872 .. https://epochai.org/frontiermath/the-benchmark :
> [Not even 2%]
> Abstract: We introduce FrontierMath, a benchmark of hundreds of original, exceptionally challenging mathematics problems crafted and vetted by expert mathematicians. The questions cover most major branches of modern mathematics -- from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory. Solving a typical problem requires multiple hours of effort from a researcher in the relevant branch of mathematics, and for the upper end questions, multiple days. FrontierMath uses new, unpublished problems and automated verification to reliably evaluate models while minimizing risk of data contamination. Current state-of-the-art AI models solve under 2% of problems, revealing a vast gap between AI capabilities and the prowess of the mathematical community. As AI systems advance toward expert-level mathematical abilities, FrontierMath offers a rigorous testbed that quantifies their progress.
Additional AI math benchmarks:
- "TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/TIGER-AI-Lab/TheoremQA
Rethinking Code Refinement: Learning to Judge Code Efficiency
Re: costed opcodes in BPF, eWASM (like EVM) https://news.ycombinator.com/item?id=40475509 :
> Costed opcodes incentivize energy efficiency.
Aren't these all fitness scoring methods for what GA calls mutation, crossover, and selection?
And do the functions under test stably converge on the same output even if not exactly functionally isomorphic? When are less costly approximate solutions sufficient?
Big O, Dynamic tracing and instrumentation, costed opcodes, OPS/kWHr/$
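A minimal sketch of one such fitness score for GA-style selection over code candidates, using measured runtime as a stand-in for opcode cost or energy (the function names and workload are illustrative, not from any real benchmark):

```python
import timeit

def sum_loop(n):
    """Candidate A: O(n) loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def sum_formula(n):
    """Candidate B: O(1) closed form for 0 + 1 + ... + (n-1)."""
    return n * (n - 1) // 2

def fitness(fn, n=10_000, repeats=50):
    """Score a candidate by measured runtime; only functionally
    equivalent candidates are eligible (checked against the reference)."""
    assert fn(n) == sum_formula(n)
    return timeit.timeit(lambda: fn(n), number=repeats)

candidates = [sum_loop, sum_formula]
best = min(candidates, key=fitness)
print(best.__name__)  # the cheaper of two equivalent candidates
```

Swapping `timeit` for an opcode-cost table or a joules-per-call counter changes the metric but not the selection machinery.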
https://news.ycombinator.com/item?id=41333249 :
> codefuse-ai/Awesome-Code-LLM > Analysis of AI-Generated Code, Benchmarks: https://github.com/codefuse-ai/Awesome-Code-LLM#6-analysis-o...
Superconductors at Room Temperature? UIC's Hydrides Could Make It Possible
NewsArticle: "Superconductors at Room Temperature? UIC’s Groundbreaking Materials Could Make It Possible" (2024) https://scitechdaily.com/superconductors-at-room-temperature...
ScholarlyArticle: "Designing multicomponent hydrides with potential high T_c superconductivity" (2024) https://www.pnas.org/doi/10.1073/pnas.2413096121 :
> Abstract: [...] First-principles searches are impeded by the computational complexity of solving the Eliashberg equations for large, complex crystal structures. Here, we adopt a simplified approach using electronic indicators previously established to be correlated with superconductivity in hydrides. This is used to study complex hydride structures, which are predicted to exhibit promisingly high critical temperatures for superconductivity. In particular, we propose three classes of hydrides inspired by the Fm3m RH_3 structures that exhibit strong hydrogen network connectivity, as defined through the electron localization function. [...] These design principles and associated model structures provide flexibility to optimize both Tc and the structural stability of complex hydrides.
Confinement in the Transverse Field Ising Model on the Heavy Hex Lattice
"Confinement in the Transverse Field Ising model on the Heavy Hex lattice" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... :
> Abstract: Inspired by a recent quantum computing experiment [Y. Kim et al., Nature (London), 618, 500–5 (2023)], we study the emergence of confinement in the transverse field Ising model on a decorated hexagonal lattice. Using an infinite tensor network state optimized with belief propagation we show how a quench from a broken symmetry state leads to striking nonthermal behavior underpinned by persistent oscillations and saturation of the entanglement entropy. We explain this phenomenon by constructing a minimal model based on the confinement of elementary excitations. Our model is in excellent agreement with our numerical results. For quenches to larger values of the transverse field and/or from nonsymmetry broken states, our numerical results display the expected signatures of thermalization: a linear growth of entanglement entropy in time, propagation of correlations, and the saturation of observables to their thermal averages. These results provide a physical explanation for the unexpected classical simulability of the quantum dynamics.
"Evidence for the utility of quantum computing before fault tolerance" (2023) https://www.nature.com/articles/s41586-023-06096-3 :
> Abstract: [...] In the regime of strong entanglement, the quantum computer provides correct results for which leading classical approximations such as pure-state-based 1D (matrix product states, MPS) and 2D (isometric tensor network states, isoTNS) tensor network methods [2,3] break down. These experiments demonstrate a foundational tool for the realization of near-term quantum applications [4,5]
NewsArticle: "Computers Find Impossible Solution, Beating Quantum Tech at Own Game" (2024) https://www.sciencealert.com/computers-find-impossible-solut... :
> As successful as that test was, follow-up experiments have shown classical computers can do it too.
> According to the Flatiron Institute's Joseph Tindall and Dries Sels, this is possible because of a behavior called confinement, in which extremely stable states appear in the interconnected chaos of undecided particle properties, giving a classical computer something it can model.
Ask HN: Similar to the FoxDot live coding Pattern object?
FoxDot is an open source music live coding environment written in Python on SuperCollider.
FoxDot has a Pattern object for musical composition that has neat broadcasting rules for [musical] sequences. Are there similar or similarly-concise pattern objects in other languages or projects?
FoxDot pattern object docs: https://foxdot.org/docs/pattern-basics/
> FoxDot has a Pattern object for musical composition that has neat broadcasting rules for [musical] sequences. Are there similar or similarly-concise pattern objects in other languages or projects?
> FoxDot pattern object docs: https://foxdot.org/docs/pattern-basics/
FWIU the PitchGlitch fork of FoxDot is the most updated: https://gitlab.com/iShapeNoise/foxdot
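For comparison across languages, the broadcasting behavior in question can be sketched in a few lines of plain Python. This is an approximation, not FoxDot's actual implementation: it assumes the shorter operand cycles and the result takes the longer length, whereas FoxDot's real rule may differ (e.g. LCM-length results).

```python
class Pattern:
    """Minimal sketch of FoxDot-style Pattern arithmetic with cyclic
    broadcasting (assumption: shorter operand wraps around)."""

    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        other = other.data if isinstance(other, Pattern) else list(other)
        n = max(len(self.data), len(other))
        return Pattern(self.data[i % len(self.data)] + other[i % len(other)]
                       for i in range(n))

    def __repr__(self):
        return f"P{self.data}"

print(Pattern([0, 1, 2, 3]) + [10, 20])  # shorter list cycles: P[10, 21, 12, 23]
```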
Show HN: Draw.Audio – A musical sketchpad using the Web Audio API
Nice. This made me think of Conway's Game of Life so I made a little mashup of a tone matrix with GoL. https://matthewbilyeu.com/tone-of-life.html
From https://news.ycombinator.com/item?id=36927473 :
> Neat. Can it zoom out instead of wrapping around
Is it a 2d convolution?
> scipy.ndimage.convolve / scipy.fftpack.dct
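Yes: the neighbor count in a Game of Life step is exactly a 2D convolution with a 3x3 ones kernel (center excluded). A stdlib-only sketch of that view (scipy.ndimage.convolve with mode='wrap' would compute the same counts in one call):

```python
def life_step(grid):
    """One toroidal Game of Life step; neighbors(r, c) is the 2D
    convolution of the grid with a 3x3 ones kernel minus the center."""
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if neighbors(r, c) == 3 or (neighbors(r, c) == 2 and grid[r][c])
             else 0
             for c in range(cols)] for r in range(rows)]

# A "blinker" oscillates between horizontal and vertical with period 2.
blinker = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    blinker[2][c] = 1
after = life_step(blinker)
print([(r, c) for r in range(5) for c in range(5) if after[r][c]])
```

Mapping each live cell to a tone (as in a tone matrix) then makes every convolution step a new chord.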
This would be a cool instrument in BespokeSynth
Quantum Picturalism: Learning Quantum Theory in High School
"Quantum Picturalism: Learning Quantum Theory in High School - Lia Yeh" ZX-calculus https://youtube.com/watch?v=Je2I-Z74p7k&
"Quantum Picturalism: Learning Quantum Theory in High School" (2023) https://arxiv.org/abs/2312.03653 :
> Abstract: Quantum theory is often regarded as challenging to learn and teach, with advanced mathematical prerequisites ranging from complex numbers and probability theory to matrix multiplication, vector space algebra and symbolic manipulation within the Hilbert space formalism. It is traditionally considered an advanced undergraduate or graduate-level subject. In this work, we challenge the conventional view by proposing "Quantum Picturalism" as a new approach to teaching the fundamental concepts of quantum theory and computation. We establish the foundations and methodology for an ongoing educational experiment to investigate the question "From what age can students learn quantum theory if taught using a diagrammatic approach?". We anticipate that the primary benefit of leveraging such a diagrammatic approach, which is conceptually intuitive yet mathematically rigorous, will be eliminating some of the most daunting barriers to teaching and learning this subject while enabling young learners to reason proficiently about high-level problems. We posit that transitioning from symbolic presentations to pictorial ones will increase the appeal of STEM education, attracting more diverse audience.
Q12, QIS
ZX-calculus: https://en.wikipedia.org/wiki/ZX-calculus :
> Generators: The building blocks or generators of the ZX-calculus are graphical representations of specific states, unitary operators, linear isometries, and projections in the computational basis |0⟩, |1⟩ and the Hadamard-transformed basis
The subject matter is too advanced for all but a very tiny fraction of high school students (who should be tracked into customized education anyway). Adding pictures doesn't help.
Also they describe only one part of quantum theory. There is a ton of quantum chemistry and other things that are quantum, but don't deal with stuff like quantum teleportation, which frankly is fairly exotic for everybody but physics grad students and their advisors.
It could be an elective if it teaches its own prereqs, right?
Why not code instead?
OTOH a QIS learning sequence:
QuantumQ, an open source circuit puzzle game with no intro at all (because amoebas); Cirq (SymPy) and QISkit tutorials with Python syntax; tequila abstractions and expectation values; and The Qubit Game to respect the QC platform (though that's getting better all the time)
QuantumQ: https://github.com/ray-pH/quantumQ
Quantum logic gate: https://en.wikipedia.org/wiki/Quantum_logic_gate
Exercise: Implement a QuantumQ circuit puzzle level with Cirq or QISkit in a Jupyter notebook
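As a starting point for that exercise, a pure-Python statevector sketch of a Bell-state level (the kind of H-then-CNOT puzzle QuantumQ opens with); this hand-rolls the gate application rather than using Cirq's or QISkit's actual APIs:

```python
import math

def apply_h(state, qubit):
    """Apply a Hadamard to `qubit` of a little-endian statevector:
    |b> -> ((-1)^b |b> + |1-b>) / sqrt(2)."""
    out = [0j] * len(state)
    s = 1 / math.sqrt(2)
    for i, amp in enumerate(state):
        bit = (i >> qubit) & 1
        out[i] += amp * (-s if bit else s)
        out[i ^ (1 << qubit)] += amp * s
    return out

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (a basis-state permutation)."""
    out = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            j = i ^ (1 << target)
            if j > i:  # swap each pair exactly once
                out[i], out[j] = state[j], state[i]
    return out

state = [1, 0, 0, 0]              # |00>
state = apply_h(state, 0)          # superpose qubit 0
state = apply_cnot(state, 0, 1)    # entangle -> Bell state (|00>+|11>)/sqrt(2)
probs = [abs(a) ** 2 for a in state]
print([round(p, 3) for p in probs])  # weight only on |00> and |11>
```

Reproducing the same circuit and measurement histogram in Cirq or QISkit inside a notebook is then a direct translation.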
DB48X: High Performance Scientific Calculator, Reinvented
Could this run on a TI or a NumWorks calculator?
RPN: Reverse Polish Notation: https://en.wikipedia.org/wiki/Reverse_Polish_notation
RPL: Reverse Polish LISP: https://en.wikipedia.org/wiki/RPL_(programming_language)
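The stack discipline that RPN (and RPL on top of it) is built on fits in a few lines; a minimal sketch of an RPN evaluator for four binary operators:

```python
def eval_rpn(expr):
    """Evaluate a whitespace-separated RPN expression with a stack."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in expr.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()  # second pop is the left operand
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *"))  # (3 + 4) * 2 = 14.0
```

RPL extends this model with program objects and symbolic data on the same stack, which is what DB48X reimplements.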
In theory yes. In practice, the TI 83/84 are probably not beefy enough, and the NumWorks has a keyboard that makes the endeavour really complicated.
JupyterLite includes SymPy CAS, Pandas, NumPy, and SciPy in WASM (Pyodide) in a browser tab; but it's not yet easy to save notebooks to git or gdrive out of the JupyterLite build. See awesome-jupyter > Hosted notebooks: CoCalc, BinderHub, Google Colab, ml-tooling/best-of-jupyter.
It's also possible to install JupyterLab etc on Android with termux: `pkg install proot gitea python; pip install jupyterlab pandas` iirc
But that doesn't limit the user to a non-QWERTY A-Z keyboard for the College Board.
Jupyter notebooks can be (partially auto-)graded in containers with Otter-Grader or nbgrader; and there's nbgitpuller or jupyterlab-git for revision control or source control in (applied) mathematics.
The BPF instruction set architecture is now RFC 9669
So… eBPF is now BPF, and old BPF is now cBPF, or classic BPF.
And there's wasm-bpf: https://github.com/eunomia-bpf/wasm-bpf#how-it-works
But should (browser) WASM processes cross the kernel boundary for BPF performance?
FWIW EVM/eWASM opcodes have a cost in gas/particles.
Do you think that BPF opcodes should be costed, too? Why or why not?
Are you... asking yourself? What did you decide?
Costing instructions leads to efficiency metrics, which makes it possible to incentivize efficiency.
BPF instructions could also each have an abstract relative cost with or without real value.
BPF in WASM (unfortunately without the kernel performance advantages or possible side channels), or the now-defunct (FWIU) eWASM, might be an easier place to test the value of costed opcodes.
The [e]BPF verifier does not yet rewrite according to opcode costs of candidate programs.
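A minimal sketch of what a costed-opcode scheme would look like, in the spirit of EVM gas schedules; the opcode names and cost values here are purely illustrative, not real BPF verifier or EVM data:

```python
# Hypothetical per-opcode cost table (illustrative names and values).
OPCODE_COST = {"ldx": 2, "add": 1, "mul": 3, "jeq": 2, "exit": 1}

def program_cost(program):
    """Sum the abstract cost of a straight-line opcode sequence."""
    return sum(OPCODE_COST[op] for op in program)

# Two functionally equivalent candidates; prefer the cheaper one.
candidate_a = ["ldx", "add", "add", "exit"]
candidate_b = ["ldx", "mul", "exit"]
cheaper = min([candidate_a, candidate_b], key=program_cost)
print(program_cost(candidate_a), program_cost(candidate_b), cheaper)
```

A verifier or JIT that rewrote programs to minimize such a cost function would be doing exactly the efficiency-incentivizing selection described above.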
"A look inside the BPF verifier [lwn]" https://news.ycombinator.com/item?id=41135478
Physicists spot quantum tornadoes twirling in a ‘supersolid’
The actual paper is hidden behind a Nature paywall. There is a preprint on arXiv here: https://arxiv.org/abs/2403.18510
"Observation of vortices in a dipolar supersolid" (2024) https://arxiv.org/abs/2403.18510
- "Topological gauge theory of vortices in type-III superconductors" (2024) https://news.ycombinator.com/item?id=41803662
- "Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691 ( https://news.ycombinator.com/item?id=41442489 )
- Aren't there electron vortices in rhombohedral trilayer graphene? https://news.ycombinator.com/item?id=40919385
- And isn't edge current chirality vortical, too? https://news.ycombinator.com/item?id=41765192
> isn't edge current chirality vortical, too?
- /? valley vortex edge modes chiral: https://www.google.com/search?q=Valley+vortex+edge+modes+chi...
- "Active particle manipulation with valley vortex phononic crystal" (2024) https://www.sciencedirect.com/science/article/abs/pii/S03759... :
> The valley vortex state presents an emerging domain in condensed matter physics. As a new information carrier with orbital angular momentum, it demonstrates remarkable ability for reliable, non-contact particle manipulation. In this paper, an acoustic system is constructed to exhibit the acoustic valley state, and traps particles using acoustic radiation force generated by the acoustic valley. Considering temperature effect on the acoustic system, the band structures for temperature fluctuations are discussed. An active controlled method for manipulating particle movement is proposed. Numerical simulations confirm that the particles can be captured by acoustic valley state, and can be repositioned through alterations in temperature, while maintaining a constant excitation frequency.
2D neural geometry underpins hierarchical organization of seq in working memory
ScholarlyArticle: "Two-dimensional neural geometry underpins hierarchical organization of sequence in human working memory" (2024) https://www.nature.com/articles/s41562-024-02047-8
What about representational drift; are the observed structures of the learned sequences stable over space and time?
From "Discovery of Memory "Glue" Explains Lifelong Recall: KIBRA Protein, PKMzeta" (2024) https://news.ycombinator.com/item?id=41921715 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
>> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift
Genetic repair via CRISPR can inadvertently introduce other defects
ScholarlyArticle: "Gene editing of NCF1 loci is associated with homologous recombination and chromosomal rearrangements" (2024) https://www.nature.com/articles/s42003-024-06959-z
Does it matter which CRISPR or CRISPR-like method is applied; or is there in general a dynamic response to gene editing in DNA/RNA?
The article covers this and I think the title is a bit too general. It is a byproduct of how CRISPR works as it targets a specific sequence. In this case the sequence is also present in areas that were non-targeted. Essentially, the sequence was not unique so the process impacted other areas in unintended ways.
> Here we evaluated diverse corrective CRISPR-based editing approaches that target the ΔGT mutation in NCF1, with the goal of evaluating post-editing genomic outcomes. These approaches include Cas9 ribonucleoprotein (RNP) delivery with single-stranded oligodeoxynucleotide (ssODN) repair templates [11,12], dead Cas9 (dCas9) shielding of NCF113, multiplex single-strand nicks using Cas9 nickase (nCas9) [14,15], or staggered double-strand breaks (DSBs) using Cas12a16.
- "A NICER approach to genome editing with less mutations than CRISPR" (2023) [cas13d] https://news.ycombinator.com/item?id=37698196 :
"Prediction of on-target and off-target activity of CRISPR–Cas13d guide RNAs using deep learning" (2023) https://www.nature.com/articles/s41587-023-01830-8
Defibrillation devices save lives using 1k times less electricity
The article is about internal defibrillators. External ones are still the same as (good grief) 35 years ago (well, maybe down from 300J to 200J). The only change I've noticed is moving from a gel for the pads to a gel pad (which feels like a frog; chuck one in your partner's bed and let them find it!), which reduced the possibility of burning and odd smells in your ambulance. Fortunately my sense of smell wasn't great, and I often had a partner who smoked (and was allowed to, in the olden days) in the ambulance to dull it. You kids don't know how it was, having to actually manually read the trace instead of all this new-fangled automation that guides you through it.
"New defib placement increases chance of surviving heart attack by 264%" (2024) https://newatlas.com/medical/defibrillator-pads-anterior-pos... :
> Placing [AED,] defibrillator pads on the chest and back, rather than the usual method of putting two on the chest, increases the odds of surviving an out-of-hospital cardiac arrest by more than two-and-a-half times, according to a new study.
"Initial Defibrillator Pad Position and Outcomes for Shockable Out-of-Hospital Cardiac Arrest" (2024) https://jamanetwork.com/journals/jamanetworkopen/fullarticle...
I know article authors don't write their own headlines, but for all who read this: it's about out-of-hospital cardiac arrest, which can be caused by a heart attack, but is in no way the most likely presentation of a heart attack.
The AED should measure the rhythms before applying defibrillation.
An emergency AED operator doesn't need to make that distinction (doesn't need to differentially diagnose a HA as a CA), do they?
You just put the AED pads on the patient and push the button if they're having a heart attack.
(and stand clear such that you are not a conductor to the ground or between the pads)
Grounding isn't an issue, as AEDs are battery-powered once they are pulled off the wall.
But they do pump out a lot of juice. If you're touching the patient, it will HURT.
One can certainly shock oneself with a battery-powered car starter jump pack, particularly if one is a conductor to the ground or the circuit connects through the heart (which it sounds like anterior-posterior placement helps with).
Potential Energy charge in a battery wants to return to the ground just the same.
Oh, yeah, you can shock yourself very hard. But between two battery contacts, there is no ground. You can touch either one with no problem. It's when you touch both that you get the blast.
There's no return circuit even with your feet in salt water if you touch only one post of a battery.
I don't think that electron identity is relevant to whether there's e.g. arc discharge between + and - charges of sufficient strength?
Connecting just 1.5V AA battery contacts with steel wool causes fire. But doesn't just connecting the positive terminal of a battery to the ground result in current, regardless of the negative terminal of the battery?
(FWIU that's basically why we're advised to wear a grounding strap when operating on electronics with or without discharged capacitors)
Grounding straps prevent static charges from building up on you. A battery doesn’t really have a ground. The body of cars is hooked to the negative pole of the battery, so it’s called the “ground” of the car, but that’s for corrosion reasons.
Evaluating the world model implicit in a generative model
An LLM necessarily has to create some sort of internal "model" / representations pursuant to its "predict next word" training goal, given the depth and sophistication of context recognition needed to do well. This isn't an N-gram model restricted to just looking at surface word sequences.
However, the question should be what sort of internal "model" has it built? It seems fashionable to refer to this as a "world model", but IMO this isn't really appropriate, and certainly it's going to be quite different to the predictive representations that any animal that interacts with the world, and learns from those interactions, will have built.
The thing is that an LLM is an auto-regressive model - it is trying to predict continuations of training set samples solely based on word sequences, and is not privy to the world that is actually being described by those word sequences. It can't model the generative process of the humans who created those training set samples because that generative process has different inputs - sensory ones (in addition to auto-regressive ones).
The "world model" of a human, or any other animal, is built pursuant to predicting the environment, but not in a purely passive way (such as a multi-modal LLM predicting next frame in a video). The animal is primarily concerned with predicting the outcomes of it's interactions with the environment, driven by the evolutionary pressure to learn to act in way that maximizes survival and proliferation of its DNA. This is the nature of a real "world model" - it's modelling the world (as perceived thru sensory inputs) as a dynamical process reacting to the actions of the animal. This is very different to the passive "context patterns" learnt by an LLM that are merely predicting auto-regressive continuations (whether just words, or multi-modal video frames/etc).
> The "world model" of a human, or any other animal, is built pursuant to predicting the environment
What do you make of Immanuel Kant's claim that all thinking has as a basis the presumption of the "Categories"--fundamental concepts like quantity, quality and causality[1]. Do LLMs need to develop a deep understanding of these?
Embodied cognition implies that we understand our world in terms of embodied metaphor "categories".
LLMs don't reason, they emulate. RLHF could cause an LLM to discard text that doesn't look like reasoning according to the words in the response, but that's still not reasoning or inference.
"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285
Conceptual metaphor: https://en.wikipedia.org/wiki/Conceptual_metaphor
Embodied cognition: https://en.wikipedia.org/wiki/Embodied_cognition
Clean language: https://en.wikipedia.org/wiki/Clean_language
Given human embodied cognition as the basis for LLM training data, there are bound to be weird outputs about bodies from robot LLMs.
[wrong link] Parental leave at early stage startups
The link keeps defaulting to jqueryui.com. Here is the blog post: https://www.jasgarcha.com/blog/on-parental-leave-at-early-st...
/? list of companies by paternity policy; parental leave https://www.google.com/search?q=list+of+companies+by+paterni...
/? paternity leave: https://hn.algolia.com/?q=paternity+leave
Parental leave in the United States: https://en.wikipedia.org/wiki/Parental_leave_in_the_United_S... :
> The United States is the only industrialized nation that does not offer paid parental leave. [109] Of countries in the United Nations only the United States, Suriname, Papua New Guinea, and a few Pacific island countries do not provide paid time off for new parents either through the employer's benefits or government paid maternity and parental benefits.[110] A study of 168 countries found that 163 guarantee paid leave to women, and 45 guarantee paid paternity leave. [111]
- "How my Universal Child Care and Early Learning plan works" https://elizabethwarren.com/plans/universal-child-care
Using Ghidra and Python to reverse engineer Ecco the Dolphin
When the original Ecco came out on the Megadrive (Genesis), I spent all my hard-earned money to buy it. That game is obscenely hard. I got frustrated, so I sat down for the afternoon with a pen and paper and somehow managed to decode the password system. I teleported to the final level and completed it the next day.
Then I was wracked with guilt about spending all my money on a game I completed in two days.
Philosophically, I would argue that you did not complete the game.
You skipped several levels and saw only some percentage of the intended content, gameplay, story, etc. Games in general, and Ecco the Dolphin is no exception, are very much about the journey and not just the destination. You missed out on themes & experiences like isolation, making friends with those outside of your in-group, conservation, time travel, communing with dinosaurs and, of course, space travel.
So, you really shouldn't have felt so guilty.
There are plenty of people who would argue that getting to end credits means beating the game.
You can however say that skipping levels also skips the story, so they did not finish the story.
Ecco the Dolphin (video game) > Plot: https://en.wikipedia.org/wiki/Ecco_the_Dolphin_(video_game)
Robustly learning Hamiltonian dynamics of a superconducting quantum processor
"Robustly learning the Hamiltonian dynamics of a superconducting quantum processor" (2024) https://www.nature.com/articles/s41467-024-52629-3 :
> Abstract: Precise means of characterizing analog quantum simulators are key to developing quantum simulators capable of beyond-classical computations. Here, we precisely estimate the free Hamiltonian parameters of a superconducting-qubit analog quantum simulator from measured time-series data on up to 14 qubits. To achieve this, we develop a scalable Hamiltonian learning algorithm that is robust against state-preparation and measurement (SPAM) errors and yields tomographic information about those SPAM errors. The key subroutines are a novel super-resolution technique for frequency extraction from matrix time-series, tensorESPRIT, and constrained manifold optimization. Our learning results verify the Hamiltonian dynamics on a Sycamore processor up to sub-MHz accuracy, and allow us to construct a spatial implementation error map for a grid of 27 qubits. Our results constitute an accurate implementation of a dynamical quantum simulation that is precisely characterized using a new diagnostic toolkit for understanding, calibrating, and improving analog quantum processors.
Functional ultrasound through the skull
This is fun, and the modeling is cool for sure, but it's well known that ultrasound can be used with surgical precision in the human brain.
Focused ultrasound is already used for non-invasive neuromodulation. Raag Airan's lab at Stanford does this for example using ultrasound uncaging.
https://www.frontiersin.org/journals/neuroscience/articles/1...
https://www.sciencedirect.com/science/article/pii/S089662731...
Also see the work by Urvi Vyas, eg
https://pubmed.ncbi.nlm.nih.gov/27587047/
I don't mean to discount the cool imaging-related reconstruction of a point spread function, but rather to say that ultrasound attenuation through the skull and soft tissue has already been well characterized, so it's not a surprise that it can pass through.
OpenWater wiki > Neuromodulation: https://wiki.openwater.health/index.php/Neuromodulation
OpenwaterHealth/opw_neuromod_sw: https://github.com/OpenwaterHealth/opw_neuromod_sw :
> OpenWater's Transcranial Focused Ultrasound Platform. open-LIFU is an ultrasound platform designed to help researchers transmit focused ultrasound beams into subject’s brains, so that those researchers can learn more about how different types of ultrasound beams interact with the neurons in the brain. Unlike other focused ultrasound systems which are aimed only by their placement on the head, open-LIFU uses an array to precisely steer the ultrasound focus to the target location, while its wearable small size allows transmission through the forehead into a precise spot location in the brain even while the patient is moving.
FWIU NIRS is sufficient for most nontherapeutic diagnostics, though. (Non-optogenetically, infrared light stimulates neuronal growth, while blue and green light inhibits it.)
Instabilities in Black Holes Could Rewrite Spacetime Theories
NewsArticle: "Defying Einstein: Hidden Instabilities in Black Holes Could Rewrite Spacetime Theories" (2024) https://scitechdaily.com/defying-einstein-hidden-instabiliti...
ScholarlyArticle: "Mass Inflation without Cauchy Horizons" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
OTOH, there's a gravity to photon conversion which should likely affect mass conditions in black holes.
And also, again, Fedi's SQR Superfluid Quantum Relativity (.it), FWIU: it also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii), and rejects antimatter.
Intel Releases x86-SIMD-sort 6.0 for 10x faster AVX2/AVX-512 Sorting
intel/x86-simd-sort v6.0: https://github.com/intel/x86-simd-sort/releases/tag/v6.0 :
> Release 6.0 introduces several key enhancements to our software. The update brings new APIs for partial sorting of key-value pairs, along with comprehensive AVX2 support across all routines. Additionally, PyTorch now leverages AVX-512 and AVX2 key-value sort routines to boost the performance of `torch.sort` and `torch.argsort` functions by up-to 10x (see pytorch/pytorch#127936).
"Adds support for accelerated sorting with x86-simd-sort" https://github.com/pytorch/pytorch/pull/127936
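As an illustration of what a key-value (argsort-style) sort computes, here is a minimal pure-Python sketch of the semantics; it shares nothing with Intel's SIMD implementation beyond the contract:

```python
def argsort(keys):
    """Return the indices that would sort keys ascending."""
    return sorted(range(len(keys)), key=keys.__getitem__)

def kv_sort(keys, values):
    """Sort the values by their paired keys: a key-value sort."""
    order = argsort(keys)
    return [keys[i] for i in order], [values[i] for i in order]

keys = [30, 10, 20]
values = ["c", "a", "b"]
order = argsort(keys)  # [1, 2, 0]
sorted_keys, sorted_values = kv_sort(keys, values)
```

`torch.argsort` returns the same kind of index permutation; the speedup in the PR comes from vectorizing this with AVX2/AVX-512, not from a different algorithmic contract.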
Traceroute Isn't Real
traceroute is a useful tool for (amongst other things) determining where a system is physically. It doesn't always give you enough information.
Traceroute to news.ycombinator.com:
3 96.108.68.141 (po-200-xar01.maynard.ma.boston.comcast.net) 12.105ms 11.724ms 11.931ms
4 96.108.68.141 (po-200-xar01.maynard.ma.boston.comcast.net) 13.011ms 10.727ms 19.861ms
5 162.151.52.34 (be-501-ar01.needham.ma.boston.comcast.net) 13.988ms 14.721ms 12.921ms
6 162.151.52.34 (be-501-ar01.needham.ma.boston.comcast.net) 14.999ms 16.688ms 12.997ms
7 4.69.146.65 (ae0.11.bar1.SanDiego1.net.lumen.tech) 76.044ms 79.624ms 78.017ms
8 4.69.146.65 (ae0.11.bar1.SanDiego1.net.lumen.tech) 83.962ms 108.675ms 78.987ms
9 * * *
I can conclude that the server is probably on the west coast, maybe San Diego.
I recall using this during a sales pitch by some IP geolocation company that was very proud of its technology. The example they used, they claimed was in Morristown, NJ. A quick traceroute (from Massachusetts) revealed that the IP was somewhere in the UK, as the last hop was close to Heathrow Airport. We did not purchase their solution!
This fails in a lot of different ways, not the least of which is anycast. The most common way is probably dns-based geographic steering.
Can you tell me where 8.8.8.8 is?
8.8.8.8 is in many places. It's hard to get a list of exactly where[1], but the more places you can traceroute from, the more places you can add to your list. For 8.8.8.8 in particular, if you've got probes in major PoPs, you can probably get response times of 1 ms or less and be pretty sure that Google has a presence there. Of course, if you get a longer response time, they may have a presence there but your network may not have their local advertisement and your query routes elsewhere; or maybe the response from Google is being routed through another location. Traceroute might help debug a longer route to Google, but it'll be hard to debug the return.
For dns steering to unicast ips, you can get a reasonable idea of where the ips you can see are, although you'll need to make dns requests from different locations to see more of the available dns answers.
[1] Unless you're an insider, or maybe Google publishes a list somewhere. Their peeringdb listing of locations is probably a good start, though.
$ dig @8.8.8.8 +nsid news.ycombinator.com.
; NSID: 67 70 64 6e 73 2d 61 6d 73 ("gpdns-ams")
news.ycombinator.com. 1 IN A 209.216.230.207
Repeating the same query returned: ; NSID: 67 70 64 6e 73 2d 67 72 71 ("gpdns-grq")
A few other resolvers implement this as well, e.g. 1.1.1.1 and 9.9.9.9: 1.1.1.1: ; NSID: 35 32 31 6d 32 37 34 ("521m274")
9.9.9.9: ; NSID: 72 65 73 31 32 31 2e 61 6d 73 2e 72 72 64 6e 73 2e 70 63 68 2e 6e 65 74 ("res121.ams.rrdns.pch.net")
with, as you can see, various degrees of human-readable information on the actual replying resolver. (More DNS resolver introspection tricks can be found with DNSDiag: https://dnsdiag.org/)
DNSSEC:
delv @8.8.8.8 +dnssec news.ycombinator.com
delv @8.8.8.8 +dnssec AAAA news.ycombinator.com
If you run traceroute multiple times - for example throughout the day - and overlay the results, you will have a better idea of the routes that [ICMP or TCP or UDP] packets are taking between src and dest.
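A minimal sketch of that overlay idea, using hypothetical hop lists (the IPs below are illustrative only): count, per TTL, which routers answered across runs.

```python
from collections import Counter

# Hypothetical hop lists from three traceroute runs (one router per TTL).
runs = [
    ["10.0.0.1", "96.108.68.141", "4.69.146.65"],
    ["10.0.0.1", "96.108.68.141", "4.69.146.66"],
    ["10.0.0.1", "162.151.52.34", "4.69.146.65"],
]

# Overlay: for each TTL, count which routers answered across runs;
# multiple routers at one TTL suggests load balancing or a route change.
overlay = {}
for run in runs:
    for ttl, hop in enumerate(run, start=1):
        overlay.setdefault(ttl, Counter())[hop] += 1
```

Here `overlay[2]` would show two different routers answering at TTL 2, which is exactly the kind of path variation a single traceroute run hides.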
It is trivial to add or hide hops to traceroute by just changing the ttl on certain or all packets.
mtr does traceroute.
There's a 3d traceroute in scapy.
Scapy docs > TCP traceroute: https://scapy.readthedocs.io/en/latest/usage.html#tcp-tracer... :
> If you have VPython installed, you also can have a 3D representation of the traceroute.
Useful built-in macOS command-line utilities
upgrade_mac.sh: https://github.com/westurner/dotfiles/blob/develop/scripts/u... :
upgrade_macos() {
    softwareupdate --list                     # list available updates
    softwareupdate --download                 # download them
    softwareupdate --install --all --restart  # install all updates, then reboot
}
The first wooden satellite launched into space
/? What are the temperature extremes on the sun side of a satellite https://www.google.com/search?q=What+are+the+temperature+ext... ... 250° F
"Low Temperature Ignition of Wood" https://www.warrenforensics.com/wp-content/uploads/2016/06/P... :
> ignition of wood will occur under exposure temperatures of as little as 256ºF for periods of 12 to 16 hours per day in as little as 623 days or approximately 21 months.
It's also possible to encounter focused solar radiation due to various lensing effects.
Starlite: https://en.wikipedia.org/wiki/Starlite :
> Starlite was also claimed to have been able to withstand a laser beam that could produce a temperature of 10,000 °C
And, hemp fiber has far greater tensile strength than wood; which probably matters given the hazard of space debris.
Wood does not ignite in space because there's no air in space.
There's non-oxidative types of decomposition, if you can reach high enough temperatures:
Discussion (44 points, 42 comments) https://news.ycombinator.com/item?id=42051687
/? wood satellite https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Ask HN: What happened to the No Paid Prioritization net neutrality rule?
Net neutrality law: https://en.wikipedia.org/wiki/Net_neutrality_law
Net neutrality in the United States > Regulatory History: https://en.wikipedia.org/wiki/Net_neutrality_in_the_United_S...
How does "No Paid Prioritization" compare to "Equal Time", and which groups have opposed and supported which?
What is "equal time"? I've never heard of an "equal time" net neutrality rule.
The only FCC "Equal Time" rule I'm familiar with is with respect to political party representation during election coverage.
FWIU the Equal-time rule is distinct from the cancelled "Fairness doctrine", and the Equal Time rule applies only to broadcast radio and tv.
The "No Paid Prioritization" rule applies to all Title II Common Carrier Internet Service Providers but not to Information Service Providers.
The 2015 FCC net neutrality rules for ISPs were cancelled by a former Verizon lobbyist until April 2024, when the FCC reinstated the net neutrality rules: "No Blocking", "No Throttling", and "No Paid Prioritization".
ISPs are again considered Title II Common Carriers.
But now the court has stayed the rules due to corporate lobbyists.
So, the "No Paid Prioritization" rule does not apply to the Internet ATM because corporate lobbyists.
Equal-time rule: https://en.wikipedia.org/wiki/Equal-time_rule
(Comcast in Philadelphia acquired NBC from GE in 2010; and a bunch of shows died, but not SNL. Comcast is an ISP and also owns NBC and thus SNL. Tom Wheeler, who got the 2015 Open Internet Rules passed, was an attorney for Comcast prior to the FCC post.
Pai was a Verizon lobbyist prior to the FCC post. Pai worked to cancel the "No blocking", "No throttling", and "No Paid Prioritization" rules that protected the Internet.)
There is no equal time if there is paid prioritization; to do paid prioritization the ISP prioritizes packets according to payola (which is already illegal in the music industry, for example).
From https://en.wikipedia.org/wiki/Payola :
> Payola, in the music industry, is the name given to the illegal practice of paying a commercial radio station to play a song without the station disclosing the payment
Are there disclosure rules for Paid Prioritization by ISPs? Are my school videos slower because adult sites just pay the man?
Equal time has nothing to do with net neutrality…
Do you know what a priority queue is compared to FIFO or LIFO? (Edit: Network scheduler: https://en.wikipedia.org/wiki/Network_scheduler )
How would you implement each with code?
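For illustration, a minimal sketch of the three scheduling disciplines with Python's stdlib; the packet names and priorities are made up, and a real network scheduler is of course far more involved:

```python
import heapq
from collections import deque

# FIFO: packets leave in arrival order.
fifo = deque()
for pkt in ["a", "b", "c"]:
    fifo.append(pkt)
fifo_order = [fifo.popleft() for _ in range(len(fifo))]  # ['a', 'b', 'c']

# LIFO: the most recent arrival leaves first.
lifo = []
for pkt in ["a", "b", "c"]:
    lifo.append(pkt)
lifo_order = [lifo.pop() for _ in range(len(lifo))]  # ['c', 'b', 'a']

# Priority queue: the lowest priority number leaves first, regardless of
# arrival order; paid traffic would simply be assigned a lower number.
pq = []
for priority, pkt in [(2, "a"), (0, "b"), (1, "c")]:
    heapq.heappush(pq, (priority, pkt))
pq_order = [heapq.heappop(pq)[1] for _ in range(len(pq))]  # ['b', 'c', 'a']
```

The point of the contrast: under FIFO every packet waits its turn, while under a priority queue packet "b" jumps the line purely because of its assigned priority.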
I think if you support the values of Equal Time, you should also support No Paid Prioritization.
If there is paid prioritization, there is not equal time.
Equal time is about campaigns having equal amount of airtime on broadcast television/radio. It says nothing about the internet or network packet prioritization.
Revealing the superconducting limit of twisted bilayer graphene
"Low-Energy Optical Sum Rule in Moiré Graphene" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... https://arxiv.org/abs/2312.03819
But what about rhombohedral trilayer graphene?
"Revealing the complex phases of rhombohedral trilayer graphene" (2024) https://www.nature.com/articles/s41567-024-02561-6 .. https://news.ycombinator.com/item?id=40919269
SharpNEAT – evolving NN topologies and weights with a genetic algorithm
/? parameter-free network: https://scholar.google.com/scholar?q=Parameter-free%20networ...
"Ask HN: Parameter-free neural network models: Limits, Challenges, Opportunities?" (2024) https://news.ycombinator.com/item?id=41794249 :
> Why should experts bias NN architectural parameters if there are "Parameter-free" neural network graphical models?
Do these GAs for (hyper)parameter estimation converge given different random seeds?
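That seed-sensitivity question can be probed empirically. A toy sketch (a made-up 1-D fitness function, not SharpNEAT's actual algorithm): run the same GA from several seeds and check whether the best solutions agree.

```python
import random

def toy_ga(seed, generations=60, pop_size=20):
    """Toy GA minimizing f(x) = (x - 3)^2; returns the best x found."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the best half by fitness.
        pop.sort(key=lambda x: (x - 3) ** 2)
        survivors = pop[: pop_size // 2]
        # Refill the population with Gaussian-mutated copies of survivors.
        pop = survivors + [x + rng.gauss(0, 0.5) for x in survivors]
    return min(pop, key=lambda x: (x - 3) ** 2)

# Different seeds should all converge near the same optimum (x = 3).
results = [toy_ga(seed) for seed in range(5)]
```

On a unimodal toy problem the runs agree; on deceptive or multimodal fitness landscapes (the interesting NEAT cases), the spread across seeds is exactly the convergence diagnostic being asked about.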
A React Renderer for Gnome JavaScript
From https://github.com/react-gjs/renderer#elements-of-gtk3 :
> List of all GTK3 Widgets provided as JSX Components by this renderer:
I couldn't find a free, no-login, no-AI checklist app–so I built one
TodoMVC/examples has 40+ open source todo apps written with various popular and obscure frameworks: https://github.com/tastejs/todomvc/tree/master/examples
Eighty Years of the Finite Element Method (2022)
From "Chaos researchers can now predict perilous points of no return" (2022) https://news.ycombinator.com/item?id=32862414 :
> FEM: Finite Element Method: https://en.wikipedia.org/wiki/Finite_element_method
>> FEM: Finite Element Method (for ~solving coupled PDEs (Partial Differential Equations))
>> FEA: Finite Element Analysis (applied FEM)
> awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea
And also, "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171
Direct Sockets API in Chrome 131
From "Chrome 130: Direct Sockets API" (2024-09) https://news.ycombinator.com/item?id=41418718 :
> I can understand FF's position on Direct Sockets [...] Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.
> Typically today, a user must agree to install a package that uses L3 sockets before they're using sockets other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.
But HTTP Signed Exchanges is cancelled; so does a single compromised ad network mean arbitrary code with sockets?
...
> Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431
> Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy
> docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...
Using Large Language Models to Catch Vulnerabilities
awesome-code-llm > Vulnerability Detection, Program Proof,: https://github.com/codefuse-ai/Awesome-Code-LLM#vulnerabilit...
Half of Young Voters Say They've Lied about Which Candidates They Support
In the presented statistics, it's not so much about party as it is about age. Which tracks with my experience that young people are much more conflict averse.
Also, interesting that men are twice as likely to lie as women. Which goes against accepted logic that women would be more likely to conform socially.
> Which goes against accepted logic that women would be more likely to conform socially.
Citation needed. Where is that coming from?
Heels, yucky lipstick
Conformity > Specific predictors: https://en.wikipedia.org/wiki/Conformity#Specific_predictors
Using SQLite as storage for web server static content
I did some experiments around this idea a few years ago, partly inspired by the "35% Faster Than The Filesystem" article: https://www.sqlite.org/fasterthanfs.html
Here are the notes I made at the time: https://simonwillison.net/2020/Jul/30/fun-binary-data-and-sq...
I built https://datasette.io/plugins/datasette-media as a plugin for serving static files from SQLite via Datasette and it works fine, but honestly I've not used it much since I built it.
A related concept is using SQLite to serve map tiles - this plugin https://datasette.io/plugins/datasette-tiles does that, using the MBTiles format which it turns out is a SQLite database full of PNGs.
If you want to experiment with SQLite to serve files you may find my "sqlite-utils insert-files" CLI tool useful for bootstrapping the database: https://sqlite-utils.datasette.io/en/stable/cli.html#inserti...
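The core of the approach is small enough to sketch with the stdlib `sqlite3` module. The schema below is illustrative only, not what datasette-media or sqlite-utils actually use:

```python
import sqlite3

# Minimal static-content store: path -> (content_type, body).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE files (path TEXT PRIMARY KEY, content_type TEXT, body BLOB)"
)
db.execute(
    "INSERT INTO files VALUES (?, ?, ?)",
    ("/index.html", "text/html", b"<h1>hello</h1>"),
)

# A web handler would look up the row and serve body with content_type.
row = db.execute(
    "SELECT content_type, body FROM files WHERE path = ?", ("/index.html",)
).fetchone()
content_type, body = row
```

One nice property versus a directory of files: a single-file database gives you atomic replacement of the whole content set, at the cost of needing a handler in front of it.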
Is there a way to sendfile from a SQLite database like xsendfile?
requests-cache caches requests in SQLite by (date,URI) IIRC. https://github.com/requests-cache/requests-cache/blob/main/r...
/? pyfilesystem SQLite: https://www.google.com/search?q=pyfilesystem+sqlite
/? sendfile mmap SQLite: https://www.google.com/search?q=sendfile+mmap+sqlite
https://github.com/adamobeng/wddbfs :
> webdavfs provider which can read the contents of sqlite databases
There's probably a good way to implement a filesystem with Unix file permissions and xattrs extended file attribute permissions atop SQLite?
Would SQLite be faster or more convenient than e.g. ngx_http_memcached_module.c ? Does SQLite have per-cell ACLs either?
Not all you asked but Sqlite archive can be mounted using fuse, https://sqlite.org/sqlar/doc/trunk/README.md
FUSE does local filesystems, and there are also HTTP remote VFS for sqlite.
sqlite-wasm-http is based on phiresky/sql.js-httpvfs, "a read-only HTTP-Range-request based virtual file system for SQLite. It allows hosting an SQLite database on a static file hoster and querying that database from the browser without fully downloading it." "but uses the new official SQLite WASM distribution." [1][2]
[2] sql.js-httpvfs: https://github.com/phiresky/sql.js-httpvfs
[1] sqlite-wasm-http: https://www.npmjs.com/package/sqlite-wasm-http
[3] sqlite3vfshttp: https://github.com/psanford/sqlite3vfshttp
HTTP Range requests: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requ...
Hear the sounds of Earth's magnetic field from 41,000 years ago
If we look into a reflection in space 0.5 light years away from Earth, we should see Earth 1 year ago (given zero motion relative to Earth: 0.5 + 0.5 = 1 year).
Where there is relative motion, isn't there redshift and the Doppler effect, with photons at least? (Edit: and, over large distances, a different parametric or chaotic convolution without e.g. helical polarization to shield the signal.)
Magnetic fields reportedly have a maximum propagation speed that is also the speed of light c in a vacuum.
So, to recall Earth's magnetic field from 41,000 years ago with such a method would presumably require a reflection 20,500 (= 41,000/2) light years away.
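The round-trip arithmetic, as a one-line check:

```python
# Light-echo geometry: light reflected back from a mirror at distance d
# (in light years) shows Earth as it was 2*d years ago (d out + d back).
def reflector_distance_ly(lookback_years):
    return lookback_years / 2

one_year_lookback = reflector_distance_ly(1)             # 0.5 ly
magnetic_field_lookback = reflector_distance_ly(41_000)  # 20,500 ly
```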
Python PGP proposal poses packaging puzzles
This leaves out the important context that key verification for these packages isn't functional.
In the last 3 years, about 50k signatures had been uploaded to PyPI by 1069 unique keys. Of those 1069 unique keys, about 30% of them were not discoverable on major public keyservers, making it difficult or impossible to meaningfully verify those signatures. Of the remaining 71%, nearly half of them were unable to be meaningfully verified at the time of the audit (2023-05-19) 2.
More recently, on this thread:
This is related to the distribution of CPython itself, the key verification for those artifacts does work and has worked forever. The packaging referred to by the article is about packaging Python itself by upstream distributions.
Python packages developed by third-party developers and uploaded to PyPI are indeed not verifiable due to the key issues you mentioned, which is a minor note in the article.
W3C DIDs are verifiable e.g with blockchain-certificates/cert-verifier-js and blockchain-certificates/cert-verifier (Python).
If PyPI is not a keyserver, if it only hosts the attestations and checks checksums, can it fully solve for [Python] software supply chain security?
A table comparing the various known solutions might be good; including md5, sha3, GPG .ASC signatures, TUF, Uptane, Sigstore (Cosign + Rekor), PyPI w/w/o attestations, VC Verifiable Credentials, and Blockcerts (Verifiable Credentials (DIDs))
Where Did the Term '86' Come From?
86 (term) since the 1930s: https://en.wikipedia.org/wiki/86_(term)
Intel 8086 (1978) ... X86 (and AMD64/X86-64)
Motorola 680x0 / m68k (6800: 1974, 68000: 1979) https://en.wikipedia.org/wiki/Motorola_6800 https://en.wikipedia.org/wiki/Motorola_68000 .. "Motorola's pioneering 8-bit 6800: Origins and architecture" https://news.ycombinator.com/item?id=38616591
X86: https://en.wikipedia.org/wiki/X86
FWIW, "Smokey and the Bandit" was released in 1977, and it is about Prohibition in the US (which failed, caused violent organized crime, and was repealed by Constitutional amendment in part due to Al Smith), shortly after the ending of the Bretton Woods gold-reserve system (1971) and then the CSA (1971).
How the human brain contends with the strangeness of zero
There was nothing strange about zero for our distant ancestors.
There are only two things that have been understood much later about zero, that it is a quantity that in many circumstances should be treated in the same way as any other numbers and that it requires a special symbol in order to implement a positional system for writing numbers.
The concept of zero itself had already been known many thousands of years before the invention of the positional system for writing numbers.
All the recorded ancient languages were able to answer the question "How many sheep do you see there?" not only with words meaning "one", "two", "three", "four", "few", "many" and so on, but also with a word meaning "none". Similarly for the answers to a question like "How much barley do you have?".
Nevertheless, the concept of the quantity "zero" must have been understood later than the concepts of "one", "two", "many" and of negation, because in most languages the words similar in meaning to "none", "nobody", "nothing", "null" are derived from negation words together with words meaning "one" or denoting small or indefinite quantities.
Because the first few numbers, especially 0, 1 and 2 have many distinctive properties in comparison with bigger numbers, not only 0 was initially perceived as being in a class somewhat separate from other numbers, but usually also 1 and 2 and sometimes also 3 or even 4.
In many old languages the grammatical behavior of the first few numbers can be different between themselves and also quite different from that of the bigger numbers, which behave more or less in the same way, consistent with the expectation that the big numbers have been added later to the language and at a time when they were perceived as a uniform category.
There are other factual problems with this article. Most jarring is that it repeats this old incorrect information:
> Inside the Chaturbhuj Temple in India (left), a wall inscription features the oldest known instance of the digit zero, dated to 876 CE (right). It is part of the number 270.
This is the real oldest known zero:
> Because we know that the Çaka dynasty began in AD 78, we can date the artifact exactly: to the year AD 683. This makes the “0” in “605” the oldest zero ever found of our base-10, “Hindu-Arabic” number system.
> But the Cambodian stone inscription bears the first known zero within the system that evolved into the numbers we use today.
Maybe the Khmers came up with it or maybe they got the idea from India. Either way, we should set the facts straight.
0 (number) > Classical antiquity; https://en.wikipedia.org/wiki/0#Classical_antiquity :
> By AD 150, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for zero ( — ° ) [25][26] in his work on mathematical astronomy called the Syntaxis Mathematica, also known as the Almagest. [27] This Hellenistic zero was perhaps the earliest documented use of a numeral representing zero in the Old World. [28]
Astronomers discover complex carbon molecules in interstellar space
> This discovery also links to another important finding of the last decade – the first chiral molecule in the interstellar medium, propylene oxide. We need chiral molecules to make the evolution of simple lifeforms work on the surface of the early Earth.
It would be really amazing if we were able to know whether both chiralities are equally represented in space. Apart from life itself, it is astonishingly interesting how life evolved to be monochiral.
Does propylene oxide demonstrate a "propeller effect" like some other handed chiral molecules?
From https://news.ycombinator.com/item?id=41873531 :
> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Electro-agriculture: Revolutionizing farming for a sustainable future
For those wondering what electro-ag is: convert CO2 into acetate—a carbon-rich compound that can fuel crop growth without sunlight.
"Electro-ag"
"Electronic soil boosts crop growth" (2023) https://news.ycombinator.com/item?id=38767561#38768499 :
> Electroculture
> "Electrical currents associated with arbuscular mycorrhizal interactions" (1995) https://scholar.google.com/scholar?cites=3517382204909176031...
Electrotropism: https://en.wikipedia.org/wiki/Electrotropism
What the OP article suggests is using GMO plants that no longer use photosynthesis but instead drive their metabolism with products derived from CO2 electrolysis and simple man-made chemical compounds.
Plant-made plants, like industrial chemical plant-made.
Is that more efficient than [solar-powered] industrial production processes that synthesize directly from CO2? https://news.ycombinator.com/item?id=40914350#40959068
What about topsoil depletion and compost production?
It's 4x more efficient at solar-to-food conversion than regular plants, with the potential for a 10x improvement.
What are the downsides?
I read that it was [Vitamin E] acetate in carts that was causing EVALI lung conditions?
What nutrients does it require synthetic or natural production of, and how sustainable are those processes?
Have the given organisms co-evolved with Earth's ecology for millions or billions of years?
Acetate > Biology: https://en.wikipedia.org/wiki/Acetate
PV modules don't grow themselves. Also, the CO2 has to be captured somehow. Plants double as CO2 collectors.
An upside is vastly lower water requirements, since no transpiration.
Hmm, this article is about using electricity and CO2/H2O to make chemical feedstocks that would grow plants. The plants would "eat" the chemical feedstocks, instead of "eating" light.
It's not about using electric fields to direct plant growth; that's a different thing
Pygfx
pygfx/pygfx: https://github.com/pygfx/pygfx :
> Pygfx (pronounced “py-graphics”) is built on wgpu, enabling superior performance and reliability compared to OpenGL-based solutions.
pygfx/wgpu-py: https://github.com/pygfx/wgpu-py/ :
> A Python implementation of WebGPU
gfx-rs/wgpu: https://github.com/gfx-rs/wgpu :
> wgpu is a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
> The API is based on the WebGPU standard. It serves as the core of the WebGPU integration in Firefox and Deno
I was/am a bit confused: I think this is unrelated to the wgpu cross-API toolkit in rust, it just abbreviated WebGpu the same way?
Same, I had assumed they weren't independent.
/? PyPI wgpu: https://pypi.org/search/?q=wgpu
Looks like xgpu is where it's actually at.
xgpu: https://github.com/pyrym/xgpu :
> xgpu is an aggressively typed, red-squiggle-free Python binding of wgpu-native, autogenerated from the upstream C headers
wgpu-py has a conda-forge package: https://anaconda.org/conda-forge/wgpu-py
Plastic chemical phthalate causes DNA breakage, chromosome defects, study finds
Apart from food packaging, one great way to easily ingest plastic is to wear synthetic clothing. Just a basic rubbing of a synthetic sleeve on your nose releases thousands of polyester particles into the air, readily breathable.
Not just clothing, but also bedding is a huge issue. With pillows, mattresses and towels mostly made of synthetic fibers.
My usual instinct is: try rubbing the synthetic material; if it releases thousands of particles into the air, stay away from it.
The clothing industry has somehow gotten by unscathed during all the environmental awareness that has spread in the past 20 years. I am pretty sure clothes are the #1 source of the microplastics that have inundated the ocean and our water supply.
We have been heavily pushed to drive less, recycle more, and use less water, but I have not seen messaging about not buying new clothes you don't need.
France passed a law back in 2020 to require new washing machines to have a microplastics filter by 2025:
https://www.europarl.europa.eu/doceo/document/E-9-2020-00137...
It has also begun to subsidize the clothing repair industry:
https://www.cnn.com/2023/07/13/business/france-shoe-clothing...
Maybe we should subsidize plastic-free fibers instead. Cotton, hemp, wool...
Linen; linen is made from the Flax plant.
Natural fibers: https://en.wikipedia.org/wiki/Natural_fiber
Green textiles: https://en.wikipedia.org/wiki/Green_textile
There are newer more sustainable production processes for various natural fibers.
TIL that there are special laundry detergents for synthetic fabrics like most activewear like jerseys; and that fabric softener attracts mosquitos and adheres to synthetic fibers causing stank.
Universal optimality of Dijkstra via beyond-worst-case heaps
"Universal Optimality of Dijkstra via Beyond-Worst-Case Heaps" (2024) https://arxiv.org/abs/2311.11793 :
> Abstract: This paper proves that Dijkstra's shortest-path algorithm is universally optimal in both its running time and number of comparisons when combined with a sufficiently efficient heap data structure.
Dijkstra's algorithm: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
NetworkX docs > Reference > Algorithms > Shortest Paths: https://networkx.org/documentation/stable/reference/algorith...
networkx.algorithms.shortest_paths.weighted.dijkstra_path: https://networkx.org/documentation/stable/reference/algorith... https://github.com/networkx/networkx/blob/main/networkx/algo...
/? Dijkstra manim: https://www.google.com/search?q=dijkstra%20manim
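The paper's result is about pairing Dijkstra with a sufficiently efficient heap; as a refresher, a minimal sketch of the textbook algorithm over Python's stdlib `heapq` (a plain binary heap, not the beyond-worst-case structure the paper analyzes):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps node -> {neighbor: weight}."""
    dist = {source: 0}
    heap = [(0, source)]  # (distance, node) pairs, ordered by distance
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; a shorter path to u was already settled
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The "lazy deletion" of stale heap entries (the `continue`) is where the choice of heap matters: the paper's point is that with a heap supporting efficient decrease-key-like behavior, Dijkstra becomes universally optimal, not just worst-case optimal.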
A deep dive into Linux's new mseal syscall
- "Memory Sealing "Mseal" System Call Merged for Linux 6.10" (2024) https://news.ycombinator.com/item?id=40474510#40474551 :
> How should CPython support the mseal() syscall?
LibLISA – Instruction Discovery and Analysis on x86-64
I've long wanted to have a way to see what actually happens inside a CPU when a set of instructions are executed. I'm pretty excited after skimming this paper as it looks like they developed a technique to automatically determine how the x86-64 instructions actually work by observing real world CPU behavior.
From https://news.ycombinator.com/item?id=33563857 :
> Memory Debugger
Valgrind > Memcheck, None: https://en.wikipedia.org/wiki/Valgrind ; TIL Memcheck's `none` provides a traceback where the shell would normally just print "Segmentation fault"
DynamoRio > Dr Memory: https://en.wikipedia.org/wiki/DynamoRIO#Dr._Memory
Intel Pin: https://en.wikipedia.org/wiki/Pin_(computer_program)
https://news.ycombinator.com/item?id=22095435 : SoftICE, EPT, Hypervisor, HyperDbg, PulseDbg, BugChecker, pyvmidbg (libVMI + GDB), libVMI Python, volatilityfoundation/volatility, Google/rekall -> yara, winpmem, Microsoft/linpmem, AVML,
rr, Ghidra Trace Format: https://github.com/NationalSecurityAgency/ghidra/discussions... https://github.com/NationalSecurityAgency/ghidra/discussions... : applepie, orange_slice, cannoli
GDB can help with register introspection: https://web.stanford.edu/class/archive/cs/cs107/cs107.1202/l... :
> Auto-display and Printing Registers: The `info reg` command [and `info all-registers` (`i r a`)]
emu86 implements x86 instructions in Python, optionally in Jupyter notebooks; still w/o X86S, SIMD, AVX-512, x86-64-v4
> Valgrind `none` provides a traceback where the shell would normally just print "Segmentation fault"
What would it take to get `bash` to print a --traceback after "Segmentation fault\n", and then possibly also --gdb like pytest --pdb?
- [ ] ENH: bash: add valgrind `none`-style ~ --traceback after "Segmentation fault" given an env var or by default?
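For comparison, CPython already does something analogous at the interpreter level: the stdlib `faulthandler` module installs a SIGSEGV handler that prints a Python traceback where the shell would otherwise just say "Segmentation fault". A minimal demonstration (the ctypes null read is a deliberate crash in a child process):

```python
import subprocess
import sys

# The child deliberately dereferences NULL; faulthandler prints a traceback
# to stderr before the process dies with SIGSEGV.
crash = (
    "import faulthandler, ctypes\n"
    "faulthandler.enable()\n"
    "ctypes.string_at(0)  # read from address 0 -> SIGSEGV\n"
)
proc = subprocess.run([sys.executable, "-c", crash], capture_output=True)
print(proc.returncode)            # negative: killed by a signal
print(proc.stderr.decode()[:80])  # starts "Fatal Python error: Segmentation fault"
```

A bash `--traceback` would presumably need the same trick: a SIGSEGV handler (or a core-dump postprocessor like coredumpctl + gdb) rather than relying on the default signal disposition.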
A DSL for peephole transformation rules of integer operations in the PyPy JIT
math.fma fused multiply add is in Python 3.13. Are there already rules to transform expressions to math.fma?
And does Z3 verification indicate differences in output due to minimizing float-rounding error?
https://docs.python.org/3/library/math.html#math.fma :
> math.fma(x, y, z)
> Fused multiply-add operation. Return (x * y) + z, computed as though with infinite precision and range followed by a single round to the float format. This operation often provides better accuracy than the direct expression (x * y) + z.
> This function follows the specification of the fusedMultiplyAdd operation described in the IEEE 754 standard.
I only support integer operations in the DSL so far.
(but yes, turning the python expression x*y+z into an fma call would not be a legal optimization for the jit anyway. And Z3 would rightfully complain. The results must be bitwise identical before and after optimization)
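The double-rounding difference is easy to demonstrate. `math.fma` needs Python 3.13, but the single-rounding semantics can be emulated exactly with `fractions` on any Python version (an illustration only, unrelated to the PyPy DSL itself):

```python
from fractions import Fraction

def fma_emulated(x, y, z):
    """(x*y)+z with a single rounding, via exact rational arithmetic."""
    return float(Fraction(x) * Fraction(y) + Fraction(z))

x = float(2**27 + 1)
y = float(2**27 - 1)
z = -float(2**54)

# x*y is exactly 2**54 - 1, which is not representable as a double;
# the direct expression rounds it to 2**54 first, so the -1 is lost.
direct = x * y + z
fused = fma_emulated(x, y, z)
print(direct, fused)  # 0.0 -1.0
```

This bitwise difference is exactly why the comment below is right that rewriting `x*y+z` into an fma call would not be a legal JIT optimization.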
Smarter Than 'Ctrl+F': Linking Directly to Web Page Content
Is there a specified / supported way to include other parameters in the URI fragment if the fragment part of the URI starts with :~:text=?
https://example.com/page.html#:~:text=[prefix-,]textStart[,textEnd][,-suffix]
https://example.com/page.html#:~:text=...
https://example.com/page.html#:~:text=...&param2=two&:~:param3=three
w3c/web-annotation#442: "Selector JSON to IRI/URI Mapping: supporting compound fragments so that SPAs work" (2019): https://github.com/w3c/web-annotation/issues/442
WICG/scroll-to-text-fragment#4: "Integration with W3C Web Annotations" (2019): https://github.com/WICG/scroll-to-text-fragment/issues/4#iss...
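Per the URL Fragment Text Directives spec, directives live after the `:~:` delimiter, multiple directives are `&`-separated, and ambiguous characters such as `,`, `&`, and `-` inside the text terms must be percent-encoded. A small sketch of building one (only `text=` is a real directive today; anything else would be a hypothetical extension):

```python
from urllib.parse import quote

def text_fragment(start, end=None, prefix=None, suffix=None):
    """Build a text= directive; ',' '&' and '-' are %-encoded in the terms."""
    # quote() leaves '-' alone (it is unreserved), so encode it explicitly.
    enc = lambda s: quote(s, safe="").replace("-", "%2D")
    parts = []
    if prefix:
        parts.append(enc(prefix) + "-")
    parts.append(enc(start))
    if end:
        parts.append(enc(end))
    if suffix:
        parts.append("-" + enc(suffix))
    return "text=" + ",".join(parts)

url = "https://example.com/page.html#:~:" + text_fragment("linking", end="content")
print(url)  # https://example.com/page.html#:~:text=linking,content
```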
Room temp chirality switching and detection in a helimagnetic MnAu2 thin film
"Room temperature chirality switching and detection in a helimagnetic MnAu2 thin film" (2024) https://www.nature.com/articles/s41467-024-46326-4 :
> Helimagnetic structures, in which the magnetic moments are spirally ordered, host an internal degree of freedom called chirality corresponding to the handedness of the helix. The chirality seems quite robust against disturbances and is therefore promising for next-generation magnetic memory. While the chirality control was recently achieved by the magnetic field sweep with the application of an electric current at low temperature in a conducting helimagnet, problems such as low working temperature and cumbersome control and detection methods have to be solved in practical applications. Here we show chirality switching by electric current pulses at room temperature in a thin-film MnAu2 helimagnetic conductor. Moreover, we have succeeded in detecting the chirality at zero magnetic fields by means of simple transverse resistance measurement utilizing the spin Berry phase in a bilayer device composed of MnAu2 and a spin Hall material Pt. These results may pave the way to helimagnet-based spintronics.
Computer Organization, free online textbook
emu86 emulates x86, ARM, RISC-V, and WASM instruction sets in Python and in Jupyter Notebooks (which can be graded with ottergrader, nbgrader,)
assembler/virtual_machine.py: https://github.com/gcallah/Emu86/blob/master/assembler/virtu... :
- class RISCVMachine(VirtualMachine): https://github.com/gcallah/Emu86/blob/master/assembler/virtu...
- class WASMMachine(VirtualMachine): https://github.com/gcallah/Emu86/blob/master/assembler/virtu...
assembler/RISCV/key_words.py: https://github.com/gcallah/Emu86/blob/master/assembler/RISCV...
assembler/RISCV/arithmetic.py: https://github.com/gcallah/Emu86/blob/master/assembler/RISCV...
simd, avx, X86S, x86-64-v4,
x86-64 > Microarchitecture levels: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
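A toy sketch of what such a VirtualMachine's dispatch loop does (hypothetical and simplified; this is not emu86's actual class layout or API, just the general pattern of executing instructions against a register file):

```python
class ToyVM:
    """Executes a tiny three-operand assembly: OP dest, src1, src2."""

    def __init__(self):
        self.regs = {f"r{i}": 0 for i in range(8)}
        self.ops = {"ADD": lambda a, b: a + b,
                    "SUB": lambda a, b: a - b,
                    "MUL": lambda a, b: a * b}

    def value(self, token):
        # A source token is either a register name or an integer immediate.
        return self.regs[token] if token in self.regs else int(token)

    def run(self, program):
        for line in program:
            op, dest, s1, s2 = line.replace(",", " ").split()
            self.regs[dest] = self.ops[op](self.value(s1), self.value(s2))
        return self.regs

vm = ToyVM()
regs = vm.run(["ADD r0, 2, 3", "MUL r1, r0, r0"])
print(regs["r0"], regs["r1"])  # 5 25
```

The emu86 Jupyter kernel adds the part this sketch omits: rendering the register/flag/memory state after each instruction, which is what makes it useful for teaching.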
The new x86 Alliance, Intel-AMD x86 ISA overhaul org.
"Show HN: RISC-V Linux Terminal emulated via WASM" (2023) and ARM Tetris: https://news.ycombinator.com/item?id=37286019 https://westurner.github.io/hnlog/#story-37286019
From https://news.ycombinator.com/item?id=37086102 :
> [ Bindiff, Diaphora, Ghidra + GDB, ]
> Category:Instruction_set_listings has x86 but no aarch64 [or RISC-V] https://en.wikipedia.org/wiki/Category:Instruction_set_listi...
> /? jupyter asm [kernel]:
> - "Introduction to Assembly Language Tutorial.ipynb" w/ the emu86 jupyter kernel which shows register state after ops: https://github.com/gcallah/Emu86/blob/master/kernels/Introdu...
> - it looks like emu86 already also supports RISC, MIPS, and WASM but not yet ARM:
> - DeepHorizons/iarm: https://github.com/DeepHorizons/iarm/blob/master/iarm_kernel...
JupyterLite docs > adding other kernels: https://jupyterlite.readthedocs.io/en/latest/howto/configure...
A bit lower level, but possibly now with graphene FETs:
From "A carbon-nanotube-based tensor processing unit" (2024) https://news.ycombinator.com/item?id=41322070#41322134 :
> "Ask HN: How much would it cost to build a RISC CPU out of carbon?"
"Show HN: RISC-V assembly tabletop board game (hack your opponent)" (2023) https://news.ycombinator.com/item?id=37704760
Migration of the build system to autosetup
OP here!
Richard Hipp, of sqlite fame, has announced that the sqlite project plans to reimplement its canonical build process in the 3.48 release cycle. The project will, if all goes as planned, be moving away from the GNU Autotools and towards Steve Bennett's Autosetup, which we've used to good effect since 2011 in SQLite's sibling project, the Fossil SCM.
The primary motivation behind this HN post is to draw eyes to what Richard points out in the linked forum post: this port _will_ break some folks' build processes and we'd like to know, in advance, what we'll be breaking so that we can make that transition as painless as we can. If you do "unusual" things with the SQLite build, please let us know about it (preferably in the linked-to forum, as that's less likely to be overlooked than comments on HN are, but we'll be following this thread for at least a few days for those of you who would prefer to voice your thoughts here).
Conda-forge/sqlite-feedstock; recipe/meta.yaml,
recipe/build.sh: https://github.com/conda-forge/sqlite-feedstock/blob/main/re...
recipe/build.bat: https://github.com/conda-forge/sqlite-feedstock/blob/main/re...
Is there a [multi-stage] Dockerfile to build, install and test SQLite with the new build system?
Is there SLSA signing for the SQLite build artifacts? There should be a way to sign OBS Open Build Service packages with sigstore.dev's Cosign to Rekor, too IIUC.
slsa-framework/slsa-github-generator: https://github.com/slsa-framework/slsa-github-generator
> Conda-forge/sqlite-feedstock; recipe/meta.yaml,
Aside from the cross-compilation, which is most definitely part of the process but has not yet been explored in depth[^1], nothing in your build immediately sticks out as problematic, but i've bookmarked it for future reference during the port.
The .bat build (Windows) will not change via this port. Only Unix-like environments will potentially be affected.
> Is there a [multi-stage] Dockerfile to build, install and test SQLite with the new build system?
Not that the sqlite project maintains, no. None of us use docker in any capacity. We work only from the canonical source tree and we like to think that it's easy enough to do that other folks can too.
> Is there SLSA signing for the SQLite build artifacts?
That's out of scope for, and unrelated to, this particular sub-project.
> slsa-framework/slsa-github-generator: https://github.com/slsa-framework/slsa-github-generator
says:
> This repository contains free tools to generate and verify SLSA Build Level 3 provenance for native GitHub projects using GitHub Actions.
We don't use github except to post a read-only mirror of the canonical source tree.
[^1]: Autosetup has strong support for cross-compilation, it just hasn't yet been tested in the being-ported build process.
You can sign built artifacts with cosign without actions or podman[-desktop]/docker.
It's probably possible to trigger gh actions or gl pipelines builds (container, command) with git repo mirroring.
datasette-lite depends on a WASM build of SQLite; what about WASM and e.g. emscripten-forge and micromamba?
Edit
emscripten-forge/recipes/recipes_emscripten/SQLite/build.sh: https://github.com/emscripten-forge/recipes/blob/main/recipe... ... recipe.yml says 2022: https://github.com/emscripten-forge/recipes/blob/main/recipe...
> You can sign built artifacts with cosign without actions or podman[-desktop]/docker.
That's all out of scope for, and unrelated to, this build port.
> It's probably possible to trigger gh actions or gl pipelines builds (container, command) with git repo mirroring.
We maintain our own SCM, which keeps us far removed from the github ways of doing things. None of the SQLite project members actively use github beyond posting the read-only project mirror and occasionally cloning other projects and/or participating in tickets, and that's unlikely to change in any foreseeable future.
> emscripten-forge/recipes/recipes_emscripten/SQLite/build.sh:
Thank you - bookmarked for later perusal and testing.
Same. Thanks
What about extensions; does the new build port affect extensions?
/? hnlog sqlite (190+ mentions)
From https://news.ycombinator.com/item?id=40837610#40842365 :
- > There are many extensions of SQLite; rqlite (Raft in Go,), cr-sqlite (CRDT in C), postlite (Postgres wire protocol for SQLite), electricsql (Postgres), sqledge (Postgres), and also WASM: sqlite-wasm, sqlite-wasm-http, dqlite (Raft in Rust),
awesome-sqlite
> What about extensions; does the new build port affect extensions?
It won't, in any way, affect how third parties use the conventional project headers and amalgamated sources. It only has the potential to affect folks who use the "./configure && make" build process, as differences between the Autotools and Autosetup make it impossible to retain 100% semantic compatibility with the build processes (for reasons described one link removed from the forum post this HN post links to).
Discovery of Memory "Glue" Explains Lifelong Recall: KIBRA Protein, PKMzeta
ScholarlyArticle: "KIBRA anchoring the action of PKMζ maintains the persistence of memory" (2024) https://www.science.org/doi/10.1126/sciadv.adl0030 :
> How can short-lived molecules selectively maintain the potentiation of activated synapses to sustain long-term memory? Here, we find kidney and brain expressed adaptor protein (KIBRA), a postsynaptic scaffolding protein genetically linked to human memory performance, complexes with protein kinase Mzeta (PKMζ), anchoring the kinase’s potentiating action to maintain late-phase long-term potentiation (late-LTP) at activated synapses. Two structurally distinct antagonists of KIBRA-PKMζ dimerization disrupt established late-LTP and long-term spatial memory, yet neither measurably affects basal synaptic transmission. Neither antagonist affects PKMζ-independent LTP or memory that are maintained by compensating PKCs in ζ-knockout mice; thus, both agents require PKMζ for their effect. KIBRA-PKMζ complexes maintain 1-month-old memory despite PKMζ turnover. Therefore, it is not PKMζ alone, nor KIBRA alone, but the continual interaction between the two that maintains late-LTP and long-term memory
Are these the primary pathways of long-term forgetting rate, and what about NSCs, hippocampal neurogenesis, memory reconsolidation, and representational drift (connectome temporal instability)?
From https://news.ycombinator.com/item?id=35886145 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
>> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift
Perhaps also useful for treating memory disorders:
From "Neuroscientists discover mechanism that can reactivate dormant neural stem cells" NSCs (2024) https://news.ycombinator.com/item?id=41875693 :
> "SUMOylation of Warts kinase promotes neural stem cell reactivation" (2024) https://www.nature.com/articles/s41467-024-52569-y
Why Verifiable Credentials Aren't Widely Adopted and Why Trinsic Pivoted
FWIW Blockcerts.org is an open standard for certs verifiable with just cert-verifier-js, and grantable with blockchain-certificates/cert-issuer: https://GitHub.com/blockchain-certificates
Is there a way to store Blockcerts (W3C Verifiable Credentials) in popular wallets yet?
Superconducting flux qubit with ferromagnetic π-junction with 0 magnetic field
"Superconducting flux qubit with ferromagnetic Josephson π-junction operating at zero magnetic field" (2024) https://www.nature.com/articles/s43246-024-00659-1 :
> Abstract: Conventional superconducting flux qubits require the application of a precisely tuned magnetic field to set the operation point at half a flux quantum through the qubit loop, which complicates the on-chip integration of this type of device. It has been proposed that by inducing a π-phase shift in the superconducting order parameter using a precisely controlled nanoscale-thickness superconductor/ferromagnet/superconductor Josephson junction, commonly referred to as π-junction, it is possible to realize a flux qubit operating at zero magnetic flux. Here, we report the realization of a zero-flux-biased flux qubit based on three NbN/AlN/NbN Josephson junctions and a NbN/PdNi/NbN ferromagnetic π-junction. The qubit lifetime is in the microsecond range, which we argue is limited by quasiparticle excitations in the metallic ferromagnet layer. Our results pave the way for developing quantum coherent devices, including qubits and sensors, that utilize the interplay between ferromagnetism and superconductivity.
How do the new Type III superconductors affect this approach?
"Topological gauge theory of vortices in type-III superconductors" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.110.0... .. https://news.ycombinator.com/item?id=41803654#41803662
Understanding Linux Message Queues
What about ebpf and mq? How do network packet I/O buffers differ from MQ patterns and implementations? Isn't ebpf faster than e.g. epoll?
/? ebpf "message queue"
laminarmq: https://github.com/arindas/laminarmq :
> - [ ] Single node, multi threaded, eBPF based request to thread routed message queue
A step toward fully 3D-printed active electronics
Reading the article, it looks like so far they only have a working resettable fuse (a passive device), and only hypothesize that a transistor was possible with the copper-infused PLA filament. So no actual working active electronics.
And from the paper linked in the article[1], it seems the actual breakthrough is the discovery that copper-infused PLA filament exhibits a PTC-effect, which is noteworthy, but definitely not "3D-Printed Active Electronics" newsworthy.
[1] https://www.tandfonline.com/doi/full/10.1080/17452759.2024.2...
From https://news.ycombinator.com/item?id=40759133 :
> In addition to nanolithography and nanoassembly, there is 3d printing with graphene.
And conductive aerogels, and carbon nanotube production at scale
From https://news.ycombinator.com/item?id=41210021 :
> There's already conductive graphene 3d printing filament (and far less conductive graphene). Looks like 0.8ohm*cm may be the least resistive graphene filament available: https://www.google.com/search?q=graphene+3d+printer+filament...
> Are there yet CNT or Twisted SWCNT Twisted Single-Walled Carbon Nanotube substitutes for copper wiring?
Aren't there carbon nanotube superconducting cables?
Instead of copper, there are plastic waveguides
Language is not essential for the cognitive processes that underlie thought
This is an important result.
The actual paper [1] says that functional MRI (which is measuring which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.
What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, the low-end mammals and the corvids lack language but have some substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out to do this, it will probably take less hardware than an LLM.
This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.
[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...
"Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35877402#35886145 :
> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift.
[...]
So, I'm not sure how conclusive this fmri activation study is either.
Though, is there a proto language that's not even necessary for the given measured aspects of cognition?
Which artificial network architecture best approximates which functionally specialized biological neural networks?
OpenCogPrime:KnowledgeRepresentation > Four Types of Knowledge: https://wiki.opencog.org/w/OpenCogPrime:KnowledgeRepresentat... :
> Sensory, Procedural, Episodic, Declarative
From https://news.ycombinator.com/item?id=40105068#40107537 re: cognitive hierarchy and specialization :
> But FWIU none of these models of cognitive hierarchy or instruction are informed by newer developments in topological study of neural connectivity;
Common mistakes to avoid when managing a cap table
I’m currently working on the cap table for my early-stage startup and want to make sure we get it right from the start. What are some common mistakes or pitfalls to avoid when structuring and maintaining a cap table? Any tools or best practices that have worked well for others?
- "Ask HN: Why don’t startups share their cap table and/or shares outstanding?" https://news.ycombinator.com/item?id=29141668#29141796
Capitalization table: https://en.wikipedia.org/wiki/Capitalization_table
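One concrete pitfall is underestimating how a new round plus an option-pool top-up dilutes everyone pro rata. A back-of-the-envelope sketch of the arithmetic (the holders and share counts are made-up illustrative numbers, not advice):

```python
def dilute(cap_table, new_shares_by_holder):
    """Return post-round ownership fractions after issuing new shares."""
    shares = dict(cap_table)
    for holder, n in new_shares_by_holder.items():
        shares[holder] = shares.get(holder, 0) + n
    total = sum(shares.values())
    return {h: n / total for h, n in shares.items()}

pre = {"founders": 8_000_000, "option_pool": 2_000_000}   # 80% / 20% pre-round
post = dilute(pre, {"seed_investor": 2_500_000})
print({h: round(f, 3) for h, f in post.items()})
# founders drop from 0.80 to 8M / 12.5M = 0.64
```

Keeping the table fully diluted (options, SAFEs, warrants all counted) is the usual best practice, precisely so this arithmetic doesn't surprise anyone at the next round.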
Removing PGP from PyPI (2023)
This is slightly old news. For those curious, PGP support on the modern PyPI (i.e. the new codebase that began to be used in 2017-18) was always vestigial, and this change merely polished off a component that was, empirically[1], doing very little to improve the security of the packaging ecosystem.
Since then, PyPI has been working to adopt PEP 740[2], which both enforces a more modern cryptographic suite and signature scheme (built on Sigstore, although the design is adaptable) and is bootstrapped on PyPI's support for Trusted Publishing[3], meaning that it doesn't have the fundamental "identity" problem that PyPI-hosted PGP signatures have.
The hard next step from there is putting verification in client hands, which is the #1 thing that actually makes any signature scheme actually useful.
[1]: https://blog.yossarian.net/2023/05/21/PGP-signatures-on-PyPI...
It's good that PyPI signs whatever is uploaded to PyPI using PyPI's key now.
GPG ASC support on PyPI was nearly as useful as uploading signatures to sigstore.
1. Is it yet possible to - with pypa/twine - sign a package uploaded to PyPI, using a key that users somehow know to trust as a release key for that package?
2. Does pip check software publisher keys at package install time? Which keys does pip trust to sign which package?
3. Is there any way to specify which keys to trust for a given package in a requirements.txt file?
4. Is there any way to specify which keys to trust for a version of a given package with different bdist releases, with Pipfile.lock, or pixi or uv?
People probably didn't GPG sign packages on PyPI because it wasn't easy or required to sign a package using a registered key/DID in order to upload.
Anyone can upload a signature for any artifact to sigstore. Sigstore is a centralized cryptographic signature database for any file.
Why should package installers trust that a software artifact publisher key [on sigstore or the GPG keyserver] is a release key?
gpg --recv-key downloads a public key for a given key fingerprint over HKP (HTTPS with the same CA cert bundle as everything else).
GPG keys can be wrapped as W3C DIDs FWIU.
W3C DIDs can optionally be centrally generated (like LetsEncrypt with ACME protocol).
W3C DIDs can optionally be centrally registered.
GPG or not, each software artifact publisher key must be retrieved over a different channel than the packages.
If PYPI acts as the (package,release_signing_key) directory and/or the keyserver, is that any better than hosting .asc signatures next to the downloads?
GPG signatures and wheel signatures were and are still better than just checksums.
Why should we trust that a given key is a release signing key for that package?
Why should we trust that a release signing key used at the end of a [SLSA] CI build hasn't been compromised?
How do clients grant and revoke their trust of a package release signing key with this system?
... With GPG or [GPG] W3C DIDs or whichever key algo and signed directory service.
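Short of signature verification, the client-side check pip does support today is hash-pinning (`pkg==1.0 --hash=sha256:...` in requirements.txt, enforced with `pip install --require-hashes`). The underlying check is just a digest comparison; a minimal sketch, using a throwaway temp file standing in for a downloaded sdist:

```python
import hashlib
import tempfile

def verify_artifact(path, expected_sha256):
    """Compare a downloaded artifact against a pinned sha256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"not a real sdist")
    path = f.name

digest = hashlib.sha256(b"not a real sdist").hexdigest()
print(verify_artifact(path, digest))  # True
```

This guards integrity (the bytes match what was pinned) but not provenance, which is the gap the signature/attestation discussion here is about.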
> It's good that PyPI signs whatever is uploaded to PyPI using PyPI's key now.
You might have misunderstood -- PyPI doesn't sign anything with PEP 740. It accepts attestations during upload, which are equivalent to bare PGP signatures. The big difference between the old PGP signature support and PEP 740 is that PyPI actually verifies the attestations on uploads, meaning that everything that gets stored and re-served by PyPI goes through a "can this actually be verified" sanity check first.
I'll try to answer the others piecewise:
1. Yes. You can use twine's `--attestations` flag to upload any attestations associated with distributions. To actually generate those attestations you'll need to use GitHub Actions or another OIDC provider currently supported as a Trusted Publisher on PyPI; the shortcut for doing that is to enable `attestations: true` while uploading with `gh-action-pypi-publish`. That's the happy path that we expect most users to take.
2. Not yet; the challenge there is primarily technical (`pip` can only vendor pure Python things, and most of PyCA cryptography has native dependencies). We're working on different workarounds for this; once they're ready `pip` will know which identity - not key - to trust based on each project's Trusted Publisher configuration.
3. Not yet, but this is needed to make downstream verification in a TOFU setting tractable. The current plan is to use the PEP 751 lockfile format for this, once it's finished.
4. That would be up to each of those tools to implement. They can follow PEP 740 to do it if they'd like.
I don't really know how to respond to the rest of the comment, sorry -- I find it a little hard to parse the connections you're drawing between PGP, DIDs, etc. The bottom line for PEP 740 is that we've intentionally constrained the initial valid signing identities to ones that can be substantiated via Trusted Publishing, since those can also be turned into verifiable Sigstore identities.
So PyPI acts as keyserver, and basically a CSR signer for sub-CA wildcard package signing certs, and the package+key mapping trusted authority; and Sigstore acts as signature server; and both are centralized?
And something has to call cosign (?) to determine what value to pass to `twine --attestations`?
Blockcerts with DID identities is the W3C way to do Software Supply Chain Security like what SLSA.dev describes FWIU.
And then now it's possible to upload packages and native containers to OCI container repositories, which support artifact signatures with TUF since docker notary; but also not yet JSON-LD/YAML-LD that simply joins with OSV OpenSSF and SBOM Linked Data on registered (GitHub, PyPI, conda-forge, deb-src,) namespace URIs.
GitHub supports requiring GPG signatures on commits.
Git commits are precedent to what gets merged and later released.
A rough chronology of specs around these problems: {SSH, GPG, TLS w/ CA cert bundles, WebID, OpenID, HOTP/TOTP, Bitcoin, WebID-TLS, TOTP, OIDC OpenID Connect (TLS, HTTP, JSON, OAuth2.0, JWT), TUF, Uptane, CT Logs, WebAuthn, W3C DID, Blockcerts, SOLID-OIDC, Shamir Backup, transferable passkeys, }
For PyPI: PyPI.org, TLS, OIDC OpenID Connect, twine, pip, cosign, and Sigstore.dev.
Sigstore Rekor hosts centralized Merkle hashes, like google/trillian, which centrally hosts Certificate Transparency logs of x509 cert grants and revocations by Certificate Authorities.
W3C DID Decentralized Identifiers [did-core] > 9.8 Verification Method Revocation : https://www.w3.org/TR/did-core/#verification-method-revocati... :
> If a verification method is no longer exclusively accessible to the controller or parties trusted to act on behalf of the controller, it is expected to be revoked immediately to reduce the risk of compromises such as masquerading, theft, and fraud
> So PyPI acts as keyserver, and basically a CSR signer for sub-CA wildcard package signing certs, and the package+key mapping trusted authority; and Sigstore acts as signature server; and both are centralized?
No -- there is no keyserver per se in PEP 740's design, because PEP 740 is built around identity binding instead of long-lived signing keys. PyPI does no signing at all and has no CA or CSR components; it acts only as an attestation store for attestations, which it verifies on upload.
PyPI signs uploaded packages with its signing key per PEP 458: https://peps.python.org/pep-0458/ :
> This [PEP 458 security] model supports verification of PyPI distributions that are signed with keys stored on PyPI
So that's deprecated by PEP 740 now?
PEP 740: https://peps.python.org/pep-0740/ :
> In addition to the current top-level `content` and `gpg_signature` fields, the index SHALL accept `attestations` as an additional multipart form field.
> The new `attestations` field SHALL be a JSON array.
> The `attestations` array SHALL have one or more items, each a JSON object representing an individual attestation.
> Each attestation object MUST be verifiable by the index. If the index fails to verify any attestation in attestations, it MUST reject the upload. The format of attestation objects is defined under Attestation objects and the process for verifying attestations is defined under Attestation verification.
What is the worst case resource cost of an attestation validation required of PyPI?
blockchain-certificates/cert-verifier-js; https://westurner.github.io/hnlog/ Ctrl-F verifier-js
`attestations` and/or `gpg_signature`;
From https://news.ycombinator.com/item?id=39204722 :
> An example of GPG signatures on linked data documents: https://gpg.jsld.org/contexts/#GpgSignature2020
W3C DID self-generated keys also work with VC linked data; could a new field like the `attestations` field solve for DID signatures on the JSON metadata; or is that still centralized and not zero trust?
Today is Ubuntu's 20th Anniversary
Ubuntu!
SchoolTool also started 20 years ago. Then Open edX, and it looks like e.g. Canvas for LMS these days.
To run an Ubuntu container with podman:
apt-get install -y podman distrobox
podman run --rm -it docker.io/ubuntu:24.04 bash --login
podman run --rm -it docker.io/ubuntu:24.10 bash --login
GitHub Codespaces are Ubuntu devcontainers (codespaces-linux)
Windows Subsystem for Linux (WSL) installs Ubuntu (otherwise there's GitBash and podman-desktop for over there)
The jupyter/docker-stacks containers build FROM Ubuntu:24.04 LTS: https://github.com/jupyter/docker-stacks/blob/main/images/do... :
docker run -p 10000:8888 quay.io/jupyter/scipy-notebook:2024-10-07
All-optical switch device paves way for faster fiber-optic communication
It's funny to see this show up now. Optical-domain switching was a hot area of research in the early 2000s (e.g. [1]); it largely petered out as it became clear that digital-domain switching was more practical (because it didn't rely on exotic technology, and provided time for hardware to perform routing operations).
[1]: https://www.laserfocusworld.com/optics/article/16556781/many...
Is optical digital or analog?
Either/both. A nice feature of lightpath networks is that they are somewhat independent of the signal encoding.
Optical computation can also be digital or analog.
In this case, the control signal appears to be digital while the switched signal can be anything.
What is theoretical computer science?
This is a great article and I especially liked the notion:
>Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing.
as well as the emphasis on the difference between TCS in Europe and the US. I remember from the University of Crete that the professors all spent serious time in the labs coding and testing. Topics like Human-Computer Interaction, Operating Systems Research and lots of Hardware (VLSI etc) were core parts of the theoretical Computer Science research areas. This is why no UoC graduate could graduate without knowledge both in Algorithms and PL theory, for instance, AND circuit design (my experience is from 2002-2007).
I strongly believe that this breadth of concepts is essential to Computer Science, and the narrower emphasis of many US departments (not all) harms both the intellectual foundations and practical employment prospects of the graduate. [I will not debate this point online; I'll be happy to engage in hours long discussion in person]
>Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded.
This isn't really true, is it? There are mathematical models that predict but do not explain the real world. The most glaring of them is the transmission of EM waves without a medium, and the particle/wave duality of matter. In the former case, there was a concerted attempt to prove existence of the medium (luminiferous aether) that failed and ended up being discarded - we accept now that no medium is required, but we don't know the physical process of how that works.
On the Cave and the Light,
List of popular misconceptions and science > Science, technology, and mathematics : https://en.wikipedia.org/wiki/List_of_common_misconceptions :
> See also: Scientific misconceptions, Superseded theories in science, and List of topics characterized as pseudoscience
Allegory of the cave > See also,: https://en.wikipedia.org/wiki/Allegory_of_the_cave
The only true statement:
All models are wrong: https://en.wikipedia.org/wiki/All_models_are_wrong
Map–territory relation: https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation :
> A frequent coda to "all models are wrong" is that "all models are wrong (but some are useful)," which emphasizes the proper framing of recognizing map–territory differences—that is, how and why they are important, what to do about them, and how to live with them properly. The point is not that all maps are useless; rather, the point is simply to maintain critical thinking about the discrepancies: whether or not they are either negligible or significant in each context, how to reduce them (thus iterating a map, or any other model, to become a better version of itself), and so on.
Counterintuitive Properties of High Dimensional Space (2018)
One that isn't listed here, and which is critical to machine learning, is the idea of near-orthogonality. When you think of 2D or 3D space, you can only have 2 or 3 orthogonal directions, and allowing for near-orthogonality doesn't really gain you anything. But in higher dimensions, you can reasonably work with directions that are only somewhat orthogonal, and "somewhat" gets pretty silly large once you get to thousands of dimensions -- like 75 degrees is fine (I'm writing this from memory, don't quote me). And the number of orthogonal-enough dimensions you can have scales as maybe as much as 10^sqrt(dimension_count), meaning that yes, if your embeddings have 10,000 dimensions, you might be able to have literally 10^100 different orthogonal-enough dimensions. This is critical for turning embeddings + machine learning into LLMs.
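A quick way to see the concentration near 90°: sample pairs of random unit vectors and watch the angle between them tighten around orthogonal as the dimension grows. A minimal sketch using only the standard library:

```python
import math
import random

def random_unit_vector(dim, rng):
    """Sample a uniformly random direction by normalizing a Gaussian vector."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def angle_degrees(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    # Clamp for floating-point safety before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

rng = random.Random(0)
spreads = {}  # dim -> worst deviation from 90 degrees over 20 random pairs
for dim in (3, 100, 10_000):
    angles = [angle_degrees(random_unit_vector(dim, rng),
                            random_unit_vector(dim, rng))
              for _ in range(20)]
    spreads[dim] = max(abs(a - 90.0) for a in angles)
    print(f"dim={dim:>6}: max deviation from 90 deg = {spreads[dim]:.1f}")
```

In 3D, random pairs routinely land tens of degrees away from orthogonal; at 10,000 dimensions the deviation collapses to a degree or so, which is why "somewhat orthogonal" buys you so many usable directions.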
Does distance in feature space require orthogonality?
With real space (x,y,z) we omit the redundant units from each feature when describing the distance in feature space.
But distance is just a metric, and often the space or paths through it are curvilinear.
By Taxicab distance, it's 3 cats, 4 dogs, and 5 glasses of water away.
Python now has math.dist() for Euclidean distance, for example.
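For example (`math.dist` was added in Python 3.8; the taxicab/L1 distance is a one-line sum):

```python
import math

p, q = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)

euclid = math.dist(p, q)                         # straight-line (L2) distance
taxicab = sum(abs(a - b) for a, b in zip(p, q))  # Manhattan (L1) distance

print(euclid)   # 5.0  (a 3-4-5 triangle in the x-y plane)
print(taxicab)  # 7.0
```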
Near-orthogonality allows fitting in more directions for distinct concepts than the dimension of the space. So even though the dimension of an LLM might be <2000, far far more than 2000 distinct directions can fit into that space.
The term most often used is "superposition." Here's some material on it that I'm working through right now:
https://arena3-chapter1-transformer-interp.streamlit.app/%5B...
Skew coordinates aren't orthogonal.
Skew coordinates: https://en.wikipedia.org/wiki/Skew_coordinates
Are the features described with high-dimensional spaces really all 90° geometrically orthogonal?
How does the distance metric vary with feature order?
Do algorithmic outputs diverge or converge given variance in the sequence order of the orthogonal axes? That is, does it matter which order the dimensions are stated in: is the output sensitive to feature order, or does it converge regardless?
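For Euclidean distance at least, feature order doesn't matter: permuting the axes (identically for both points) leaves the metric unchanged. A minimal check:

```python
import math
import random

rng = random.Random(42)
p = [rng.random() for _ in range(8)]
q = [rng.random() for _ in range(8)]

# Apply the same permutation of feature order to both points.
order = list(range(8))
rng.shuffle(order)
p2 = [p[i] for i in order]
q2 = [q[i] for i in order]

d1 = math.dist(p, q)
d2 = math.dist(p2, q2)
print(math.isclose(d1, d2))  # True: the sum of squared differences is reordered, not changed
```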
Re: superposition in this context, too
Are there multiple particles in the same space, or is it measuring a point-in-time sampling of the possible states of one particle?
(Can photons actually occupy the same point in spacetime? Can electrons? But the plenoptic function describes all light passing through a point or all of the space)
Are expectation values good estimators of wave function outputs from discrete quantum circuits and real quantum systems?
To describe the products of the histogram PDFs
> Are the [features] described with high-dimensional spaces really all 90° geometrically orthogonal?
If the features are not statistically independent, I don't think it's likely that they're truly orthogonal; which might not affect the utility of a distance metric that assumes that they are all orthogonal.
Neuroscientists discover mechanism that can reactivate dormant neural stem cells
"SUMOylation of Warts kinase promotes neural stem cell reactivation" (2024) https://www.nature.com/articles/s41467-024-52569-y :
> Intro: Drosophila NSCs in the central brain (CB) and thoracic ventral nerve cord (VNC) enter into quiescence at the end of embryogenesis, and subsequently exit quiescence (reactivate) in response to dietary amino acids, typically within 24 h after larval hatching (h ALH) (Fig. 1a) [14,15,16]. The nutritional signals originate from the Drosophila fat body—functional equivalent of mammalian liver and adipose tissue [17]—and lead to activation of the insulin/insulin-like growth factor (IGF) signaling pathway [18,19] and inactivation of the Hippo signaling pathway in NSCs for their reactivation [18,19,20,21].
Amplification of electromagnetic fields by a rotating body
Could this be used as an engine of some kind? The spinny thing giving off EM waves and those waves are caught by something like a solar sail?
ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
> Could this be used as an engine of some kind?
What about helical polarization?
"Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Sugar molecules are asymmetrical / handed, per 3blue1brown and Steve Mould. /? https://www.google.com/search?q=Sugar+molecules+are+asymmetr....
Is there a way to get the molecular propeller effect, and thereby molecular locomotion, with molecules that contain sugar and a rotating field or a rotating molecule within a field?
Fallacies of Distributed Computing
Fallacies of distributed computing: https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu... :
> The originally listed fallacies are:
> The network is reliable; Latency is zero; Bandwidth is infinite; The network is secure; Topology doesn't change; There is one administrator; Transport cost is zero; The network is homogeneous;
All asteroids in Solar System, visualized
TIL about Kirkwood gaps. I knew about Jupiter leading to Hilda and Trojan groupings, but my understanding of the main belt was more in line with a representation like this one from Wikipedia[1]. However, this imaging shows a clear gap and a quick search led me to learn about Kirkwood gaps. Is there a specific reason why the gap is more evident in this imaging rather than the above one from Wikipedia?
[1] - https://en.wikipedia.org/wiki/File:InnerSolarSystem-en.png
TIL about Kirkwood gaps too: https://en.wikipedia.org/wiki/Kirkwood_gap :
> A Kirkwood gap is a gap or dip in the distribution of the semi-major axes (or equivalently of the orbital periods) of the orbits of main-belt asteroids. They correspond to the locations of orbital resonances with Jupiter.
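The gaps' locations fall straight out of Kepler's third law (a³ ∝ T²): for a p:q mean-motion resonance, the asteroid's semi-major axis is a_Jupiter · (q/p)^(2/3). A sketch, taking Jupiter at 5.204 AU:

```python
# Kirkwood gaps sit where an asteroid's period is a small-integer ratio of
# Jupiter's. Kepler's third law (a^3 proportional to T^2) turns each
# p:q resonance (asteroid makes p orbits per q Jupiter orbits) into a
# semi-major axis.
A_JUPITER = 5.204  # AU, Jupiter's semi-major axis

def resonance_semimajor_axis(p, q):
    """Semi-major axis (AU) of the p:q mean-motion resonance with Jupiter."""
    return A_JUPITER * (q / p) ** (2 / 3)

for p, q in [(3, 1), (5, 2), (7, 3), (2, 1)]:
    print(f"{p}:{q} resonance -> {resonance_semimajor_axis(p, q):.2f} AU")
```

The printed values (about 2.50, 2.82, 2.96, and 3.28 AU) line up with the major gaps visible in semi-major-axis histograms of the main belt.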
(Similarly, shouldn't it be possible to infer photon phase without causing wave state collapse?)
/? Jupiter’s effect on Earth’s climate https://www.google.com/search?q=Jupiter%E2%80%99s+effect+on+...
Time Duration in JavaScript
Looks like Safari Technology Preview (TP) only? https://caniuse.com/?search=temporal
https://caniuse.com/mdn-javascript_builtins_temporal_duratio...
Temporal.Duration: https://tc39.es/proposal-temporal/docs/#Temporal-Duration
IIRC Python datetime objects initially lacked timezone support and a timezone database.
datetime.MINYEAR and the "can't represent datetimes before the Unix epoch, or before 0001-01-01" problem.
E.g. Astropy supports astronomical year numbering and year zero.
Year Zero: https://en.wikipedia.org/wiki/Year_zero
Python datetime module: https://docs.python.org/3/library/datetime.html
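On the timezone half of that history: since Python 3.9 the stdlib ships the IANA timezone database via `zoneinfo`, though durations are still plain `timedelta`s without Temporal-style calendar units. A small sketch:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib IANA timezone database, Python 3.9+

naive = datetime(2024, 10, 7, 12, 0)  # no tzinfo: "naive", the old default
aware = datetime(2024, 10, 7, 12, 0, tzinfo=ZoneInfo("Europe/Zurich"))

print(naive.tzinfo)                    # None
print(aware.astimezone(timezone.utc))  # 2024-10-07 10:00:00+00:00 (CEST is UTC+2)

# Durations are fixed-length timedeltas, not calendar-aware units:
print(aware + timedelta(days=90))
```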
A free phonetic generator (spelling, IPA transcription, prononciation)
Ideas for language learning products w/ IPA: https://news.ycombinator.com/item?id=38317345
The deconstructed Standard Model equation
The original TeX representation of the formula was written by Thomas D. Gutierrez in 1999 [1]. It was discussed many times on HN, initially in 2016 a day after the post of this article on symmetrymagazine [2].
Are there (SymPy,) CAS versions of the Standard Model Lagrangians and corollaries? Maybe latex2sympy2 (antlr)?
Mathematical formulation of the Standard Model: https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...
WebPKI – Introduce Schedule of Reducing Validity (Of TLS Server Certificates)
Let's Encrypt certs, including wildcards, are valid for 90 days, and they recommend renewing them after 60 days.
Cert validity intervals directly affect the storage and bandwidth requirements for CT logs, which should be replicated.
Does anyone serve the Certificate Transparency (CT) logs for checking by browsers?
Topological gauge theory of vortices in type-III superconductors
"Topological gauge theory of vortices in type-III superconductors" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.110.0... :
> Abstract: Usual superconductors fall into two categories, type I, expelling magnetic fields, and type II, into which magnetic fields exceeding a lower critical field H_c1 penetrate in a form of vortices characterized by two scales, the size of the normal core, ξ, and the London penetration depth λ. Here we demonstrate that a type-III superconductivity, realized in granular media in any dimension, hosts vortex physics in which vortices have no cores, are logarithmically confined, and carry only a gauge scale λ. Accordingly, in type-III superconductors H_c1 = 0 at zero temperature and the Ginzburg-Landau theory must be replaced by a topological gauge theory. Type-III superconductivity is destroyed not by Cooper pair breaking but by vortex proliferation generalizing the Berezinskii-Kosterlitz-Thouless mechanism to any dimension.
Berezinskii–Kosterlitz–Thouless (BKT) transition: https://en.wikipedia.org/wiki/Berezinskii%E2%80%93Kosterlitz...
Ask HN: Parameter-free neural network models: Limits, Challenges, Opportunities?
Why should experts bias NN architectural parameters if there are "Parameter-free" neural network graphical models?
> Why should experts bias NN architectural parameters if there are "Parameter-free" neural network graphical models?
The Asimov Institute > The Neural Network Zoo: https://www.asimovinstitute.org/neural-network-zoo
"A mostly complete chart of neural networks" (2019) includes Hopfield nets! https://www.asimovinstitute.org/wp-content/uploads/2019/04/N...
Category:Neural network architectures: https://en.wikipedia.org/wiki/Category:Neural_network_archit...
Types of artificial neural networks: https://en.wikipedia.org/wiki/Types_of_artificial_neural_net...
Systems Modeling to Refine Strategy
From "Architectural Retrospectives: The Key to Getting Better at Architecting" https://news.ycombinator.com/item?id=41234471 :
> Is there already a good way to link an ADR Architectural Decision Record with Threat Modeling primitives and considerations?
Awesome-threat-modeling > Tools: https://github.com/hysnsec/awesome-threat-modelling#free-too...
- pytm: https://github.com/izar/pytm
- threatspec has a docstring spec for adding threat modeling annotations to source code: https://threatspec.org/
- OWASP Threat Dragon : https://owasp.org/www-project-threat-dragon/ :
> Threat Dragon supports STRIDE / LINDDUN / CIA / DIE / PLOT4ai, provides modeling diagrams and implements a rule engine to auto-generate threats and their mitigations.
- OWASP Threat Modeling Cheat Sheet > Systems Modeling: https://cheatsheetseries.owasp.org/cheatsheets/Threat_Modeli...
- https://github.com/dehydr8/elevation-of-privilege:
> An online multiplayer version of the Elevation of Privilege (EoP) threat modeling card game
Re: "Thinking in Systems" by Donella Meadows:
- "Limits to Growth: The 30-Year Update" (2012) by Donella H. Meadows. https://a.co/7MgO0bv
- "Leverage Points: Places to Intervene in a System" (2018) https://news.ycombinator.com/item?id=17781927 & wikipedia links to systems theory
Google must open Android for third-party stores, rules Epic judge
Epic's "First Run" program does all the things they got mad at Apple and Google about.
You don't have to pay any license fees for Unreal Engine if you use Epic exclusively for payments. They give you 100% revshare for 6 months if you agree to not ship your game on any other app store.
Let's not kid ourselves, Epic never cared about consumer choice or a fair playing field, they only want the ability to profit without having to invest in building a hardware platform.
All I care about is having some healthy competition in the marketplace again.
If that takes tying Google's hands behind its back for a few years, fine.
Taking a 30% cut should have been, prima facie, evidence of monopoly abuse.
Tying Google's hands while giving Apple a clean bill of health isn't going to increase competition, it's going to solidify Apple's lead in the US.
The competition that actually matters is between whole platforms, it's only Epic's lawyers who want everyone to get fixated on the idea that the app store markets are the whole story (or even really a significant portion of the story). I could totally get behind efforts to prevent Google and Apple from together duopolizing the entire mobile phone space, but this is not that.
The important competition isn't Google v Apple.
It's everyone else v Google and/or Apple.
Ironically, forcing this onto Google might be the best thing for Android, as it's the openness the platform really needed to compete against Apple, but costs Google too much revenue to ever do it if left to their own decision.
> it's the openness the platform really needed to compete against Apple
It’s hard to believe the best way to compete with Apple is by being more open then they are.
If that worked, we would all be running some unix os on our phones.
It’s clearly an utterly false premise.
This is going to hurt google and do literally nothing to Apple, except perhaps scare some people into the “saftey” of its curated ecosystem.
apologies for the pedantic nitpick, but:
iOS is based on Darwin which is BSD based, and Android is based on Linux, which is heavily inspired by Unix.
So technically, we are "all running some unix os on our phones"... perhaps you mean some open-source OS? Even then, Android markets itself as being open-source (although I imagine they must add some closed source stuff on top of it).
Apple XCode is required to compile code for Apple iOS. Android Studio is not required to compile code for Android.
Android Studio doesn't run on Android. XCode IDE doesn't run on iOS.
XCode IDE requires a macOS device. Android Studio works on Win/Mac/Linux/Chromebook_with_containers.
Android Studio runs in a Linux container. XCode only runs on macOS.
Apple does not allow, and per a recent ruling, and DOES NOT HAVE TO allow 3rd party app stores.
(There are already 3rd party app "stores" for Android like FDroid.)
Android already allows, and per a recent ruling, MUST allow 3rd party app stores.
In terms of Application stores, (after trying to trademark the phrase "app store" to anticompete Amazon) Apple gets protectionism for their walled garden, and Android may not have a walled garden.
Apple went with Objective-C; C with standard macros; IIRC before C++?
Apple built a new language called Swift.
The Java pitch was "write once, run anywhere" because you port the JVM, and J2ME.
Google rewrote many parts of the Java stack.
Google ported to Apache Harmony/ OpenJDK to port away from Sun then Oracle Java.
Google built a language called Kotlin.
Swift runs on macOS, iOS, and Android.
Kotlin runs on Android/iOS/Win/Mac/Linux/JS. Dart lang also runs on any platform. Flutter also runs on any platform.
Gamma radiation is produced in large tropical thunderstorms
This somehow reminds me of the fact that you can produce (surprisingly high-quality) x-rays by unrolling scotch tape in a vacuum chamber[0][1]. I wonder if it turns out to be related in any way. Thunderstorms aren't a vacuum of course, but I dunno, maybe all that frozen hail being thrown around and bumping into each other still involves a similar underlying mechanism somewhere.
What about guitar pedal velcro tape?
[0] "Correlation between nanosecond X-ray flashes and stick–slip friction in peeling tape" (2008) https://www.nature.com/articles/nature07378
> Triboluminescence is a phenomenon in which light is generated when a material is mechanically pulled apart, ripped, scratched, crushed, or rubbed (see tribology). [...] Triboluminescence is often a synonym for fractoluminescence (a term mainly used when referring only to light emitted from fractured crystals). Triboluminescence differs from piezoluminescence in that a piezoluminescent material emits light when deformed, as opposed to broken. These are examples of mechanoluminescence, which is luminescence resulting from any mechanical action on a solid. [...] See also:
> - Earthquake light: https://en.wikipedia.org/wiki/Earthquake_light
- re: gold from earthquake-induced piezoelectricity in quartz: https://news.ycombinator.com/item?id=41442489
- re: sustainable alternatives to quartz countertops: https://news.ycombinator.com/item?id=36857929 :
> [...] make recycled paper outdoor waterproof vert ramps, countertops, siding, and flooring; but not yet roofing FWIU?
> - Sonoluminescence: https://en.wikipedia.org/wiki/Sonoluminescence
If you dropped a thing through a storm to the water, would it charge the thing, from gamma radiation?
TIL carbon nano yarn absorbs electricity, probably from storm clouds too.
What are the volt and charge observations for lightning from large tropical thunderstorms?
(And why is it dangerous to attract arc discharge toward a local attractor? And what sort of supercapacitors and anodes can handle charge from a lightning bolt? Lightning!)
Tardigrades can handle Gamma radiation.
"Researchers create new type of composite material for shielding against neutron and gamma radiation" (2024) https://phys.org/news/2024-05-composite-material-shielding-n... :
"Sm2O3 micron plates/B4C/HDPE composites containing high specific surface area fillers for neutron and gamma-ray complex radiation shielding" https://www.sciencedirect.com/science/article/abs/pii/S02663...
On lightning and safety and electrostatics: "Answer to Is there a device that attracts lightning when storms are near? How can I make a lightning rod to do experiments with lightning?" (202_ yrs ago) https://www.quora.com/Is-there-a-device-that-attracts-lightn... :
Electrostatics: https://en.wikipedia.org/wiki/Electrostatics
The Photoelectric effect that supports the Particle-Wave duality > History: https://en.wikipedia.org/wiki/Photoelectric_effect :
> Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called the Planck constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a step in the development of quantum mechanics. In 1914, Robert A. Millikan's highly accurate measurements of the Planck constant from the photoelectric effect supported Einstein's model, even though a corpuscular theory of light was for Millikan, at the time, "quite unthinkable". [47] Einstein was awarded the 1921 Nobel Prize in Physics for "his discovery of the law of the photoelectric effect", [48] and Millikan was awarded the Nobel Prize in 1923 for "his work on the elementary charge of electricity and on the photoelectric effect".[49]
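The quoted relation is E = h·f, with ejection only above the threshold frequency (K_max = h·f − φ). A sketch with SI constants; the potassium work function of ~2.3 eV is an illustrative textbook value:

```python
# Einstein's photoelectric relation: E_photon = h*f; electrons are ejected
# only above the threshold frequency, with K_max = h*f - work_function.
H = 6.62607015e-34    # Planck constant, J*s (exact since the 2019 SI redefinition)
EV = 1.602176634e-19  # joules per electronvolt

def max_kinetic_energy_ev(freq_hz, work_function_ev):
    """Max ejected-electron energy in eV; a negative value means no emission."""
    return (H * freq_hz) / EV - work_function_ev

# Violet light (~7.3e14 Hz) on potassium (work function ~2.3 eV):
print(f"K_max = {max_kinetic_energy_ev(7.3e14, 2.3):.2f} eV")  # positive: electrons ejected

# Bright red light (~4.0e14 Hz) on the same surface:
print(max_kinetic_energy_ev(4.0e14, 2.3) > 0)  # False: below threshold, no matter the intensity
```

The second case is the historically decisive point: intensity adds more photons, not more energy per photon, so below-threshold light ejects nothing.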
Does this help to explain the genetic diversity of the tropical latitudes; is the genetic mutation rate higher in the presence of gamma radiation?
So many of our plants and flowers (here in North America) originate from rainforests and tropical latitudes, but survive at current temps for northern latitudes.
No, the effect is insignificant. The tropics are diverse because there is more biomass.
/? is the genetic mutation rate higher in the presence of gamma radiation: https://www.google.com/search?q=is%20the%20genetic%20mutatio... :
- "Frequency and Spectrum of Mutations Induced by Gamma Rays Revealed by Phenotype Screening and Whole-Genome Re-Sequencing in Arabidopsis thaliana" (2022) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8775868/ :
> Gamma rays have been widely used as a physical agent for mutation creation in plants, and their mutagenic effect has attracted extensive attention. However, few studies are available on the comprehensive mutation profile at both the large-scale phenotype mutation screening and whole-genome mutation scanning.
- "Bedrock radioactivity influences the rate and spectrum of mutation" (2020) https://elifesciences.org/articles/56830
- "Comparison of mutation spectra induced by gamma-rays and carbon ion beams" (2024) https://academic.oup.com/jrr/article/65/4/491/7701037 :
> These results suggest that carbon ion beams produce complex DNA damage, and gamma-rays are prone to single oxidative base damage, such as 8-oxoguanine. Carbon ion beams can also introduce oxidative base damage, and the damage species is 5-hydroxycytosine. This was consistent with our previous results of DNA damage caused by heavy ion beams. We confirmed the causal DNA damage by mass spectrometry for these mutations.
Homemade AI drone software finds people when search and rescue teams can't
{Code-and-Response, Call-for-Code}/DroneAid : "DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2010) https://github.com/Code-and-Response/DroneAid .. https://westurner.github.io/hnlog/#story-22707347 :
CORRECTION: All but one of the DroneAid Symbol Language Symbols are drawn within upward pointing triangles.
Is there a simpler set of QR codes for the ground that could be made with sticks or rocks or things the wind won't bend?
Show HN: Compiling C in the browser using WebAssembly
Cling (the interactive C++ interpreter) should also compile to WASM.
There's a xeus-cling Jupyter kernel, which supports interactive C++ in notebooks: https://github.com/jupyter-xeus/xeus-cling
There's not yet a JupyterLite (WASM) kernel for C or C++.
Forget Superconductors: Electrons Living on the Edge Could Unlock Perfect Power
ScholarlyArticle: "Observation of chiral edge transport in a rapidly rotating quantum gas" (2024) https://www.nature.com/articles/s41567-024-02617-7
- "Electrical switching of the edge current chirality in quantum Hall insulators" (2023) https://news.ycombinator.com/item?id=38139569
Does this affect the standing issue of whether current flows mostly through the bulk of a wire or mostly along its surface, and whether stranded wire is better for a given application?
Copper conductor: https://en.wikipedia.org/wiki/Copper_conductor
Skin effect: https://en.wikipedia.org/wiki/Skin_effect :
> In electromagnetism, skin effect is the tendency of an alternating electric current (AC) to become distributed within a conductor such that the current density is largest near the surface of the conductor and decreases exponentially with greater depths in the conductor. It is caused by opposing eddy currents induced by the changing magnetic field resulting from the alternating current. The electric current flows mainly at the skin of the conductor, between the outer surface and a level called the skin depth.
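The skin depth described above is δ = sqrt(ρ / (π·f·μ)); plugging in standard handbook figures for copper at room temperature shows why stranding matters at radio frequencies but barely at mains frequency:

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu)), the depth at which AC
# current density has fallen to 1/e of its surface value.
RHO_COPPER = 1.68e-8        # ohm*m, resistivity of copper at 20 C
MU_0 = 4 * math.pi * 1e-7   # H/m; copper is essentially non-magnetic

def skin_depth_m(freq_hz):
    return math.sqrt(RHO_COPPER / (math.pi * freq_hz * MU_0))

for f in (60, 1e3, 1e6):
    print(f"{f:>9.0f} Hz: skin depth = {skin_depth_m(f) * 1e3:.3f} mm")
```

At 60 Hz the skin depth is about 8.4 mm, so only very thick conductors are affected; at 1 MHz it is tens of microns, which is why RF work uses litz wire or hollow conductors.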
No evidence social media time is correlated with teen mental health problems
The APA lost its credibility some time ago with its political stances. It doesn't take a researcher to see a correlation between phone time and youth issues.
It takes a professional to determine whether harassing them about their electronics is the actual cause of the behaviors in question.
From "New Mexico: Psychologists to dress up as wizards when providing expert testimony" https://news.ycombinator.com/item?id=40085678 :
> Clean Language
LG Chem develops material capable of suppressing thermal runaway in batteries
Does this prevent lithium - water reaction? Is that fire, or?
When current gen EV batteries catch fire in fresh and saltwater flood waters, is that due to thermal runaway?
(Twisted SWCNT carbon nanotubes don't have these risks at all FWIU)
Twisted SWCNT is still mostly a concept. Doesn't even have proof of concept in the lab. According to people involved in it, it may take a couple of decades to go to market. But I agree, it would be great to have " wind up" cars, for example.
Business idea: form dense shipping containers of addressed arrays of [twisted SWCNT, rhombohedral trilayer graphene, or double-gated graphene]? They would need: anodes, industry standard commercial and residential solar connectors, water-safe EV charge connectors, and/or "Mega pack" connectors to jump charge semi-trucks from a container on a flatbed or a sprinter van.
Branded, stackable Intermodal shipping containers with charge controllers
It's probably already possible to thermoform and ground biocomposite shipping containers, with ocean recovery loops?
Bast fiber anodes for super capacitors are inexpensive, and there may be something to learn from their naturally branching shape. https://www.google.com/search?q=bast%20fiber%20supercapacito...
"Scientists grow carbon nanotube forest much longer than any other" (2020) https://www.nanowerk.com/nanotechnology-news2/newsid=56546.p...
"Growing ultra-long carbon nanotubes" https://youtube.com/watch?v=VyULskYuGvg
"Ultra-long carbon nanotube forest via in situ supplements of iron and aluminum vapor sources" (2020) https://dx.doi.org/doi:10.1016/j.carbon.2020.10.066 .. https://scholar.google.com/citations?hl=en&user=1zqRM3UAAAAJ...
https://news.ycombinator.com/item?id=41159447 :
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
> 583 Wh/kg
Yes, obvious once. Only light years away, I'm afraid. Try and get funding for that. GLWT
> In its announcement, LG Chem has reported that in both battery impact and penetration tests, the batteries equipped with the thermal runaway suppression material either did not catch fire at all or extinguished the flames shortly after they appeared, preventing a full-blown thermal runaway event.
> The material comes in the form of a thin layer, just 1 micrometer (1μm) thick – about one hundredth the thickness of human hair – positioned between the cathode layer and the current collector (an aluminum foil that acts as the electron pathway). When the battery’s temperature rises beyond the normal range, between 90°C and 130°C, the material reacts to the heat, altering its molecular structure and effectively suppressing the flow of current, LG Chem said.
> The material is described as highly responsive to temperature, with its electrical resistance increasing by 5,000 ohms (Ω) for every 1°C rise in temperature. The material’s maximum resistance is over 1,000 times higher than at normal temperatures, and it also features reversibility, meaning the resistance decreases and returns to its original state, allowing the current to flow normally again once the temperature drops.
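The reversible switching behavior quoted above can be sketched as a simple piecewise model. The article gives only the onset range and the ~5,000 Ω/°C slope, so the baseline resistance `R_NORMAL` below is a hypothetical placeholder:

```python
# Sketch of the safety layer's temperature response, per the article's numbers:
# normal conduction below the ~90 C trigger, then resistance climbing
# ~5000 ohm per deg C, reversibly, until current is effectively cut off.
R_NORMAL = 1.0    # ohm; hypothetical baseline (not given in the article)
TRIGGER_C = 90.0  # onset of the response, low end of the article's 90-130 C range
SLOPE = 5000.0    # ohm per deg C above the trigger, per the article

def layer_resistance(temp_c):
    if temp_c <= TRIGGER_C:
        return R_NORMAL
    return R_NORMAL + SLOPE * (temp_c - TRIGGER_C)

for t in (25, 90, 95, 130):
    r = layer_resistance(t)
    print(f"{t:>3} C: {r:>10.0f} ohm ({r / R_NORMAL:,.0f}x normal)")
```

Even a few degrees above the trigger, resistance is tens of thousands of times the baseline, consistent with the ">1,000x" figure; cooling back down restores `R_NORMAL` because the model is a pure function of temperature.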
To the best of my searching, a typical lithium ion battery has roughly 20-30 cathode-and-current-conductor layers. So naively this would add less than 50 microns to the battery thickness.
Also, the open-access article is here: https://www.nature.com/articles/s41467-024-52766-9
ScholarlyArticle: "Thermal runaway prevention through scalable fabrication of safety reinforced layer in practical Li-ion batteries" (2024) https://www.nature.com/articles/s41467-024-52766-9
Dance training superior to physical exercise in inducing brain plasticity (2018)
I do MRI work, and my gut is that none of the claims about dance vs. exercise would replicate. The behavioral data suggests that activity of some type will improve cognitive function (main effects of time). Such beneficial effects of activity on the brain have been shown before, and this is generally accepted. However, the authors' behavioral data doesn't show any difference between the dance vs. exercise groups. This means that the study is overall off to a pretty bad start if their goal is to study dance vs. exercise differences...
The brain data claims to show that the dance vs. exercise groups showed different levels of improvement in various regions. However, the brain effects are tiny and are probably just random noise (I'm referring to those red spots, which are very small and almost certainly don't reflect proper correction for multiple hypotheses given that the authors effectively tested 1000s or 10000s of different areas). The authors' claims about BDNF are supported by a p-value of p = .046, and having main conclusions hinge on p-values of p > .01 usually means the conclusions are rubbish.
In general, my priors on "we can detect subtle changes in brain matter over a 6-week period" are also very low. Perhaps, a study with this sample size could show that activity of some kind influences the brain over such a short length, but I am extremely skeptical that this type of study could detect differences between dance vs. exercise effects.
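The multiple-comparisons point is easy to demonstrate: with pure noise and thousands of uncorrected tests at α = 0.05, hundreds of "voxels" come out significant by chance, while a Bonferroni correction (α divided by the number of tests) removes nearly all of them. A simulation sketch:

```python
import random

# Under the null hypothesis, p-values are uniform on [0, 1], so a fraction
# alpha of pure-noise tests will look "significant" without correction.
rng = random.Random(1)
n_tests = 10_000
alpha = 0.05

p_values = [rng.random() for _ in range(n_tests)]

naive_hits = sum(p < alpha for p in p_values)
bonferroni_hits = sum(p < alpha / n_tests for p in p_values)

print(f"uncorrected 'significant' results: {naive_hits}")      # ~500 expected by chance
print(f"Bonferroni-corrected results:      {bonferroni_hits}")  # ~0 expected
```

This is why small red blobs surviving only an uncorrected (or weakly corrected) threshold across thousands of brain regions warrant skepticism.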
Cardiovascular exercise correlates with subsequent synthesis of endocannabinoids, which affect hippocampal neurogeneration and probably thereby neuroplasticity.
From "Environment shapes emotional cognitive abilities more than genes" (2024) https://news.ycombinator.com/item?id=40105068#40107602 :
> hippocampal plasticity and hippocampal neurogenesis also appear to be affected by dancing and omega-3,6 (which are transformed into endocannabinoids by the body): https://news.ycombinator.com/item?id=15109698
Also, isn't there selection bias to observational dance studies? If not in good general health, is a person likely to pursue regular dancing? Though, dance lifts mood and gets the cardiovascular system going.
Dance involves spatial sense and proprioception and probably all of the senses.
[deleted]
John Wheeler saw the tear in reality
/? "Wheeler's bags of gold" https://www.google.com/search?q=wheelers+bags+of+gold
Holographic principle: https://en.wikipedia.org/wiki/Holographic_principle :
> The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood.
Evidence of 'Negative Time' Found in Quantum Physics Experiment
"Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://arxiv.org/abs/2409.03680
Additional indications of retrocausality:
- "Robust continuous time crystal in an electron–nuclear spin system" (2024) https://www.nature.com/articles/s41567-023-02351-6 .. https://news.ycombinator.com/item?id=39291044
"3D waves flowing over Sierpinski carpets" gives intuition about (convolutional) wave products or outcomes but doesn't show any retrocausality from here: https://youtube.com/watch?v=8yddkbwrqss&
An adult fruit fly brain has been mapped
It was my understanding that all this connectome-based research was largely a dead end, because it doesn't capture dynamics, nor a vast array of interactions. If you've ever seen neurones being grown (go search YT), you'll see it's a massive gelatinous structure which is highly plastic and highly dynamic. Even in the simplest brains (e.g., C. elegans), you get exponential growth in the number of neurones and their connections as it grows.
From https://news.ycombinator.com/item?id=35877402#35886145 :
> So, to run the same [fMRI, NIRS,] stimulus response activation observation/burn-in again weeks or months later with the same subjects is likely necessary given Representational drift
And isn't there n-ary entanglement?
Is the world really running out of sand?
We fools here in Germany sometimes _pay_ to get rid of excess electricity when it's very sunny and windy. How about having some rock crushing machines that instead use that cheap electricity to make more sand?
Thanks for the puns, too.
Free power doesn’t mean free production.
There’s also labor, wear on the machines, and the lost opportunity of using your money to do something else. Building such crushing machines and only using them x% of the time (for, for now, fairly small values of x) may not be a good investment.
So, energy storage should be lucrative and incentivized, so that we solve the Duck Curve and Alligator Curve problems, and the problem of oversupply forcing the grid price to zero or unprofitably below.
For firms with depreciating datacenter assets that are underutilized, a Research Coin like Grid Coin or an @home distributed computation project can offset costs
Datacenters scrap old compute - for which there's hardly any electronics recycling - rather than keeping it online, due to the relative cost in FLOPS/kWh and the cost of conditioned space.
Doesn't electronics recycling recover the silicon?
Can we make CPUs out of graphene made out of recycled plastic?
Can we make superconductive carbon computers that waste less electricity as heat?
Hopefully, Proof of Work miners that aren't operating on grants have an incentive to maximize energy efficiency with graphene ASICs and FPGAs and now TPUs
Make Pottery at Home Without a Kiln (Or Anything Else) [video]
Title: "Without a Kiln" Video: Build a kiln at backyard.
Boo. It is not a solution if you live in a multi-story house in the center of a big city (and a small electric kiln IS a solution in this case).
Pit firing pottery isn't firing with a kiln; it fires pottery to much lower temperatures.
And yes, not every solution is for everybody... For people that don't live in a city, doing this in their backyard is more accessible than buying equipment.
finding a class that uses a shared space kiln would likely be cheaper and more rewarding
Some pottery kiln places will only fire their own clay, which must be to spec.
And safety glasses to handle warm clay that's been heated at all in a kiln or a fire.
Looked at making unglazed terracotta ollas for irrigation and couldn't decide whether a 1/4" silicone microsprinkler tubing port should go through the lid or the side.
Terracotta filters water, so presumably ollas would need to be recycled eventually due to the brawndo in the tap water and rainwater.
/? how to filter water with a terracotta pot
It looks like only the Nat Geo pottery wheel has a spot to attach a wooden guide to turn against; the commercial pottery wheels don't have a place to attach attachments that are needed for pottery.
Also neat primitive pottery skills: Primitive Skills, Primitive Technology
"Primitive Skills: Piston Bellows (Fuigo)" https://youtube.com/watch?v=CHdmlnAA010&
"Primitive Technology: Water Bellows smelt" https://youtube.com/watch?v=UdjVnGoNvU4&
Megalithic Geopolymers require water glass FWIU
/? how to make concrete planters
But rectangularly-formed concrete doesn't filter water like unglazed terracotta
The first proposal for a solar system domain name system
An Interplanetary DNS Model: https://www.ietf.org/archive/id/draft-johnson-dtn-interplane...
W3C DID Decentralized Identifiers sk/pk can be generated offline and then the pk can optionally be registered online. If there is no connection to the broader internet - as is sometimes the case during solar storms in space - offline key generation without a central authority would ensure ongoing operations.
Blockcerts can be verified offline.
ENS (Ethereum Name Service) avoids various pitfalls of DNS, DNSCrypt, DNSSEC, and DoH and DoT (which depend upon NTP, which depends upon DoH, which depends upon the X.509 cert validity interval and the system time being set correctly).
Dapps don't need DNS and are stored in the synchronized (*) chain.
From https://news.ycombinator.com/item?id=32753994 :
> W3C DID pubkeys can be stored in a W3C SOLID LDPC: https://solid.github.io/did-method-solid/
> Re: W3C DIDs and PKI systems; CT Certificate Transparency w/ Merkle hashes in google/trillian and edns,
And then keys must actually handle money in space, because how do people pay for limited-time leases on domain names in space?
In addition to Peering, Clearing, and Settlement, ILP Interledger Protocol Specifies Addresses: https://news.ycombinator.com/item?id=36503888
> ILP is not tied to a single company, payment network, or currency
ILP Addresses - v2.0.0 > Allocation Schemes: https://github.com/interledger/rfcs/blob/main/0015-ilp-addre...
Essential node in global semiconductor supply chain hit by Hurricane Helene
Possibly a dumb question, but why can't we grow synthetic quartz in the same way we grow silicon wafers? Or can we and it's just not cost effective vs mining?
We can but synthetic quartz faces the same problem as hydrocarbon fuels: we can make synthetic natural gas if we use enough energy, or we could exploit the geological processes that created it over millions of years and extract it.
High-purity quartz from areas like Spruce Pine typically forms in pegmatites, where slow cooling of magma allows large, defect-free crystals to form. Hydrothermal fluids permeate these rocks while they’re cooling, effectively leaching out impurities. If the geochemistry is just right, over millions of years, this process repeats several times creating very high purity quartz deposits that are very difficult to replicate in laboratory conditions.
Can nanoassembly produce quartz more efficiently?
/? nanoassembly of quartz https://www.google.com/search?q=nanoassembly%20of%20quartz mentions hydrothermal synthesis,
Which other natural processes affect the formation of quartz and gold?
From https://news.ycombinator.com/item?id=41437036#41442489 :
> "Gold nugget formation from earthquake-induced piezoelectricity in quartz" (2024) https://www.nature.com/articles/s41561-024-01514-1 [...]
> Are there phononic excitations from earthquake-induced piezoelectric and pyroelectric effects?
> Do (surface) plasmon polaritons SPhP affect the interactions between quartz and gold, given heat and vibration as from an earthquake?
> "Extreme light confinement and control in low-symmetry phonon-polaritonic crystals" like quartz https://arxiv.org/html/2312.06805v2
The best browser bookmarking system is files
I completely disagree.
If the built-in bookmark systems in browsers could support tags, then I would say yes. However, it currently only supports a basic tree concept, with "folders" for links.
This is very one-dimensional. I read loads of articles that talk about multiple topics, especially Hacker News-type articles :). An article can talk about, say, geopolitics. As an example, perhaps an article on the recent pagers that exploded in Lebanon. This article may also discuss some cybersecurity topics too. In this case I may want to tag it with 1->n tags.
I currently use Raindrop.io. It kinda works, but it doesn't really have what I have in mind. It also has more features than I think I need from a bookmarking app.
I kinda feel that Digg (way back, it was one of the first 'Web 2.0' sites) had a model that could work.
If I had enough motivation, I think I could probably produce a simple app that does tagging, and only tagging, with bookmarks.
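A tag-only bookmark model like that is small enough to sketch with the stdlib (the class name and sample data here are illustrative, not from any real app):

```python
from collections import defaultdict

class TagStore:
    """Minimal tag-only bookmark store: a URL carries 1..n tags,
    and lookup is by tag rather than by folder."""
    def __init__(self):
        self.tags = defaultdict(set)   # tag -> set of URLs

    def add(self, url, *tags):
        for t in tags:
            self.tags[t].add(url)

    def by_tag(self, *tags):
        # URLs carrying ALL of the given tags (set intersection)
        sets = [self.tags[t] for t in tags]
        return set.intersection(*sets) if sets else set()

store = TagStore()
store.add("https://example.com/pagers", "geopolitics", "cybersecurity")
store.add("https://example.com/quartz", "geology")
print(store.by_tag("geopolitics", "cybersecurity"))
```

Intersection queries give the multi-topic lookup that a folder tree can't express.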
Not sure about other browsers, but Firefox's built-in bookmarks support tags - no need for external apps.
Firefox can store bookmark tags, but they don't come along with the bookmark export unless you read the SQLite database with a different tool: "Allow reading and writing bookmark tags" (9 years ago) https://bugzilla.mozilla.org/show_bug.cgi?id=1225916
With bookmarks as JSONLD Linked Data, it's simple to JOIN with additional data about a given URI.
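A minimal sketch of that JOIN, assuming an illustrative JSON-LD-ish record shape keyed by `@id` (the field names are made up, not a real vocabulary):

```python
import json

# Bookmarks as linked-data-style records keyed by URI ("@id"),
# joined with additional per-URI data from another source.
bookmarks = [
    {"@id": "https://example.com/a", "name": "A", "tags": ["waves"]},
    {"@id": "https://example.com/b", "name": "B", "tags": ["quartz"]},
]
extra = {"https://example.com/a": {"lastVisited": "2024-10-01"}}

# JOIN on the URI: merge extra attributes into matching records
joined = [{**b, **extra.get(b["@id"], {})} for b in bookmarks]
print(json.dumps(joined, indent=2))
```

Because the key is a URI rather than a folder path, any dataset keyed by URL can be merged in the same way.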
The WebExtensions Bookmark API does not yet support tags.
Firefox's bookmarks manager exports tags just fine (whether to JSON or the bookmarks.html format). WebExtension APIs are a completely separate issue.
Show HN: Iceoryx2 – Fast IPC Library for Rust, C++, and C
Hello everyone,
Today we released iceoryx2 v0.4!
iceoryx2 is a service-based inter-process communication (IPC) library designed to make communication between processes as fast as possible - like Unix domain sockets or message queues, but orders of magnitude faster and easier to use. It also comes with advanced features such as circular buffers, history, event notifications, publish-subscribe messaging, and a decentralized architecture with no need for a broker.
For example, if you're working in robotics and need to process frames from a camera across multiple processes, iceoryx2 makes it simple to set that up. Need to retain only the latest three camera images? No problem - circular buffers prevent your memory from overflowing, even if a process is lagging. The history feature ensures you get the last three images immediately after connecting to the camera service, as long as they’re still available.
Another great use case is for GUI applications, such as window managers or editors. If you want to support plugins in multiple languages, iceoryx2 allows you to connect processes - perhaps to remotely control your editor or window manager. Best of all, thanks to zero-copy communication, you can transfer gigabytes of data with incredibly low latency.
Speaking of latency, on some systems, we've achieved latency below 100ns when sending data between processes - and we haven't even begun serious performance optimizations yet. So, there’s still room for improvement! If you’re in high-frequency trading or any other use case where ultra-low latency matters, iceoryx2 might be just what you need.
If you’re curious to learn more about the new features and what’s coming next, check out the full iceoryx2 v0.4 release announcement.
Elfenpiff
Links:
* GitHub: https://github.com/eclipse-iceoryx/iceoryx2 * iceoryx2 v0.4 release announcement: https://ekxide.io/blog/iceoryx2-0-4-release/ * crates.io: https://crates.io/crates/iceoryx2 * docs.rs: https://docs.rs/iceoryx2/0.4.0/iceoryx2/
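A toy stdlib illustration of the zero-copy shared-memory idea that underlies iceoryx2 (iceoryx2 itself adds lock-free queues, service discovery, and lifetime management; both handles live in one process here for brevity):

```python
from multiprocessing import shared_memory

# Producer creates a shared-memory segment and writes in place.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Consumer attaches to the same segment by name: "receiving" the
# payload is just reading the same mapped bytes -- no socket copy.
peer = shared_memory.SharedMemory(name=shm.name)
received = bytes(peer.buf[:5])
print(received)

peer.close()
shm.close()
shm.unlink()
```

This is the mechanism that makes transferring gigabytes with sub-microsecond latency possible: the data never crosses a kernel socket buffer.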
How does this compare to and/or integrate with Apache Arrow, which had "Arrow Plasma IPC" and is supported by pandas with dtype_backend="pyarrow", lancedb/lancedb, and Serde.rs? https://serde.rs/#data-formats
Another IPC bus: "Varlink – IPC to replace D-Bus gradually in systemd" https://news.ycombinator.com/item?id=41687413
Serde does serialization and deserialization with many formats in Rust
CBD shows promise as pesticide for mosquitoes
CBD is a differentiated byproduct of CBGA FWIU.
For: Epilepsy, Seizures, Schizophrenia, Mosquito control
Bats, Dragonflies, and Mosquito Fish all eat mosquitos.
Watercress roots filter water and harbor algae which fish eat too.
Algae produce fat-soluble DHA and EPA Omega-3 PUFAs (polyunsaturated fatty acids), from which the mammalian (and almost the Drosophila) ECS (endocannabinoid system) appears to synthesize endocannabinoids; and endocannabinoid levels are sensitive to dietary levels of Omega-6s and Omega-3s.
Minnows, goldfish, guppies, bass, bluegill, and catfish all eat mosquito larvae.
CBD kills mosquitoes?
Early mammals - the water rat - probably ate small fish like minnows and sardines, which are high in Omega 3s.
Was this knowledge known by belligerents in a historical conflict?
FWIW this year I learned that dish soap kills wasps, ants, and other insects.
Does dish soap kill mosquitoes like other insects?
Scientific publishing has been gamed to advance scientists’ careers
Grants; Impact Factor; can't publish or apply for a grant without university affiliation; exclusivity agreements: you can only upload the preprint to arXiv, because you paid a journal an application fee and they're going to take all of the returns from distributing your work with threaded comments.
What other industry must pay an application fee to give an exclusive to a government-granted Open Access research study?
You should tell them that they must pay you.
(And that they must host [linked] data, and reproducible containers, and code.)
Text2CAD: Generating sequential cad designs from text prompts
From https://news.ycombinator.com/item?id=40131766 re: LLM minifigs and parametric CAD parts libraries:
> Is there a blenderGPT-like tool trained on build123d Python models?
> ai-game-development tools lists a few CAD LLM apps like blenderGPT and blender-GPT: https://github.com/Yuan-ManX/ai-game-development-tools#3d-mo...
REPL for Dart
Even though I don’t use Dart myself I very much approve of this initiative.
I live in my Python REPL (ipython) and I couldn’t live without my hot reloading without losing state. Allows for hacking about a bit directly in the REPL. Once something starts to take shape you can push it into a file, import it into your REPL then hack about with it in a text editor.
The hot reloading is great when you have a load of state in classes and you can change code in the editor and have that immediately updated on your existing objects in the REPL.
jupyter-dart-kernel: https://github.com/vickumar1981/jupyter-dart-kernel :
pip install jupyter-console jupyterlab
pip install -e git+https://github.com/vickumar1981/jupyter-dart-kernel#egg=jupyterdartkernel
jupyter console --kernel jupyterdartkernel
jupyter kernelspec list
IPython/extensions/autoreload.py: https://github.com/ipython/ipython/blob/main/IPython/extensi... docs: https://ipython.readthedocs.io/en/stable/config/extensions/a...
Ocean waves grow way beyond known limits
> The findings could have implications for how offshore structures are designed, weather forecasting and climate modeling, while also affecting our fundamental understanding of several ocean processes.
Does this translate to terrestrial and deep space signals? FWIU there's research in rogue waves applied to EM waves and DSN, too
What is the lowest power [parallel] transmission that results in such rogue wave effects, with consideration for local RF regulations?
> Professor Ton van den Bremer, a researcher from TU Delft, says the phenomenon is unprecedented, "Once a conventional wave breaks, it forms a white cap, and there is no way back. But when a wave with a high directional spreading breaks, it can keep growing."
> Three-dimensional waves occur due to waves propagating in different directions. The extreme form of this is when wave systems are "crossing," which occurs in situations where wave system meet or where winds suddenly change direction, such as during a hurricane. The more spread out the directions of these waves, the larger the resulting wave can become.
ScholarlyArticle: "Three-dimensional wave breaking" (2024) https://www.nature.com/articles/s41586-024-07886-z
Rogue wave > 21st century, Other uses of the term "rogue wave": https://en.wikipedia.org/wiki/Rogue_wave
> See also links to Branched flow and Resonance
Branched flow: https://en.wikipedia.org/wiki/Branched_flow :
> Branched flow refers to a phenomenon in wave dynamics, that produces a tree-like pattern involving successive mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area preserving way.
Are vortices in Compressible and Incompressible fluids area preserving?
(Which brings us to fluid dynamics and superhydrodynamics or better i.e. superfluid quantum gravity with Bernoulli and navier-stokes, and vortices of curl, and gravity (maybe with gravitons) if the particles are massful, otherwise we call it "radiation pressure" and "solar wind" and it also causes relative displacement)
Resonance: https://en.wikipedia.org/wiki/Resonance :
> For an oscillatory dynamical system driven by a time-varying external force, resonance occurs when the frequency of the external force coincides with the natural frequency of the system.
I doubt this is relevant for EM waves. Water waves are much more complicated because the wave speed in water is highly frequency dependent. That makes things much harder to model. Whilst in optics (and thus EM) people have been reasoning in terms of wavefronts in 3d for about half a century. Look at a YouTube channel like Huygens optics to see what a former professional can easily do in his shed.
Photon waves are EM waves and particles, by the Particle-Wave Duality.
Photons behave like fluids in superfluids; "liquid light".
Also, the paths of photons are affected by the media of transmission: gravity, gravitational waves, and water fluid waves bend light.
Photonic transmission and retransmission occurs at least in part by phononic excitation of solids and fluids; but in the vacuum of space if there is no mass, how do quanta propagate?
Optical rogue waves > Principles: https://en.wikipedia.org/wiki/Optical_rogue_waves :
> Supercontinuum generation with long pulses: Supercontinuum generation is a nonlinear process in which intense input light, usually pulsed, is broadened into a wideband spectrum. The broadening process can involve different pathways depending on the experimental conditions, yielding varying output properties. Especially large broadening factors can be realized by launching narrowband pump radiation (long pulses or continuous-wave radiation) into a nonlinear fiber at or near its zero-dispersion wavelength or in the anomalous dispersion regime. Such dispersive characteristics support modulation instability, which amplifies input noise and forms Stokes and anti-Stokes sidebands around the pump wavelength. This amplification process, manifested in the time domain as a growing modulation on the envelope of the input pulse, then leads to the generation of high-order solitons, which break apart into fundamental solitons and coupled dispersive radiation
FWIU photonic superradiance is also due to critical condition(s).
Re: Huygens and photons; https://news.ycombinator.com/item?id=40492160 :
>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity [given Huygens' applied]
TODO: remember the name of the photonic effect of self-convolution and nonlinearity and how the wave function interferes with itself in the interval [0,1.0] whereas EM waves are [-1,1] or log10 [-inf, inf].
Coherence, Wave diffraction vs. wave interference > Superposition principle: https://en.wikipedia.org/wiki/Superposition_principle#Wave_d...
Refactoring Python with Tree-sitter and Jedi
Would’ve been easy with fastmod: https://github.com/facebookincubator/fastmod
> I do wish tree-sitter had a mechanism to directly manipulate the AST. I was unable to simply rename/delete nodes and then write the AST back to disk. Instead I had to use Jedi or manually edit the source (and then deal with nasty off-set re-parsing logic).
Or libCST: https://github.com/Instagram/LibCST docs: https://libcst.readthedocs.io/en/latest/ :
> LibCST parses Python 3.0 -> 3.12 source code as a CST tree that keeps all formatting details (comments, whitespaces, parentheses, etc). It’s useful for building automated refactoring (codemod) applications and linters.
libcst_transformer.py: https://gist.github.com/sangwoo-joh/26e9007ebc2de256b0b3deed... :
> example code for renaming variables using libcst [w/ Visitors and Transformers]
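For contrast, a stdlib-only sketch of the same rename with `ast.NodeTransformer`; note that `ast.unparse()` drops comments and formatting, which is exactly the gap libCST's lossless CST fills:

```python
import ast

class Rename(ast.NodeTransformer):
    """Rename every occurrence of one identifier (stdlib-only sketch)."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

src = "x = 1  # a comment\nprint(x)"
tree = Rename("x", "total").visit(ast.parse(src))
out = ast.unparse(tree)
print(out)  # the comment is gone; the variable is renamed in both places
```

A libCST transformer does the same traversal but round-trips comments, whitespace, and parentheses byte-for-byte.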
Refactoring because it doesn't pass formal verification: https://deal.readthedocs.io/basic/verification.html#backgrou... :
> 2021. deal-solver. We released a tool that converts Python code (including deal contracts) into Z3 theorems that can be formally verified
Vim python-mode: https://github.com/python-mode/python-mode/blob/e01c27e8c17b... :
> Pymode can rename everything: classes, functions, modules, packages, methods, variables and keyword arguments.
> Keymap for rename method/function/class/variables under cursor
let g:pymode_rope_rename_bind = '<C-c>rr'
python-rope/ropevim also has mappings for refactorings like renaming a variable: https://github.com/python-rope/ropevim#keybinding : C-c r r :RopeRename
C-c f find occurrences
https://github.com/python-rope/ropevim#finding-occurrences
Their README now recommends pylsp-rope:
> If you are using ropevim, consider using pylsp-rope in Vim
python-rope/pylsp-rope: https://github.com/python-rope/pylsp-rope :
> Finding Occurrences: The find occurrences command (C-c f by default) can be used to find the occurrences of a python name. If unsure option is yes, it will also show unsure occurrences; unsure occurrences are indicated with a ? mark in the end. Note that ropevim uses the quickfix feature of vim for marking occurrence locations. [...]
> Rename: When Rename is triggered, rename the symbol under the cursor. If the symbol under the cursor points to a module/package, it will move that module/package files
SpaceVim > Available Layers > lang#python > LSP key Bindings: https://spacevim.org/layers/lang/python/#lsp-key-bindings :
SPC l e rename symbol
Vscode Python variable renaming:
Vscode tips and tricks > Multi cursor selection: https://code.visualstudio.com/docs/getstarted/tips-and-trick... :
> You can add additional cursors to all occurrences of the current selection with Ctrl+Shift+L. [And then rename the occurrences in the local file]
https://code.visualstudio.com/docs/editor/refactoring#_renam... :
> Rename symbol: Renaming is a common operation related to refactoring source code, and VS Code has a separate Rename Symbol command (F2). Some languages support renaming a symbol across files. Press F2, type the new desired name, and press Enter. All instances of the symbol across all files will be renamed
But does Rope understand pytest fixtures? I doubt it, but would be happy to be proven wrong.
Yeah, there `sed` and `git diff` with one or more filenames in a variable might do.
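A hedged sketch of that blunt textual rename with the stdlib instead of `sed` (word boundaries avoid substring hits, but unlike rope there is no scope awareness; the file and fixture names below are made up):

```python
import re
import tempfile
from pathlib import Path

def rename_identifier(root, old, new):
    """Blunt sed-style rename across .py files under root.
    \\b anchors prevent renaming substrings of longer names."""
    pat = re.compile(rf"\b{re.escape(old)}\b")
    for path in Path(root).rglob("*.py"):
        text = path.read_text()
        if pat.search(text):
            path.write_text(pat.sub(new, text))

# Demo on a throwaway test file with a fixture-style parameter
with tempfile.TemporaryDirectory() as d:
    f = Path(d, "test_example.py")
    f.write_text("def test_db(db_fixture):\n    assert db_fixture\n")
    rename_identifier(d, "db_fixture", "db_conn")
    result = f.read_text()

print(result)
```

This renames the fixture definition and every by-name parameter reference in one pass, which is the part rope's scope analysis doesn't cover for pytest's string-matched injection.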
Because pytest injects fixtures by matching parameter names, renaming fixtures is tough; and in Jupyter notebooks the `%%ipytest` magic is necessary to collect and run functions that start with `test_` with pytest's assertion rewriting applied. E.g. `assert a == b, error_expr` is rewritten at the AST level on import (not literally into unittest's `assertEqual(a, b, error_expr)`) so that the AssertionError message shows the compared values, even for comparisons of large lists and strings.
Ask HN: What is the best online Vectorizer (.Png to .Svg)?
From https://news.ycombinator.com/item?id=38377307#38400435 (2023) :
> TIL about https://github.com/fogleman/primitive from "Comparison of raster-to-vector conversion software" https://en.wikipedia.org/wiki/Comparison_of_raster-to-vector... which does already list vtracer (2020)
The perils of transition to 64-bit time_t
The way this was handled on Mac OS X for `off_t` and `ino_t` might provide some insight: The existing calls and structures using the types retained their behavior, new calls and types with `64` suffixes were added, and you could use a preprocessor macro to choose which calls and structs were actually referenced—but they were hardly ever used directly.
Instead, the OS and its SDK are versioned, and at build time you can also specify the earliest OS version your compiled binary needs to run on. So using this, the headers ensured the proper macros were selected automatically. (This is the same mechanism by which new/deprecated-in-some-version annotations would get set to enable weak linking for a symbol or to generate warnings for it respectively.)
And it was all handled initially via the preprocessor, though now the compilers have a much more sophisticated understanding of what Apple refers to as “API availability.” So it should be feasible to use the same mechanisms on any other platform too.
Lol, that only works if you can force everyone on the platform to go along with it. It is a nice solution, but it requires you to control the c library. Gentoo doesn’t control what libc does; that’s either GNU libc or MUSL or some other thing that the user wants to use.
> requires you to control the c library
Which is why basically every other sane operating system does that. BSDs, macOS, WinNT. Having the stable boundary be the kernel system call interface is fucking insane. And somehow the Linux userspace people keep failing to learn this lesson, no matter how many times they get clobbered in the face by the consequences of not learning it.
Making the stable boundary be C headers is insane!
It means that there's not actually any sort of ABI, only a C source API.
cibuildwheel builds separate manylinux (glibc, with a minimum-version tag) and musllinux (musl) packages because of this ABI split.
manylinux: https://github.com/pypa/manylinux :
> Python wheels that work on any linux (almost)
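The glibc-vs-musl split can be probed from Python itself; pip compares the running libc against the floor encoded in a manylinux tag (e.g. `manylinux_2_17` means "glibc >= 2.17"):

```python
import platform
import sysconfig

# On glibc systems this reports e.g. ('glibc', '2.35');
# on musl-based distros like Alpine it comes back empty.
libc = platform.libc_ver()

# The platform half of a wheel tag, e.g. 'linux-x86_64'
plat = sysconfig.get_platform()

print(libc, plat)
```

This is why a single "any linux" wheel is only "almost" possible: the tag has to name a libc family and version floor.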
Static Go binaries that make direct syscalls and do not depend upon libc or musl run within very minimal containers.
Fuchsia's syscall docs are nice too; Linux plus additional syscalls.
FFT-based ocean-wave rendering, implemented in Godot
20 years ago I could spend months tweaking ocean surface in renders and not get even close to that. Amazing how good this is!!
Although the demo clip feels a bit exaggerated (saying this having over 50k Nm open water ocean sailing in my logbook). Waves that sharp and high would need the wind blowing a lot stronger. But I am sure that is just a parameter adjustment away!
Since it is in Godot I assume the rendering is real time? Does it need a monster GPU?
This is not a criticism, just an observation: it looks like what I imagine an ocean of hot corn syrup would look like (after dyeing it blue). The viscosity seems right; possibly the surface tension is not what ocean water would have (a colloid of salty H2O and biomaterial, which is common in real-world experience but quite ugly for computational fluid dynamics).
Also note that the ocean spray here is a post-hoc effect, but for a real ocean the spray dulls the sharpness of the waves in a way that will be (vaguely) apparent visually.
Of course there's almost no "physics" in this elegant, simple, and highly effective model, so I want to emphasize that suggesting directions to poke around and try things should not be construed as an armchair criticism.
This is literally a criticism.
A) It would be a criticism if I thought these effects could be plausibly rendered with a similar FFT algorithm, but that seems unlikely to me. I think these results are "highly effective" given the toolset, which is not attempting to emulate the actual physics.
B) This project is not an all-out attempt to make lifelike water, it is described as an experiment. I am making an observation about the result of the experiment, not criticizing the project for failing to meet standards it wasn't holding itself to.
Neat that FFT yields great waves.
But ultimately, does that model capture vortices or other fluid dynamics?
Can this model a fluid vortex between 2-liter bottles with a 3d-printable plastic connector?
Curl, nonlinearity, Bernoulli, Navier-Stokes, and Gross-Pitaevskii are known tools for CFD computational fluid dynamics with Compressible and Incompressible fluids.
"Ocean waves grow way beyond known limits" (2024-09) https://news.ycombinator.com/item?id=41631177#41631975
"Gigantic Wave in Pacific Ocean Was the Most Extreme 'Rogue Wave' on Record" (2024-09) https://news.ycombinator.com/item?id=41548417#41550654
Keep in mind that this project is aimed at video game developers, not oceanographers :) The point is to get something cheap and plausible, not to solve Navier-Stokes with finite element methods.
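The cheap-and-plausible spectral idea reduces to summing sinusoids sampled from a decaying spectrum with random phases; here is a 1-D stdlib toy (real renderers, e.g. Tessendorf-style ocean synthesis, do this in 2-D with an inverse FFT for speed; the falloff exponent and mode count are arbitrary choices):

```python
import math
import random

def heightfield(n=64, modes=8, seed=42):
    """Periodic 1-D 'ocean' height samples: amplitudes fall off with
    wavenumber (roughly Phillips-spectrum-like), phases are random."""
    rng = random.Random(seed)
    amp = [1.0 / (k * k) for k in range(1, modes + 1)]
    phase = [rng.uniform(0, 2 * math.pi) for _ in range(modes)]
    return [
        sum(a * math.cos(2 * math.pi * (k + 1) * x / n + p)
            for k, (a, p) in enumerate(zip(amp, phase)))
        for x in range(n)
    ]

h = heightfield()
print(min(h), max(h))
```

No vortices, no Navier-Stokes: the surface is a linear superposition of waves, which is why it looks right from a distance but can't break or splash on its own.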
Show HN: Test your website on 180+ device viewports (with multi-device mode)
Hey HN! We do a lot of frontend testing, and one thing we find frustrating is the limited tooling available to observe how a URL renders on different viewports/devices simultaneously.
For example, we can use Chrome Inspector's "Device" previews, but we run into the following:
- Chrome doesn't keep those devices/viewports up to date, and adding new ones is pretty annoying (we've open sourced our collection of 180+ viewports/dimensions under MIT).
- You can’t see multiple viewports at once (and it’s cumbersome switching back and forth)
- When you notice a problem, there's no simple way to deep-link someone on your team to take a look at that viewport (likely have to open a Jira/Linear with the specifics)
- You can’t favourite viewports (for tracking the viewports specific to your project/client)
We’ve built a product with the goal of addressing these limitations (among others), and would love to hear your feedback.
Finally, feel free to use the open sourced data for your own projects :)
This reminded me of a very similar one but for the desktop from a few years back. Here you go;
Source at https://github.com/responsively-org/responsively-app
shot-scraper is a CLI tool for HTML render snapshots with --height --width and --scale-factor (--retina ~= --scale-factor=2.0) args, and a -j/--javascript option to run JS before taking the screenshot.
There's not yet a way to resize the viewport and take another screenshot without loading the page again.
There's not yet a way to store (h,w,scale,showbrowserchrome) configurations in a YAML file; but shot-scraper does work in GitHub Actions.
Some web testing tools can record a video of a test session and save it if there's a test failure.
E.g. Playwright defaults to an 800x600 viewport for test videos to retain-on-failure or record only if on-first-retry: https://playwright.dev/docs/videos
Something like Cloudflare Browser Isolation - which splits the browser at an interface such that a Chrome browser server runs on a remote server and a Chrome client runs locally - might make it easier to compress recordings of n x test invocations (in isolated browsers or workers)
This is awesome: thanks for these resources! We'll take a look at Cloudflare Browser Isolation for sure. We've played with screenshot tools at specific viewport dimensions etc, but it's not often the right fit for when we're doing interactive frontend responsive updates. That aside though, the screenshot APIs could be a great integration to be able to download/save screenshots of specific viewports.
DevTools has a list of default viewport sizes, but they change from browser release to release. Is there a better way or an existing [YAML] file schema to generate a list of viewport configurations to import into DevTools?
We looked into this, and we couldn't find a way to do so, but we've open sourced the data (here: https://github.com/bitcomplete/labs-delta-viewport-tester-vi...) so if you are able to find a way to do so, that could be a great option.
Hey thanks! https://github.com/bitcomplete/labs-delta-viewport-tester-vi...
Use case: adding a few of these to DevTools. IDK if there's a YAML parser in DevTools already; otherwise JSON + jq should work somehow
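Filtering such a viewport list in Python mirrors what `jq` would do (the field names below are assumed for illustration, not the repo's actual schema):

```python
import json

# Hypothetical viewport records, as they might appear in the
# open-sourced collection; the real schema may differ.
data = json.loads("""[
  {"name": "Phone S", "width": 360, "height": 780, "scale": 3},
  {"name": "Tablet M", "width": 820, "height": 1180, "scale": 2},
  {"name": "Desktop HD", "width": 1920, "height": 1080, "scale": 1}
]""")

# e.g. keep only narrow (phone-ish) viewports
names = [d["name"] for d in data if d["width"] < 500]
print(names)
```

The equivalent jq would be something like `jq '[.[] | select(.width < 500) | .name]'`.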
awesome-chrome-devtools: https://github.com/paulirish/awesome-chrome-devtools
Thanks for sharing this @westurner
Wasm2Mpy: Compiling WASM to MicroPython so it can run on Raspberrys
I feel like this project is a classic example of a README missing a "Why?" section. Not to justify the project; it just makes it hard to evaluate without understanding why they chose to implement a transpiler rather than an embedded WASM VM. Or why not transpile to native assembly? I'm sure they have good reasons, but they aren't listed here.
i think your questions are implicitly answered in the top page of the readme, but by showing rather than telling
esp32 and stm32 run three different native assembly instruction sets (though this only supports two of them, the others listed in the first line of the readme are all arm)
the coremark results seem like an adequate answer to why you'd use a compiler rather than an interpreter
i do think it would be clearer, though, to explain up front that the module being built is not made of python code, but rather callable from python code
> i do think it would be clearer, though, to explain up front that the module being built is not made of python code, but rather callable from python code
I know nothing about micropython. Do the modules contain bytecode?
In this case, the modules contain native code compiled for the target architecture. Micropython has something approximating a dynamic linker to load them.
tools/mpy_ld.py: https://github.com/micropython/micropython/blob/master/tools...
tools/mpy-tool.py lists opcodes: https://github.com/micropython/micropython/blob/master/tools...
Can the same be done with .pyc files; what are the advantages of MicroPython native modules?
Why does it need wasm2c?
I don't think .pyc files can contain native code, but .mpy can! And that's exactly what I'm using here:
- compile any language to wasm
- translate wasm to C99
- use the native target toolchain to compile C99 to .mpy
- run that on the device
Linux 6.12 NFS Adds Localio Protocol for "Extreme" Performance Boost
> The LOCALIO protocol support allows the NFS client and server to reliably determine if they are on the same host. If they happen to be on the same host, the network RPC protocol for read/write/commit operations are bypassed. Due to bypassing the XDR and RPC for reads/writes/commits, there can be big performance gains to using the LOCALIO protocol.
> One of the intended scenarios where it can be common for having the NFS client and server on the same host is when running containerized workloads
What are the advantages to NFS [with localio] over OverlayFS/overlay2 and bind mounts across context boundaries for container filesystems?
Do the new, faster direct read() and write() IO calls carry different risks than going through the network IO path?
Are the file permissions the same with localio? How does that compare to subuids and subgids for rootless containers?
I can imagine say a media server running NFS that can be accessed by both local containers and remote hosts.
What I think would be interesting is if you could access NFS on a local machine from a container or VM without using a network. You could have a standalone VM with no networking that could access shared files. (I think some people use 9p for this?)
Show HN: Httpdbg – A tool to trace the HTTP requests sent by your Python code
Hi,
I created httpdbg, a tool for Python developers to easily debug HTTP(S) client requests in Python programs.
I developed it because I needed a tool that could help me trace the HTTP requests sent by my tests back to the corresponding methods in our API client.
The goal of this tool is to simplify the debugging process, so I designed it to be as simple as possible. It requires no external dependencies, no setup, no superuser privileges, and no code modifications.
I'm sharing it with you today because I use it regularly, and it seems like others have found it useful too—so it might be helpful for you as well.
Hope you will like it.
cle
Source: https://github.com/cle-b/httpdbg
Documentation: https://httpdbg.readthedocs.io/
A blog post on a use case: https://medium.com/@cle-b/trace-all-your-http-requests-in-py...
That's pretty cool! I was playing last night and implemented resumable downloads[0] for pip so that it could pick up where it stopped upon a network disconnect or a user interruption. It sucks when large packages, especially ML related, fail at the last second and pip has to download from scratch. This tool would have been nice to have. Thanks a bunch,
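A minimal sketch of the resume idea (a hypothetical helper, not the actual pip diff): issue a Range request from the current partial size, and start over if the server answers 200 instead of 206.

```python
import os
import urllib.request

def resume_download(url, dest, chunk_size=65536):
    """Resume an interrupted download into dest via an HTTP Range request.

    Hypothetical helper for illustration only; pip's real logic also has
    to handle caching, retries, and progress reporting."""
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as f:
        if resp.status == 200:
            # Server ignored the Range header; restart from byte zero.
            f.seek(0)
            f.truncate()
        while chunk := resp.read(chunk_size):
            f.write(chunk)
```

After resuming, verifying a checksum of the completed file is still worthwhile, since a resumed stream can splice mismatched content.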
It is important to check checksums (and signatures, if there are any) of downloaded packages prior to installing them; especially when resuming interrupted downloads.
Pip has a hash-checking mode, but it only works if the hashes are listed in the requirements.txt file, and they're the hashes for the target platform. Pipfile.lock supports storing hashes for multiple platforms, but requirements.txt does not.
If the package hashes are retrieved over the same channel as the package, they can be MITM'd too.
You can store PyPi package hashes in sigstore.
There should be a way for package uploaders to sign their package before uploading. (This is what .asc signatures on PyPi were for. But if they are retrieved over the same channel, cryptographic signatures can also be MITM'd).
IMHO (1) twine should prompt to sign the package (with a DID) before uploading the package to PyPi, and (2) after uploading packages, twine should download the package(s) it has uploaded to verify the signature.
Also, TCP RST and Content-Range (range requests) don't hash resources.
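Whichever transport is used, the local verification step itself is cheap; a minimal sketch (hypothetical helper, equivalent in spirit to pip's hash-checking mode):

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Return True iff the file at path has the expected SHA-256 digest.

    Illustrative helper; stream in chunks so large wheels don't need to
    fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```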
Thanks for the pointers. The diff is tiny and deals only with resuming downloads. i.e: everything else is left as is.
The Social Web Did Not Begin in 2008
Usenet was federated. And it is old...
Then there is https://irc-galleria.net which to me is clearly social media started in 2000... And I think there might be some older sites.
BBS Bulletin Board System > History: https://en.wikipedia.org/wiki/Bulletin_board_system
Unix talk is a text chat system for users all on the same multiuser *nix system. But it doesn't separate code and data, because of terminal control characters:
Talk (software) > History: https://en.wikipedia.org/wiki/Talk_(software)
Finger (protocol) was named finger. To update your finger "status message" you edit that file in your homedir: https://en.wikipedia.org/wiki/Finger_(protocol)
Finger was written in C and had overflow vulnerabilities, and I don't know whether it ever scaled beyond one multiuser system?
NNTP: https://en.wikipedia.org/wiki/Network_News_Transfer_Protocol
"Social Media" is distinct from "Media" in being a two-way* dialogue. Like only some newspaper sites, Social Media has comments (and UCC User Contributed Content).
If a newspaper site hosts comments on each article, will it need to be labeled as a potentially harmful social media outlet per proposed US media regulations? Does that make it social media?
User Contributed Content is seemingly inexpensive, but is it high-liability? Section 230 makes it possible to moderate without assuming liability for how people use the service to maim, defame, and harass others; neither are arms dealers liable for crimes committed with the products they sell. Media and Social Media operate without catastrophic liability for hosting abusive user-contributed content and holding up the mirror, as opposed to corporate doublespeak selling us into counterproductive wars that benefit only special interests (without a comment section) in a 24-hour news ticker that nobody checks.
But corporate content has displaced grassroots media by real people in so many of the now traditional social media outlets, and now everyone has to go to a different bar.
People grassroots campaigning for peace and equal rights; now replaced by the same old fear, greed, war, pestilence, and violence that TV and radio ad planners and newspaper editors use to sell ads.
Where "mass media" was an opiate for the masses, social media is a panacea for vanity and dysfunction.
Where "War of the Worlds" on the radio might've caused pandemonium back then, today people would likely debunk such claims of alien invasion via online trusted media outlets and bs rags.
FOAF: Friend of a Friend and/or schema.org/Person records describe a social graph of persons with attributes and CreativeWorks, but open record schema don't solve for the costs of indexing, search, video hosting, or federated moderation signals.
Scientists Propose Groundbreaking Method to Detect Single Gravitons
This is cool... One of the team’s proposed innovations is to use available data from LIGO as a data/background filter...
“The LIGO observatories are very good at detecting gravitational waves, but they cannot catch single gravitons,” notes Beitel, a Stevens doctoral student. “But we can use their data to cross-correlate with our proposed detector to isolate single gravitons.”
Wasn't there a new way to measure gravitational waves; Ctrl-F hnlog for LIGO and LISA:
"Mass-Independent Scheme to Test the Quantumness of a Massive Object" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... .. https://news.ycombinator.com/item?id=39048910 :
> This yields a striking result: a mass-independent violation of MR is possible for harmonic oscillator systems. In fact, our adaptation enables probing quantum violations for literally any mass, momentum, and frequency. Moreover, coarse-grained position measurements at an accuracy much worse than the standard quantum limit, as well as knowing the relevant parameters only to this precision, without requiring them to be tuned, suffice for our proposal. These should drastically simplify the experimental effort in testing the nonclassicality of massive objects ranging from atomic ions to macroscopic mirrors in LIGO.
From https://news.ycombinator.com/item?id=30847777 .. From "Massive Black Holes Shown to Act Like Quantum Particles" https://www.quantamagazine.org/massive-black-holes-shown-to-... :
> Physicists are using quantum math to understand what happens when black holes collide. In a surprise, they’ve shown that a single particle can describe a collision’s entire gravitational wave.
> "Scale invariance in quantum field theory" https://en.wikipedia.org/wiki/Scale_invariance#Scale_invaria...
"New ways to catch gravitational waves" https://news.ycombinator.com/item?id=40825994 :
- "Kerr-Enhanced Optical Spring" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... .. "Kerr-enhanced optical spring for next-generation gravitational wave detectors" (2024) https://news.ycombinator.com/item?id=39957123
- "Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale" with a superconducting magnetic trap made out of Tantalum (2024) https://news.ycombinator.com/item?id=39495482
- "Measuring gravity with milligram levitated masses" (2024) https://www.science.org/doi/10.1126/sciadv.adk2949
"Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale" https://news.ycombinator.com/item?id=39495482#39495570 .. https://news.ycombinator.com/item?id=30847777 :
> Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform/or and other methods?
"Distorted crystals use 'pseudogravity' to bend light like black holes do" https://news.ycombinator.com/item?id=38008449 :
"Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) https://journals.aps.org/pra/abstract/10.1103/PhysRevA.108.0... :
> We demonstrate electromagnetic waves following a gravitational field using a photonic crystal. We introduce spatially distorted photonic crystals (DPCs) capable of deflecting light waves owing to their pseudogravity caused by lattice distortion
Hybrid device does power generation and molecular solar thermal energy storage
"Hybrid solar energy device for simultaneous electric power generation and molecular solar thermal energy storage" (2024) https://www.cell.com/joule/fulltext/S2542-4351(24)00288-5 :
> Context & scale: [...] This paper proposes a hybrid device combining a molecular solar thermal (MOST) energy storage system with PV cell. The MOST system, made of elements like carbon, hydrogen, oxygen, fluorine, and nitrogen, avoids the need for rare materials. It serves as an optical filter and cooling agent for the PV cell, improving solar energy utilization and addressing the limitations of conventional PV and storage technologies.
- "Solar energy can now be stored for up to 18 years, say scientists" (2022) [ with MOST: Molecular Solar Thermal energy storage method ] https://news.ycombinator.com/item?id=34027606 :
- "Chip-scale solar thermal electrical power generation" (2022) https://www.cell.com/cell-reports-physical-science/fulltext/...
QED: A Powerful Query Equivalence Decider for SQL [pdf]
Compiling to Assembly from Scratch
Speaking of embedded systems, I have an old board, an offshoot of Zilog Z80 called Rabbit. I think recently Dave from EEVBlog took apart one of his ancient projects and I was floored to see a Rabbit. Talk about a left hook. Assuming this is Hacker News, I suspect someone probably knows what I’m talking about. The language used (called “Dynamic C”) has some unconventional additions, a kind of coroutine mechanism in addition to chaining functions to be called one after another in groups. It’s mostly C otherwise, so I suspect some macro shennanigans with interrupt vector priority for managing this coroutine stuff.
Anyhow, so I’ve got a bunch of .bin files around for it, no C source code, just straight assembly output ready to be flashed. And the text and data segments are often interwoven, each fetch being resolved to data or instruction in real time by the rabbit processor. So I’ve been thinking of sitting down, going through the assembly/processor manual for the board and just writing a board simulator hoping to get it back to source code by blackbox reversing in a sense. I’d have to rummage through JEDEC for the standard used by the EEPROM to figure out what pins it’s using there and the edge triggering sequences. Once I can execute it and see what registers and GPIOs are written when, I can probably figure out the original code path. Not sure if anyone has tips or guides or suggestions here.
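The simulator loop itself can start very small and grow opcode by opcode from the manual; a toy sketch of the trace-as-you-step idea (the opcodes and encodings below are invented, not Rabbit's):

```python
class TinyVM:
    """Toy trace-as-you-step core, illustrating the simulator-loop idea.

    A real Rabbit/Z80 core needs the full opcode table, flags, and
    memory/IO mapping from the processor manual; the two opcodes here
    are invented placeholders."""
    def __init__(self, rom):
        self.rom, self.pc, self.acc = rom, 0, 0
        self.trace = []  # (pc, opcode, accumulator) after each step

    def step(self):
        op, arg = self.rom[self.pc], self.rom[self.pc + 1]
        self.pc += 2
        if op == 0x01:    # LD A, n   (invented encoding)
            self.acc = arg
        elif op == 0x02:  # ADD A, n  (invented encoding)
            self.acc = (self.acc + arg) & 0xFF
        self.trace.append((self.pc, op, self.acc))

    def run(self):
        while self.pc < len(self.rom):
            self.step()
        return self.trace
```

The trace list is the point: once every step logs registers (and, in a real core, GPIO writes), the original code paths fall out of the log.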
Any chance a decompiler like ghidra might get you part ways there?
The closest I came across was a Z80 simulator; I forget its name. It let you step through instruction by instruction, querying the processor state in a terminal.
So I don't know if it would be easier trying to find that and updating it to support Rabbit, vs. trying to wedge something into Ghidra, which is itself an undertaking on a behemoth platform. As far as I know, Ghidra does not have Z80 support.
An Emu86.assembler.virtual_machine.Z8Machine (like MIPSMachine, RISCVMachine, and IntelMachine) could be stepped through in a notebook: https://news.ycombinator.com/item?id=41576922
Execsnoop: Monitors and logs all exec calls on system in real-time
You can accomplish the same with bpftrace:
bpftrace -e 'tracepoint:sched:sched_process_exec { time("%H:%M:%S"); printf(" uid = %d pid = %d cmd = %s \n", uid, pid, comm); } tracepoint:syscalls:sys_enter_execve { time("%H:%M:%S"); printf(" uid = %d pid = %d cmd_with_args = ", uid, pid); join(args->argv); }'
From https://news.ycombinator.com/item?id=37442312 re why not ptrace for tracing all exec() syscalls:
> The Falco docs list 3 syscall event drivers: Kernel module, Classic eBPF probe, and Modern eBPF probe: https://falco.org/docs/event-sources/kernel/
Consider adding ppid to the mix - sometimes _what_ started a process is also quite valuable (if firefox starts bash, worry more than if, say, sshd does)
It looks like the falco rules mention proc.ppid.duration, but there's not yet a rule that matches on ppid: rules/falco_rules.yaml https://github.com/falcosecurity/rules/blob/main/rules/falco... :
> Tuning suggestions include looking at the duration of the parent process (proc.ppid.duration) to define your long-running app processes. Checking for newer fields such as proc.vpgid.name and proc.vpgid.exe instead of the direct parent process being a non-shell application could make the rule more robust.
Glass Antenna Turns windows into 5G Base Stations
Would this work with peptide glass?
"A self-healing multispectral transparent adhesive peptide glass" https://www.nature.com/articles/s41586-024-07408-x :
> Moreover, the supramolecular glass is an extremely strong adhesive yet it is transparent in a wide spectral range from visible to mid-infrared. This exceptional set of characteristics is observed in a simple bioorganic peptide glass composed of natural amino acids, presenting a multi-functional material that could be highly advantageous for various applications in science and engineering.
Is there a phononic reason for why antenna + window?
Bass kickers, vibration speakers like SoundBug, and bone conduction microphones like Jawbone headsets are all transducers, too.
Transducer: https://en.wikipedia.org/wiki/Transducer
https://news.ycombinator.com/item?id=41442489 :
> FWIU rotating the lingams causes vibrations which scare birds away.
Patch Proposed for Adding x86_64 Feature Levels to the Kernel
> The proposed patch adds the x86_64 micro-architecture feature levels v2 / v3 / v4 as new Kconfig options if wanting to cater to them when building the kernel with modern GCC and LLVM Clang compilers. It's just some Kconfig logic to then set the "-march=x86-64-v4" or similar compiler options for the kernel build.
How much would these compiler flags save a standard datacenter, and then an HPC center?
A Friendly Introduction to Assembly for High-Level Programmers
Biased opinion(because this is how I did it in uni): Before looking at modern x86, you should learn 32-bit MIPS.
It's hopelessly outdated as an architecture, but the instruction set is far better suited for teaching (and you should look at the classic treatment from Harris and Harris, where they design a pipelined MIPS processor component by component to handle the instruction set).
Emu86/assembler/MIPS/key_words.py: https://github.com/gcallah/Emu86/blob/master/assembler/MIPS/...
Emu86.assembler.virtual_machine > MIPSMachine(VirtualMachine) , RISCVMachine(VirtualMachine) , IntelMachine(VirtualMachine) https://github.com/gcallah/Emu86/blob/b48725898f37dede3ab254...
Learn x in y minutes > "Where X=MIPS Assembly" https://learnxinyminutes.com/docs/mips/
- "Ask HN: Best blog tutorial explaining Assembly code?" (2023) re ASM, HLA, WASM, WASI and POSIX, and docker/containerd/nerdctl support for various WASM runtimes: https://news.ycombinator.com/item?id=38772493
- "Show HN: Tetris, but the blocks are ARM instructions that execute in the browser" (2023) https://news.ycombinator.com/item?id=37086102 ; the emu86 jupyter kernel supports X86, RISC, MIPS, and WASM; and the iarm jupyter kernel supports ARMv6
Rga: Ripgrep, but also search in PDFs, E-Books, Office documents, zip, etc.
To what extent does reading these formats accurately require the execution of code within the documents? In other words, not just stuff like zip expansion by a library dependency of rga, but for example macros inside office documents or JavaScript inside PDFs.
Note: I have no reason to believe such code execution is actually happening — so please don't take this as FUD. My assumption is that a secure design would involve running only external code and thus would sacrifice a small amount of accuracy, possibly negligible.
Also note that it's not necessarily safe to read these documents even if you don't intend on executing embedded code. For example, reading from pdfs uses poppler, which has had a few CVEs that could result in arbitrary code execution, mostly around image decoding. https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=poppler
(No shade to poppler intended, just the first tool on the list I looked at.)
Couldn't or shouldn't each parser be run in a container with systemd-nspawn or LXC or another container runtime? (Even if all it's doing is reading a file format into process space as NX data not code as the current user)
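Short of a full container, even plain rlimits on a child process bound the damage; a sketch of that weaker approach (not rga's actual design, and no substitute for namespaces or seccomp):

```python
import resource
import subprocess

def run_parser_limited(cmd, cpu_seconds=10, mem_bytes=1 << 30):
    """Run an untrusted-format parser in a child process with CPU and
    address-space rlimits (Linux/Unix).

    Far weaker than systemd-nspawn/LXC namespaces, but it confines a
    runaway or exploited parser to one killable process."""
    def set_limits():
        # Applied in the child, between fork() and exec().
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True,
                          timeout=cpu_seconds * 2)
```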
NX bit: https://en.wikipedia.org/wiki/NX_bit
Executable-space protection > Limitations mentions JITs and ROP: https://en.wikipedia.org/wiki/Executable-space_protection
mprotect(), VirtualAlloc[Ex] and VirtualProtect[Ex],
"NX bit: does it protect the stack?" https://security.stackexchange.com/questions/47807/nx-bit-do...
A word about systemd (2016)
A central point of their thesis is that Systemd folks bullied others into accepting it.
That's not really the case, though. The Debian folks had a very transparent discussion and vote about using it:
Relatedly, the rant also blatantly ignores that systemd is the first software that even allows interoperability in its space, and was thus in huge demand by both developers and administrators.
(Edit: Other anti-systemd rant sites sometimes mention these obliquely as "upstream dependencies" and "workplace-mandated", while ignoring the fact that those people have reasons to demand those things)
But HN really doesn't need to litigate this all again.
what do you mean by interoperability?
in the bad old times, every linux distro implemented its init scripts in a slightly different way, often with distro specific tooling and commands for accomplishing most management tasks
systemd unit files require minimal modification by distro repackagers.
With SysV-init, it was/is necessary to modify the /etc/init.d/servicenamed shell scripts to get consistent logging and respawning behavior. With SysV-init it's also not possible to add dependency edges to say that networkd must be started before apached: there are 6 numeric runlevels and filenames like S86apache, which you then can't diff or debsums against because the package default is S80apache; and Apache starts with an A, which alphabetically sorts before the N in networkd.
And then add cgroups and namespaces to everything in /etc/init.d please
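For contrast, those properties are declarative in a unit file; a minimal sketch (service and binary names are illustrative, reusing the apached example):

```ini
# /etc/systemd/system/apached.service (illustrative example)
[Unit]
Description=Example HTTP daemon
# An explicit ordering edge; no S80-style filename sorting
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/sbin/apached --foreground
# Respawning without wrapper scripts
Restart=on-failure
# Consistent logging via journald
StandardOutput=journal
# seccomp filtering in the same config file
SystemCallFilter=@system-service
```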
There were and are other options besides systemd and SysV, some from years prior in fact: OpenRC and upstart predate systemd, and s6 came not much later.
It's possible to pipe stdout to syslog through logger by modifying the exec args. But I'd rather take journald's word for it.
Supervisord (and supervisorctl) are like s6. I don't think any of these support a config file syntax for namespaces, cgroups, seccomp, or SyscallFilter. Though it's possible to use process isolation with any init system by modifying the init script or config file to call the process with systemd-nspawn (which is like chroot, which most SysV-init scripts also fail to do at all, though a process can drop privs and chroot at init regardless of init system).
systemd works in containers now without weird setup. Though is it better to run a 'sidecar' container (for e.g. certbot) than to run multiple processes in a container?
The HTTP Query Method
There is currently no way to determine whether there's, for example, an application/json representation of a URL that returns text/html by default.
OPTIONS and PROPFIND don't get it.
There should be an HTTP method (or a .well-known url path prefix) to query and list every Content-Type available for a given URL.
From https://x.com/westurner/status/1111000098174050304 :
> So, given the current web standards, it's still not possible to determine what's lurking behind a URL given the correct headers? Seems like that could've been the first task for structured data on the internet
Servers can return something like:
Link: </foo>; rel="alternate" type="application/json"
You could return multiple of these in response to a HEAD request, for example. You could also use the Accept header in response to an HTTP OPTIONS request, or as part of a 415 error response.
Accept: application/json, text/html
https://httpwg.org/specs/rfc9110.html#field.accept (yes, Accept can be used in responses)
.well-known is not a good place for this kind of thing. That discovery mechanism should only be used in situations where only a domain name is known, and a feature or endpoint needs to be discovered for a whole domain. In most cases you want a link.
The building blocks for most of this stuff is certainly there. There's a lot of wheels being reinvented all the time.
> https://httpwg.org/specs/rfc9110.html#field.accept (yes Accept can be used in responses)
I think that would solve it.
HTTP servers SHOULD send one or more Accept: {content_type} HTTP headers in the HTTP Response to an HTTP OPTIONS Request in order to indicate which content types can be requested from that path.
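A sketch of that convention using only the stdlib (the 204 status and the exact header set are this thread's proposal, not an established standard; Accept in responses is per RFC 9110):

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class AltFormatHandler(BaseHTTPRequestHandler):
    """Advertise a path's available representations in response to OPTIONS.

    The header choices mirror the proposal in this thread, not any
    standardized discovery mechanism."""
    def do_OPTIONS(self):
        self.send_response(204)
        # Which content types can be requested from this path:
        self.send_header("Accept", "application/json, text/html")
        # And a typed link to an alternate representation:
        self.send_header("Link", '</foo>; rel="alternate"; type="application/json"')
        self.end_headers()

    def log_message(self, *_):
        pass  # keep the demo quiet

def serve():
    """Start the demo server on an ephemeral port and return it."""
    srv = ThreadingHTTPServer(("127.0.0.1", 0), AltFormatHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```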
Yeah, there's no way to ask for an OpenAPI schema definition.
https://github.com/schemaorg/schemaorg/issues/1423#issuecomm... :
> Examples of how to represent GraphQL, SPARQL, LDP, and SOLID APIs [as properties of one or more rdfs:subClassOf schema:WebAPI (as RDFa in HTML or JSON-LD)]
GraalPy – A high-performance embeddable Python 3 runtime for Java
Took a little digging to find that it targets 3.11. Didn’t see anything about a GIL. If you’re a Python person, don’t click the quick start link unless you want to look at some xml.
Python implementations on the JVM or CLR naturally don't have a GIL; there is no such thing on those platforms.
YAML and JSON have both tried to replicate the XML tooling experience (schemas, comments, parsing, and schema conversion tools), only worse.
I think GraalPython does have a GIL, see https://github.com/oracle/graalpython/blob/master/docs/contr... - and if by "there is no such thing on those platforms" you mean JVM/CLR not having a GIL, C also does not have a GIL but CPython does.
"PEP 703 – Making the Global Interpreter Lock Optional in CPython" (2023) https://peps.python.org/pep-0703/
CPython built with --disable-gil does not have a GIL (as long as PYTHONGIL=0 and all loaded C extensions are built for --disable-gil mode) https://peps.python.org/pep-0703/#py-mod-gil-slot
"Intent to approve PEP 703: making the GIL optional" (2023) https://news.ycombinator.com/item?id=36913328#36917709 https://news.ycombinator.com/item?id=36913328#36921625
This is pretty beside the point. The point is that X not having a GIL doesn't inherently mean Python on X also doesn't have a GIL.
CPython does not have a GIL (Global Interpreter Lock) when built with --disable-gil; GraalPy does have a GIL, like CPython without --disable-gil.
How CPython (the original and reference implementation) accomplished nogil is described in the linked PEP 703.
Yes, I know. What I'm saying is that:
It's possible to have a language that doesn't have a GIL, which you implement Python in, but that Python implementation then has a GIL.
The point being that you can't say things like: Jython is written in Java so it doesn't have a GIL. CPython is written in C so doesn't have a GIL. And so on.
If this isn't clear, I apologize.
Oh okay. Yeah I would say that the Java GC and the ported CPython GIL are probably limits to the performance of any Python in Java implementation.
But are there even nogil builds of CPython C extensions on PyPI yet, anyway?
Re: Ghidraal and various methods of Python in Java: https://news.ycombinator.com/item?id=36454485
Chain of Thought empowers transformers to solve inherently serial problems
In the words of an author:
"What is the performance limit when scaling LLM inference? Sky's the limit.
We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed. Remarkably, constant depth is sufficient.
http://arxiv.org/abs/2402.12875 (ICLR 2024)"
> We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed.
That seems like a bit of a leap here to make this seem more impressive than it is (IMO). You can say the same thing about humans, provided they are allowed to think across as many years/generations as needed.
Wake me up when a LLM figures out stable fusion or room temperature superconductors.
> You can say the same thing about humans, provided they are allowed to think across as many years/generations as needed.
Isn’t this a good thing since compute can be scaled so that the LLM can do generations of human thinking in a much shorter amount of time?
Say humans can solve quantum gravity with 100 years of thinking by 10,000 really smart people. If one AGI is equal to 1 really smart person, then with enough compute for 1 million AGIs we could solve quantum gravity in a year.
The major assumption here is that transformers can indeed solve every problem humans can.
> Scale enough compute for 1 million AGI and we can solve quantum gravity in a year.
That is wrong, it misses the point. We learn from the environment, we don't secrete quantum gravity from our pure brains. It's a RL setting of exploration and exploitation, a search process in the space of ideas based on validation in reality. A LLM alone is like a human locked away in a cell, with no access to test ideas.
If you take child Einstein, put him on a remote island, and come back 30 years later, do you think he would impress you with his deep insights? It's not the brain alone that made Einstein so smart; his environment also made a major contribution.
if you told child Einstein that light travels at a constant speed in all inertial frames and taught him algebra, then yes, he would come up with special relativity.
in general, an AGI might want to perform experiments to guide its exploration, but it's possible that the hypotheses that it would want to check have already been probed/constrained sufficiently. which is to say, a theoretical physicist might still stumble upon the right theory without further experiments.
Labeling observations with something richer than a list of column-label strings at the top would make it possible to mine for insights in, or to produce a universal theory that covers, what has actually been observed rather than the presumed limits of theory.
CSVW is CSV on the Web as Linked Data.
With 7 metadata header rows at the top, a CSV could be converted to CSVW; with URIs for units like metre or meter or feet.
If a ScholarlyArticle publisher does not indicate that a given CSV or better :Dataset that is :partOf an article is a :premiseTo the presented argument, a human grad student or an LLM needs to identify the links or textual citations to the dataset CSV(s).
Easy: identify all of the pandas.read_csv() calls in a notebook.
Expensive: find the citation in a PDF, search for the text in "quotation marks", and try to guess which search result contains the dataset premise to an article;
Or, identify each premise in the article, pull the primary datasets, and run an unbiased automl report to identify linear and nonlinear variance relations and test the data dredged causal chart before or after manually reading an abstract.
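The "easy" case above can be sketched in a few lines (a hypothetical helper; the regex only catches literal string arguments to read_csv):

```python
import json
import re

def find_read_csv_calls(notebook_json):
    """Scan a Jupyter notebook's code cells for pandas.read_csv() calls
    and return the CSV paths/URLs passed as literal first arguments.

    Illustrative sketch; a robust version would parse the AST instead
    of using a regex."""
    nb = json.loads(notebook_json)
    pattern = re.compile(r"""read_csv\(\s*['"]([^'"]+)['"]""")
    paths = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            source = "".join(cell.get("source", []))
            paths.extend(pattern.findall(source))
    return paths
```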
STORM: Get a Wikipedia-like report on your topic
Very cool! I asked it to create an article on the topic of my thesis and it was very good, but it lacked nuance and second order thinking i.e. here's the thing, what are the consequences of it and potential mitigations. It was able to pull existing thinking on a topic but not really synthesise a novel insight.
Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking.
From the paper it seems like this is only marginally better than the benchmark approach they used to compare against:
>Outline-driven RAG (oRAG), which is identical to RAG in outline creation, but
>further searches additional information with section titles to generate the article section by section
It seems like the key ingredients are:
- generating questions
- addressing the topic from multiple perspectives
- querying similar wikipedia articles (A high quality RAG source for facts)
- breaking the problem down by first writing an outline.
Which we can all do at home and swap out the wikipedia articles with our own data sets.
I was able to mimic this in GPT without the RAG component using this custom instruction prompt; it does indeed write decent content, better than other writing prompts I have seen.
PROMPT: Create 3 diverse personas who would know about the user prompt. Generate 5 questions that each persona would ask or clarify. Use the questions to create a document outline. Write the document with $your_role as the intended audience.
Nyxpsi – A Next-Gen Network Protocol for Extreme Packet Loss
Re: information sharing by entanglement, CHSH, and the Bell test: https://news.ycombinator.com/item?id=41480957#41515663 :
> And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?
Chrome switching to NIST-approved ML-KEM quantum encryption
Fractran: Computer architecture based on the multiplication of fractions
I think the Wikipedia page (https://en.wikipedia.org/wiki/FRACTRAN) is easier for understanding FRACTRAN.
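For reference, the entire execution model fits in a few lines; the one-fraction program below is the textbook adder (starting from 2^a * 3^b, it halts at 3^(a+b)):

```python
from fractions import Fraction

def fractran(program, n, max_steps=100_000):
    """Run a FRACTRAN program: repeatedly multiply n by the first
    fraction in the program that yields an integer; halt when none does."""
    for _ in range(max_steps):
        for frac in program:
            m = n * frac
            if m.denominator == 1:  # integer result: take this transition
                n = m
                break
        else:
            return int(n)  # no fraction applies: the program halts
    raise RuntimeError("step limit reached")
```

Multiplying by 3/2 trades one factor of 2 for one factor of 3, so the exponents add; that register-machine-in-exponents trick is the whole architecture.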
By Church-Turing, is Fractran sufficient to host a Hilbert logic? https://en.wikipedia.org/wiki/Hilbert_system
Gigantic Wave in Pacific Ocean Was the Most Extreme 'Rogue Wave' on Record
> Scientists define a rogue wave as any wave more than twice the height of the waves surrounding it. The Draupner wave, for instance, was 25.6 meters tall, while its neighbors were only 12 meters tall.
> In comparison, the Ucluelet wave was nearly three times the size of its peers.
> The buoy that picked up the Ucluelet wave was placed offshore along with dozens of others by a research institute called MarineLabs in an attempt to learn more about hazards out in the deep.
> [...] For centuries, rogue waves were considered nothing but nautical folklore. It wasn't until 1995 that myth became fact.
ScholarlyArticle: "Generation mechanism and prediction of an observed extreme rogue wave" (2022) https://www.nature.com/articles/s41598-022-05671-4
Microsoft plan to end kernel-level anti-cheat could be massive for Linux Gaming
Personally, for Linux support, an offline-only mode with no anti-cheat would be fine.
We don't even need to compete online as others can.
Unfortunately, [EA] games like FIFA are unplayable even offline on Linux (with Proton/WINE) due to anti-cheat, FWIU.
EU has a new "preserving PC games" directive. Will any of these games work without versions of Operating Systems that there are no security patches for anymore?
The "Stop Destroying Videogames" petition is open through 2025-07-03: https://citizens-initiative.europa.eu/initiatives/details/20... :
> This initiative calls to require publishers that sell or license videogames to consumers in the European Union (or related features and assets sold for videogames they operate) to leave said videogames in a functional (playable) state.
OS-kernel-integrated anti-cheat makes a game unplayable offline today.
Those probably won't archive well.
VirtualBox supports a "virtual TPM" device so that Windows 11 will run in a VM.
Internet Archive has so many games preserved with DOSbox, in HTML5 and WASM years later.
We shouldn't need a 120 FPS 4K (4k120) sunshine server remote desktop for gaming with old games.
> Specifically, the initiative seeks to prevent the remote disabling of videogames by the publishers, before providing reasonable means to continue functioning of said videogames without the involvement from the side of the publisher.
The initiative does not seek to acquire ownership of said videogames, associated intellectual rights or monetization rights, neither does it expect the publisher to provide resources for the said videogame once they discontinue it while leaving it in a reasonably functional (playable) state.
New evidence upends contentious Easter Island theory, scientists say
Grounding AI in reality with a little help from Data Commons
> Retrieval Interleaved Generation (RIG): This approach fine-tunes Gemma 2 to identify statistics within its responses and annotate them with a call to Data Commons, including a relevant query and the model's initial answer for comparison. Think of it as the model double-checking its work against a trusted source.
> [...] Trade-offs of the RAG approach: [...] In addition, the effectiveness of grounding depends on the quality of the generated queries to Data Commons.
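The RIG check described above can be sketched in a few lines: pull a statistic out of the model's draft, re-query a trusted source for the same figure, and flag disagreements. `TRUSTED` is a hypothetical stand-in for a Data Commons query (the value is illustrative, not authoritative):

```python
import re

# Minimal sketch of the RIG idea: extract a statistic from a model draft,
# look the same figure up in a trusted table, and compare.
# TRUSTED is a hypothetical stand-in for a Data Commons lookup.
TRUSTED = {"population of France": 68_000_000}

def extract_millions(text: str) -> float:
    """Pull the first 'N million' figure out of a model draft."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*million", text)
    return float(m.group(1)) * 1e6 if m else float("nan")

def rig_check(claim: str, draft: str, tolerance: float = 0.05) -> str:
    stated, trusted = extract_millions(draft), TRUSTED.get(claim)
    if trusted is None:
        return "unverified"
    return "ok" if abs(stated - trusted) / trusted <= tolerance else "mismatch"

print(rig_check("population of France", "France has about 67 million people."))  # prints: ok
```

The criticism in the thread below still applies: in the real system the model itself writes the query and interprets the comparison, so this stub over-simplifies the hard part.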
Gotta say, this kinda feels like "giving it correct data didn't work, so what if we did that twice?".
Like, two layers of duct tape are better than one.
Seems reasonable and I can believe it helps; it just also seems like it doesn't do much to improve confidence in the system as a whole. Particularly since they're basically asking it to find things worth checking, then having it write the checking query, and having it interpret the results, when it's the thing that screwed up in the first place.
eh, I think this is pretty reasonable and not a "hack". It matches what we do as people. I think there probably needs to be research into how to tell it when it doesn't know something, however.
I think if you remember that LLMs are not databases, but they do contain a super lossy-compressed version of (their training) knowledge, this feels less like a hack. If you ask someone, "who won the World Cup in 2000?", they may say "I think it was X, but let me google it first". That person isn't screwed up; using tools isn't a failure.
If the context is a work setting, or somewhere that is data-centric, it totally makes sense to check it. Like a chat bot for a store, or a company that is helping someone troubleshoot or research. Anything with really obvious answers that are easy to learn from volumes of data ("what company makes the Corolla?") probably doesn't need fact checking as often, but why not have the system check its work?
Meanwhile, programming, writing prose, etc are not things you generally fact-check mid-way, and are things that can be "learned" well from statistical volume. Most programmers can get "pretty good" syntax on first try, and any dedicated syntax tool will get to basically 100%, and the same makes sense for an LLM.
This is similar to the difference between data dredging and scientific hypothesis testing.
'But what bias did we infer with [LLM knowledgebase] background research prior to formulating a hypothesis, and who measured?'
There are various methods of Inference: Inductive, Deductive, and Abductive
What are the limits of Null Hypothesis methods of scientific inquiry?
Better-performing “25519” elliptic-curve cryptography
> The x25519 algorithm also plays a role in post-quantum safe cryptographic solutions, having been included as the classical algorithm in the TLS 1.3 and SSH hybrid scheme specifications for post-quantum key agreement.
Really though? This mostly-untrue statement is the line that warrants adding hashtag #post-quantum-cryptography to the blogpost?
Actually, e.g. rustls added X25519Kyber768Draft00 support this year: https://news.ycombinator.com/item?id=41534500
/?q X25519Kyber768Draft00: https://www.google.com/search?q=X25519Kyber768Draft00
Kyber768 is the post-quantum algorithm in that example, not x25519.
From "OpenSSL 3.4 Alpha 1 Released with New Features" (8 days ago) https://news.ycombinator.com/item?id=41456447#41456774 :
> Someday there will probably be a TLS1.4/2.0 with PQ, and also FIPS-140 -4?
> Are there additional ways to implement NIST PQ finalist algos with openssl?
- open-quantum-safe/oqs-provider [implements mlkem512 through mlkem1024 and x25519_mlkem768]
Not sure what you're trying to say here. x25519 is objectively not PQC and never claimed to be, and this isn't debatable.
Defend against vampires with 10 gbps network encryption
A notebook with pandas would have had a df.plot().
9.71 Gbps with wg on a 10 Gbps link, with sysctl tunings and custom MTUs.
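For example, a minimal pandas sketch of such a plot (the MTU/throughput pairs here are made up for illustration, apart from the 9.71 Gbps figure above):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import pandas as pd

# Hypothetical throughput samples (Gbps) at different MTUs, in the spirit
# of the wg-on-10Gbps tuning numbers above; only 9.71 is from the thread.
df = pd.DataFrame({
    "mtu": [1500, 3000, 6000, 9000],
    "gbps": [7.80, 8.90, 9.40, 9.71],
})
ax = df.plot(x="mtu", y="gbps", marker="o", title="wg throughput vs MTU")
ax.figure.savefig("throughput.png")
```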
I had heard of token ring, but not 10BASE5: https://en.wikipedia.org/wiki/10BASE5
Microsoft's quantum-resistant cryptography is here
> With NIST releasing an initial group of finalized post-quantum encryption standards, we are excited to bring these into SymCrypt, starting with ML-KEM (FIPS 203, formerly Kyber), a lattice-based key encapsulation mechanism (KEM). In the coming months, we will incorporate ML-DSA (FIPS 204, formerly Dilithium), a lattice-based digital signature scheme and SLH-DSA (FIPS 205, formerly SPHINCS+), a stateless hash-based signature scheme.
> In addition to the above PQC FIPS standards, in 2020 NIST published the SP 800-208 recommendation for stateful hash-based signature schemes which are also resistant to quantum computers. As NIST themselves called out, these algorithms are not suitable for general use because their security depends on careful state management, however, they can be useful in specific contexts like firmware signing. In accordance with the above NIST recommendation we have added eXtended Merkle Signature Scheme (XMSS) to SymCrypt, and the Leighton-Micali Signature Scheme (LMS) will be added soon along with the other algorithms mentioned above.
microsoft/SymCrypt /CHANGELOG.md: https://github.com/microsoft/SymCrypt/blob/main/CHANGELOG.md
TIL that SymCrypt builds on Ubuntu: https://github.com/microsoft/SymCrypt/releases :
> Generic Linux AMD64 (x86-64) and ARM64 - built and validated on Ubuntu, but because SymCrypt has very few standard library dependencies, it should work on most Linux distributions
The Rustls TLS Library Adds Post-Quantum Key Exchange Support
- cf article about PQ (2024) https://blog.cloudflare.com/pq-2024/
- rustls-post-quantum: https://crates.io/crates/rustls-post-quantum
- rustls-post-quantum docs: https://docs.rs/rustls-post-quantum/latest/rustls_post_quant... :
> This crate provides a rustls::crypto::CryptoProvider that includes a hybrid [1], post-quantum-secure [2] key exchange algorithm – specifically X25519Kyber768Draft00.
> X25519Kyber768Draft00 is pre-standardization, so you should treat this as experimental. You may see unexpected interop failures, and the algorithm implemented here may not be the one that eventually becomes widely deployed.
> However, the two components of this key exchange are well regarded: X25519 alone is already used by default by rustls, and tends to have higher quality implementations than other elliptic curves. Kyber768 was recently standardized by NIST as ML-KEM-768.
"Module-Lattice-Based Key-Encapsulation Mechanism Standard" KEM: https://csrc.nist.gov/pubs/fips/203/final :
> The security of ML-KEM is related to the computational difficulty of the Module Learning with Errors problem. [...] This standard specifies three parameter sets for ML-KEM. In order of increasing security strength and decreasing performance, these are ML-KEM-512, ML-KEM-768, and ML-KEM-1024.
Breaking Bell's Inequality with Monte Carlo Simulations in Python
Hidden variable theory: https://en.wikipedia.org/wiki/Hidden-variable_theory
Bell test: https://en.wikipedia.org/wiki/Bell_test :
> To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η [\eta], defined as the probability that a photodetector detects a photon that arrives at it. Anupam Garg and N. David Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of η > 2*sqrt(2)-2 ≈ 0.83 is required for a loophole-free violation.[51] Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3 ≈ 0.67, which is the optimal bound for the CHSH inequality.[53] Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for η > (sqrt(5)-1)/2 ≈ 0.62 [54]
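As a quick sanity check on the three efficiency bounds quoted above:

```python
from math import sqrt

# Detection-efficiency thresholds for loophole-free Bell violations,
# as quoted from the Wikipedia "Bell test" article above.
garg_mermin  = 2 * sqrt(2) - 2     # maximally entangled state, CHSH
eberhard     = 2 / 3               # partially entangled state, CHSH
four_setting = (sqrt(5) - 1) / 2   # four-setting inequality

print(round(garg_mermin, 3), round(eberhard, 3), round(four_setting, 3))
# prints: 0.828 0.667 0.618
```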
CHSH inequality: https://en.wikipedia.org/wiki/CHSH_inequality
/sbin/chsh
Isn't it possible to measure the wake of a photon instead of measuring the photon itself; to measure the wake without affecting the boat that has already passed? And shouldn't a simple beam splitter be enough to demonstrate entanglement if there is an instrument with sufficient sensitivity to infer the phase of a passed photon?
This says that intensity is sufficient to read phase: https://news.ycombinator.com/item?id=40492160 :
> "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity
And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?
From "Violation of Bell inequality by photon scattering on a two-level emitter" https://news.ycombinator.com/item?id=40917761 ... From "Scientists show that there is indeed an 'entropy' of quantum entanglement" (2024) https://news.ycombinator.com/item?id=40396001#40396211 :
> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)
That was probably the "Bell test" article, which - IIUC - does indeed indicate that if you can read 62% of the photons you are likely to find a loophole-free violation.
What is the photon detection rate in this and other simulators?
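In the spirit of the article's Monte Carlo approach, here is a minimal sketch of a local-hidden-variable CHSH simulation: each pair shares a hidden angle, each detector outputs ±1 deterministically, and the resulting S statistic stays at or below the classical bound of 2 (quantum mechanics reaches 2√2). The deterministic sign-of-cosine outcome rule is one simple choice of hidden-variable model, not the article's exact code:

```python
import math
import random

# Local-hidden-variable CHSH sketch: each photon pair shares a hidden
# angle lam; each detector's +-1 outcome depends only on its own setting
# and lam (locality), so |S| <= 2 must hold.
def outcome(setting: float, lam: float) -> int:
    return 1 if math.cos(setting - lam) >= 0 else -1

def correlation(a: float, b: float, n: int = 20000) -> float:
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

random.seed(0)
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = correlation(a, b) + correlation(a, bp) + correlation(ap, b) - correlation(ap, bp)
print(f"S = {S:.3f}  (local models obey |S| <= 2; QM reaches 2*sqrt(2) ~= 2.83)")
```

With these settings the model saturates the classical bound (S ≈ 2 up to sampling noise) but cannot exceed it, which is exactly the gap a Bell test exploits.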
Show HN: Pyrtls, rustls-based modern TLS for Python
"PEP 543 – A Unified TLS API for Python" specified interfaces that would make it easier to use different TLS implementations: https://peps.python.org/pep-0543/#interfaces
alphaXiv: Open research discussion on top of arXiv
Hey alphaxiv, you won’t let me claim some of my preprints, because there’s no match with the email address. Which there can’t be, as we’re only listing a generic first.last@org addresses in the papers. Tried the claiming process twice, nothing happened. Not all papers are on Orcid, so that doesn’t help.
I think it’ll be hard growing a discussion platform, if there’s barriers of entry like that to even populate your profile.
How would you propose making claiming possible without the risk of hijacking/misrepresentation?
The only way I see this working is for paper authors to include their public keys in the paper; preferably as metadata and have them produce a signed message using their private key which allows them to claim the paper.
While the grandparent is understandably disappointed with the current implementation, relying on emails was always doomed from the start.
Given that the paper would have to be changed regardless, including the full email address is a relatively easy solution. ORCID is probably easier than requiring public keys, and a lot of journals already require it.
W3C Decentralized Identifiers are designed for this use case.
Decentralized identifier: https://en.wikipedia.org/wiki/Decentralized_identifier
W3C TR did-core: "Decentralized Identifiers (DIDs) v1.0": https://www.w3.org/TR/did-core/
W3C TR did-use-cases: "Use Cases and Requirements for Decentralized Identifiers" https://www.w3.org/TR/did-use-cases/
"Email addresses are not good 'permanent' identifiers for accounts" (2024) https://news.ycombinator.com/item?id=38823817#38831952
I'm sure that would work, but most researchers already have an ORCID and are required to provide it in other places anyway.
Charging lithium-ion batteries at high currents first increases lifespan by 50%
But are there risks and thus costs?
Initial lithium deactivation is 30% compared to 9% with slow formation loading.
How much capacity is lost as a result of this?
Lithium deactivation is inversely proportional to capacity. We could just add extra capacity to make up for it, though. From there, the battery would maintain capacity for a longer time than before.
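A back-of-envelope calculation for the "add extra capacity" idea, assuming deactivation simply removes that fraction of usable capacity:

```python
# How much extra nominal capacity offsets the deactivation figures above
# (30% with fast formation vs 9% with slow formation)?
def extra_capacity_needed(deactivated_fraction: float) -> float:
    """Multiplier on nominal capacity to end at the same usable capacity."""
    return 1 / (1 - deactivated_fraction)

fast = extra_capacity_needed(0.30)  # ~1.43x
slow = extra_capacity_needed(0.09)  # ~1.10x
print(f"fast formation needs {fast:.2f}x nominal capacity, slow needs {slow:.2f}x")
```

So the fast-formation cell would need roughly 30% more nominal capacity than the slow-formation one to break even on day one, against which the claimed 50% lifespan gain would have to be weighed.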
How much fire risk is there from charging a battery at higher-than-spec currents?
It's just the single initial charging that has to be at high current.
So for SAFETY, shouldn't there also be battery fire containment [vessels] for the manufacturing process?
Absolutely, but I hope there already are measures taken to prevent and contain battery fires during manufacturing.
Same origin: Quantum and GR from Riemannian geometry and Planck scale formalism
ScholarlyArticle: "On the same origin of quantum physics and general relativity from Riemannian geometry and Planck scale formalism" (2024) https://www.sciencedirect.com/science/article/pii/S092765052...
- NewsArticle: "Magical equation unites quantum physics, Einstein’s general relativity in a first" https://interestingengineering.com/science/general-relativit... :
> “We proved that the Einstein field equation from general relativity is actually a relativistic quantum mechanical equation,” the researchers note in their study.
> [...] To link them, the researchers developed a mathematical framework that “Redefined the mass and charge of leptons (fundamental particles) in terms of the interactions between the energy of the field and the curvature of the spacetime.”
> “The obtained equation is covariant in space-time and invariant with respect to any Planck scale. Therefore, the constants of the universe can be reduced to only two quantities: Planck length and Planck time,” the researchers note.
The Reddit discussion claiming this is AI nonsense: https://www.reddit.com/r/TheoreticalPhysics/comments/1fbij1e...
Someone could also or instead read:
"Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
From https://news.ycombinator.com/item?id=38871054 .. "Light and gravitational waves don't arrive simultaneously" (2023) https://news.ycombinator.com/item?id=38056295 :
>> In SQS (Superfluid Quantum Space), Quantum gravity has fluid vortices with Gross-Pitaevskii, Bernoulli's, and IIUC so also Navier-Stokes; so Quantum CFD (Computational Fluid Dynamics)
Newly Discovered Antibody Protects Against All Covid-19 Variants
> The technology used to isolate the antibody, termed Ig-Seq, gives researchers a closer look at the antibody response to infection and vaccination using a combination of single-cell DNA sequencing and proteomics.
/? ig-seq [ site:github.com ] : https://www.google.com/search?q=ig-seq+site%3Agithub.com
https://www.illumina.com/science/sequencing-method-explorer/... :
> Rep-Seq is a collective term for repertoire sequencing technologies. DNA sequencing of immunoglobulin genes (Ig-seq) and molecular amplification fingerprinting
> [ Ig-seq] is a targeted gDNA amplification method performed with primers complementary to the rearranged V-region gene (VDJ recombinant). Amplification of cDNA is then performed with the appropriate 5’ primers.
[deleted]
Ten simple rules for scientific code review
> Rule 1: Review code just like you review other elements of your research
> Rule 2: Don’t leave code review to the end of the project
> Rule 3: The ideal reviewer may be closer than you think
> Rule 4: Make it easy to review your code
> Rule 5: Do it in person and synchronously… A. Circulate code in advance. B. Provide necessary context. C. Ask for specific feedback if needed. D. Walk through the code. E. Gather actionable comments and suggestions
> Rule 6:…and also remotely and asynchronously
> Rule 7: Review systematically
> A. Run the code and aim to reproduce the results. B. Read the code through—first all of it, with a focus on the API. C. Ask questions. D. Read the details—focus on modularity and design, E. Read the details—focus on the math, F. Read the details—focus on performance, G. Read the details—focus on formatting, typos, comments, documentation, and overall code clarity
> Rule 8: Know your limits
> Rule 9: Be kind
> Rule 10: Reciprocate
Quantum error correction below the surface code threshold
"Quantum error correction below the surface code threshold" (2024) https://arxiv.org/abs/2408.13687 :
> Abstract: Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as more qubits are added. However, this exponential suppression only occurs if the physical error rate is below a critical threshold. In this work, we present two surface code memories operating below this threshold: a distance-7 code and a distance-5 code integrated with a real-time decoder. The logical error rate of our larger quantum memory is suppressed by a factor of Λ = 2.14 ± 0.02 when increasing the code distance by two, culminating in a 101-qubit distance-7 code with 0.143% ± 0.003% error per cycle of error correction. This logical memory is also beyond break-even, exceeding its best physical qubit's lifetime by a factor of 2.4 ± 0.3. We maintain below-threshold performance when decoding in real time, achieving an average decoder latency of 63 μs at distance-5 up to a million cycles, with a cycle time of 1.1 μs. To probe the limits of our error-correction performance, we run repetition codes up to distance-29 and find that logical performance is limited by rare correlated error events occurring approximately once every hour, or 3 × 109 cycles. Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms.
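The abstract's numbers make the exponential suppression easy to extrapolate: error per cycle shrinks by Λ = 2.14 each time the code distance grows by 2, anchored at 0.143% per cycle at distance 7. A quick sketch (the extrapolated values are illustrative, not measured):

```python
# Extrapolate logical error per cycle from the abstract's figures:
# Lambda = 2.14 suppression per +2 code distance, 0.143% at d = 7.
def error_per_cycle(d: int, lam: float = 2.14,
                    ref_d: int = 7, ref_err: float = 0.00143) -> float:
    return ref_err / lam ** ((d - ref_d) / 2)

for d in (5, 7, 9, 11):
    print(d, f"{error_per_cycle(d):.5%}")
```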
Cavity-Mediated Entanglement of Parametrically Driven Spin Qubits via Sidebands
From "Cavity-Mediated Entanglement of Parametrically Driven Spin Qubits via Sidebands" (2024) https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan... :
> We show that the sidebands generated via the driving fields enable highly tunable qubit-qubit entanglement using only ac control and without requiring the qubit and cavity frequencies to be tuned into simultaneous resonance. The model we derive can be mapped to a variety of qubit types, including detuning-driven one-electron spin qubits in double quantum dots and three-electron resonant exchange qubits in triple quantum dots. The high degree of nonlinearity inherent in spin qubits renders these systems particularly favorable for parametric drive-activated entanglement.
- Article: "Radical quantum computing theory could lead to more powerful machines than previously imagined" https://www.livescience.com/technology/computing/radical-qua... :
> Each qubit operates in a given frequency.
> These qubits can then be stitched together through quantum entanglement — where their data is linked across vast separations over time or space — to process calculations in parallel. The more qubits are entangled, the more exponentially powerful a quantum computer will become.
> Entangled qubits must share the same frequency. But the study proposes giving them "extra" operating frequencies so they can resonate with other qubits or work on their own if needed.
Is there an infinite amount of quantum computational resources in the future that could handle today's [quantum] workload?
Rpi-open-firmware: open-source VPU side bootloader for Raspberry Pi
> Additionally, there is a second-stage chainloader running on ARM capable of initializing eMMC, FAT, and the Linux kernel.
There's now (SecureBoot) UEFI firmware for Rpi3+, so grub and systemd-boot should work on Raspberry Pis: https://raspberrypi.stackexchange.com/questions/99473/why-is... :
> The Raspberry Pi is special that the primary (on-chip ROM) , secondary (bootcode.bin) and third bootloader (start.elf) are executed on its GPU, one chainloading the other. The instruction set is not properly documented and start.elf
> What can be done (as SuSE and Microsoft have demonstrated) is to replace kernel.img at will - even with a custom version of TianoCore (an open-source UEFI implementation) or U-Boot. This can then be used to start an UEFI-compatible GRUB2 or BOOTMGR binary.
"UEFI Secure Boot on the Raspberry Pi" (2023) https://news.ycombinator.com/item?id=35815382
AlphaProteo generates novel proteins for biology and health research
> Trained on vast amounts of protein data from the Protein Data Bank (PDB) and more than 100 million predicted structures from AlphaFold, AlphaProteo has learned the myriad ways molecules bind to each other. Given the structure of a target molecule and a set of preferred binding locations on that molecule, AlphaProteo generates a candidate protein that binds to the target at those locations.
Show HN: An open-source implementation of AlphaFold3
Hi HN - we’re the founders of Ligo Biosciences and are excited to share an open-source implementation of AlphaFold3, the frontier model for protein structure prediction.
Google DeepMind and their new startup Isomorphic Labs are expanding into drug discovery. They developed AlphaFold3 as their model to accelerate drug discovery and create demand from big pharma. They already signed Novartis and Eli Lilly for $3 billion - Google’s becoming a pharma company! (https://www.isomorphiclabs.com/articles/isomorphic-labs-kick...)
AlphaFold3 is a biomolecular structure prediction model that can do three main things: (1) Predict the structure of proteins; (2) Predict the structure of drug-protein interactions; (3) Predict nucleic acid - protein complex structure.
AlphaFold3 is incredibly important for science because it vastly accelerates the mapping of protein structures. It takes one PhD student their entire PhD to do one structure. With AlphaFold3, you get a prediction in minutes on par with experimental accuracy.
There’s just one problem: when DeepMind published AlphaFold3 in May (https://www.nature.com/articles/s41586-024-07487-w), there was no code. This brought up questions about reproducibility (https://www.nature.com/articles/d41586-024-01463-0) as well as complaints from the scientific community (https://undark.org/2024/06/06/opinion-alphafold-3-open-sourc...).
AlphaFold3 is a fundamental advance in structure modeling technology that the entire biotech industry deserves to be able to reap the benefits from. Its applications are vast, including:
- CRISPR gene editing technologies, where scientists can see exactly how the DNA interacts with the scissor Cas protein;
- Cancer research - predicting how a potential drug binds to the cancer target. One of the highlights in DeepMind’s paper is the prediction of a clinical KRAS inhibitor in complex with its target.
- Antibody / nanobody to target predictions. AlphaFold3 improves accuracy on this class of molecules 2 fold compared to the next best tool.
Unfortunately, no companies can use it since it is under a non-commercial license!
Today we are releasing the full model trained on single chain proteins (capability 1 above), with the other two capabilities to be trained and released soon. We also include the training code. Weights will be released once training and benchmarking is complete. We wanted this to be truly open source so we used the Apache 2.0 license.
Deepmind published the full structure of the model, along with each components’ pseudocode in their paper. We translated this fully into PyTorch, which required more reverse engineering than we thought!
When building the initial version, we discovered multiple issues in DeepMind’s paper that would interfere with the training - we think the deep learning community might find these especially interesting. (Diffusion folks, we would love feedback on this!) These include:
- MSE loss scaling differs from Karras et al. (2022). The weighting provided in the paper does not downweigh the loss at high noise levels.
- Omission of residual layers in the paper - we add these back and see benefits in gradient flow and convergence. Anyone have any idea why Deepmind may have omitted the residual connections in the DiT blocks?
- The MSA module, in its current form, has dead layers. The last pair weighted averaging and transition layers cannot contribute to the pair representation, hence no grads. We swap the order to the one in the ExtraMsaStack in AlphaFold2. An alternative solution would be to use weight sharing, but whether this is done is ambiguous in the paper.
More about those issues here: https://github.com/Ligo-Biosciences/AlphaFold3
How this came about: we are building Ligo (YC S24), where we are using ideas from AlphaFold3 for enzyme design. We thought open sourcing it was a nice side quest to benefit the community.
For those on Twitter, there was a good thread a few days ago that has more information: https://twitter.com/ArdaGoreci/status/1830744265007480934.
A few shoutouts: a huge thanks to OpenFold for pioneering the previous open-source implementation of AlphaFold. We did a lot of our early prototyping with proteinFlow, developed by Lisa at AdaptyvBio; we also look forward to partnering with them to bring you the next versions! We are also partnering with Basecamp Research to supply this model with the best sequence data known to science. Thanks also to Matthew Clark (https://batisio.co.uk) for his amazing animations!
We’re around to answer questions and look forward to hearing from you!
Does this win the Folding@home competition, or is/was that a different goal than what AlphaFold3 and ligo-/AlphaFold3 already solve for?
Folding@Home https://en.wikipedia.org/wiki/Folding@home :
> making it the world's first exaflop computing system
Folding@home and protein structure prediction methods such as AlphaFold address related but different questions. The former intends to describe the process of a protein undergoing folding over time, while the latter tries to determine the most stable conformation of a protein (the end result of folding).
Folding@home uses Rosetta, a physics-based approach that is outperformed by deep learning methods such as AlphaFold2/3.
Folding@home uses Rosetta only to generate initial conformations[1], but the actual simulation is based on Markov State Models. Note that there is another distributed computing project for Rosetta, Rosetta@home.
[1]: https://foldingathome.org/dig-deeper/#:~:text=employing%20Ro...
Kids Should Be Taught to Think Logically
I agree with the article. But I'd go further and say that children should be introduced to, and taught, multiple styles of thinking. And they should understand the strengths and weaknesses of those styles for different tasks and situations.
But logical thought is definitely important. Judging by some people I encounter, even understanding the concepts of "necessary but not sufficient" or "X is-a Y does not mean that Y is-a X" would make a big difference.
There are plenty of lists of thinking styles. I doubt that any of them are exhaustive or discrete. For example:
Critical, creative, analytical, abstract, concrete, divergent/lateral, convergent/vertical.
Or
Synthesist, idealist, pragmatic, analytic, realist.
There are lots of options. My point was really about awareness of the different styles and their advantages/disadvantages.
Inductive, Deductive, Abductive Inference
Reason > Logical reasoning methods and argumentation: https://en.wikipedia.org/wiki/Reason#Logical_reasoning_metho...
Critical Thinking > Logic and rationality > Deduction, abduction and induction ; Critical thinking and rationality: https://en.wikipedia.org/wiki/Critical_thinking#Deduction,_a...
Logic: https://en.wikipedia.org/wiki/Logic :
> Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language.
Logical reasoning: https://en.wikipedia.org/wiki/Logical_reasoning
An argument can be Sound or Unsound; or Cogent or Uncogent.
Exercises I recall:
Underline or Highlight or Annotate the topic sentence / precis sentence (which can be the last sentence of an introductory paragraph),
Underline the conclusion,
Underline and label the premises; P1, P2, P3
Don't trust; Verify the logical form
If P1 then Q
P1
therefore Q
If P1, P2, and P3
P1 kinda
we all like ____
therefore Q
Logic puzzles,"Pete, it's a fool that looks for logic in the chambers of the human heart.", money x3, posturing
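The "verify the logical form" exercise above can be mechanized with a truth table: a form is valid iff no assignment makes all premises true and the conclusion false. A minimal sketch checking modus ponens against the classic invalid form, affirming the consequent:

```python
from itertools import product

# A form is valid iff every assignment that makes all premises true
# also makes the conclusion true.
def valid(premises, conclusion) -> bool:
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: If P then Q; P; therefore Q.
modus_ponens = valid([lambda p, q: (not p) or q, lambda p, q: p],
                     lambda p, q: q)
# Affirming the consequent: If P then Q; Q; therefore P.
affirming_consequent = valid([lambda p, q: (not p) or q, lambda p, q: q],
                             lambda p, q: p)
print(modus_ponens, affirming_consequent)  # prints: True False
```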
Coding on iPad using self-hosted VSCode, Caddy, and code-server
Is it ergonomic to code on a tablet without a BCI?
https://vscode.dev can connect to a remote vscode instance in a container e.g. over Remote Tunnels ; but browsers trap so many keyboard shortcuts.
Which container with code-server to run to connect to from vscode client?
You can specify a development container that contains code-server with devcontainer.json.
vscode, Codespaces and these tools support devcontainer.json, too:
coder/envbuilder: https://github.com/coder/envbuilder
loft-sh/devpod: https://github.com/loft-sh/devpod
lapce/lapdev: https://github.com/lapce/lapdev
JupyterHub and BinderHub can spawn containers that also run code-server. Though repo2docker and REES don't yet support devcontainer.json, they do support bringing your own Dockerfile.
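For reference, a minimal devcontainer.json that installs and serves code-server might look like the following; the image tag and the code-server install-script URL are assumptions to verify against the devcontainers and coder docs:

```json
{
  "name": "code-server sandbox",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "forwardPorts": [8080],
  "postCreateCommand": "curl -fsSL https://code-server.dev/install.sh | sh",
  "postStartCommand": "code-server --bind-addr 0.0.0.0:8080 ."
}
```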
> but browsers trap so many keyboard shortcuts.
As a result, unfortunately the F1 keyboard shortcut calls browser help not vscode help.
Aren't there browser-based RDP apps that don't mangle all of the shortcuts, or does it have to be fullscreen in a browser to support a VM-like escape sequence to return keyboard input to the host?
OpenSSL 3.4 Alpha 1 Released with New Features
The roadmap on the site is 404'ing and the openssl/web repo is archived?
The v3.4 roadmap entry PR mentions QUIC (which HTTP/3 requires)? https://github.com/openssl/web/pull/481/files
Someday there will probably be a TLS 1.4/2.0 with PQ, and also a FIPS 140-4?
Are there additional ways to implement NIST PQ finalist algos with openssl?
open-quantum-safe/oqs-provider: https://github.com/open-quantum-safe/oqs-provider :
openssl list -signature-algorithms -provider oqsprovider
openssl list -kem-algorithms -provider oqsprovider
openssl list -tls-signature-algorithms
Gold nugget formation from earthquake-induced piezoelectricity in quartz
"Gold nugget formation from earthquake-induced piezoelectricity in quartz" (2024) https://www.nature.com/articles/s41561-024-01514-1
Piezoelectric effects are related to crystal radios (AM amplitude modulation, shortwave, longwave), though the detector in a crystal radio is a point-contact rectifier (e.g. galena); the piezoelectric part is typically the crystal earpiece.
Quartz is dielectric.
Quartz clocks, quartz voltmeters
From https://news.ycombinator.com/item?id=40859142 :
> Ancient lingams had Copper (Cu) and Gold (Au), and crystal FWIU.
> From "Can you pump water without any electricity?" https://news.ycombinator.com/item?id=40619745 :
>> - /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
FWIU rotating the lingams causes vibrations which scare birds away.
Piezoelectricity: https://en.wikipedia.org/wiki/Piezoelectricity
"New "X-Ray Vision" Technique Sees Inside Crystals" (2024) https://news.ycombinator.com/item?id=40630832
"Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691 :
> Electron–electron interactions in high-mobility conductors can give rise to transport signatures resembling those described by classical hydrodynamics. Using a nanoscale scanning magnetometer, we imaged a distinctive hydrodynamic transport pattern—stationary current vortices—in a monolayer graphene device at room temperature.
"Goldene: New 2D form of gold makes graphene look boring" (2024) https://news.ycombinator.com/item?id=40079905
"Gold nanoparticles kill cancer – but not as thought" (2024) https://news.ycombinator.com/item?id=40819854 :
> Star-shaped gold nanoparticles up to 200nm kill at least some forms of cancer; https://news.ycombinator.com/item?id=40819854
Are there phononic excitations from earthquake-induced piezoelectric and pyroelectric effects?
Do (surface) plasmon polaritons SPhP affect the interactions between quartz and gold, given heat and vibration as from an earthquake?
"Extreme light confinement and control in low-symmetry phonon-polaritonic crystals" like quartz https://arxiv.org/html/2312.06805v2
Dublin Core, what is it good for?
Do regular search engines index DCMI dcterms:? Does Google Scholar or Google Search index schema.org/CreativeWork yet?
Chrome 130: Direct Sockets API
Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431
Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy
docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...
I applaud the Chrome team for implementing this.
Isolated Web Apps seem like they mitigate the majority of the significant concerns Mozilla has.
Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.
Typically today, a user must agree to install a package that uses L3 sockets before they're using sockets other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.
IMHO they're way too confident in application sandboxing, but we already know that we need containers, gVisor or Kata, and container-selinux to isolate server processes.
Chrome and Firefox and Edge all have the same app sandbox now FWIU. Sandbox escapes are demonstrated at pwn2own every year. I don't think the application-level browser sandbox has a better record of vulns than containers or VMs.
So, IDK about trusting in-browser isolation features, or sockets with unsigned cross-domain policies.
OTOH things that would work with Direct Sockets IIUC: P2P VPN server/client, blind proxy relay without user confirmation, HTTP server, behind the firewall port scanner that uploads scans,
I can understand FF's position on Direct Sockets.
There used to be a "https server for apps" Firefox extension.
It is still necessary to install e.g. Metamask to add millions of lines of unverified browser code and a JS/WASM interpreter to an otherwise secured Zero Trust chain. Without a wallet browser extension like Metamask explicitly installed, browsers otherwise must use vulnerable regular DNS instead of EDNS. Without Metamask installed, it's not possible for a compromised browser to hack at a blockchain without a relay, because most blockchains specifically avoid regular HTTPS. Existing browsers do not support blockchain protocols without the user approving installation of e.g. Metamask over PKI SSL.
FWIU there are many examples of people hijacking computers to mine PoW coins in JS or WASM, and we don't want that to be easier to do without requiring confirmation from easily-fooled users.
Browsers SHOULD indicate when a browser tab is PoW mining in the background as the current user.
Are there downgrade attacks on this?
Don't you need HTTPS to serve the origin policy before switching to Direct Sockets anyway?
HTTP/3 QUIC is built on UDP. Can apps work with WS or WebRTC over HTTP/3 instead of sockets?
Edit: (now I can read the spec in question)
Thanks for your considered response. I will digest it a bit when I have time! :)
Show HN: A retro terminal text editor for GNU/Linux coded in C (C-edit)
I set about coding my own version of the classic MS-DOS EDIT.COM for GNU/Linux systems four years ago and this is where the project is at... still rough around the edges, but it works well with Termux! :) Demo: https://www.youtube.com/watch?v=H7bneUX_kVA
QB64 is an EDIT.COM-style IDE and a compiler for QuickBasic .BAS programs: https://github.com/QB64Official/qb64#usage
There's QBjs, for QuickBasic on the web.
There's a QB64 vscode extension: https://github.com/QB64Official/vscode
Textual has a MarkdownViewer TUI control with syntax highlighting and a file tree in a side panel like NERDtree, but not yet a markdown editor.
QuickBasic was my first programming language and EDIT.COM was my first IDE. I love going back down memory lane, thanks!
Same. `edit` to edit. These days perhaps not coincidentally I have a script called `e` for edit that opens vim: https://github.com/westurner/dotfiles/blob/develop/scripts/e
GORILLA.BAS! https://en.wikipedia.org/wiki/Gorillas_(video_game)
gorilla.bas with dosbox in html: https://archive.org/details/GorillasQbasic
rewritten with jquery: https://github.com/theraccoonbear/BrowserGORILLAS.BAS/blob/m...
Basically the same thing but for learning, except you can't change the constants in the simulator by editing the source of the game with Ctrl-C and running it with F5:
- PHET > Projectile Data Lab https://phet.colorado.edu/en/simulations/projectile-data-lab
- sensorcraft is like minecraft but in python with pyglet for OpenGL 3D; self.add_block(), gravity, ai, circuits: https://sensorcraft.readthedocs.io/en/stable/
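The gorillas-style banana arc is plain projectile kinematics; a minimal Python sketch with the gravity constant exposed, which is roughly the parameter kids would edit in GORILLA.BAS (function and variable names here are my own):

```python
import math

def trajectory(v0, angle_deg, gravity=9.8, dt=0.1):
    """Sample points along a drag-free projectile arc until it lands."""
    theta = math.radians(angle_deg)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    t, points = 0.0, []
    while True:
        x, y = vx * t, vy * t - 0.5 * gravity * t * t
        if y < 0:
            break
        points.append((round(x, 2), round(y, 2)))
        t += dt
    return points

arc = trajectory(10, 45)         # Earth gravity
moon = trajectory(10, 45, 1.62)  # lower gravity -> longer, flatter arc
```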
PyTorch 2.4 Now Supports Intel GPUs for Faster Workloads
Which Intel GPUs? Is this for the Max Series GPUs and/or the Gaudi 3 launching in 3Q?
"Intel To Sunset First-Gen Max Series GPU To Focus On Gaudi, Falcon Shores Chips" (2024-05) https://www.crn.com/news/components-peripherals/2024/intel-s...
It’ll be something that supports oneAPI, so the PVC 1100/1500, the Flex GPUs, the A-series desktop GPUs (and B-series when it comes out). Not Gaudi 3 afaict, that has a custom toolchain (but Falcon Shores should be on oneAPI).
SDL3 new GPU API merged
SDL3 is still in preview, but the new GPU API is now merged into the main branch while SDL3 maintainers apply some final tweaks.
As far as I understand: the new GPU API is notable because it should allow writing graphics code & shaders once and have it all work cross-platform (including on consoles) with minimal hassle - and previously that required Unity or Unreal, or your own custom solution.
WebGPU/WGSL is a similar "cross-platform graphics stack" effort but as far as I know nobody has written console backends for it. (Meanwhile the SDL3 GPU API currently doesn't seem to support WebGPU as a backend.)
Why is SDL API needed vs gfx-rs / wgpu though? I.e. was there a need to make yet another one?
Having a C API like that is always nice. I don't wanna fight Rust.
WebGPU has a (mostly) standardized C API: https://github.com/webgpu-native/webgpu-headers
wgpu supports WebGPU: https://github.com/gfx-rs/wgpu :
> While WebGPU does not support any shading language other than WGSL, we will automatically convert your non-WGSL shaders if you're running on WebGPU.
That’s just for the shading language
The Rust wgpu project has an alternative C API which is identical to (or at least closely matches; I haven't looked at it in detail yet) the official webgpu.h header. For instance, all examples in here are written in C:
https://github.com/gfx-rs/wgpu-native/tree/trunk/examples
There's definitely also people using wgpu from Zig via the C bindings.
I found:
shlomnissan/sdl-wasm: https://github.com/shlomnissan/sdl-wasm :
> A simple example of compiling C/SDL to WebAssembly and binding it to an HTML5 canvas.
erik-larsen/emscripten-sdl2-ogles2: https://github.com/erik-larsen/emscripten-sdl2-ogles2 :
> C++/SDL2/OpenGLES2 samples running in the browser via Emscripten
IDK how much work there is to migrate these to SDL3?
Are there WASM compilation advantages to SDL3 vs SDL2?
There are rust SDL2 bindings: https://github.com/Rust-SDL2/rust-sdl2#use-sdl2render
use::sdl2render, gl-rs for raw OpenGL: https://github.com/Rust-SDL2/rust-sdl2?tab=readme-ov-file#op...
*sdl2::render
src/sdl2/render.rs: https://github.com/Rust-SDL2/rust-sdl2/blob/master/src/sdl2/...
SDL/test /testautomation_render.c: https://github.com/libsdl-org/SDL/blob/main/test/testautomat...
SDL is for gamedevs and it supports consoles; wgpu is not, and it doesn't.
SDL is for everyone. I use it for a terminal emulator because it’s easier to write something cross platform in SDL than it is to use platform native widgets APIs.
Can the SDL terminal emulator handle up-arrow history and /slash commands, and cool CLI things like Textual and IPython's prompt_toolkit (a readline/.inputrc alternative that supports multi-line editing, argument tab completion, and syntax highlighting), in a game and/or on a PC?
I think you're confusing the roles of terminal emulator and shell. The emulator mainly hosts the window for a text-based application: print to the screen, send input, implement escape sequences, offer scrollback, handle OS copy-paste, etc. The features you mentioned would be implemented by the hosted application, such as a shell (which they've also implemented separately).
Does the SDL terminal emulator support enough of VT100 to host an [in-game] console shell TUI with advanced features?
I'm not related to hnlmorg, but I'm assuming the project they refer to is mxtty [1], so check for yourself.
I’d love to see Raylib get an SDL GPU backend. I’d pick it up in a heartbeat.
raylib > "How to compile against the [SDL2] backend" https://github.com/raysan5/raylib/discussions/3764
Rust solves the problem of incomplete Kernel Linux API docs
This is one of the biggest advantages of the newer wave of more expressively typed languages like Rust and Swift.
They remove a lot of ambiguity in how something should be held.
Is this data type or method thread safe? Well I don’t need to go look up the docs only to find it’s not mentioned anywhere but in some community discussion. The compiler tells me.
Reviewing code? I don’t need to verify every use of a pointer is safe because the code tells me itself at that exact local point.
This isn’t unique to the Linux kernel. This is every codebase that doesn’t use a memory safe language.
With memory safe languages you can focus so much more on the implementation of your business logic than making sure all your codebases invariants are in your head at a given time.
It's not _just_ memory safety. In my experience, Rust is also liberating in the sense of mutation safety. With memory safe languages such as Java or Python or JavaScript, I must paranoidly clone stuff when passing stuff to various functions whose behaviour I don't intimately know of, and that is a source of constant stress for me.
Also newtype wrappers.
If you have code that deals e.g. with pounds and kilograms, Dollars and Euros, screen coordinates and window coordinates, plain text and HTML and so on, those values are usually encapsulated in safe wrapper structs instead of being passed as raw ints or floats.
This prevents you from accidentally passing the wrong kind of value into a function, and potentially blowing up your $125 million spacecraft[1].
I also find that such wrappers also make the code far more readable, as there's no confusion exactly what kind of value is expected.
[1] https://www.simscale.com/blog/nasa-mars-climate-orbiter-metr...
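The newtype idea can be sketched in Python with frozen dataclasses (Rust makes such wrappers zero-cost; the conversion factor below is the standard lbf·s to N·s):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NewtonSeconds:
    """SI impulse; the only unit the 'flight software' accepts."""
    value: float

@dataclass(frozen=True)
class PoundForceSeconds:
    """US-customary impulse; must be converted explicitly."""
    value: float

    def to_si(self) -> NewtonSeconds:
        return NewtonSeconds(self.value * 4.4482216152605)

def apply_impulse(impulse: NewtonSeconds) -> float:
    # Passing a raw float or a PoundForceSeconds here is a type error under
    # mypy/pyright -- exactly the unit mixup behind the Mars Climate Orbiter loss.
    return impulse.value

thrust = apply_impulse(PoundForceSeconds(1.0).to_si())  # ~4.448 N*s
```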
As much as I like Rust, I don’t think that it would have solved the Mars Climate Orbiter problem. That was caused by one party writing numbers out into a CSV file in one unit, and a different party reading the CSV file but assuming that the numbers were in a different unit. Both parties could have been using Rust, and using types to encode physical units, and the problem could still have happened.
W3C CSVW supports per-column schema.
Serialize a dict containing a value with uncertainties and/or Pint (or astropy.units) and complex values to JSON, then read it from JSON back to the same types. Handle datetimes, complex values, and categoricals
"CSVW: CSV on the Web" https://github.com/jazzband/tablib/issues/305
7 Columnar metadata header rows for a CSV: column label, property URI, datatype, quantity/unit, accuracy, precision, significant figures https://wrdrd.github.io/docs/consulting/linkedreproducibilit...
CSV on the Web: A Primer > 6. Advanced Use > 6.1 How do you support units of measure? https://www.w3.org/TR/tabular-data-primer/#units-of-measure
You can specify units in a CSVW file with QUDT, an RDFS schema and vocabulary for Quantities, Units, Dimensions, and Types
Schema.org has StructuredValue and subtypes (rdfs:subClassOf) like QuantitativeValue and QuantitativeValueDistribution: https://schema.org/StructuredValue
There are linked data schema for units, and there are various in-code in-RAM typed primitive and compound type serialization libraries for various programming languages; but they're not integrated, so we're unable to share data with units between apps and fall back to CSVW.
There are zero-copy solutions for sharing variables between cells of e.g. a polyglot notebook with input cells written in more than one programing language without reshaping or serializing.
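A stdlib-only sketch of the round-trip described above, using tagged JSON objects for complex values and datetimes (a Pint quantity would serialize the same way, as magnitude plus unit string):

```python
import json
from datetime import datetime

def encode(obj):
    """json.dumps hook: tag non-JSON types so they can be restored."""
    if isinstance(obj, complex):
        return {"__type__": "complex", "re": obj.real, "im": obj.imag}
    if isinstance(obj, datetime):
        return {"__type__": "datetime", "iso": obj.isoformat()}
    raise TypeError(f"unserializable: {type(obj)!r}")

def decode(d):
    """json.loads object_hook: restore tagged objects to Python types."""
    tag = d.get("__type__")
    if tag == "complex":
        return complex(d["re"], d["im"])
    if tag == "datetime":
        return datetime.fromisoformat(d["iso"])
    return d

data = {"z": 1 + 2j, "t": datetime(2024, 1, 1, 12, 30)}
restored = json.loads(json.dumps(data, default=encode), object_hook=decode)
```

The tag scheme is ad hoc; CSVW/QUDT would carry the same information as schema rather than inline tags.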
Sandbox Python notebooks with uv and marimo
Package checksums can be specified per-platform in requirements.txt and Pipfile.lock files. Is it advisable to only list package names, in TOML that can't be parsed from source comments with the AST parser?
.ipynb nbformat inlines binary data outputs from e.g. _repr_png_() as base64 data, rather than delimiting code and binary data in .py files.
One-file zipapp PEX Python executables are buildable with Pants (originally from Twitter), Buck, and Bazel. PEX files are executable ZIP files with a Python shebang header.
There's a GitHub issue about a Markdown format for notebooks because percent format (`# %%`) and myst-nb formats don't include outputs.
There's also a GitHub issue about Jupytext storing outputs.
Containers are sandboxes. repo2docker with repo2podman sandboxes a git repo with notebooks by creating a container with a recent version of the notebook software on top.
How does your solution differ from ipyflow and rxpy, for example? Are ipywidgets supported or supportable?
> Is it advisable to only list package names, in TOML that can't be parsed from source comments with the AST parser?
This at the top of a notebook is less reproducible and less secure than a requirements.txt with checksums or better:
%pip install ipytest pytest-cov jupyterlab-miami-nights
%pip install -q -r requirements.txt
%pip?
%pip --help
But you don't need to run pip every time you run a notebook, so it's better to comment out the install steps. But then the notebook requires manual input to run in a CI task, unless it first installs everything from a requirements.txt and/or environment.yml or [Jupyter REES] config before running the notebook, like repo2docker does: #%pip install -r requirements.txt
#!pip --help
#!mamba env update -f environment.yml
#!pixi install --manifest-path pyproject.toml_or_pixi.toml
Lucee: A light-weight dynamic CFML scripting language for the JVM
Similar to ColdFusion CFML and Open Source: ZPT Zope Templates: https://zope.readthedocs.io/en/latest/zopebook/ZPT.html
https://zope.readthedocs.io/en/latest/zopebook/AppendixC.htm... :
> Zope Page Templates are an HTML/XML generation tool. This appendix is a reference to Zope Page Templates standards: Template Attribute Language (TAL), TAL Expression Syntax (TALES), and Macro Expansion TAL (METAL).
> TAL Overview: The Template Attribute Language (TAL) standard is an attribute language used to create dynamic templates. It allows elements of a document to be replaced, repeated, or omitted
Repoze is Zope backwards. PyPI is built on the Pyramid web framework, which comes from repoze.bfg, which was written in response to Zope2 and Plone and Zope3.
Chameleon ZPT templates with Pyramid pyramid.renderers.render_to_response(request): https://docs.pylonsproject.org/projects/pyramid/en/latest/na...
FCast: Casting Made Open Source
How does FCast differ from Matter Casting?
https://news.ycombinator.com/item?id=41171060&p=2#41172407
"What Is Matter Casting and How Is It Different From AirPlay or Chromecast?" (2024) https://www.howtogeek.com/what-is-matter-casting-and-how-is-... :
> You can also potentially use the new casting standard to control some of your TV’s functions while casting media on it, a task at which both AirPlay and Chromecast are somewhat limited.
Feature ideas: PIP Picture-in-Picture, The ability to find additional videos and add to a [queue] playlist without stopping the playing video
Instead of waiting, hoping that big companies will implement the standard. Just make it as easy as possible to adopt by having receivers for all platforms and making client libraries that can cast to AirPlay, Chromecast, FCast and others seamlessly.
Does the current app support AirPlay and Chromecast as different receivers/backends? The website doesn't mention anything about it. Is there any plan for an iOS app as well?
Some feedback: I would also add dedicated buttons for downloading macOS and Windows binaries; for typical users, the GitLab button will be too scary. The website is also not clear on whether there is an SDK for developers (one that also supports AirPlay and Chromecast) and what language bindings it supports.
GitHub has package repos for hosting package downloads.
A SLSA Builder or Generator can sign packages and container images with sigstore/cosign.
It's probably also possible to build and sign a repo metadata index with GitHub release attachment URLs and host that on GitHub Pages; but to host releases at scale you need a CDN, release signing keys to sign the repo metadata, and clients that update only when the release attachment signature matches the per-release, per-platform key. An app store does that for you.
Show HN: bpfquery – experimenting with compiling SQL to bpf(trace)
Hello! The last few weeks I've been experimenting with compiling sql queries to bpftrace programs and then working with the results. bpfquery.com is the result of that, source available at https://github.com/zmaril/bpfquery. It's a very minimal sql to bpftrace compiler that lets you explore what's going on with your systems. It implements queries, expressions, and filters/wheres/predicates, and has a streaming pivot table interface built on https://perspective.finos.org. I am still figuring out how to do windows, aggregations and joins though, but the pivot table interface actually lets you get surprisingly far. I hope you enjoy it!
RIL about how the ebpf verifier attempts to prevent infinite loops given rule ordering and rewriting transformations.
There are many open query planners; maybe most are hardly reusable.
There's a wasm-bpf; and also duckdb-wasm, sqlite in WASM with replication and synchronization, datasette-lite, JupyterLite
wasm-bpf: https://github.com/eunomia-bpf/wasm-bpf#how-it-works
Does this make databases faster or more efficient? Is there process or query isolation?
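A toy illustration of the idea (not bpfquery's actual grammar or compiler): a SELECT column list over a probe becomes a bpftrace printf action, and a WHERE clause becomes a /predicate/ filter:

```python
def compile_query(columns, probe, where=None):
    """Emit a bpftrace one-liner for a SELECT-like query (toy sketch)."""
    fmt = " ".join(["%s"] * len(columns))
    args = ", ".join(columns)
    filt = f"/{where}/ " if where else ""
    return f'{probe} {filt}{{ printf("{fmt}\\n", {args}); }}'

prog = compile_query(
    ["comm", "str(args->filename)"],
    "tracepoint:syscalls:sys_enter_openat",
    where="pid > 1000",
)
```

Aggregations are where this breaks down: bpftrace's @maps are closer to GROUP BY than to SQL's streaming semantics, which matches the author's note that windows, aggregations, and joins are still open.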
The Surprising Cause of Qubit Decay in Quantum Computers
> The transmission of supercurrents is made possible by the Josephson effect, where two closely spaced superconducting materials can support a current with no applied voltage. As a result of the study, previously unattributed energy loss can be traced to thermal radiation originating at the qubits and propagating down the leads.
> Think of a campfire warming someone at the beach – the ambient air stays cold, but the person still feels the warmth radiating from the fire. Karimi says this same type of radiation leads to dissipation in the qubit.
> This loss has been noted before by physicists who have conducted experiments on large arrays of hundreds of Josephson junctions placed in circuit. Like a game of telephone, one of these junctions would seem to destabilize the rest further down the line.
ScholarlyArticle: "Bolometric detection of Josephson radiation" (2024) https://www.nature.com/articles/s41565-024-01770-7
"Computer Scientists Prove That Heat Destroys Quantum Entanglement" (2024) https://news.ycombinator.com/item?id=41381849
The Future of TLA+ [pdf]
A TLA+ alternative people might find curious.
What are other limits and opportunities for TLA+ and similar tools?
Limits of TLA+
- It cannot compile to working code
- Steep learning curve
Opportunities for TLA+
- Helps you understand complex abstractions & systems clearly.
- It's extremely effective at communicating the components that make up a system with others.
Let me give you a real, practical example.
In AI models there is a component called a "Transformer". It underpins ChatGPT (the "T" in ChatGPT).
If you read the 2017 Transformer paper "Attention Is All You Need",
they use human language, diagrams, and mathematics to describe their idea.
However, if you try to build your own "Transformer" using that paper as your only resource, you're going to struggle to turn what they are saying into working code.
Even if you get the code working, how sure are you that what you have created is EXACTLY what the authors are talking about?
English is too verbose, diagrams are open to interpretation & mathematics is too ambiguous/abstract. And already written code is too dense.
TLA+ is a notation that tends to be used to "specify systems".
In TLA+ everything is defined in terms of a state machine: hardware, software algorithms, consensus algorithms (Paxos, Raft, etc.).
So why TLA+?
If something is "specified" in TLA+;
- You know exactly what it is — just by interpreting the TLA+ spec
- If you have an idea to communicate, TLA+-literate people can understand exactly what you're talking about.
- You can find bugs in algorithms, hardware, and processes just by modeling them in TLA+. So before building hardware or software you can check its validity & fix flaws in its design, rather than committing expensive resources only to subsequently find issues in production.
Is that a practical example? Has anyone specified a transformer using TLA+? More generally, is TLA+ practical for code that uses a lot of matrix multiplication?
The most practical examples I’m aware of are the usage of TLA+ to specify systems at AWS: https://lamport.azurewebsites.net/tla/formal-methods-amazon....
From "Use of Formal Methods at Amazon Web Services" (2014) https://lamport.azurewebsites.net/tla/formal-methods-amazon.... :
> What Formal Specification Is Not Good For: We are concerned with two major classes of problems with large distributed systems: 1) bugs and operator errors that cause a departure from the logical intent of the system, and 2) surprising ‘sustained emergent performance degradation’ of complex systems that inevitably contain feedback loops. We know how to use formal specification to find the first class of problems. However, problems in the second category can cripple a system even though no logic bug is involved. A common example is when a momentary slowdown in a server (perhaps due to Java garbage collection) causes timeouts to be breached on clients, which causes the clients to retry requests, which adds more load to the server, which causes further slowdown. In such scenarios the system will eventually make progress; it is not stuck in a logical deadlock, livelock, or other cycle. But from the customer's perspective it is effectively unavailable due to sustained unacceptable response times. TLA+ could be used to specify an upper bound on response time, as a real-time safety property. However, our systems are built on infrastructure (disks, operating systems, network) that do not support hard real-time scheduling or guarantees, so real-time safety properties would not be realistic. We build soft real-time systems in which very short periods of slow responses are not considered errors. However, prolonged severe slowdowns are considered errors. We don’t yet know of a feasible way to model a real system that would enable tools to predict such emergent behavior. We use other techniques to mitigate those risks.
Delay, cycles, feedback; [complex] [adaptive] nonlinearity
Formal methods including TLA+ also can't/don't prevent or can only workaround side channels in hardware and firmware that is not verified. But that's a different layer.
> This raised a challenge; how to convey the purpose and benefits of formal methods to an audience of software engineers? Engineers think in terms of debugging rather than ‘verification’, so we called the presentation “Debugging Designs” [8] . Continuing that metaphor, we have found that software engineers more readily grasp the concept and practical value of TLA+ if we dub it:
Exhaustively testable pseudo-code
> We initially avoid the words 'formal', 'verification', and 'proof', due to the widespread view that formal methods are impractical. We also initially avoid mentioning what the acronym 'TLA' stands for, as doing so would give an incorrect impression of complexity.

Isn't there a hello world with vector clocks tutorial? A simple, formally-verified hello world kernel module with each of the potential methods would be demonstrative, but then don't you need to model the kernel with abstract distributed concurrency primitives too?
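The "exhaustively testable pseudo-code" framing can be demonstrated without TLA+ syntax: below is a toy explicit-state model checker in Python that exhaustively explores a two-process lock protocol and checks a mutual-exclusion invariant over every reachable state, which is essentially what TLC does for a TLA+ spec (my own sketch, not TLC):

```python
def replace(t, i, v):
    return t[:i] + (v,) + t[i + 1:]

def next_states(state):
    """The 'Next' relation: all states reachable in one step."""
    procs, lock = state
    for i, p in enumerate(procs):
        if p == "idle":
            yield replace(procs, i, "waiting"), lock
        elif p == "waiting" and not lock:
            yield replace(procs, i, "critical"), True
        elif p == "critical":
            yield replace(procs, i, "idle"), False

def check(init, invariant):
    """Exhaustive search of the state graph; fails if the invariant breaks."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        assert invariant(s), f"invariant violated in {s}"
        for n in next_states(s):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return len(seen)

mutual_exclusion = lambda s: sum(p == "critical" for p in s[0]) <= 1
reachable = check((("idle", "idle"), False), mutual_exclusion)  # 8 states
```

Real model checkers add fairness, liveness (temporal properties), and symmetry reduction, which is where TLA+/TLC earn their keep over a hand-rolled BFS.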
From https://news.ycombinator.com/item?id=40980370 :
> - [ ] DOC: learnxinyminutes for tlaplus
> TLAplus: https://en.wikipedia.org/wiki/TLA%2B
> awesome-tlaplus > Books, (University) courses teaching (with) TLA+: https://github.com/tlaplus/awesome-tlaplus#books
FizzBee, Nagini, deal-solver, z3, dafny; https://news.ycombinator.com/item?id=39904256#39938759 ,
"Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024) https://news.ycombinator.com/item?id=40680722
awesome-safety-critical:
Computer Scientists Prove That Heat Destroys Quantum Entanglement
This is really not a surprise. And I'd like to clarify more misconceptions of modern science:
It isn't their identity the atoms give up, their hyperdimensional vibration synchronizes.
This is puzzling and spooky enough to warrant all of those flavored adjectives because science is convinced the Universe is smallest states built up into our universe, when in actuality the Universe is a singularity of potential which distributes itself (over nothing), and state continuously resolves through constructive and destructive interference (all space/time manifestation), probably bound by spin characteristic (like little knots which cannot unwind themselves.)
As universal potential exists outside of space and time (giving rise to them; an alt theory to the big bang), when particles are synchronized (at any distance) the dispositions (not identities) of their potentials are bound (identity would be the localized existential particle). Any destructive interference upon the hyperdimensional vibration will destroy the entanglement.
The domain to be explored is, what can we do with constructive interference?
Modern science worshipers will have to bite on this one, and admit their sacred axioms are wrong.
So, modulating thermal insulation of (a non-superconducting and non-superfluidic or any) quantum simulator results in loss of entanglement.
How, then, can entanglement across astronomical distances occur without cooler temps the whole way there, if heat destroys all entanglement?
Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
Btw, entangled swarms can be re-entangled over and over and over. The same entangled scope.
The entanglement is the vibrational synchrony in the zero-dimensional Universal Potential (or just the capacity for existential potential, to sound less grandiose).
So everything at that vibrational axis will bias to the same disposition.
There are more vibrational axes than atoms in the Universe; these are the purely random Q we expect when made decoherent.
Synchronized, our technology will come by how we may constructively and destructively interfere with the informational content of the "disposition". Any perturbance is informational, and I think you can broadcast analog motion pictures, in hyperdimensional clarity with sound and warm lighting; qubits are irrelevant and a dead end.
Quantum holography is a sieve of some sort, where shadows may be cast upon extradimensional walls.
Hash collisions may be found by superimposing the probabilities factored by their algorithms such that constructive and destructive interference reduce the multidimensional model to the smallest probable sets.
Cyphers may be broken by combining constructive key attempts (thousands of millions at a time?) in a "singular domain" with the hash collision solution.
Heat at some level, not every level. Same goes for magnets or heck, even biological specimens.
Computer haxor discovers that heat destroys all signs of life in organic material.
Sensitive apparatus requires insulation.
98% accuracy in predicting diseases by the colour of the tongue
"Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
> This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). The proposed imaging system trained 5260 images classified with seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boost (XGBoost) methods, to predict tongue color under any lighting conditions. The obtained results from the machine learning algorithms illustrated that XGBoost had the highest accuracy at 98.71%, while the NB algorithm had the lowest accuracy, with 91.43%
"Development of Deep Ensembles to Screen for Autism and Symptom Severity Using Retinal Photographs" (2023) https://jamanetwork.com/journals/jamanetworkopen/fullarticle... ; 100% accuracy where a recently-published best method with charts is 80% accurate: "Machine Learning Prediction of Autism Spectrum Disorder from a Minimal Set of Medical and Background Information" (2024) https://jamanetwork.com/journals/jamanetworkopen/fullarticle... https://github.com/Tammimies-Lab/ASD_Prediction_ML_Rajagopal...
I don't think any of the medical imaging NN training projects have similar colorspace analysis.
The possibilities for dark matter have shrunk
There is no dark matter in theories of superfluid quantum gravity.
Dirac later proposed the Dirac sea. Gödel had already proposed dust solutions, which are fluidic.
PBS Spacetime has a video out now on whether gravity is actually random. I don't know whether it addresses theories of superfluid quantum gravity.
>>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
Database “sharding” came from Ultima Online? (2009)
Wikipedia has:
Shard (database architecture) > Etymology: https://en.wikipedia.org/wiki/Shard_(database_architecture)#...
Partition > Horizontal partitioning --> Sharding: https://en.wikipedia.org/wiki/Partition_(database)
Database scalability > Techniques > Partitioning: https://en.wikipedia.org/wiki/Database_scalability
Network partition: https://en.wikipedia.org/wiki/Network_partition
why is every social media network now just "post something with varying levels of correctness because it farms engagement"
it's so exhausting needing to just read comments to get the actual, real truth
Do you realize that the linked Wikipedia post agrees with the article? It lists Ultima Online as one of two likely sources for the term "sharding."
Brain found to store three copies of every memory
I think the evidence for this should be extraordinary because it is so apparently unlikely.
Why would the brain store multiple copies of a memory? It’s so inefficient.
On a small level computers do that: cpu cache (itself 3 levels), GPU memory (optional), main memory, hdd cache.
The reason why animal brains would store multiple copies of a memory is functional proximity. You need memory near the limbic system for neurotic processing and fear conditioning. Long term storage is near the brain stem so that it can eventually bleed into the cerebellum to become muscle memory. There is memory storage near the frontal lobes so that people can reason about with their advanced processors like speech parsing, visual cortex, decision bias, and so forth.
OT ScholarlyArticle: "Divergent recruitment of developmentally defined neuronal ensembles supports memory dynamics" (2024) https://www.science.org/doi/10.1126/science.adk0997
"Our Brains Instantly Make Two Copies of Each Memory" (2017) https://www.pbs.org/wgbh/nova/article/our-brains-instantly-m... :
> We might be wrong. New research suggests that our brains make two copies of each memory in the moment they are formed. One is filed away in the hippocampus, the center of short-term memories, while the other is stored in cortex, where our long-term memories reside.
From https://www.psychologytoday.com/us/blog/the-athletes-way/201... :
> Surprisingly, the researchers found that long-term memories remain "silent" in the prefrontal cortex for about two weeks before maturing and becoming consolidated into permanent long-term memories.
ScholarlyArticle: "Engrams and circuits crucial for systems consolidation of a memory" (2017) https://www.science.org/doi/10.1126/science.aam6808
So that makes four (4) copies of each memory in the brain if you include the engram cells in the prefrontal cortex.
What is the survival advantage to redundant, resilient recall; why do brains with such traits survive and where and when in our evolutionary lineage did such complexity arise?
"Spatial Grammar" in DNA: Breakthrough Could Rewrite Genetics Textbooks
ScholarlyArticle: "Position-dependent function of human sequence-specific transcription factors" (2024) https://www.nature.com/articles/s41586-024-07662-z
Celebrating 6 years since Valve announced Steam Play Proton for Linux
I've seen it joked that with Proton, Win32 is a good stable ABI for gaming on Linux.
Given that, and that I'm most comfortable developing on Linux and in C++, does anyone have a good toolchain recommendation for cross compiling C++ to Windows from Linux? (Ideally something that I can point CMake at and get a build that I can test in Proton.)
MinGW works well (e.g. mingw-w64 in Debian/Ubuntu). It works well with CMake, you just need to pass CMake flags like
-DCMAKE_CXX_COMPILER=x86_64-w64-mingw32-c++ -DCMAKE_C_COMPILER=x86_64-w64-mingw32-c
Something like SDL or Raylib is useful for cross-platform windowing and sound if you are writing games.
Hey, TuxMath is written with SDL, though there's a separate JS port now IIUC.
/? mingw: https://github.com/search?q=mingw&type=repositories
The msys2/MINGW-packages are PKGBUILD packages: https://github.com/msys2/MINGW-packages :
> Package scripts for MinGW-w64 targets to build under MSYS2.
> [..., SDL, gles, glfw, egl, glbinding, cargo-c, gtest, cppunit, qt5, gtk4, icu, ki-i18n-qt5, SDL2_pango, jack2, gstreamer, ffmpeg, blender, gegl, gnome-text-editor, gtksourceview, kdiff3, libgit2, libusb, libressl, libsodium, libserialport, libslirp, hugo]
PKGBUILD is a bash script packaging spec from Arch, which builds packages with makepkg.
msys2/setup-msys2: https://github.com/msys2/setup-msys2:
> setup-msys2 is a GitHub Action (GHA) to setup an MSYS2 environment (i.e. MSYS, MINGW32, MINGW64, UCRT64, CLANG32, CLANG64 and/or CLANGARM64 shells)
Though, if you're writing a 3d app/game, e.g. panda3d already builds for Windows, Mac, and Linux; pygbag compiles panda3d to WASM; there's also harfang-wasm; and TIL about leptos, which is more like React in Rust and should work for GUI apps, too.
panda3d > Building applications: https://docs.panda3d.org/1.11/python/distribution/building-b...
https://github.com/topics/mingw :
> nCine, win-sudo, drmingw debugger,
mstorsjo/llvm-mingw: https://github.com/mstorsjo/llvm-mingw :
> Address Sanitizer and Undefined Behaviour Sanitizer, LLVM Control Flow Guard -mguard=cf ; i686, x86_64, armv7 and arm64
msys2/MINGW-packages: "[Wish] Add 3D library for Python" https://github.com/msys2/MINGW-packages/issues/21325
panda3d/panda3d: https://github.com/panda3d/panda3d
Before Steam for Linux, there was tuxmath.
tux4kids/tuxmath//mingw/ has a Code::Blocks build config with mingw32, FWICS: https://github.com/tux4kids/tuxmath/blob/master/mingw/tuxmat...
Code::Blocks: https://en.m.wikipedia.org/wiki/Code::Blocks
There's a CMake build: https://github.com/tux4kids/tuxmath/blob/master/CMakeLists.t...
But it says the autotools build is still the canonical one: configure.ac for autoconf and Makefile.am for automake.
SDL supports SVG since SDL_image 2.0.2 with IMG_LoadSVG_RW() and since SDL_image 2.6.0 with IMG_LoadSizedSVG_RW: https://wiki.libsdl.org/SDL2_image/IMG_LoadSizedSVG_RW
conda-forge has SDL on Win/Mac/Lin.
conda-forge/sdl2-feedstock: https://github.com/conda-forge/sdl2-feedstock
emscripten-forge does not yet have SDL or .*gl.* or tuxmath or bash or busybox: https://github.com/emscripten-forge/recipes/tree/main/recipe...
conda-forge/panda3d-feedstock / recipe/meta.yaml builds panda3d on Win, Mac, Lin, and Github Actions containers: https://github.com/conda-forge/panda3d-feedstock/blob/main/r... :
conda install -c conda-forge panda3d # mamba install panda3d # miniforge
# pip install panda3d
Panda3d docs > Distributing Panda3D Applications > Third-party dependencies lists a few libraries that I don't think are in the MSYS or conda-forge package repos:
https://docs.panda3d.org/1.10/python/distribution/thirdparty...
What about math games on open source operating systems, Steam?
A Manim renderer for [game engine] would be cool for school, and cool for STEM.
"Render and interact with through Blender, o3de, panda3d ManimCommunity/manim#3362" https://github.com/ManimCommunity/manim/issues/3362
GitHub Named a Leader in the Gartner First Magic Quadrant for AI Code Assistants
Gartner "Magic Quadrant for AI Code Assistants" (2024) https://www.gartner.com/doc/reprints?id=1-2IKO4MPE&ct=240819...
Additional criteria for assessing AI code assistants from https://news.ycombinator.com/item?id=40478539 re: Text-to-SQL benchmarks:
codefuse-ai/Awesome-Code-LLM > Analysis of AI-Generated Code, Benchmarks: https://github.com/codefuse-ai/Awesome-Code-LLM :
> 8.2. Benchmarks: * Integrated Benchmarks, Program Synthesis, Visually Grounded Program Synthesis, Code Reasoning and QA, Text-to-SQL, Code Translation, Program Repair, Code Summarization, Defect/Vulnerability Detection, Code Retrieval, Type Inference, Commit Message Generation, Repo-Level Coding*
OT did not assess:
Aider: https://github.com/paul-gauthier/aider :
> Aider works best with GPT-4o & Claude 3.5 Sonnet and can connect to almost any LLM.
> Aider has one of the top scores on SWE Bench. SWE Bench is a challenging software engineering benchmark where aider solved real GitHub issues from popular open source projects like django, scikitlearn, matplotlib, etc
SWE Bench benchmark: https://www.swebench.com/
A carbon-nanotube-based tensor processing unit
"A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
> Abstract: The growth of data-intensive computing tasks requires processing units with higher performance and energy efficiency, but these requirements are increasingly difficult to achieve with conventional semiconductor technology. One potential solution is to combine developments in devices with innovations in system architecture. Here we report a tensor processing unit (TPU) that is based on 3,000 carbon nanotube field-effect transistors and can perform energy-efficient convolution operations and matrix multiplication. The TPU is constructed with a systolic array architecture that allows parallel 2 bit integer multiply–accumulate operations. A five-layer convolutional neural network based on the TPU can perform MNIST image recognition with an accuracy of up to 88% for a power consumption of 295 µW. We use an optimized nanotube fabrication process that offers a semiconductor purity of 99.9999% and ultraclean surfaces, leading to transistors with high on-current densities and uniformity. Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.
That's 1 TOPS/W.
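The "parallel 2 bit integer multiply–accumulate operations" the abstract describes are the primitive each systolic-array cell performs per cycle. A behavioral sketch in plain Python (not a model of the CNT hardware; the clamp range and scale are illustrative):

```python
def quantize2(x, scale):
    """Clamp a scaled value to a signed 2-bit integer (-2..1)."""
    q = round(x / scale)
    return max(-2, min(1, q))

def mac(row, col, scale=1.0):
    """Multiply-accumulate of two vectors after 2-bit quantization,
    the per-cycle primitive of a systolic-array cell."""
    return sum(quantize2(a, scale) * quantize2(b, scale)
               for a, b in zip(row, col))

print(mac([1, -2, 1], [1, 1, -1]))  # -> 1*1 + (-2)*1 + 1*(-1) = -2
```

At 2-bit precision each cell only needs a tiny multiplier, which is why such low per-transistor counts (3,000 FETs for the whole array) are feasible.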
"Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024) https://news.ycombinator.com/item?id=40719725
"Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490
Or CNT Carbon Nanotubes, or TWCNT Twisted Carbon Nanotubes.
From "Rivian reduced electrical wiring by 1.6 miles and 44 pounds" (2024) https://news.ycombinator.com/item?id=41210021 :
> Are there yet CNT or TWCNT Twisted Carbon Nanotube substitutes for copper wiring?
"Twisted carbon nanotubes store more energy than lithium-ion batteries" (2024? https://news.ycombinator.com/item?id=41159421
- NewsArticle about the ScholarlyArticle: "The first tensor processor chip based on carbon nanotubes could lead to energy-efficient AI processing" (2024) https://techxplore.com/news/2024-08-tensor-processor-chip-ba...
How to build a 50k ton forging press
> By the early 2000s, parts from the heavy presses were in every U.S. military aircraft in service, and every airplane built by Airbus and Boeing.
> The savings on a heavy bomber was estimated to be even greater, around 5-10% of its total cost; savings on the B-52 alone were estimated to be greater than the entire cost of the Heavy Press Program.
These are wild stats.
Great article! I was fascinated to learn about the Heavy Press program for the first time, here on HN[1] a month ago, and am glad more about it is being posted.
It makes me think: what other processes could redefine an industry or way of thinking/designing if taken a step further? We had forging and extrusion presses … but huge, high pressure ones changed the game entirely.
> It makes me think: what other processes could redefine an industry or way of thinking/designing if taken a step further
Pressure-injection molded hemp plastic certainly meets spec for automotive and aerospace applications.
"Plant-based epoxy enables recyclable carbon fiber" (2022) [that's stronger than steel and lighter than fiberglass] https://news.ycombinator.com/item?id=30138954 ... https://news.ycombinator.com/item?id=37560244
Silica aerogels are dermally abrasive. Applications for non-silica aerogels - for example hemp aerogels - include thermal insulation, packaging, maybe upholstery fill.
There's a new method to remove oxygen from Titanium: "Cheap yet ultrapure titanium metal might enable widespread use in industry" (2024) https://news.ycombinator.com/item?id=40768549
"Electric recycling of Portland cement at scale" (2024) https://www.nature.com/articles/s41586-024-07338-8 ... "Combined cement and steel recycling could cut CO2 emissions" https://news.ycombinator.com/item?id=40452946
"Researchers create green steel from toxic [aluminum production waste] red mud in 10 minutes" (2024) https://newatlas.com/materials/toxic-baulxite-residue-alumin...
There are many new imaging methods for quality inspection of steel and other metals and alloys, and biocomposites.
"Seeding steel frames brings destroyed coral reefs back to life" (2024) https://news.ycombinator.com/item?id=39735205
[deleted]
Seven basic rules for causal inference
At the bottom, the author mentions that by "correlation" they don't mean "linear correlation", but all their diagrams show the presence or absence of a clear linear correlation, and code examples use linear functions of random variables.
They offhandedly say that "correlation" means "association" or "mutual information", so why not just do the whole post in terms of mutual information? I think the main issue with that is just that some of these points become tautologies -- e.g. the first point, "independent variables have zero mutual information" ends up being just one implication of the definition of mutual information.
This isn't a correction to your post, but a clarification for other readers: correlation implies dependence, but dependence does not imply correlation. By contrast, two variables share non-zero mutual information if and only if they are dependent.
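The "non-zero mutual information iff dependent" claim is easy to check numerically for discrete variables. A stdlib sketch (plug-in estimate from samples; the helper name is mine):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Independent uniform bits: I = 0.  Perfectly dependent bits: I = 1 bit.
indep = [(x, y) for x in (0, 1) for y in (0, 1)]  # uniform product law
dep = [(x, x) for x in (0, 1)]                    # y = x
print(mutual_information(indep), mutual_information(dep))  # -> 0.0 1.0
```

Note this also shows why the post's first rule becomes a tautology in MI terms: independence makes every joint cell factor exactly, so every log term is log(1) = 0.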
By that measure, all of these Spurious Correlations indicate insignificant dependence, which isn't useful: https://www.tylervigen.com/spurious-correlations
Isn't it possible to contrive an example where a test of pairwise dependence causes the statistician to error by excluding relevant variables from tests of more complex relations?
Trying to remember which of these factor both P(A|B) and P(B|A) into the test
I think you're using the word "insignificant" in a possibly misleading or confusing way.
I think in this context, the issue with the spurious correlations from that site is that they're all time series for overlapping periods. Of course, the people who collected these understood that time was an important causal factor in all these phenomena. In the graphical language of this post:
T --> X_i
T --> X_j
Since T is a common cause to both, we should expect to see mutual information between X_i and X_j. In the paradigm here, we could try to control for T and see if a relationship persists (i.e. perhaps in the same month, collect observations for X_i, X_j in each of a large number of locales), and get a signal on whether the shared dependence on time is the only link.
If a test of dependence shows no significant results, that's not conclusive because of complex, nonlinear, and quantum 'functions'.
How are effect lag and lead expressed in said notation for expressing causal charts?
Should we always assume that t is a monotonically-increasing series, or is it just how we typically sample observations? Can traditional causal inference describe time crystals?
What is the quantum logical statistical analog of mutual information?
Are there pathological cases where mutual information and quantum information will not discover a relationship?
Does Quantum Mutual Information account for Quantum Discord if it only uses von Neumann definition of entropy?
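For reference, the standard textbook definitions (not from the article): quantum mutual information is built from von Neumann entropy,

```latex
I(A{:}B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}), \qquad S(\rho) = -\operatorname{Tr}(\rho \log \rho)
```

and quantum discord is the gap D(B|A) = I(A:B) − J(B|A), where J(B|A) is the measurement-based (one-way classical) mutual information maximized over measurements on A. So a quantity defined only through von Neumann entropy does not by itself separate out discord; the two classically-equivalent expressions for mutual information disagree in the quantum case, and that disagreement is the discord.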
Launch HN: MinusX (YC S24) – AI assistant for data tools like Jupyter/Metabase
Hey HN! We're Vivek, Sreejith and Arpit, and we're building MinusX (https://minusx.ai), a data science assistant for Jupyter and Metabase. MinusX is a Chrome extension (https://minusx.ai/chrome-extension) that adds an AI sidechat to your analytics apps. Given an instruction, our agent operates your app (by clicking and typing, just like you would) to analyze data and answer queries. Broadly, you can do 3 types of things: ask for hypotheses and explore data, extend existing notebooks/dashboards, or select a region and ask questions. There's a simple video walkthrough here: https://www.youtube.com/watch?v=BbHPyX2lJGI. The core idea is to "upgrade" existing tools, where people already do most of their data work, rather than building a new platform.
I (Vivek) spent 6 years working in various parts of the data stack, from data analysis at a 1000+ person ride hailing company to research at comma.ai, where I also handled most of the metrics and dashboarding infrastructure. The problems with data, surprisingly, were pretty much the same. Developers and product managers just want answers, or want to set up a quick view of some metric they care about. They often don't know which table contains what information, or what specific secret filters need to be kept in mind to get clean data. At large companies, analysts/scientists take care of most of these requests over a thousand back-and-forths. In small companies, most data projects end up being one-off efforts, and many die midway.
I've tried every new shiny analytics app out there and none of them fully solve this core issue. New tools also come with a massive cost: you have to convince everyone around you to move, change all your workflows, and hope the new tool has all the features your trusty old one did. Most people currently go to ChatGPT with barely any real background context, and admonish the model till it sputters some useful code, SQL, or a hypothesis. This is the kind of user we're trying to help.
The philosophy of MinusX mirrors that of comma. Just as comma is working on "an AI upgrade for your car", we want to retrofit analytics software with abilities that LLMs have begun to unlock. We also get a kick out of the fact that we use the same APIs humans use (clicking and typing), so we don't really need "permission" from any analytics app (just like comma.ai does not need permission from Mr Toyota Corolla) :)
How it works: Given an instruction, the MinusX chrome extension first constructs a simplified representation of the host application's state using the DOM, and a bunch of application specific cues. We also have a set of (currently) predefined actions (eg: clicking and typing) that the agent can use to interact with the host application. Any "complex action" can be described as a combination of these action-primitives. We send this entire context, the instruction and the actions to an LLM. The LLM responds with a sequence of actions which are executed and the revised state is computed and sent back to the LLM. This loop terminates when the LLM evaluates that the desired goals are met. Our architecture allows users to extend the capabilities of the agent by specifying new actions as combinations of the action-primitives. We're working on enabling users to do this through the extension itself.
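The observe/plan/act loop described above can be sketched generically. Everything here (the state shape, the stubbed model, the action strings) is hypothetical for illustration, not MinusX's actual code:

```python
def run_agent(get_state, llm, execute, max_steps=10):
    """Generic agent loop: show the model the current state, apply the
    actions it returns, and stop when it declares the goal met."""
    for _ in range(max_steps):
        actions = llm(get_state())
        if actions == ["done"]:
            break
        for action in actions:
            execute(action)
    return get_state()

# Stub "host app": state is a list; the fake LLM keeps emitting a
# typing action until three items exist, then declares success.
state = []
fake_llm = lambda s: ["done"] if len(s) >= 3 else [f"type:{len(s)}"]
run_agent(lambda: state, fake_llm, lambda a: state.append(a))
print(state)  # -> ['type:0', 'type:1', 'type:2']
```

The interesting engineering is all in the two callbacks this sketch stubs out: building a compact state representation from the DOM, and mapping "complex actions" down to click/type primitives.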
"Retrofitting" is a weird concept for software, and we've found that it takes a while for people to grasp what this actually implies. We think, with AI, it will be more of a thing. Most software we use will be "upgraded" and not always by the people making the original software.
We ourselves are focused on data analytics because we've worked in and around data science / data analysis / data engineering all our careers - working at startups, Google, Meta, etc - and understand it decently well. But since "retrofitting" can be just as useful for a bunch of other field-specific software, we're going to open-source the entire extension and associated plumbing in the near future.
Also, let's be real - a sequence of function calls rammed through a decision tree does not make any for-loop "agentic". The reality is that a large amount of in-the-loop data needed for tasks such as ours does not exist yet! Getting this data flywheel running is a very exciting axis as well.
The product is currently free to use. In the future, we'll probably charge a monthly subscription fee, and support local models / bring-your-own-keys. But we're still working that out.
We'd be super stoked for you to try out MinusX! You can find the extension here: https://minusx.ai/chrome-extension. We've also created a playground with data, for both Jupyter and Metabase, so once the extension is installed you can take it for a spin: https://minusx.ai/playground
We'd love to hear what you think about the idea, and anything else you'd like to share! Suggestions on which tools to support next are most welcome :)
XAI! Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Use case: Evidence-based policy; impact: https://en.wikipedia.org/wiki/Evidence-based_policy
Test case: "Find leading economic indicators like bond yield curve from discoverable datasets, and cache retrieved data like or with pandas-datareader"
Use case: Teach Applied ML, NNs, XAI: Explainable AI, and first ethics
Tools with integration opportunities:
Google Model Explorer: https://github.com/google-ai-edge/model-explorer
Yellowbrick ML; teaches ML concepts with Visualizers for humans working with scikit-learn, which can be used to ensemble LLMs and other NNs because of its Estimator interfaces : https://www.scikit-yb.org/en/latest/
Manim, ManimML, Blender, panda3d, unreal: "Explain this in 3d, with an interactive game"
Khanmigo; "Explain this to me with exercises"
"And Calculate cost of computation, and Identify relatively sustainable lower-cost methods for these computations"
"Identify where this process, these tools, and experts picking algos, hyperparameters, and parameters has introduced biases into the analysis, given input from additional agents"
Uv 0.3 – Unified Python packaging
I love the idea of Rye/uv/PyBi on managing the interpreter, but I get queasy that they are not official Python builds. Probably no issue, but it only takes one subtle bug to ruin my day.
Plus the supply chain potential attack. I know official Python releases are as good as I can expect of a free open source project. While the third party Python distribution is probably being built in Nebraska.
I'm from Nebraska. Unfortunately if your Python is compiled in a datacenter in Iowa, it's more likely that it was powered with wind energy. Claim: Iowa has better Clean Energy PPAs for datacenters than Nebraska (mostly due to rational wind energy subsidies).
Anyways, software supply chain security and Python & package build signing and then containers and signing them too
Conda-forge's builds are probably faster than the official CPython builds. conda-forge/python-feedstock//recipe/meta.yaml: https://github.com/conda-forge/python-feedstock/blob/main/re...
Conda-forge also has OpenBLAS, BLIS, Accelerate, Netlib, and Intel MKL; conda-forge docs > switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base/#swit...
From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :
> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.
> We will show how to use this in practice with `rattler-build`
> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.
> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and synergy possibilities of rattler-build and cibuildwheel
Linux distros build and sign Python and python3-* packages with GPG keys or similar, and then the package manager optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of files to be installed, with per-file checksums. Package manifests and/or the package containing the manifest should be signed (so that tools like debsums and rpm --verify can detect disk-resident executable, script, data asset, and configuration file changes).
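The verification step that debsums and rpm --verify perform reduces to comparing stored per-file digests against what is on disk. A minimal sketch with an invented manifest format ({path: sha256 hex digest}):

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest):
    """Return the paths whose on-disk contents no longer match."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]

# Demo: record a file's digest, tamper with the file, detect the drift.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "config")
    with open(path, "w") as f:
        f.write("a=1\n")
    manifest = {path: sha256_of(path)}
    assert verify(manifest) == []  # pristine install
    with open(path, "a") as f:
        f.write("a=2\n")
    changed = verify(manifest)
print(changed)
```

Real package managers add the crucial extra step of verifying a signature over the manifest itself, so an attacker can't just rewrite both the file and its recorded digest.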
virtualenvs can be mounted as a volume at build time with -v with some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. What is added to the virtualenv should have a signature and a version.
ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).
So, when you sign a container full of packages, you should check the package signatures; and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when there are security upgrades.
e.g. Dependabot - if working - will regularly run and send a pull request when it detects that the version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of reported security vulnerabilities in ossf/osv-schema format.
Is there already a way to, as a developer, sign Python packages built with cibuildwheel with Twine and TUF or sigstore to be https://SLSA.dev/ compliant?
Show HN: PgQueuer – Transform PostgreSQL into a Job Queue
PgQueuer is a minimalist, high-performance job queue library for Python, leveraging the robustness of PostgreSQL. Designed for simplicity and efficiency, PgQueuer uses PostgreSQL's LISTEN/NOTIFY to manage job queues effortlessly.
Does the celery SQLAlchemy broker support PostgreSQL's LISTEN/NOTIFY features?
Similar support in SQLite would simplify testing applications built with celery.
How to add table event messages to SQLite so that the SQLite broker has the same features as AMQP? Could a vtable facade send messages on table events?
Are there SQLite triggers?
Celery > Backends and Brokers: https://docs.celeryq.dev/en/stable/getting-started/backends-...
/? sqlalchemy listen notify: https://www.google.com/search?q=sqlalchemy+listen+notify :
asyncpg.Connection.add_listener
sqlalchemy.event.listen, @listen_for
psycopg2 conn.poll(), while connection.notifies
psycopg2 > docs > advanced > Asynchronous notifications: https://www.psycopg.org/docs/advanced.html#asynchronous-noti...
PgQueuer.db, PgQueuer.listeners.add_listener; asyncpg add_listener: https://github.com/janbjorge/PgQueuer/blob/main/src/PgQueuer...
asyncpg/tests/test_listeners.py: https://github.com/MagicStack/asyncpg/blob/master/tests/test...
/? sqlite LISTEN NOTIFY: https://www.google.com/search?q=sqlite+listen+notify
sqlite3 update_hook: https://www.sqlite.org/c3ref/update_hook.html
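SQLite does have triggers, and a common way to approximate LISTEN/NOTIFY with them is an outbox table that a trigger populates and workers poll. A minimal stdlib sketch (table and trigger names are mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT);
CREATE TABLE job_events (id INTEGER PRIMARY KEY, job_id INTEGER, op TEXT);
-- The trigger plays the role of NOTIFY: every insert leaves an event row.
CREATE TRIGGER jobs_notify AFTER INSERT ON jobs
BEGIN
  INSERT INTO job_events (job_id, op) VALUES (NEW.id, 'insert');
END;
""")
conn.execute("INSERT INTO jobs (payload) VALUES ('send-email')")
events = conn.execute("SELECT job_id, op FROM job_events").fetchall()
print(events)  # -> [(1, 'insert')]
```

Unlike Postgres NOTIFY there is no push channel, so delivery latency is bounded by the workers' poll interval; a worker would SELECT and then DELETE consumed rows from job_events in one transaction.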
Can Large Language Models Understand Symbolic Graphics Programs?
What an awful paper title, saying "Symbolic Graphics Programs" when they just mean "vector graphics". I don't understand why they cannot just use the established term instead. Also, there is no "program" here, in the same way that coding HTML is not programming, as vector graphics are not supposed to be Turing complete. And where they pulled the "symbolic" from is completely beyond me.
I'm more curious how they think LLMs can imagine things:
> To understand symbolic programs, LLMs may need to possess the ability to imagine how the corresponding graphics content would look without directly accessing the rendered visual content
To my understanding, LLMs are predictive engines based upon their tokens and embeddings without any ability to "imagine" things.
As such, an LLM might be able to tell you that the following SVG is a black circle because it is in Mozilla documentation[0]:
  <svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
    <circle cx="50" cy="50" r="50"/>
  </svg>
noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow 
noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow 
noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow 
noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg">
<circle cx="50" cy="50" r="50" />
</svg>
However, I highly doubt any LLM could tell you the following is a "Hidden Mickey" or "Mickey Mouse Head Silhouette": <svg viewBox="0 0 175 175" xmlns="http://www.w3.org/2000/svg">
<circle cx="100" cy="100" r="50" />
<circle cx="50" cy="50" r="40" />
<circle cx="150" cy="50" r="40" />
</svg>
- [0] https://developer.mozilla.org/en-US/docs/Web/SVG/Element/cir...
If the LLM saves the SVG vector graphic to a raster image like a PNG and prompts with that instead, it will have no trouble labeling what's depicted in the SVG.
So, the task is "describe what an SVG depicts without saving it to a raster image and prompting with that"?
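The raster-then-prompt workaround described above can be sketched without any SVG library: treat each `<circle>` as a point-in-circle test and fill a bitmap. This is a minimal illustration (assumption: circles only; a real pipeline would rasterize with something like cairosvg and prompt a vision model with the PNG):

```python
def rasterize_circles(circles, width, height):
    """Return a row-major bitmap: 1 inside any circle, 0 outside."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            inside = any((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2
                         for cx, cy, r in circles)
            row.append(1 if inside else 0)
        grid.append(row)
    return grid

# The three circles from the "Hidden Mickey" SVG above: head plus two ears.
mickey = [(100, 100, 50), (50, 50, 40), (150, 50, 40)]
bitmap = rasterize_circles(mickey, 175, 175)
filled = sum(map(sum, bitmap))
print(filled)  # count of "ink" pixels in the 175x175 raster
```

A vision model prompted with this bitmap (scaled up and saved as PNG) sees the silhouette directly, whereas a text-only model has to infer the geometry from coordinates.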
I believe this is exactly what the modern multimodal models are doing -- they are not strictly Large Language Models any more.
Which part is the query preprocessor?
/? LLM stack app architecture https://www.google.com/search?q=LLM+stack+app+architecture&t...
https://cobusgreyling.medium.com/emerging-large-language-mod... ; Flexibility / Complexity,
Using a list to manage executive function
todo.txt is a lightweight text format that a number of apps support: http://todotxt.org/ . From http://todotxt.org/todo.txt :
(priority) task text +project @context
(A) Call Mom @Phone +Family
(A) Schedule annual checkup +Health
(B) Outline chapter 5 +Novel @Computer
(C) Add cover sheets @Office +TPSReports
x Download Todo.txt mobile app @Phone
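The conventions shown above (leading `x` for done, `(priority)`, `+project`, `@context`) are simple enough to parse with a few regexes. A minimal sketch, ignoring the format's optional creation/completion dates:

```python
import re

def parse_todo_line(line):
    """Parse one todo.txt line into its conventional fields.
    Minimal sketch: does not handle creation/completion dates."""
    done = line.startswith("x ")
    if done:
        line = line[2:]
    m = re.match(r"\(([A-Z])\)\s+", line)
    priority = m.group(1) if m else None
    if m:
        line = line[m.end():]
    projects = re.findall(r"\+(\S+)", line)   # +project tags
    contexts = re.findall(r"@(\S+)", line)    # @context tags
    text = re.sub(r"\s*[+@]\S+", "", line).strip()
    return {"done": done, "priority": priority,
            "projects": projects, "contexts": contexts, "text": text}

print(parse_todo_line("(A) Call Mom @Phone +Family"))
```

The format's simplicity is the point: any line-oriented tool (grep, sort, a few lines of code) can query or rewrite the task list.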
From "What does my engineering manager do all day?" (2021) https://news.ycombinator.com/item?id=28680961 :
> - [ ] Create a workflow document with URLs and Text Templates
> - [ ] Create a daily running document with my 3 questions and headings and indented markdown checkbox lists; possibly also with todotxt/todo.txt / TaskWarrior & BugWarrior -style lineitem markup.
3 questions: Since, Before, Obstacles: What have I done since the last time we met? What will I do before the next time we meet? What obstacles are blocking my progress?
## yyyy-mm-dd
### @teammembername
#### Since
#### Before
#### Obstacles
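The daily-running-document skeleton above is mechanical enough to generate. A hypothetical helper (the function name and structure are illustrative, not from any particular tool):

```python
import datetime

QUESTIONS = ("Since", "Before", "Obstacles")  # the 3 standup questions

def daily_doc(team, date=None):
    """Render the markdown skeleton shown above for each team member."""
    date = date or datetime.date.today()
    lines = [f"## {date:%Y-%m-%d}"]
    for name in team:
        lines.append(f"### @{name}")
        lines.extend(f"#### {q}" for q in QUESTIONS)
    return "\n".join(lines)

print(daily_doc(["teammembername"], datetime.date(2021, 9, 27)))
```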
A TaskWarrior demo from which I wrote https://gist.github.com/westurner/dacddb317bb99cfd8b19a3407d... :
$ task help
task; task list; # 0 tasks.
task add "Read the source due:today priority:H project:projectA"
task add Read the docs later due:tomorrow priority:M project:projectA
task add project:one task abc
# "Created task 3."
task add project:one task def +someday depends:3
task
task context define projectA "project:projectA or +urgent"
task context projectA
task add task ghi
task add task jkl +someday depends:3
task # lists just tasks for the current context:"project:projectA"
task next
task 1 done
task next
task 2 start
task 2 done
task context none
task delete
TaskWarrior Best Practices: https://taskwarrior.org/docs/best-practices/
The TaskWarrior +WAITING virtual tag is the new way instead of status:waiting, according to the changelog.
Every once in awhile, I remember that I have a wiki/workflow document with a list of labels for systems like these: https://westurner.github.io/wiki/workflow#labels :
GTD says have labels like -next, -waiting, and -someday.
GTD also incorporates the 4 D's of Time Management; the Eisenhower Matrix Method.
4 D's of Time Management: Do, Defer/Schedule, Delegate, Delete
Time management > The Eisenhower Method: https://en.wikipedia.org/wiki/Time_management#The_Eisenhower...
Hipster PDA: https://en.wikipedia.org/wiki/Hipster_PDA
"Use a work journal" (2024-07; 1083 pts) https://news.ycombinator.com/item?id=40950584
U.S. Inflation Rate 1960-2024
Keep in mind that the things tracked in the Consumer Price Index are not static; over time there is a drift toward comparing apples to oranges. For example, if beef becomes too expensive and consumers switch to pork, eventually beef is removed and pork is tracked instead. That's not less inflation; that's bait and switch.
Furthermore, year-by-year inflation figures lack context on their own. Instead, for a given year we should also get the compounded rate for the previous 5, 10, and 15 years. That level of transparency would be significant.
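The compounded multi-year rate asked for above is just the geometric mean of the annual rates. A sketch with made-up numbers (assumption: illustrative rates, not actual BLS data):

```python
def compounded_annual_rate(annual_rates):
    """Geometric mean of yearly inflation rates (as decimals):
    the constant rate that compounds to the same total change."""
    total = 1.0
    for r in annual_rates:
        total *= 1.0 + r
    n = len(annual_rates)
    return total ** (1.0 / n) - 1.0

# Illustrative (not real CPI) annual rates for a trailing 5-year window:
rates_5yr = [0.012, 0.047, 0.080, 0.041, 0.029]
print(f"{compounded_annual_rate(rates_5yr):.3%}")
```

Because compounding is multiplicative, a single 8% year followed by four mild years still leaves the 5-year compounded rate well above any one of the mild years, which is exactly the context a single-year headline number hides.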
CPI: Consumer Price Index; "inflation": https://en.wikipedia.org/wiki/Consumer_price_index
US BLS: Bureau of Labor Statistics > Consumer Price Indexes Overview: https://www.bls.gov/cpi/overview.htm :
> Price indexes are available for the U.S., the four Census regions, nine Census divisions, two size of city classes, eight cross-classifications of regions and size-classes, and for 23 local areas. Indexes are available for major groups of consumer expenditures (food and beverages, housing, apparel, transportation, medical care, recreation, education and communications, and other goods and services), for items within each group, and for special categories, such as services.
- FWIU food prices never decreased after COVID-19.
https://www.google.com/search?q=FWIU+food+prices+never+decre.... :
"Consumer Price Index for All Urban Consumers: Food and Beverages in U.S. City Average (CPIFABSL)" https://fred.stlouisfed.org/series/CPIFABSL
"Consumer Price Index for All Urban Consumers: All Items Less Food and Energy in U.S. City Average (CPILFESL)" https://fred.stlouisfed.org/series/CPILFESL
> The "Consumer Price Index for All Urban Consumers: All Items Less Food & Energy" is an aggregate of prices paid by urban consumers for a typical basket of goods, excluding food and energy. This measurement, known as "Core CPI," is widely used by economists because food and energy have very volatile prices.
- COVID eviction moratoriums ended in 2021. How did that affect CPI rent/lease/mortgage, and new housing starts now that lumber prices have returned to normal? https://en.wikipedia.org/wiki/COVID-19_eviction_moratoriums_...
"Consumer price index (CPI) for rent of primary residence compared to CPI for all items in the United States from 2000 to 2023" https://www.statista.com/statistics/1440254/cpi-rent-primary...
- pandas-datareader > FRED, caching queries: https://pandas-datareader.readthedocs.io/en/latest/remote_da...
> This measurement, known as "Core CPI," is widely used by economists because food and energy have very volatile prices.
Translation: The Core Consumer Price Index has little to do with actual citizens, and instead reflects some theoretical group of people who don't eat and don't go anywhere.
In addition, politicians recite the Core CPI and the media parrot them. Both know, or should know, that it's deceptive and doesn't actually reflect reality.
What could go wrong?
What metrics do you suggest for the purpose?
Is tone helpful?
It's not the metrics per se; it's the disconnect between what they actually represent and how they're used to misrepresent the current economic feelings of the masses.
Take the last few months in the USA: we've been assured by the party in power that inflation is under control, etc. Yet *no one* who goes to the supermarket on a weekly basis has seen that or believes that. Eventually, such gaslighting leads to mistrust and pushback.
The smart thing would be to include anything practical that consumers MUST purchase. To exclude food and gas might be great for academics, but for everyone else it smells of BS.
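To make the headline-vs-core distinction concrete, here's a minimal fixed-basket index sketch with hypothetical weights and price relatives (not actual BLS figures or methodology):

```python
# Minimal sketch of a fixed-basket (Laspeyres-style) price index.
# Weights and price relatives below are hypothetical, for illustration.
# "Core" drops food and energy and renormalizes the remaining weights.

weights = {'food': 0.14, 'energy': 0.07, 'housing': 0.45, 'other': 0.34}
price_relatives = {'food': 1.20, 'energy': 1.30, 'housing': 1.06, 'other': 1.03}

def index(categories):
    categories = list(categories)
    total_weight = sum(weights[c] for c in categories)
    return sum(weights[c] / total_weight * price_relatives[c]
               for c in categories)

headline = index(weights)  # iterating the dict yields all category names
core = index(c for c in weights if c not in ('food', 'energy'))

print(round(headline, 3))  # reflects the food and energy spikes
print(round(core, 3))      # excludes them by construction
```

With these made-up numbers, core comes out lower than headline whenever food and energy rise faster than the rest of the basket, which is the disconnect being argued about above.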
How should the free hand of the market affect post-COVID global food prices?
Tariff spats (and firms' resultant inability to compete at global price points) or free market trade globalization with nobody else deciding the price for the contracting [fair trade] parties.
There are multiple lines that can be plotted on a chart: Core CPI, CPI Food &|| Gas, GDP, yield curve, M1 and M2, R&D foci and industry efficiency metrics
You're off topic. The discussion is about CCPI and how it's misleading and abused. That is, how it's used as a propaganda tool.
I'm not trying to explain post Covid inflation, the free market, etc.
How should they get food price inflation under control?
What other metrics for economic health do you suggest?
Well, you certainly don't get it under control by removing it from the key metric. And you don't get it under control by pretending it hasn't been removed and then gaslighting the public by saying everything is OK, that it is under control.
What can they do here?
"Harris’ plan to stop [food] price gouging could create more problems than it solves" (2024) https://www.cnn.com/2024/08/16/business/harris-price-gouging... :
> Food prices have surged by more than 20% under the Biden-Harris administration, leaving many voters eager to stretch their dollars further at the grocery store.
> On Friday, Vice President Kamala Harris said she has a solution: a federal ban on price gouging across the food industry.
They still haven't undone the prior administration's tariff wars of recent yore fwiu. What happened to everyone loves globalization and Free Trade and Fair Trade? Are you down with TPP?
But did prices reflect the value of the workers and their product in the first place?
There were compost shortages during COVID.
There are still fertilizer shortages FWIU. TIL about not Ed Koch trying to buy Iowa's basically state-sponsored fertilizer business which was created in direct response to the fertilizer market conditions created in significant part by said parties.
Humanoid robots in agriculture, with agrivoltaics, and sustainable no-till farming practices; how much boost in efficiency can we all watch and hope for?
What Is a Knowledge Graph?
Good article on the high level concepts of a knowledge graph, but some concerning mischaracterizations of core functions of ontologies supporting the class schema and continued disparaging of competing standards-based (RDF triple-store) solutions. That the author omits the updates for property annotations using RDF* is probably not an accident and glosses over the issues with their proprietary clunky query language.
While knowledge graphs are useful in many ways, personally I wouldn't use Neo4J to build a knowledge graph as it doesn't really play to any of their strengths.
Also, I would rather stab myself with a fork than try to use Cypher to query a concept graph when better standards-based options are available.
I enjoy cypher, it's like you draw ASCII art to describe the path you want to match on and it gives you what you want. I was under the impression that with things like openCypher that cypher was becoming (if not was already) the main standard for interacting with a graph database (but I could be out of date). What are the better standards-based options you're referring to?
W3C SPARQL, SPARUL is now SPARQL Update 1.1, SPARQL-star, GQL
GraphQL is a JSON HTTP API schema (2015): https://en.wikipedia.org/wiki/GraphQL
GQL (2024): https://en.wikipedia.org/wiki/Graph_Query_Language
W3C RDF-star and SPARQL-star (2023 editors' draft): https://w3c.github.io/rdf-star/cg-spec/editors_draft.html
SPARQL/Update implementations: https://en.wikipedia.org/wiki/SPARUL#SPARQL/Update_implement...
/? graphql sparql [ cypher gremlin ] site:github.com inurl:awesome https://www.google.com/search?q=graphql+sparql++site%253Agit...
But then data validation everywhere; so for language-portable JSON-LD RDF validation there are many implementations of JSON Schema for fixed-shape JSON-LD messages, there's W3C SHACL Shapes and Constraints Language, and json-ld-schema is (JSON Schema + SHACL)
/? hnlog SHACL, inference, reasoning; https://news.ycombinator.com/item?id=38526588 https://westurner.github.io/hnlog/#comment-38526588
> free copy of the O’Reilly book "Building Knowledge Graphs: A Practitioner’s Guide"
Knowledge Graph (disambiguation) https://en.wikipedia.org/wiki/Knowledge_Graph_(disambiguatio...
Knowledge graph: https://en.wikipedia.org/wiki/Knowledge_graph :
> In knowledge representation and reasoning, a knowledge graph is a knowledge base that uses a graph-structured data model or topology to represent and operate on data. Knowledge graphs are often used to store interlinked descriptions of entities – objects, events, situations or abstract concepts – while also encoding the free-form semantics or relationships underlying these entities. [1][2]
> Since the development of the Semantic Web, knowledge graphs have often been associated with linked open data projects, focusing on the connections between concepts and entities. [3][4] They are also historically associated with and used by search engines such as Google, Bing, Yext and Yahoo; knowledge-engines and question-answering services such as WolframAlpha, Apple's Siri, and Amazon Alexa; and social networks
Ideally, a Knowledge Graph - starting with maybe a "personal knowledge base" in a text document format that can be rendered to HTML with templates - can be linked with other data about things with correlatable names; ideally you can JOIN a knowledge graph with other graphs, which is possible if the node and edge relations have schema and URIs that make the JOIN possible.
A knowledge graph is a collection of nodes and edges (or nodes and edge nodes) with schema, so that it is query-able and JOIN-able.
A Named Graph URI may be the graphid ?g of an RDF statement in a quadstore:
?g ?s ?p ?o // ?o_datatype ?o_lang
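A toy illustration of matching `?g ?s ?p ?o` patterns and JOINing over a quadstore, in plain Python (the store, names, and data are made up for the example):

```python
# Toy quadstore: (graph, subject, predicate, object) tuples, with a
# pattern matcher where None acts as a wildcard variable (?g ?s ?p ?o).

quads = [
    ('g1', 'Plato', 'influenced', 'Aristotle'),
    ('g1', 'Plato', 'type', 'Person'),
    ('g2', 'Aristotle', 'type', 'Person'),
]

def match(g=None, s=None, p=None, o=None):
    """Return all quads matching the pattern; None matches anything."""
    pattern = (g, s, p, o)
    return [q for q in quads
            if all(want is None or want == got
                   for want, got in zip(pattern, q))]

# All statements about Plato, across all named graphs:
print(match(s='Plato'))

# A JOIN: subjects typed Person who also influenced someone
people = {s for _, s, p, o in match(p='type', o='Person')}
influencers = {s for _, s, p, o in match(p='influenced')}
print(people & influencers)
```

A real triple/quad store does the same thing with indexes and a query planner; SPARQL's `GRAPH ?g { ?s ?p ?o }` is the standards-based version of this pattern.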
MiniBox, ultra small busybox without uncommon options
Would this compile to WASM for use as a terminal in JupyterLite? Though, busybox has the ash shell instead of bash. https://github.com/jupyterlite/jupyterlite/issues/949
It could with a few tweaks, but it lacks features when compared to bash and busybox ash. For example, it doesn't support shell scripting, configuration file initialization, etc. I wouldn't recommend it right now, though, since it doesn't have most features of a standard shell, but if you want to test it out, give it a try.
RustyBox is a c2rust port/rewrite of BusyBox to Rust; though it doesn't look like anyone has contributed fixes to the unsafe parts in a while.
So, no SysV /etc/init.d because no shell scripting, and no systemd because embedded environment resource constraints? There must be a process to init and respawn processes on events like boot (and reboot, if there's a writeable filesystem)
Yes, you are right; I am trying to create a smaller version of sysvinit compatible with busybox's init. There is no shell scripting yet, but I do promise to make the shell fully compatible with other shells, with shell scripting, in a future version.
What happened to RustyBox? c2rust is way easier than doing it manually, so why is it abandoned?
Same question about rustybox. Maybe they're helping with cosmos-de or coreutils or something.
OpenWRT has procd instead of systemd, but it does source a library of shell functions and run [somewhat haphazard] /etc/init.d scripts instead of parsing systemd unit configuration files to eliminate shell scripting errors when spawning processes as root.
https://westurner.github.io/hnlog/ Ctrl-F procd, busd
(This re: bootloaders, multiple images, and boot integrity verification: https://news.ycombinator.com/item?id=41022352 )
Systemd is great for many applications.
Alternatives to systemd: https://without-systemd.org/wiki/index_php/Alternatives_to_s...
There are advantages to systemd unit files instead of scripts. Porting packages between distros is less work with unit files. Systemd respawns processes consistently and with standard retry/backoff functionality. Systemd+journals produces indexable logs with consistent datetimes.
There's a pypi:SystemdUnitParser.
docker-systemctl-replacement > systemctl3.py parses and schedules processes defined in systemd unit files: https://github.com/gdraheim/docker-systemctl-replacement/blo...
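As a sketch of why unit files are machine-parseable in a way that init scripts aren't, the stdlib configparser can read the simple cases (assuming no repeated keys, which real systemd units do allow, so this is only an approximation):

```python
# Sketch: parsing a systemd unit file with the stdlib configparser.
# Real unit files allow repeated keys (e.g. multiple ExecStartPre=),
# which configparser does not model; this covers the simple case only.
import configparser

unit_text = """\
[Unit]
Description=Example service

[Service]
ExecStart=/usr/bin/example --serve
Restart=on-failure
RestartSec=5
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # systemd keys are case-sensitive; don't lowercase
parser.read_string(unit_text)

print(parser['Service']['ExecStart'])
print(parser['Service']['Restart'])
```

This is roughly what tools like the systemctl3.py linked above have to do, plus handling repeated keys, specifiers, and drop-in overrides.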
Architectural Retrospectives: The Key to Getting Better at Architecting
Architecture retrospectives sound like a good idea but don't seem to be "the key" because in practice they fail too often.
Retrospectives look backwards, and most teams aren't wired that way. And unlike sprint retrospectives, architecture retrospectives turn up issues that are unrealistic/impossible to adjust in real time.
Instead, try lightweight architectural decision records during the selection process, at the start, when things are easiest to change and can be the most flexible.
When teams try ADRs in practice, teamwork goes up, choices get better, and implementations get more realistic. And if someone later on wants to do a retrospective, then the key predictive information is right there in the ADR.
https://github.com/joelparkerhenderson/architecture-decision...
Is there already a good way to link an ADR Architectural Decision Record with Threat Modeling primitives and considerations?
"Because component_a doesn't support OAuth", "Because component_b doesn't support signed cookies"
Threat Model: https://en.wikipedia.org/wiki/Threat_model
GH topic: threat-modeling: https://github.com/topics/threat-modeling
Real and hypothetical architectural security issues can be linked to CWE Common Weakness Enumeration URLs.
SBOM tools help to inventory components and versions in an existing architecture and to JOIN with vuln databases that publish in OSV OpenSSF Vulnerability Format, which is useful for CloudFuzz, too.
Good question. Yes there are a variety of ways that help.
1. If the team favors lightweight markdown, then add a markdown section for Threat Modeling and markdown links to references. Some teams favor doing this for more kinds of analysis, such as business analysis (e.g. SWOT) and environment analysis (e.g. PESTLE) and risk analysis (e.g. RAID).
2. If the team favors metrics systems, then consider trying a Jupyter notebook. I haven't personally tried this. Teams anecdotally tell me these can be excellent for showing probable effects.
3. If the team is required to use existing tooling such as tracking systems, such as for auditing and compliance, then consider writing the ADR within the tracking system and linking to it internally.
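For option 1, one possible shape of such a markdown ADR with a Threat Modeling section (the headings and entries are illustrative, not a standard):

```markdown
# ADR 0001: Use signed cookies for session state

## Status
Accepted

## Context
component_b doesn't support signed cookies out of the box; ...

## Decision
...

## Threat Modeling
- Threat: session fixation (CWE-384)
- Mitigation: signed, HttpOnly, Secure cookies; short TTL
- Reference: [Threat model](https://en.wikipedia.org/wiki/Threat_model)
```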
> 1.
Add clickable URL links to the reference material for whichever types of analyses.
> 2. Jupyter
Notebooks often omit test assertions that would be expected of a Python module with an associated tests/ directory.
With e.g. ipytest, you can run specific unit tests in notebook input cells (instead of testing the whole notebook).
There are various ways to template and/or parametrize notebooks; copy the template.ipynb and include the date/time in the filename__2024-01-01.ipynb, copy a template git repo and modify, jupyterlab_templates, papermill
> 3. [...] consider writing the ADR within the tracking system and linking to it internally
+1. A "meta issue" or an epic includes multiple other issues.
If you reference an issue as a list item in GFM GitHub-Flavored Markdown, GitHub will auto-embed the current issue title and closed/open status:
- https://github.com/python/cpython/issues/1
- python/cpython#1
This without the leading `- ` doesn't get the card embed though: https://github.com/python/cpython/issues/1
Whereas this form, with hand-copied issue titles, non-obfuscated URLs you don't need to hover over to read, and `- [ ]` markdown checkboxes, works with all issue trackers:
- [ ] "Issue title" https://github.com/python/cpython/issues/1
- [x] "Issue title" https://github.com/python/cpython/issues/2
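A hedged sketch of parsing that portable task-list form with the stdlib (the regex and sample titles are illustrative only):

```python
# Extract "- [ ] / - [x]" task-list items with quoted titles and URLs.
import re

md = '''\
- [ ] "Issue title" https://github.com/python/cpython/issues/1
- [x] "Issue title" https://github.com/python/cpython/issues/2
'''

task_re = re.compile(
    r'^- \[(?P<done>[ xX])\] "(?P<title>[^"]+)" (?P<url>\S+)$', re.M)

tasks = [(m['done'] != ' ', m['title'], m['url'])
         for m in task_re.finditer(md)]
for done, title, url in tasks:
    print('DONE' if done else 'TODO', title, url)
```

Because the titles and URLs are plain text, the same lines round-trip through any issue tracker, mailing list, or grep.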
ARPA-H announces awards to develop novel technologies for precise tumor removal
This is the sort of thing I'd love to see NIH have many parallel research tracks on this stuff, given backing & support to make the research happen & to release the work.
Precision medicine has so much potential, feels so in line for some really wild breakthroughs, with its ability to make so many exact & small operations.
I wonder where the Obama Precision Medicine (2015) work has gotten to so far. https://obamawhitehouse.archives.gov/the-press-office/2015/0...
The biggest Obama spend was to create a research cohort (https://allofus.nih.gov/). So far, it hasn't paid dividends in an appreciable way, but the UK Biobank (on which the US program is partially modeled) started in 2006 and is now contributing immensely to the development of medicine.
The US program has the potential to be even more valuable if managed well, but I haven't seen overwhelming indications of reaching that potential yet; however, I think a few more years are needed for a clear evaluation.
Sadly the All of Us program (of which I am a research subject [and researcher]) hasn't done any sort of imaging or ECG. They did draw blood, so that allowed for genome sequencing. That may also, in principle, allow for assaying new biomarkers (I don't believe anything like that has been funded though).
2009-2010 Meaningful Use criteria required physicians to implement electronic care records.
There's now FHIR for sharing records between different vendors' EHR systems. There's a JSON-LD representation of FHIR.
Which health and exercise apps can generate FHIR for self-reporting?
Can participants forward their other EHR/EMR records to the All of Us program?
Can participants or Red Cross or other blood donation services forward collected vitals and sample screening data to the All of Us program?
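For reference, a self-reported vital sign could be represented as a minimal FHIR R4 Observation along these lines (a sketch; the values are illustrative):

```python
# Minimal FHIR R4-style Observation for a self-reported heart rate.
# LOINC 8867-4 is "Heart rate"; UCUM "/min" is the unit code.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org",
                    "code": "8867-4",
                    "display": "Heart rate"}]
    },
    "valueQuantity": {"value": 72,
                      "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org",
                      "code": "/min"},
}
print(json.dumps(observation, indent=2))
```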
The imaging reports are being imported, but not the images.
The bigger issue is that clinical imaging is done due to medical indications. Medical indications dramatically confound the results. A key element of the UK Biobank's value is that imaging and ECG are being done without any clinical indication.
So they need more data from healthy patients when they're healthy in order to determine what's anomalous?
SIEM tools do anomaly detection with lots of textual log data and sensor data.
What would a Cost-effectiveness analysis say about collecting data from healthy patients: https://en.wikipedia.org/wiki/Cost-effectiveness_analysis
TIL the ROC curve applies to medical tests.
Photon Entanglement Drives Brain Function
Traditional belief: photons do not interact with photons, photons are massless according to the mass energy relation.
New findings: Photons interact as phonons in matter.
"Quantum entangled photons react to Earth's spin" (2024) https://news.ycombinator.com/item?id=40720147 :
> Actually, photons do interact with photons; as phonons in matter: "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 https://news.ycombinator.com/item?id=40600762
"New theory links quantum geometry to electron-phonon coupling" (2024) https://news.ycombinator.com/item?id=40663966 https://phys.org/news/2024-06-theory-links-quantum-geometry-... :
> A new study published in Nature Physics introduces a theory of electron-phonon coupling that is affected by the quantum geometry of the electronic wavefunctions
The field of nonlinear optics deals with photon-photon interactions in matter, and has been around for almost a century.
Do they model photons as rays, vectors, particles, waves, or fluids?
https://en.wikipedia.org/wiki/Nonlinear_optics :
> In nonlinear optics, the superposition principle no longer holds.[1][2][3]
But phonons are quantum waves in or through matter and the superposition principle holds with phonons AFAIU
Superposition is valid in vacuum - well, actually, until the photons have enough energy to collide and form an electron-positron pair.
It's also valid in most transparent materials, again assuming:
1) each photon doesn't have enough energy to, for example, extract an electron from the material as in the photoelectric effect, or create an electron and a hole in a semiconductor, or ...
2) there are not enough photons, so you can model the effect using linear equations
And there are weird materials where the non linear effects are easy to trigger.
The conclusion is that superposition is only a nice approximation in the easy case.
I had always assumed that activity in the brain was unsynchronized and that it is that which produces the necessary randomness through race conditions. I am considering the need to find synchronization as making some sort of computer analogy which doesn't exist.
Brain activity is highly synchronized, at least according to the definition used by neuroscientists. Brain waves operate at certain hertz based on your level of arousal. https://en.wikipedia.org/wiki/Neural_oscillation
Brain waves also synchronize to other brain waves; "interbrain synchrony"
- "The Social Benefits of Getting Our Brains in Sync" (2024) https://www.quantamagazine.org/the-social-benefits-of-gettin...
But that might be just like the sea has waves?
The brain waves drive the synchronization process by triggering inhibition in neurons not in the targeted subgroup.
There are different frequency waves that operate over different timescales and distances and are typically involved in different types of functional activity.
Mermaid: Diagramming and Charting Tool
A key thing to appreciate is that both GitHub and GitLab support rendering Mermaid graphs in their READMEs
[0] https://docs.gitlab.com/ee/user/markdown.html
[1] https://github.blog/developer-skills/github/include-diagrams...
JupyterLab supports MermaidJS: https://github.com/jupyterlab/jupyterlab/pull/14102
Looks like Colab almost does.
The official vscode MermaidJS extension probably could work in https://vscode.dev/ : https://marketplace.visualstudio.com/items?itemName=MermaidC... :
ext install MermaidChart.vscode-mermaid-chart
LLMs can generate MermaidJS diagrams in Markdown that can easily be fixed given sufficient review. Additional diagramming (node graph) tools: GraphViz, Visio, GSlides, Jamboard, yEd by yWorks (layout algorithms), Gephi, Dia, PlantUML, ArgoUML, blockdiag, seqdiag, rackdiag, https://markmap.js.org/
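As a minimal illustration, a Mermaid flowchart goes in a fenced block in markdown and is rendered inline by GitHub, GitLab, and JupyterLab as noted above (the node labels here are made up):

```mermaid
graph TD
    A[Design] --> B{Review OK?}
    B -- yes --> C[Merge]
    B -- no --> A
```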
JPlag – Detecting Software Plagiarism
Should a plagiarism score be considered when generating code with an infinite monkeys algorithm with selection or better?
Would that eventually result in the inability to write code even in a clean room, because all possible code strings and mutations thereof would already be patented?
For example, are three notes or chords copyrightable?
Resilient Propagation and Free-Space Skyrmions in Toroidal EM Pulses
"Observation of Resilient Propagation and Free-Space Skyrmions in Toroidal Electromagnetic Pulses" (2024) https://pubs.aip.org/apr/article/11/3/031411/3306444/Observa...
- "Electromagnetic vortex cannon could enhance communication systems" (2024) https://phys.org/news/2024-08-electromagnetic-vortex-cannon-...
Elroy Jetson, Buzz Lightyear, and Ghostbusters all project rings / vortexes.
Are toroidal pulses any more stable for fusion, or thrust and handling?
"Viewing Fast Vortex Motion in a Superconductor" (2024) https://physics.aps.org/articles/v17/117
New research on why Cahokia Mounds civilization left
I used to live near the Missouri River before it meets the Mississippi River south of STL, but haven't yet made it to the Cahokia Mounds which are northeast and across the river from what is now St. Louis, Missouri.
Was it disease?
["Fusang" to the Chinese, various names to Islanders FWIU]
[?? BC/AD: Egyptian treasure in Illinois, somehow without paddleboats to steam up the Mississippi]
~800 AD: Lead Cross of Knights Templar in Arizona, according to America Unearthed S01E10. https://www.google.com/search?q=%7E800+AD%3A+Templar+Cross%2... ; a more recent dating of Tucson artifacts: https://en.wikipedia.org/wiki/Tucson_artifacts
~1000 AD: Leif Erickson, L'Anse aux Meadows; Discovering Vinland: https://en.m.wikipedia.org/wiki/Leif_Erikson#Discovering_Vin...
And then the Story of Erik the Red, and a Skraeling girl in Europe, and Columbus; and instead we'll celebrate Juneteenth day to celebrate when news reached Galveston.
Did they plant those mounds? Did they all bring good soil or dirt to add to the mound?
May Pole traditions may be similar to "all circle around the mountain" practices in at least ancient Egyptian culture FWIU.
If there was a lot of contact there, would that have spread diseases? (Various traditions have intentionally high contact with holy water containers on the way in, too, for example.)
FWIU there's strong evidence for Mayans and Aztecs in North America; but who were they displacing?
For one thing, the words you want are "Maya" and "Nahua". "Mayan" is the language family and "Aztec" refers to various things, none of which are what you want.
They're also definitely unrelated to the mound cultures except for the broadest possible relationships like existing on the same continent.
How are pyramid-building cultures definitely unrelated to mound-building cultures?
Cahokia Mounds: https://en.wikipedia.org/wiki/Cahokia :
> Today, the Cahokia Mounds are considered to be the largest and most complex archaeological site north of the great pre-Columbian cities in Mexico.
Chicago was a trading post further north FWIU, but not an archaeological site.
"Michoacan, Michigan, Mishigami, Mizugami: Etymological Origins? A Legend." https://christopherbrianoconnor.medium.com/michoacan-michiga...
Is there evidence of hydrological engineering or stonework?
It's not clear whether the megalithic Sage Wall in MT was man-made; it sort of looks like the northern glacier pass it may have marked.
FWIU there are quarry sites in the southwest that predate most timelines of stonework in the Americas and in Egypt, Sri Lanka / Indonesia, and East Asia; but they're not further north than Cahokia Mounds.
In TN, there are many Clovis sites; but they decided to flood the valley that was home to Sequoyah - who gave written form to Cherokee and other native languages - and also to a 9,500-year-old archaeological site.
This says the Clovis people of Clovis, New Mexico are the oldest around: https://tennesseeencyclopedia.net/entries/paleoindians-in-te...
The Olmecs, Aztecs, and Mayans all worked stone.
From where did stonework like the Osireon originate?
> How are pyramid-building cultures definitely unrelated to mound-building cultures?
You're asserting a causal relationship, not a functional or morphological similarity. You haven't made an argument for that yet.
> This says the Clovis people of Clovis, New Mexico are the oldest around
They aren't. Pre-Clovis is well established at this point. I'm not sure you intend this with your phrasing, but maybe this will be useful: archaeological cultures like Clovis don't name a "people," because the actual humans may or may not have shared unified identities. Type sites also just represent a clear example of the broader category they're naming, rather than anything about where things originated.
There are also thousands and thousands of years separating Clovis, the Mexica, the Mound cultures, and the Maya, so unless you make an argument I'm not sure what you're trying to say here. Do you think all lithics are the same?
> The Olmecs, Aztecs, and Mayans all worked stone.
1) you've made the same naming mistake again and 2) this isn't an argument for anything.
Queues invert control flow but require flow control
I always wish more metaphors were built using conveyor belts in these discussions. It helps me mentally underscore that you have to pay attention to what queue/belt you load and why you need to give that a lot of thought.
Granted, I'm probably mostly afraid of diving into factorio again. :D
The Spintronics mechanical circuits game is sort of like conveyor belts.
Electrons are not individually identified like things on a conveyor belt.
Electrons in conductors, semiconductors, and superconductors do behave like fluids.
Turing tapes; https://hackaday.com/2016/08/18/the-turing-tapes/
Theory of computation > Models of computation: https://en.wikipedia.org/wiki/Theory_of_computation
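The "queues require flow control" point in the title above can be sketched with a bounded stdlib queue, where a full belt pushes back on the producer (the item counts here are arbitrary):

```python
# Sketch: a bounded queue provides backpressure ("flow control").
# When the conveyor belt is full, the producer must wait or drop.
import queue

belt = queue.Queue(maxsize=3)

produced, dropped = 0, 0
for item in range(5):
    try:
        belt.put_nowait(item)   # non-blocking put: rejected when full
        produced += 1
    except queue.Full:
        dropped += 1

print(produced, dropped)  # only maxsize items fit before pushback

while not belt.empty():
    print('consumed', belt.get_nowait())
```

A blocking `put()` would instead make the producer wait, which is the other common flow-control policy; either way the belt's capacity, not the producer's speed, sets the pace.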
Reservoir of liquid water found deep in Martian rocks
Article: "Liquid water in the Martian mid-crust" (2024) https://www.pnas.org/doi/10.1073/pnas.2409983121
- "Mars may host oceans’ worth of water deep underground" [according to an analysis of seismic data] https://www.planetary.org/articles/mars-may-host-oceans-wort... :
> Now, a team of scientists has used Marsquakes — measured by NASA’s InSight lander years ago — to see what lies beneath. Since the way a Marsquake travels depends on the rock it’s passing through, the researchers could back out what Mars’ crust looks like from seismic measurements. They found that the mid-crust, about 10-20 kilometers (6-12 miles) down, may be riddled with cracks and pores filled with water. A rough estimate predicts these cracks could hold enough water to cover all of Mars with an ocean 1-2 kilometers (0.6-1.2 miles) deep
> [...] This reservoir could have percolated down through nooks and crannies billions of years ago, only stopping at huge depths where the pressure would seal off any cracks. The same process happens on our planet — but unlike Mars, Earth’s plate tectonics cycles this water back up to the surface
> [...] “It would be very challenging,” Wright said. Only a few projects have ever bored so deep into Earth’s crust, and each one was an intensive undertaking. Replicating that effort on another planet would take lots of infrastructure, Wright goes on, and lots of water.
How much water does drilling take on Earth?
Water is used to displace debris and to carry it up to the surface.
A cylinder of 30cm diameter and 10km deep would hold around 700k litres.
1 US gal = 3.785 litres
700k litres = 0.184920437 million US gallons
"How much water does the typical hydraulically fractured well require?" https://www.usgs.gov/faqs/how-much-water-does-typical-hydrau... :
> Water use per well can be anywhere from about 1.5 million gallons to about 16 million gallons
https://www.epa.gov/watersense/statistics-and-facts :
> Each American uses an average of 82 gallons of water a day at home (USGS, Estimated Use of Water in the United States in 2015).
So, 1m gal / 82 gal/person/day = 12,195 person/days of water.
Camping guidelines suggests 1-2 gallons of water per person per day.
1m gal / 2 gal/person/day = 500,000 person/days of water
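A quick stdlib check of the arithmetic above:

```python
# Checking the rough figures from the thread (stdlib only).
import math

litres_per_gal = 3.785

# 30 cm diameter, 10 km deep borehole volume, in litres:
borehole_l = math.pi * 0.15**2 * 10_000 * 1000
print(round(borehole_l))        # ~707k litres, i.e. "around 700k"

gallons = 700_000 / litres_per_gal
print(round(gallons / 1e6, 3))  # ~0.185 million US gallons

print(round(1_000_000 / 82))    # person-days at 82 gal/person/day
print(round(1_000_000 / 2))     # person-days at 2 gal/person/day
```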
The 2016 Mars NatGeo series predicts dynamics of water scarcity on Mars.
Water on Mars: https://en.wikipedia.org/wiki/Water_on_Mars
Show HN: R.py, a small subset of Python Requests
i revisited the work that i started in 2022 re: writing a subset of Requests but using the standard library: https://github.com/gabrielsroka/r
back then, i tried using urllib.request (from the standard library, not urllib3) but it lacks what Requests/urllib3 has -- connection pools and keep alive [or that's where i thought the magic was] -- so my code ran much slower.
it turns out that urllib.request uses http.client, but it closes the connection. so by using http.client directly, i can keep the connection open [that's where the real magic is]. now my code runs as fast as Requests/urllib3, but in 5 lines of code instead of 4,000-15,000+
moral of the story: RTFM over and over and over again.
"""Fetch users from the Okta API and paginate."""
import http.client
import json
import re
import urllib.parse

# Set these:
host = 'domain.okta.com'
token = 'xxx'

url = '/api/v1/users?' + urllib.parse.urlencode({'filter': 'profile.lastName eq "Doe"'})
headers = {'authorization': 'SSWS ' + token}
conn = http.client.HTTPSConnection(host)
while url:
    conn.request('GET', url, headers=headers)
    res = conn.getresponse()
    for user in json.load(res):
        print(user['id'])
    # get_all() returns None when there is no 'link' header
    links = [link for link in res.headers.get_all('link') or [] if 'rel="next"' in link]
    url = re.search('<https://[^/]+(.+)>', links[0]).group(1) if links else None
https://docs.python.org/3/library/urllib.request.html
https://docs.python.org/3/library/http.client.html
IIRC somewhere in the Python mailing list archives there's an email about whether to add the HTTP redirect handling (and later SSL support) code to urllib or urllib2, or to create a new urllib or httplib.
How does performance compare to HTTPX, does it support HTTP 1.1 request pipelining, does it support HTTP/2 or HTTP/3?
Show HN: I built an animated 3D bookshelf for ebooks
The most cited authors in the Stanford Encyclopedia of Philosophy
Fascinating list that I thought yall would enjoy! If you’re not yet aware, https://plato.stanford.edu is as close to “philosophical canon” as it gets in modern American academia.
Shoutout to Gödel and Neumann taking top spots despite not really being philosophers, at least in how they’re remembered. Comparatively, I’m honestly shocked that neither Bohr nor Heisenberg made the cut, even though there’s multiple articles on quantum physics… Turing also managed to sneak in under the wire, with 33 citations.
The bias inherent in the source is discussed in detail, and I would also love to hear HN ideas on how to improve this project, and how to visualize the results! I’m not the author, but this is right up my alley to say the least, and I’d love to take a crack at it.
From "Show HN: WhatTheDuck – open-source, in-browser SQL on CSV files" https://news.ycombinator.com/item?id=39836220 :
> datasette-lite can load [remote] sqlite and Parquet but not yet DuckDB (?) with Pyodide in WASM, and there's also JupyterLite as a datasette plug-in: https://github.com/simonw/datasette-lite https://news.ycombinator.com/user?id=simonw https://news.ycombinator.com/from?site=simonwillison.net
JSON-LD with https://schema.org/Person records with wikipedia/dbpedia RDF URIs would make it easy to query whichever datasets can be joined on common RDFS properties like schema:identifier and its rdfs:subPropertyOf sub-properties, https://schema.org/url, and :sameAs.
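For example, a minimal JSON-LD schema.org/Person record with a dbpedia sameAs link might look like this (values are illustrative):

```python
# Minimal JSON-LD record for a https://schema.org/Person,
# linked to a dbpedia URI via sameAs for cross-dataset JOINs.
import json

record = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Plato",
    "url": "https://en.wikipedia.org/wiki/Plato",
    "sameAs": ["https://dbpedia.org/resource/Plato"],
}
print(json.dumps(record, indent=2))
```

Any dataset that carries the same dbpedia/wikipedia URI can then be joined on that identifier rather than on fuzzy name matching.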
Plato in RDF from dbpedia: https://dbpedia.org/page/Plato
Today there are wikipedia URLs, DOI URN URIs, shorturls in QR codes, ORCID specifically to search published :ScholarlyArticle by optional :author, and now W3C DIDs (Decentralized Identifiers) for signing, identifying, and searching unique :Thing and skos:Concept records; DIDs can be generated offline and optionally registered centrally, or centrally generated and assigned like DOIs, but they're signing keys.
Given uncertainty about time intervals, plot concepts over time with charts showing graph growth over time; maybe philosophy skos:Concept intervals (and relations, links) from human annotations and/or from LLM parsing and search snippets of Wikipedia, dbpedia RDF, wikidata RDF, and ranked Stanford Encyclopedia of Philosophy terminological occurrence frequency.
- "Datasette Enrichments: a new plugin framework for augmenting your data" (2023) by row with asyncio and optionally httpx: https://simonwillison.net/2023/Dec/1/datasette-enrichments/
Ask HN: How to Price a Product
I'm a software engineer with 11 years of experience. I have been showing interest in frontend design and development, and my day job is around it.
I built a working concept that converts designs to interactive code. However, the tech is not usable at this stage, no documentation and looks bad, but works as expected.
I'm aware of selling services as SaaS. I'd like to sell the tech as a product containing the software, a product manual, and usage instructions with samples.
The target users are designers; initial feedback included some points to address, and they are curious and interested.
Having said that it's not SaaS, not subscription based, I'd like to know how to put a price on it as a product.
The Accounting Equation values your business:
Assets = Liabilities + Equity
Accounting Equation: https://en.wikipedia.org/wiki/Accounting_equation : Assets = Liabilities + Contributed Capital + Revenue − Expenses − Dividends
Profit = Revenue - Expenses
Cash flow is Revenue. Cash flow: https://en.wikipedia.org/wiki/Cash_flow :
> A cash flow CF is determined by its time t, nominal amount N, currency CCY, and account A; symbolically, CF = CF(t, N, CCY, A)
Though CF(A, t, ...) or CF(t, A) may be more search-index optimal, and really A_src_n, A_dest_n [...] ILP Interledger Protocol.
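The quoted CF(t, N, CCY, A) definition maps directly onto a record type; a minimal sketch (the field names and the src/dest split follow the A_src_n / A_dest_n idea above and are assumptions):

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class CashFlow:
    """CF = CF(t, N, CCY, A), per the quoted definition; the source and
    destination account fields follow the A_src_n / A_dest_n idea."""
    t: date        # time of the flow
    n: Decimal     # nominal amount
    ccy: str       # currency code
    a_src: str     # source account
    a_dest: str    # destination account

cf = CashFlow(date(2024, 1, 5), Decimal("120.00"), "USD", "checking", "vendor")
```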
Payment fees are part of the price, unless you're a charity with a 'donate the processing costs too' option.
OTOH, there are already standard departmental chart-of-accounts names:
price_product_a = cost_payment_fees + cost_materials + cost_labor + cost_marketing + cost_sales + cost_support + cost_errors_omissions + cost_chargebacks + cost_legal + cost_future_liabilities + [...]
A CAS (Computer Algebra System) like {SymPy, Sage} in a notebook can help define inequality relations to bound or limit the solution volume or hypervolume(s). And then unit-test functions can assert that a hypothesized model meets criteria for success.
But in Python, for example, test_ functions can't return values to the test runner, they would need to write [solution evaluation score] outputs to a store or a file to be collected with the other build artifacts and attached to a GitHub Release, for example.
Eventually, costs = {a: 0, b: 2} so that it's costs[account:str|uint64] instead of cost_account, and then costs = ERP.reports[name,date].archived_output()
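Since payment fees typically scale with the price itself (a rate plus a fixed fee), the price equation above is self-referential and has to be solved, not just summed. A minimal sketch with hypothetical numbers; a CAS like SymPy generalizes this to full inequality systems:

```python
def breakeven_price(costs: dict, fee_rate: float, fee_fixed: float,
                    margin: float = 0.0) -> float:
    """Solve price = sum(costs) + fee_rate*price + fee_fixed + margin*price
    for price: the fee and margin terms scale with price, so move them
    to the left-hand side and divide."""
    denom = 1.0 - fee_rate - margin
    if denom <= 0:
        raise ValueError("fee rate plus margin must total under 100%")
    return (sum(costs.values()) + fee_fixed) / denom

# hypothetical per-unit costs, a 2.9% + $0.30 card fee, and a 30% target margin
costs = {"materials": 10.0, "labor": 25.0, "support": 5.0}
price = breakeven_price(costs, fee_rate=0.029, fee_fixed=0.30, margin=0.30)
```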
Monte Carlo simulation is possible with things like PyMC (MCMC) and TIL about PyVBMC. Agent-based simulation of consumers may or may not be more expensive or the same problem. In behavioral economics, many rational consumers make informed buying decisions.
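A toy Monte Carlo sketch of that idea (stdlib only; the demand distribution and its parameters are assumptions here, where PyMC would let you infer them from data):

```python
import random

def simulate_revenue(price: float, n: int = 10_000, wtp_mu: float = 60.0,
                     wtp_sigma: float = 20.0, seed: int = 0) -> float:
    """Estimate revenue at a given price, assuming each simulated consumer
    buys iff their (normally distributed) willingness-to-pay meets the price."""
    rng = random.Random(seed)
    buyers = sum(1 for _ in range(n) if rng.gauss(wtp_mu, wtp_sigma) >= price)
    return price * buyers

# scan candidate prices for the revenue-maximizing point under this toy model
best_price = max(range(10, 121, 10), key=simulate_revenue)
```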
Looking at causal.app > Templates, I don't see anything that says "pricing" but instead "Revenue" which may imply models for sales, pricing, costs, cost drivers;
> Finance > Marketing-driven SaaS Revenue Model for software companies with multiple products. Its revenue growth is driven by marketing spend.
> Finance > Sales-driven SaaS Revenue > Understand sales rep productivity and forecast revenue accurately
/? site:causal.app pricing model https://www.google.com/search?q=site%3Acausal.app+pricing+mo...
- Blog: "The Pros and Cons of Common SaaS Pricing Models",
- Models: "Simple SaaS pricing calculator",
- /? hn site=causal.app > more lists a number of SaaS metrics posts: https://news.ycombinator.com/from?site=causal.app
/? startupschool pricing: https://www.google.com/search?q=startupschool+pricing
Startup School Curriculum > Ctrl-F pricing: https://www.startupschool.org/curriculum
- "Startup Business Models and Pricing | Startup School" https://youtube.com/watch?v=oWZbWzAyHAE&
/? price sensitivity analysis Wikipedia: https://www.google.com/search?q=price+sensitivity+analysis+W...
Pricing strategies > Models of pricing: https://en.wikipedia.org/wiki/Pricing_strategies#Models_of_p...
Price analysis > Key Aspects, Marketing: https://en.wikipedia.org/wiki/Price_analysis :
> In marketing, price analysis refers to the analysis of consumer response to theoretical prices assessed in survey research
/? site:github.com price sensitivity: https://www.google.com/search?q=site%3Agithub.com+price+sens... :
As a market economist looking at product pricing: given maximal optimization for price and CLV (customer lifetime value), what corrective forces will pull the price back down? Macroeconomic forces, competition, ...
Porter's five forces analysis: https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysi... :
> Porter's five forces include three forces from 'horizontal competition' – the threat of substitute products or services, the threat of established rivals, and the threat of new entrants – and two others from 'vertical' competition – the bargaining power of suppliers and the bargaining power of customers.
> Porter developed his five forces framework in reaction to the then-popular SWOT analysis, which he found both lacking in rigor and ad hoc.[3] Porter's five-forces framework is based on the structure–conduct–performance paradigm in industrial organizational economics. Other Porter's strategy tools include the value chain and generic competitive strategies.
Are there upsells now, later; planned opportunities to produce additional value for the longer term customer relationship?
When and how will you lower the price in response to competition and costs?
How does a pricing strategy vary if at all if a firm is Bootstrapping vs the Bank's money?
/? saas pricing: https://hn.algolia.com/?q=saas+pricing
Rivian reduced electrical wiring by 1.6 miles and 44 pounds
The electrical wiring in cars is Conway's law manifest in copper.
For those unfamiliar with Conway's law, I am arguing that how the car companies have organized themselves--and their budgets--ends up being directly reflected in the number of ECUs as well as how they're connected with each other. I imagine that by measuring the amount of excess copper, you'd have a pretty good measure of the overhead involved in the project management on the manufacturers' side.
(I previously worked for Daimler)
Would be funny to have a ‘mm Cu / FTE’ scaling law.
The ohm*meter is the unit of electrical resistivity (the ohm is the unit of resistance).
Electrical resistivity and conductivity: https://en.wikipedia.org/wiki/Electrical_resistivity_and_con...
Is there a name for Wh/m or W/m (of Cu or C) of loss? Just % signal loss?
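Resistive loss per meter falls straight out of the resistivity; a small sketch (the copper value is the textbook figure near 20 °C, and the example current and gauge are assumptions):

```python
RHO_CU = 1.68e-8  # ohm*m, copper resistivity near 20 degrees C

def watts_per_meter(current_a: float, area_m2: float,
                    rho_ohm_m: float = RHO_CU) -> float:
    """I^2*R loss per meter of wire: resistance per meter is rho / A,
    so dissipated power per meter is I^2 * rho / A."""
    return current_a ** 2 * rho_ohm_m / area_m2

# e.g. 10 A through 1.5 mm^2 (1.5e-6 m^2) copper
loss = watts_per_meter(10.0, 1.5e-6)
```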
"Copper Mining and Vehicle Electrification" (2024) https://www.ief.org/focus/ief-reports/copper-mining-and-vehi... .. https://news.ycombinator.com/item?id=40542826
There's already conductive graphene 3d printing filament (and far less conductive graphene). Looks like 0.8ohm*cm may be the least resistive graphene filament available: https://www.google.com/search?q=graphene+3d+printer+filament...
Are there yet CNT or TWCNT Twisted Carbon Nanotube substitutes for copper wiring?
A high energy hadron collider on the Moon
Could this be made out of a large number of Starships that land on the Moon with a magnet payload? It would require something on the order of 1000 Starships 50 meters tall, but only about 100 Starships 100 meters tall. Perhaps an existing Starship with a payload that extends a magnet another 50 meters higher after landing. Has Musk already thought about this? Correction: it would require about 500 Starships 100 meters tall. ChatGPT 4o mini is even worse than me at this line-of-sight math! Taller towers could be built to extend upward out of a Starship payload, especially if guy-wires are used for stability. The magnets would require shielding from the Sun and from reflections off the lunar surface, similar to James Webb but far less demanding, to reach a temperature suitable for superconducting magnets. Obviously solar power, with batteries to enable lunar nighttime operation. How many magnets are there in the LHC? How exactly circular (or not) does it need to be?
Next thought is to build it in space where there can be unlimited expansion of the ring diameter. Problem would be focusing the beam by aiming many freely floating magnets. Would require some form of electronic beam aiming that can correct for motion of the magnets. My EM theory is too rusty (50 years since I got a C in the course) to figure out what angle a proton beam can be bent by one magnet at these energies and so how many magnets would be required.
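The back-of-envelope answer follows from r = p/(qB) for an ultra-relativistic proton; a sketch with LHC-like numbers (the 7 TeV energy, 8.33 T field, and 15 m dipole length are assumptions taken from the LHC, not from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def bending_radius_m(energy_gev: float, field_t: float) -> float:
    """r = p / (q B); for an ultra-relativistic proton p ~ E/c, and with
    E expressed in eV the elementary charge cancels: r = E[eV] / (c * B)."""
    return energy_gev * 1e9 / (C * field_t)

# LHC-like numbers: 7 TeV beam, 8.33 T dipoles
r = bending_radius_m(7000.0, 8.33)
# each dipole of length L bends the beam by roughly L / r radians
angle_per_15m_dipole = 15.0 / r
```

So one LHC-class magnet bends the beam by only a few milliradians, which is why thousands of dipoles (or a far larger ring at lower field) are needed.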
NotebookLM is designed for this math.
First thoughts without reading the paper: there's too much noise and the moon is hollow (*), so even if you could assemble it [a particle collider] by bootstrapping with solar and thermoelectric and moon dirt and self-replicating robots, the rare earth payload cost is probably a primary cost driver barring new methods for making magnets from moon rock rare earths.
Is the moon hollow?
Lunar seismology: https://en.wikipedia.org/wiki/Lunar_seismology :
> NASA's Planetary Science Decadal Survey for 2012-2022 [12] lists a lunar geophysical network as a recommended New Frontiers mission. [...]
> NASA awarded five DALI grants in 2024, including research on ground-penetrating radar and a magnometer system for determining properties of the lunar core. [14]
Spellcheck says "magnometer" is not even a word.
But how much radiation noise is there from solar and cosmic wind on the surface of the moon - given the moon's lack of magnetosphere and partially thus also its lack of atmosphere - or shielded by how many meters of moon; underground in dormant lava tubes or vents?
> Structure of the Lunar Interior: The solid core has a radius of about 240 km and is surrounded by a much thinner liquid outer core with a thickness of about 90 km.[9] The partial melt layer sits above the liquid outer core and has a thickness of about 150 km. The mantle extends to within 45 ± 5 km of the lunar surface.
What are the costs to drill out the underground collider ring, and at what depth? And, first, the costs to assemble the moon-drilling unit(s) from local materials given only energy (and thus on-the-Moon production systems)?
Presumably only the collision area needs to be underground to minimize noise; the rest of the ring can be above the surface. Building things on the Moon is hard, especially sourcing materials; it's much easier to ship them from Earth using Starships or a derivative. One could start with a small ring made from a few ships, build a bigger one, and finally a great-circle one. Actually, only about 500 towers are needed, 100 meters high, each supporting a magnet. Starship can deliver 100 metric tons to LEO, so it could deliver a significant fraction of that to the lunar surface. Maybe 25 Starships performing 25 lunar missions each would be enough, assuming LEO refueling and payload transfer.
Quantum Cryptography Has Everyone Scrambling
This is an article about QKD, a physical/hardware encryption technology that, to my understanding, cryptography engineers do not actually take seriously. It's not about post-quantum encryption techniques (PQC) like structured lattices.
As far as I can tell, QKD is mainly useful to tell people to separate out the traffic that is really worth snooping onto a network that is hard to snoop on. It is much harder for someone to spy on your secrets when they don't pass through traditional network devices.
Of course, it also tells a dedicated adversary with L1 access exactly where to tap your cables.
Show HN: I've spent nearly 5y on a web app that creates 3D apartments
Funny, I worked on something similar. At my first job we needed to build a fancy 3D alerting system for a client. I found a three.js-based project on GitHub (I think it was called BluePrint 3D) and pitched it to my boss; it saved me from screaming for help while figuring out how to build the same things in three.js with zero experience, and it saved us hundreds of hours rebuilding the same thing. It looked somewhat like this tool, though I'm sure this one's way more polished.
It too had a 2D editor for 3D, it was cool, but we were just building floorplans and displaying live data on those floor plans, so all the useful design stuff was scrapped for the most part. This looks nicely polished, good job.
It was a painful project due to the client asking for things that were just... well they were insane.
Neural radiance fields: https://en.wikipedia.org/wiki/Neural_radiance_field :
> A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020,[1] it has since gained significant attention for its potential applications in computer graphics and content creation.[2]
- https://news.ycombinator.com/item?id=38636329
- Business Concept: A b2b business idea from awhile ago expanded: "Adjustable Wire Shelving Emporium"; [business, office supply] client product mix 3D configurator with product placement upsells; from photos of a space to products that would work there. Presumably there's already dimensional calibration with a known-good dimension or two;
- ENH: the tabletop in that photo is n x m, so the rest of the objects are probably about
- ENH: the building was constructed in YYYY in Locality, so the building code there then said that the commercial framing studs should be here and the cables and wiring should be there.
My issue was moreso I had 0 experience with 3D and they wanted me to slap something together with threejs. I didn't even know JavaScript that well at the time. That is a pretty cool project though.
Show HN: BudgetFlow – Budget planning using interactive Sankey diagrams
Nice use of Sankey. Here's an ask, or a thought: I would like to feed this automatically with my spend data from my bank account. I can export a CSV that has time-date, entity, amount (credit/debit); it would be great if it could split this out by category (where perhaps an LLM could help with the task). I would then like the flow to be automated monthly, or perhaps quarterly, with major deltas (per my definition) pushed to me via alerts. So this isn't so much active budget management as passive nudging when the shape of my spend changes, something I don't look at now but would like to.
If you have a CSV and are happy using LLM, then you should ask an LLM to give you python code to generate a sankey plot. It's not difficult to wrangle.
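The wrangling step is mostly aggregation; a stdlib sketch that turns a bank-export CSV (the column names here are assumptions) into (source, target, value) links that a Sankey library such as Plotly can consume:

```python
import csv
import io
from collections import defaultdict

# hypothetical export: date, entity, category, amount (credit +, debit -)
SAMPLE = """date,entity,category,amount
2024-01-03,Employer,income,3000
2024-01-05,Grocer,food,-120
2024-01-09,Landlord,rent,-900
2024-01-12,Cafe,food,-40
"""

def sankey_links(csv_text: str):
    """Credits flow into a central 'Budget' node; debits flow out of it
    by category. Returns sorted (source, target, value) triples."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        amount = float(row["amount"])
        if amount >= 0:
            totals[(row["category"], "Budget")] += amount
        else:
            totals[("Budget", row["category"])] += -amount
    return [(s, t, v) for (s, t), v in sorted(totals.items())]

links = sankey_links(SAMPLE)
```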
The Plaid API JSON includes inferred transaction categories.
OFX is one alternative to CSV for transaction data.
ofxparse parses OFX: https://pypi.org/project/ofxparse/
W3C ILP Interledger protocol is for inter-ledger data for traditional and digital ledgers. ILP also has a message spec for transactions.
Scientists convert bacteria into efficient cellulose producers
> A new approach has been presented by the research group led by André Studart, Professor of Complex Materials at ETH Zurich, using the cellulose-producing bacterium Komagataeibacter sucrofermentans. [...]
> K. sucrofermentans naturally produces high-purity cellulose, a material that is in great demand for biomedical applications and the production of packaging material and textiles. Two properties of this type of cellulose are that it supports wound healing and prevents infections.
From "Cellulose Packaging: A Biodegradable Alternative" https://www.greencompostables.com/blog/cellulose-packaging :
> What is cellulose packaging? Cellulose packaging is made from cellulose-based materials such as paper, cardboard, or cellophane.
> Cellulose can also be used to produce a type of bioplastic that is biodegradable and more environmentally friendly than petroleum-based plastics. These bioplastics can be utilized to make food containers, bottles, cups or trays.
> Can cellulose replace plastic? With an annual growth rate of 5–10% over the past decade, cellulose is already at the forefront of replacing plastic in everyday use.
> Cellophane, especially, is set to replace plastic film packaging soon. According to a Future Market Insights report, cellulose packaging will have a compound annual growth rate of 4.9% between 2018 and 2028.
Is there already cling-wrap cellophane? Silicone lids are washable, freezable, and microwaveable.
Open Source Farming Robot
It still looks like the software is written by people who don't know how to care for plants. You don't spray water on leaves as shown in the video; you'll just end up with a fungus infestation. You water the soil and nourish the microorganisms that facilitate nutrient absorption in the roots. But I don't see any reason the technology can't be adapted to do the right thing.
Probably?
But spraying water on leaves is not only the way water naturally gets to plants, it's often the only practical way to water crops at scale. Center-pivot irrigation has dramatically increased the amount and reliability of arable cropland, while being dramatically less sensitive to topography and preparation than flood irrigation.
The advice to "water the soil, not the leaves" is founded in manual watering regimes in very small-scale gardening, often with crops bred to optimize for unnaturally prolific growth at the cost of susceptibility to fungal diseases, but which are still immature, exposing the soil. Or with transplanted bushes and trees where you have full access to the entire mulch bed. And it's absolutely a superior method, in those instances... but it's not like it's a hard-and-fast rule.
We can extend the technique out to mid-size market gardens with modern drip-lines, at the cost of adding to the horrific amounts of plastic being constantly UV-weathered that we see in mid-size market gardens.
Drip irrigation is kind of a thing out here in the desert...
And then there is buried drip irrigation...
Yes, but as GP said that doesn't scale. I live in an agriculture heavy community in the desert (mountain-west USA), and drip irrigation is only really used for small gardens and landscaping. Anyone with an acre or more of crops is not using drip.
I certainly agree that drip is the ideal, and when you aren't doing drip you want to minimize the standing water on leaves, but if I were designing this project I would design for scale.
But drip irrigation doesn’t scale because you would need to lay + connect + pressurize + maintain hundreds of miles of hoses. It’s high-CapEx.
A “watering robot”, meanwhile, can just do what a human gardener does to water a garden, “at scale.”
Picture a carrot harvester-alike machine — something whose main body sits on a dirt track between narrow-packed row-groups, with a gantry over the row-group supported by narrow inter-row wheels. Except instead of picker arms above the rows, this machine would have hoses hanging down between each row (or hoses running down the gantry wheels, depending on placement) with little electronic valve-boxes on the ends of the hoses, and side-facing jet nozzles on the sides of the valve boxes. The hoses stay always-fully-pressurized (from a tank + compressor attached to the main body); the valves get triggered to open at a set rate and pulse-width, to feed the right amount of water directly to the soil.
“But isn’t the ‘drip’ part of drip irrigation important?” Not really, no! (They just do it because constant passive input is lazy and predictable and lower-maintenance.) Actual rain is very bursty, so most plants (incl. crops) aren’t bothered at all by having their soil periodically drenched and then allowed to dry out again, getting almost bone dry before the next drenching. In fact, everything other than wetland crops like rice prefer this; and the dry-out cycles decrease the growth rates for things like parasitic fungi.
As a bonus, the exact same platform could perform other functions at the same time. In fact, look at it the other way around: a “watering robot” is just an extension of existing precision weeding robots (i.e. the machines designed to reduce reliance on pesticides by precision-targeting pesticide, or clipping/picking weeds, or burning/layering weeds away, or etc.) Any robot that can “get in there” at ground level between rows to do that, can also be made to water the soil while it’s down there.
Fair point, the robot could lower its nozzle to the ground and jet the water there, much like a human would, with probably not a lot of changes required. That does seem like it would be a good optimization.
Isn't it better to mist plants, especially if you can't delay watering due to full sun?
IIUC that's what big box gardening centers do; with fixed retractable hoses for misting and watering.
A robot could make and refill clay irrigation Ollas with or without microsprinkler inlets and level sensing with backscatter RF, but do Ollas scale?
Why have a moving part there at all? You could just modulate fixed valves between high and low flow, or better, use fixed-height sprayers.
FWIU newer solar weeding robots - which minimize pesticide use by direct substitution and minimize herbicide by vigilant crop monitoring - have fixed arrays instead of moving part lasers
An agricultural robot spec:
- Large wheels, light frame; can right itself when terrain topology is misestimated
- Tensor operations per second (TOPS)
- Computer vision (OpenCV, NeRF)
- Modular sensor and utility mounts
- Open CAD model with material density, for mass-centroid and ground-contact outer-hull rollover estimation
There is a shift underway from ubiquitous tilling to lower and no till options. Tilling solves certain problems in a field - for a while - but causes others, and is relatively expensive. Buried lines do not coexist with tilling.
We are coming to understand a bit more about root biology and the ecosystem of topsoil and it seems like the 20th century approach may have been a highly optimized technique of using a sledgehammer to pound in a screw.
> IIUC that's what big box gardening centers do; with fixed retractable hoses for misting and watering.
Speaking from experience - they're making it up as they go along. Instructed to water the soil rather than the leaves, for a certain number of seconds per diameter of pot, and set loose. The benefit of 'gentle rain' heads (high flow, low velocity, unlike misters) at close range is that they don't blow the heads off the blooming flowers they're selling, which is what happens if you use the heads designed for longer range at close range.
Is there evidence-based agricultural research for no-till farming?
No-Till Farming > Adoption across the world : https://en.wikipedia.org/wiki/No-till_farming#Adoption_acros... :
> By 2023, farmland with strict no-tillage principles comprise roughly 30% of the cropland in the U.S.
The new model used to score fuel lifecycle emissions is the Greenhouse Gases, Regulated Emissions and Energy use in Technologies (GREET) model: https://www.energy.gov/eere/greet :
> GREET is a tool that assesses a range of life cycle energy, emissions, and environmental impact challenges and that can be used to guide decision-making, research and development, and regulations related to transportation and the energy sector.
> For any given energy and vehicle system, GREET can calculate:
> - Total energy consumption (non-renewable and renewable)
> - Fossil fuel energy use (petroleum, natural gas, coal)
> - Greenhouse gas emissions
> - Air pollutant emissions
> - Water consumption
FWIU you have to plant cover crops to receive the new US ethanol / biofuel subsidies.
From "Some Groups Pan SAF Rules for Farmers Groups Criticize Cover Crop Requirement for Sustainable Aviation Fuel Modeling" (2024) https://www.dtnpf.com/agriculture/web/ag/news/business-input... :
> Some biofuel groups were encouraged the guidance would recognize climate-smart farm practices for the first time in ethanol or biodiesel's carbon intensity score. Others said the guidance hurts them because their producers have a hard time growing cover crops and carbon scoring shouldn't be limited to a few specific farming practices. Environmental groups said there isn't enough hard science to prove the benefit of those farm practices.
I have "The Living Soil Handbook" by Jessie Frost (in KY) here, and page 1 "Introduction" reads:
> 1. Disturb the soil as little as possible.
> 2. Keep the soil covered as much as possible.
> 3. Keep the soil planted as much as possible.
FWIU tilling results in oxidation and sterilization due to UV-C radiation and ozone; tilling turns soil to dirt; and dry dirt doesn't fix nitrogen or CO2 or host mycorrhizae, which help plants absorb nutrients.
Bunds, with no irrigation lines to till over, appear to win in many climates. Maybe bunds, agrivoltaics, and agricultural robots can help undo soil depletion.
"Vikings razed the forests. Can Iceland regrow them?" https://news.ycombinator.com/item?id=40361034
https://westurner.github.io/hnlog/# ctrl-f soil , no-till / notill / no till / #NoTill
Moments in Chromecast's history
Is there any substitute for Chromecast Audio? I love being able to play in-sync audio to the group of receivers I choose throughout the property, using any amplifier. I’m not even using the digital optical input and I love them.
I think Sonos sued the heck out of Google for those, and it caused those devices to disappear for a few years. Sonos lost that case late last year though, so hopefully we'll see a resurgence?
https://www.reuters.com/legal/litigation/google-wins-repriev...
Otherwise, you can DIY it with a bunch of old devices or Raspberry Pis and https://github.com/geekuillaume/soundsync
I am fairly certain that the academic open source community had already published prior art for delay correction and volume control of speaker groups (which are obvious problems when you add multiple speakers to a system with transmission delay). IIRC there was a microsoft research blog post with a list of open source references for distributed audio from prior to 2006 for certain. (Which further invalidates the patent claims in question).
Before they locked Chromecast protocol down, it was easy to push audio from a linux pulseaudio sound server to Chromecast device(s).
The patchbay interface in soundsync looks neat. Also patch bay interfaces: BespokeSynth, HoustonPatchBay, RaySession, patchance, org.pipewire.helvum (GTK), easyeffects (GTK4 + GStreamer), https://github.com/BespokeSynth/BespokeSynth/issues/1614#iss...
pipewire handles audio and video streams. soundsync with video would be cool too.
FWIU Matter Casting is an open protocol which device vendors could implement.
Quantum state mimics gravitational waves
> Quantum particles in a spin-nematic state carry energy through waves.
> “We realised that the properties of the waves in the spin-nematic state are mathematically identical to those of gravitational waves,” says Shannon.
"Gravitational wave analogs in spin nematics and cold atoms" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.109.L... :
> Abstract: Large-scale gravitational phenomena are famously difficult to observe, making parallels in condensed matter physics a valuable resource. Here we show how spin nematic phases, found in magnets and cold atoms, can provide an analog to linearized gravity. In particular, we show that the Goldstone modes of these systems are massless spin-2 bosons, in one-to-one correspondence with quantized gravitational waves in flat spacetime. We identify a spin-1 model supporting these excitations and, using simulation, outline a procedure for their observation in a 23Na spinor condensate.
How Postgres stores data on disk – this one's a page turner
If anyone is interested in contrasting this with InnoDB (MySQL’s default engine), Jeremy Cole has an outstanding blog series [0] going into incredible detail.
Apache Arrow Columnar Format: https://arrow.apache.org/docs/format/Columnar.html :
> The Arrow columnar format includes a language-agnostic in-memory data structure specification, metadata serialization, and a protocol for serialization and generic data transport. This document is intended to provide adequate detail to create a new implementation of the columnar format without the aid of an existing implementation. We utilize Google’s Flatbuffers project for metadata serialization, so it will be necessary to refer to the project’s Flatbuffers protocol definition files while reading this document. The columnar format has some key features:
> Data adjacency for sequential access (scans)
> O(1) (constant-time) random access
> SIMD and vectorization-friendly
> Relocatable without “pointer swizzling”, allowing for true zero-copy access in shared memory
Are the major SQL file formats already SIMD optimized and zero-copy across TCP/IP?
Arrow doesn't do full or partial indexes.
Apache Arrow supports Feather and Parquet on-disk file formats. Feather is on-disk Arrow IPC, now with default LZ4 compression or optionally ZSTD.
Some databases support Parquet as the database flat file format (that a DBMS process like PostgreSQL or MySQL provides a logged, permissioned, and cached query interface with query planning to).
IIUC with Parquet it's possible both to use normal tools to offline query data tables as files on disk and also to online query tables with a persistent process with tunable parameters and optionally also centrally enforce schema and referential integrity.
From https://stackoverflow.com/questions/48083405/what-are-the-di... :
> Parquet format is designed for long-term storage, where Arrow is more intended for short term or ephemeral storage
> Parquet is more expensive to write than Feather as it features more layers of encoding and compression. Feather is unmodified raw columnar Arrow memory. We will probably add simple compression to Feather in the future.
> Due to dictionary encoding, RLE encoding, and data page compression, Parquet files will often be much smaller than Feather files
> Parquet is a standard storage format for analytics that's supported by many different systems: Spark, Hive, Impala, various AWS services, in future by BigQuery, etc. So if you are doing analytics, Parquet is a good option as a reference storage format for query by multiple systems
Those systems index Parquet. Can they also index Feather IPC, which an application might already have to journal and/or log, and checkpoint?
Edit: What are some of the DLT solutions for indexing given a consensus-controlled message spec designed for synchronization?
- cosmos/iavl: a Merkleized AVL+ tree (a balanced search tree with Merkle hashes and snapshots to prevent tampering and enable synchronization) https://github.com/cosmos/iavl/blob/master/docs/overview.md
- google/trillian has Merkle-hashed edges between rows in order in the table, but is centralized
- "EVM Query Language: SQL-Like Language for Ethereum" (2024) https://news.ycombinator.com/item?id=41124567 : [...]
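The tamper-evidence these trees share reduces to recursive hashing; a minimal stdlib sketch of a Merkle root (the pairing scheme is illustrative, not the IAVL structure itself):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Hash leaves pairwise up to a single root; any changed leaf changes
    the root, which is what makes synchronization and audits cheap."""
    level = [_h(leaf) for leaf in leaves] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"row1", b"row2", b"row3"])
```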
Enum class improvements for C++17, C++20 and C++23
Speaking of enums: what's a better serialization framework for C++, like Serde in Rust? And then what about Linked Data support, and of course form validation?
Linked Data triples have (subject, predicate, object) and quads have (graph, subject, predicate, object).
RDF has URIs for all Subjects and Predicates.
RDF Objects may be URIs or literal values like xsd:string, xsd:double, xsd:int (32-bit signed), xsd:integer, xsd:long, xsd:time, xsd:dateTime, xsd:duration.
RDFS then defines Classes and Properties, identified by string URIs.
How best to get from an Enum with (type,attr, {range of values}) to an rdfs:range definition in a schema with a URI prefix?
Python has dataclasses which is newer than attrs, but serde also emits Python pickles
So far, most people would use whatever is provided alongside the frameworks they are already using for their application.
Since the 90's, Turbo Vision, OWL, MFC, CSet++, Tools.h++, VCL, ATL, PowerPlant, Qt, Boost, JUCE, Unreal, CryEngine,....
Maybe if reflection does indeed land into C++26, something might be more universally adopted.
"Reflection for C++26" (2024) https://news.ycombinator.com/item?id=40836594
Wt::Dbo, years ago, but that's just objects to and from SQL; it's not linked-data schema or fast serialization with e.g. Arrow.
There should be easy URIs for Enums, which I guess map most closely to the rdfs:range of an rdfs:Property. Something declarative and/or extracted by another source parser would work, to keep schema definitions DRY.
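One declarative shape for this, sketched in Python for brevity (the URI prefix, the schema:member property, and the example Enum are all assumptions): describe each Enum as an enumeration node listing its members under stable URIs derived from the type name.

```python
from enum import Enum

class Color(Enum):  # hypothetical example type
    RED = "red"
    GREEN = "green"

def enum_to_jsonld(enum_cls, base: str = "https://example.org/schema/") -> dict:
    """Describe an Enum as a schema:Enumeration with one node per member,
    so the values get stable, DRY URIs derived from the type."""
    name = enum_cls.__name__
    return {
        "@context": {"schema": "https://schema.org/",
                     "rdfs": "http://www.w3.org/2000/01/rdf-schema#"},
        "@id": base + name,
        "@type": "schema:Enumeration",
        "schema:name": name,
        "schema:member": [
            {"@id": f"{base}{name}/{m.name}", "schema:name": m.value}
            for m in enum_cls
        ],
    }

doc = enum_to_jsonld(Color)
```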
re Serde: If you want JSON, https://github.com/beached/daw_json_link is about as close as we can get. It interops well with reflection (or reflection-like) libraries such as Boost.Describe or Boost.PFR, and will work well with std reflection when it is available.
Probably not too much work to add and then also build a JSONLD @context from all of the ~ message structs.
:Thing > https://schema.org/name , :URL , :identifier and subclasses
Thing > Intangible > Enumeration: https://schema.org/Enumeration
Adding reflection will be simple and backwards compatible with existing code, as it would only come into play when someone hasn't manually mapped a type. This leaves the cases where reflection doesn't work (private member variables) still workable too.
Haven't looked at JSON-LD much, but it seems like it could be added, though it would be a library above, I think. Extracting the mappings is already doable and is done in the JSON Schema export.
"Exporting JSON Schema" https://github.com/beached/daw_json_link/blob/release/docs/c... :
#include <daw/json/daw_json_schema.h>
std::string daw::json::to_json_schema<MyType>( "identifier", "title" );
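Building a JSON-LD @context from struct/field mappings could look roughly like the following Python sketch; the Person dataclass and TERM_MAP are hypothetical stand-ins for mappings that a real exporter would extract from annotations or reflection:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Person:  # a stand-in for one of the message structs
    name: str
    url: str
    identifier: str

# Hypothetical mapping of field names to schema.org terms; a real
# exporter would read these from annotations or a registry.
TERM_MAP = {
    "name": "https://schema.org/name",
    "url": "https://schema.org/URL",
    "identifier": "https://schema.org/identifier",
}

def build_context(cls):
    """Build a JSON-LD @context for a dataclass from a term map."""
    ctx = {f.name: TERM_MAP[f.name] for f in fields(cls) if f.name in TERM_MAP}
    return {"@context": ctx}

print(json.dumps(build_context(Person), indent=2))
```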
From https://westurner.github.io/hnlog/#comment-38526588 :
> SHACL is used for expressing integrity constraints on complete data, while OWL allows inferring implicit facts from incomplete data; SHACL reasoners perform validation, while OWL reasoners do logical inference.
- "Show HN: Pg_jsonschema – A Postgres extension for JSON validation" https://news.ycombinator.com/item?id=32186878 re: json-ld-schema, which bridges JSONschema and SHACL for JSONLD
Twisted carbon nanotubes store more energy than lithium-ion batteries
> By making single-walled carbon nanotubes (SWCNTs) into ropes and twisting them like the string on an overworked yo-yo, Katsumi Kaneko, Sanjeev Kumar Ujjain and colleagues showed that they can store twice as much energy per unit mass as the best commercial lithium-ion batteries.
> SWCNTs are made from sheets of pure carbon just one atom thick that have been rolled into a straw-like tube. They are impressively tough – five times stiffer and 100 times stronger than steel – and earlier theoretical studies by team member David Tománek and others suggested that twisting them could be a viable means of storing large amounts of energy in a compact, lightweight system.
> [...] The maximum gravimetric energy density (that is, the energy available per unit mass) they measured was 2.1 MJ/kg (583 Wh/kg). While this is lower than the most advanced lithium-ion batteries, which last year hit a record of 700 Wh/kg, it is much higher than commercial versions, which top out at around 280 Wh/kg. The SWCNT ropes also maintained their performance over at least 450 twist-release cycles
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
> Abstract: A sustainable society requires high-energy storage devices characterized by lightness, compactness, a long life and superior safety, surpassing current battery and supercapacitor technologies. Single-walled carbon nanotubes (SWCNTs), which typically exhibit great toughness, have emerged as promising candidates for innovative energy storage solutions. Here we produced SWCNT ropes wrapped in thermoplastic polyurethane elastomers, and demonstrated experimentally that a twisted rope composed of these SWCNTs possesses the remarkable ability to reversibly store nanomechanical energy. Notably, the gravimetric energy density of these twisted ropes reaches up to 2.1 MJ kg−1, exceeding the energy storage capacity of mechanical steel springs by over four orders of magnitude and surpassing advanced lithium-ion batteries by a factor of three. In contrast to chemical and electrochemical energy carriers, the nanomechanical energy stored in a twisted SWCNT rope is safe even in hostile environments. This energy does not deplete over time and is accessible at temperatures ranging from −60 to +100 °C.
Ask HN: How much would it cost to build a RISC CPU out of carbon?
RFC: Estimates on program costs to:
Evaluate carbon-based alternatives for fabrication of electronic digital and quantum computers
Evaluate scalable production processes for graphene-based transistors and other carbon-based nanofabrication capabilities
Evaluate supply chain challenges in semiconductor and superconductor manufacturing and design
Invest in developing chip fabrication capabilities that do not require rare earths in order to eliminate critical supply chain risks
> RFC: Estimates on program costs to:
> Evaluate carbon-based alternatives for fabrication of electronic digital and quantum computers
> Evaluate scalable production processes for graphene-based transistors and other carbon-based nanofabrication capabilities
> Evaluate supply chain challenges in semiconductor and superconductor manufacturing and design
> Invest in developing chip fabrication capabilities that do not require rare earths in order to eliminate critical supply chain risks
- "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024-06) https://news.ycombinator.com/item?id=40719725
Group1 Unveils First Potassium-Ion Battery in 18650 Format
> Leveraging Kristonite™, Group1's flagship product, a uniquely engineered 4V cathode material in the Potassium Prussian White (KPW) class, this battery enables the best combination of performance, safety, and cost when compared to LiFePO4 (LFP)-based LIBs and Sodium-ion batteries (NIBs).
> The 18650 form factor is one of the most widely used and designed cell formats. These new 18650 batteries use commercial graphite anodes, separators, and electrolyte formulations comprised of commercially sourced components. They offer superior cycle life and excellent discharge capability, and they operate at 3.7V. The product release exceeds initial performance expectations and demonstrates a practical path to achieve a gravimetric energy density of 160-180Wh/kg, which is standard for LFP-LIB.
https://interestingengineering.com/energy/first-18650-potass... :
> 3.7v, 18650
How much less bad for plants is a Potassium Ion battery compared with NiMH, NiCD, Li-ion, LiFePO4, and Sodium-ion batteries?
FWIU salt kills plants, lithium kills plants, and NiMH and NiCd shouldn't be disposed of in the trash either; whereas potassium (K) is a macronutrient essential for plant growth.
There are fertilizer shortages. Is there a potassium fertilizer shortage?
"Potassium depletion in soil threatens global crop yields" (2024) https://phys.org/news/2024-02-potassium-depletion-soil-threa...
"Global food security threatened by potassium neglect" (2024) https://www.nature.com/articles/s43016-024-00929-8 :
> Inadequate potassium management jeopardises food security and freshwater ecosystem health. Potassium, alongside nitrogen and phosphorus, is a vital nutrient for plant growth1 and will be fundamental to achieving the rapid rises in crop yield necessary to sustain a growing population. Sustainable nutrient management is pivotal to establishing sustainable food systems and achieving the UN Sustainable Development Goals. While momentum to deliver nitrogen [2] and phosphorus sustainability [3] builds, potassium sustainability has been chronically neglected. There are no national or international policies or regulations on sustainable potassium use equivalent to those for nitrogen and phosphorus. Calls to mitigate rising potassium soil deficiency by increasing potassium inputs in arable agriculture are understandable [4,5]. However, substantial knowledge gaps persist regarding the potential environmental impacts of such interventions. We outline six proposed actions that aim to prevent crop yield declines due to soil potassium deficiency, safeguard farmers from price volatility in potash (i.e. mined potassium salts used to make fertiliser) and address environmental and ecosystem concerns associated with potash mining and increased potassium fertiliser use.
From https://www.pure.ed.ac.uk/ws/portalfiles/portal/409660541/Br... :
> 1. Review current potassium stocks and flows
> 2. Establish capabilities for monitoring and predicting potassium price fluctuations
> 3. Help farmers maintain sufficient soil potassium levels
> 4. Evaluate the environmental effects of potash mining and increased potassium application to identify sustainable practices
> 5. Develop a global strategy to transition to a circular potassium economy (*)
> 6. Accelerate intergovernmental cooperation as a catalyst for change
How much less bad for oceans are Potassium Ion batteries compared with NiMH, NiCD, Li-ion, LiFePO4, and Sodium-ion batteries?
USB Sniffer Lite for RP2040
Can this be used to capture the packets of another computer?
The USB packets? Yes.
So it should be possible to bridge wired USB wirelessly over 802.11n (2.4 GHz) or Bluetooth BLE 5.2 with two RP2040Ws, or with one and software on a larger computer on the other side, e.g. google/bumble.
Why go through all this trouble if you already have physical access to the computer?
Intel took billions from the CHIPS Act, and gave nothing back
I respect and like Intel, but even with my biases I am nonetheless mildly annoyed as a taxpayer that we paid so much into CHIPS only to still end up losing the silicon wars to Taiwan/China.
I want my tax dollars back.
We gave them the money this year. They are building fabs. This stuff takes years.
One fab can cost $20 billion.
This past year, they were expected to piss away the money we gave them on a factory in high-risk genocideland, but not in the USA.
How could they have prevented those expenses?
They've finally introduced lower power gaming chips.
Hopefully they build fabs here, and we alleviate the supply chain blockade and competitive obstruction by fabricating chips out of graphene and other forms of carbon.
About Google Chrome's "This extension may soon no longer be supported"
I don’t get why we need a declarative-only manifest format to specify blocking rules.
Chrome has the ability to run code in a sandboxed environment with no external network access. This could be done in JS, or it could be WASM.
So, why can’t they leverage that? Ublock receives the page being visited and it needs to return an array of elements to block. The browser can do the replacement and element stripping itself. This is the approach Safari takes on iOS for content blocking extensions.
Instead, Chrome went with a “limited set of static patterns in JSON format” approach, which rules out or limits a lot of use cases.
But… why? The obvious answer is because they don’t want adblockers, but this seems a bit too obvious. Is there some technical reason I’m missing for this?
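For contrast, the "static patterns in JSON format" approach is a declarativeNetRequest rule set that looks roughly like this; the domain and rule values are hypothetical, and the shape is sketched from memory of the Chrome extension docs:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Because the browser evaluates these rules itself, the extension never sees the request stream, which is precisely what rules out the dynamic filtering logic uBlock Origin relies on.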
"A look inside the BPF verifier [lwn]" https://news.ycombinator.com/item?id=41135371
eBPF: https://en.wikipedia.org/wiki/EBPF
"How are eBPF programs written?" https://ebpf.io/what-is-ebpf/#how-are-ebpf-programs-written :
> In a lot of scenarios, eBPF is not used directly but indirectly via projects like Cilium, bcc, or bpftrace which provide an abstraction on top of eBPF and do not require writing programs directly but instead offer the ability to specify intent-based definitions which are then implemented with eBPF.
> If no higher-level abstraction exists, programs need to be written directly. The Linux kernel expects eBPF programs to be loaded in the form of bytecode. While it is of course possible to write bytecode directly, the more common development practice is to leverage a compiler suite like LLVM [clang] to compile pseudo-C code into eBPF bytecode
WASM eBPF for adblocking
A look inside the BPF verifier [lwn]
> The BPF verifier is, fundamentally, a form of static analysis. It determines whether BPF programs submitted to the kernel satisfy a number of properties, such as not entering an infinite loop, not using memory with undefined contents, not accessing memory out of bounds, only calling extant kernel functions, only calling functions with the correct argument types, and others. As Alan Turing famously proved, correctly determining these properties for all programs is impossible — there will always be programs that do not enter an infinite loop, but which the verifier cannot prove do not enter an infinite loop. Despite this, the verifier manages to prove the relevant properties for a large amount of real-world code.
[ Halting problem: https://en.wikipedia.org/wiki/Halting_problem ]
> Some basic properties, such as whether the program is well-formed, too large, or contains unreachable code, can be correctly determined just by analyzing the program directly. The verifier has a simple first pass that rejects programs with these problems. But the bulk of the verifier's work is concerned with determining more difficult properties that rely on the run-time state of the program [dynamic analysis].
Ask HN: What's the consensus on "unit" testing LLM prompts?
LLMs are notoriously non-deterministic, which makes it hard for us developers to trust them as a tool in a backend, where we usually expect determinism.
I’m in a situation where using an LLM makes sense from a technical perspective, but I’m wondering if there are good practices for testing, besides manual testing:
- I want to ensure my prompt does what I want 100% of the time
- I want to ensure I don’t get regressions as my prompt evolves, when updating the version of the LLM I use, or even if I switch to another LLM
The ideas I have in mind are:
- forcing the LLM to return JSON with a strict definition
- running a fixed set of tests periodically with my prompt and checking I get the expected result
Are there specificities with LLM prompt testing I should be aware of? Are some good practices emerging?
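A minimal regression-harness sketch along those lines (JSON-forced output plus a fixed test set run repeatedly); `call_llm` is a hypothetical stub standing in for a real API client:

```python
import json

def call_llm(prompt: str) -> str:
    """Stub for a real LLM API call; swap in your client here.
    Returns a JSON string, as the prompt requests."""
    return json.dumps({"sentiment": "positive", "confidence": 0.9})

# Fixed test cases: (input text, expected field values)
CASES = [
    ("I love this product!", {"sentiment": "positive"}),
]

REQUIRED_KEYS = {"sentiment", "confidence"}

def run_suite(n_repeats: int = 3) -> float:
    """Run each case several times (LLMs are non-deterministic) and
    return the pass rate instead of a hard pass/fail."""
    passed = total = 0
    for text, expected in CASES:
        for _ in range(n_repeats):
            total += 1
            try:
                out = json.loads(call_llm(f"Classify sentiment as JSON: {text}"))
            except json.JSONDecodeError:
                continue  # malformed JSON counts as a failure
            if REQUIRED_KEYS <= out.keys() and all(
                out[k] == v for k, v in expected.items()
            ):
                passed += 1
    return passed / total

print(f"pass rate: {run_suite():.0%}")
```

Tracking the pass rate over time (per prompt version, per model version) is what catches regressions; a single boolean test is too brittle for a stochastic component.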
From "Asking 60 LLMs a set of 20 questions" (2023) https://news.ycombinator.com/item?id=37445401#37451493 : PromptFoo, ChainForge, openai/evals, TheoremQA,
https://news.ycombinator.com/item?id=40859434 re: system prompts
"Robustness of Model-Graded Evaluations and Automated Interpretability" https://www.lesswrong.com/posts/ZbjyCuqpwCMMND4fv/robustness...
"Detecting hallucinations in large language models using semantic entropy" (2024) https://news.ycombinator.com/item?id=40769496
AI-enabled solar panel installation robot lifts, places, and attaches
- "Robot-Installed Solar Panels Cut Costs by 50%" (2012) https://spectrum.ieee.org/robotinstalled-solar-panels-cut-co... :
> Here's the executive summary, since I've never done an executive summary before and it sounds fancy: using robots to set up a 14 megawatt solar power plant can potentially cut costs from $2,000,000 to $900,000, while being constructed eight times faster with only three human workers instead of 35.
> So there you go! If you're still reading, we can tell you a little bit about the robot that performs this incredible feat of engineering efficiency: it costs just under a million bucks, but it's built from off-the-shelf parts and in continuous use will supposedly pay for itself in either no time at all or less than a year, whichever comes last.
> [...] The robot itself has a mobile base that runs on tank treads, and a robot arm grips huge 145 watt panels one at a time and autonomously positions them in just the right spot on a pre-installed metal frame. Humans follow along behind, adding fasteners and making electrical connections,
- "Launch HN: Charge Robotics (YC S21) - Robots that build solar farms" (2022) https://news.ycombinator.com/item?id=30780455 :
> We're using a two-part robotic system to build this racking structure. First, a portable robotic factory placed on-site assembles sections of racking hardware and solar modules. This factory fits inside a shipping container. Robotic arms pick up solar modules from a stack and fasten them to a long metal tube (the "torque tube"). Second, autonomous delivery vehicles distribute these assembled sections into the field and fasten them in place onto target destination piles.
Show HN: Trayce – Network tab for Docker containers
Trayce (https://github.com/evanrolfe/trayce_gui) is an open source desktop application which monitors HTTP(S) traffic to Docker containers on your machine. It uses eBPF to achieve zero-configuration sniffing of TLS-encrypted traffic.
As a backend developer I wanted something similar to Wireshark or the Chrome network tab, but which intercepted requests & responses to my containers for debugging in a local dev environment. Wireshark is a great tool, but it seems more geared towards lower-level networking tasks. When I'm developing APIs or microservices I don't care about packets; I'm only concerned with HTTP requests and their responses. I also didn't want to have to configure a pre-shared master key to intercept TLS; I wanted it to work out of the box.
Trayce is in the beta phase, so feedback is very welcome, bug reports too. The frontend GUI is written in Python with the Qt framework. The TrayceAgent, which does the intercepting of traffic, is written in Go and eBPF.
Looks very cool. I think you should write a docker extension https://www.docker.com/products/extensions/
As long as it's in addition to the standalone app. Not everybody uses Docker Desktop.
Podman Desktop supports Docker Desktop extensions and also its own extension API. https://podman-desktop.io/extend
"Developing a Podman Desktop extension" https://podman-desktop.io/docs/extensions/developing
This tool has really cool potential!
Just one problem I noticed immediately that prevents me from using this: the docker agent container[1] isn't multi-architecture, which will be an issue on Apple Silicon devices. This is something I have some experience setting up, if you are looking for help, though it will take some research to figure out how to get it going in GitHub Actions etc.
1: https://github.com/evanrolfe/trayce_agent/
EDIT: a quick search found this post; tested on a side-project repo, it works great: https://depot.dev/blog/multi-platform-docker-images-in-githu...
Good point, thanks. It's only ever been tested on a Mac with an Intel chip. I will try and sort this out ASAP!
I did submit a PR with a partial implementation; well, the Docker and GitHub side. I am useless at the C and low-level code.
Who has CI build runners for the given architectures?
It looks[1] like GitHub Actions hosted runners only run on amd64 hosts, so if you use a platform matrix like in this example[2], I am fairly certain it is running under QEMU, just based on the fact that it takes roughly 10 times longer to run. I am aware that Blaze[3] has arm64 runners.
If you are willing to pay for it, you can also set up a runner that uses AWS Graviton EC2 instances[4]; we do that at my workplace for our multi-architecture builds.
1: https://docs.github.com/en/actions/using-github-hosted-runne...
2: https://docs.docker.com/build/ci/github-actions/multi-platfo...
Genetically synthesized supergain broadband wire-bundle antenna
Genetic optimisation seems like an odd choice here. The fundamental operation of genetic optimisation is crossing over: taking two solutions and picking half the parameters from one, and half from the other. This makes sense if parameters make independent contributions to fitness. But in the case of antenna design, surely that's exactly what they don't do? The position of each element relative to the other elements matters enormously. If you had two decent antenna designs, cut both in half, and glued the halves together, I would expect that to be a crappy antenna.
It would be interesting to see an evaluation of genetic optimisation vs more conventional techniques like simplex or BOBYQA, or anything else in NLopt.
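The crossover-independence point can be demonstrated with a toy GA on a deliberately separable objective, the regime where crossover helps most; everything here (population size, mutation rate, objective) is illustrative:

```python
import random

random.seed(0)

N_PARAMS, POP, GENS = 8, 30, 60

def fitness(x):
    # Separable toy objective (maximize): each gene contributes
    # independently, so recombining halves of two good solutions
    # tends to produce a good solution.
    return -sum((xi - 0.5) ** 2 for xi in x)

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)  # one-point crossover
    return a[:cut] + b[cut:]

def mutate(x, rate=0.1):
    return [xi + random.gauss(0, 0.1) if random.random() < rate else xi
            for xi in x]

pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]            # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(round(-fitness(best), 4))  # small when converged
```

With a coupled objective (where gene i's contribution depends on gene j), the same crossover would routinely produce the "two halves glued together" failure described above.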
GA methods may also converge without expert bias in parameters, but typically only after more time or more generations of mutation, crossover, and selection.
Why would the fitness be lower with mutation, crossover, and selection?
Manual optimization is selection: the (possibly triple-blind) researcher biases the analysis by choosing models, hyperparameters, and parameters. This avoids search cost, but just like gradient descent it can result in unexplored spaces where optima may lurk.
There's already a GA (Genetic Algorithm) success story in deep-space antenna design: NASA's evolved ST5 antenna.
But there are also dielectric and Rydberg antennae, which don't need to be anywhere near as long as the radio wavelength. How long would that have taken the infinite-monkeys GA?
/?hnlog: antenna, Rydberg, dielectric: https://westurner.github.io/hnlog/#comment-38054784
Cylinder sails promise up to 90% fuel consumption cut for cargo ships
EVM Query Language: SQL-Like Language for Ethereum
- How does this compare to the BigQuery copy of the chain?
"Ethereum Mainnet example queries" https://cloud.google.com/blockchain-analytics/docs/example-e...
- from "Show HN: A Database Generator for EVM with CRUD and On-Chain Indexing" https://news.ycombinator.com/item?id=36144368 https://github.com/KenshiTech/SolidQuery ; rdflib-leveldb ?
- re: affording decentralized CT Certificate Transparency queries: https://news.ycombinator.com/item?id=30782522 :
> Indexers are node operators in The Graph Network that stake [GRT] in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn query fees that are rebated according to an exponential rebate function.
I prefer rST to Markdown
A Table of Contents instruction that works across all markdown implementations would be an accomplishment.
OTOH, Markdown does not require newlines between list elements, which is more compact but limits the HTML that can be produced from the syntax.
MyST Markdown adds Sphinx roles and directives to Markdown.
nbsphinx adds docutils (RST) support back to Jupyter Notebooks. (IPython notebook originally supported RST and Markdown.)
Researchers trap atoms, force them to serve as photonic transistors
> "Using state-of-the-art nanofabrication instruments in the Birck Nanotechnology Center, we pattern the photonic waveguide in a circular shape at a diameter of around 30 microns (three times smaller than a human hair) to form a so-called microring resonator. Light would circulate within the microring resonator and interact with the trapped atoms," adds Hung.
- -459.67 F, Cesium, Tractor beam
"Trapped Atoms and Superradiance on an Integrated Nanophotonic Microring Circuit" (2024) https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.03... :
> Interfacing cold atoms with integrated nanophotonic devices could offer new paradigms for engineering atom-light interactions and provide a potentially scalable route for quantum sensing, metrology, and quantum information processing. However, it remains a challenging task to efficiently trap a large ensemble of cold atoms on an integrated nanophotonic circuit. Here, we demonstrate direct loading of an ensemble of up to 70 atoms into an optical microtrap on a nanophotonic microring circuit. Efficient trap loading is achieved by employing degenerate Raman-sideband cooling in the microtrap, where a built-in spin-motion coupling arises directly from the vector light shift of the evanescent-field potential on a microring. Atoms are cooled into the trap via optical pumping with a single free space beam. We have achieved a trap lifetime approaching 700 ms under continuous cooling. We show that the trapped atoms display large cooperative coupling and superradiant decay into a whispering-gallery mode of the microring resonator, holding promise for explorations of new collective effects. Our technique can be extended to trapping a large ensemble of cold atoms on nanophotonic circuits for various quantum applications.
Rediscovering Transaction Processing from History and First Principles
Joran from TigerBeetle here! Really stoked to see a bit of Jim Gray history on the front page and happy to dive into how TB's consensus and storage engine implements these ideas (starting from main! https://github.com/tigerbeetle/tigerbeetle/blob/main/src/tig...).
TPS: Transactions Per Second: https://en.wikipedia.org/wiki/Transactions_per_second
"A Measure of Transaction Processing Power" (1985)
"A measure of transaction processing 20 years later" (2005) https://arxiv.org/abs/cs/0701162 .. https://scholar.google.com/scholar?cluster=11019087883708435...
About the cover sheet on those TPS reports.
Max throughput, max efficiency,
Network throughput: https://en.wikipedia.org/wiki/Network_throughput
"Is measuring blockchain transactions per second (TPS) stupid in 2024? Big Questions" https://cointelegraph.com/magazine/blockchain-transactions-p... :
> focusing on the raw TPS number is a bit like “counting the number of bills in your wallet but ignoring that some are singles, some are twenties, and some are hundreds.”
https://chainspect.app/dashboard describes each of their metrics: Real-Time TPS (tx/s), Max Recorded TPS (tx/s), Max Theoretical TPS (tx/s), Block Time (s), Finality (s)
USD/day probably doesn't predict TPS, because the average transaction value is higher on networks with low TPS.
Other metrics: FLOPS, FLOPS/WHr, TOPS, TOPS/WHr, $/OPS/WHr
And then there's Uptime; or losses due to downtime (given SLA prorated costs)
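The USD/day vs. TPS point can be illustrated with hypothetical numbers: two networks with very different TPS can clear the same dollar volume per day, so the raw TPS figure says little on its own.

```python
# Two hypothetical networks with the same daily USD volume but very
# different TPS: raw TPS says nothing about value moved per day.
SECONDS_PER_DAY = 86_400

networks = {
    # name: (transactions per second, average value per tx in USD)
    "low-TPS / high-value": (15, 5_000.0),
    "high-TPS / low-value": (3_000, 25.0),
}

for name, (tps, avg_value) in networks.items():
    usd_per_day = tps * SECONDS_PER_DAY * avg_value
    print(f"{name}: {tps} TPS -> ${usd_per_day:,.0f}/day")
```

Both lines print the same daily volume ($6.48B/day with these made-up figures), which is the "bills in your wallet" point in numeric form.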
> "A measure of transaction processing 20 years later (2005)"
This is one of my favorites. Along with https://www.microsoft.com/en-us/research/wp-content/uploads/... (also 20 years later) which has more detail.
> USD/day probably doesn't predict TPS, because the average transaction value is higher on networks with low TPS.
Exactly. If we only look at USD/day we might not see the trend in transaction volume.
What's happened is like a TV set going from B&W to full color 4K. If you look at the dimensions of the TV, it's pretty much the same. But the number of pixels (txns) is increasing, with their size (value) decreasing, for significantly higher resolution across sectors. And this is directly valuable because higher resolution (i.e. transactionality) enables arbitrage/efficiency.
I don't know what the frontier is on TPS and value in financial systems. Does HFT or market-making really add that much value compared to direct capital investment and the trading frequency of an LTSE (Long-Term Stock Exchange)?
High tx fees (which are simply burnt) disincentivize HFT, which, like PoW, may also be a waste of electricity in terms of allocation.
Low tx fees increase the likelihood of arbitrage that reduces the spread between prices (on multiple exchanges) and then what effect on volatility and real value?
But those are market dynamics, not general high-TPS system dynamics.
You can see the trend better in the world of instant payments, if you look to India's UPI or Brazil's PIX, how they've taken over from cash and are disrupting credit card transactions, with this starting to spread to Europe and the US: https://www.npci.org.in/what-we-do/upi/product-statistics
However, it's not limited to fintech.
For example, in some countries, energy used to be transacted once a month. Someone would come to your house or business, read your meter, and send you a bill for your energy usage. With the world moving away from coal to clean energy and solar, this is changing, because the price of energy now follows the sun. So energy providers are starting to price energy not once a month, but every 30 minutes. In other words, this 1440x increase in transaction volume enables cleaner, more efficient energy pricing.
You see the same trend also in content and streaming. You used to buy an album, then a song, and now you stream on Spotify and Apple. Same for movies and Netflix.
The cloud is no different. Moving from colocation, to dedicated, to per-hour and then per-minute instance pricing, and now serverless.
And then Uber and Airbnb have done much the same for car rentals or house rentals: take an existing business model and make it more transactional.
RestrictedPython
A Python interpreter with costed opcodes would be smart.
But then there is still no oracle to decide whether user-supplied [RestrictedPython] code halts; so, without resource quotas from VMs or containers, there is unmitigated risk of resource exhaustion from user-supplied and developer-signed code.
IIRC there are one-liners that DoS Python, and probably RestrictedPython too.
JupyterHub Spawners and Authenticators were created to spin up [k8s] containers on demand - with resource quotas - for registered users with e.g. JupyterHub and unregistered users with BinderHub.
Someone already has a presentation on how it's impossible to fully sandbox Python without OS process isolation?
You can instead have them run their code on their machine with Python in WASM in a browser tab; but should you trust the results if you do not test for convergence by comparing your local output with other outputs? Consensus with a party of one or an odd number of samples.
So, reproducibility as scientific crisis: "it worked on my machine" or "it works on the gitops CI build servers" and "it works in any browser" and the output is the same when the user runs their own code locally as when it is run on the server [with RestrictedPython].
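A crude approximation of costed opcodes can be sketched in pure Python with sys.settrace, budgeting traced line events. This is a sketch, not a sandbox: it cannot interrupt a single expensive C-level operation (e.g. a huge integer exponentiation), and it does not bound memory; OS/cgroup limits are still needed for that.

```python
import sys

class BudgetExceeded(Exception):
    pass

def run_with_budget(code: str, max_events: int = 10_000):
    """Run untrusted-ish code, aborting after a fixed number of
    traced line events."""
    count = 0

    def tracer(frame, event, arg):
        nonlocal count
        if event == "line":
            count += 1
            if count > max_events:
                raise BudgetExceeded(f"exceeded {max_events} line events")
        return tracer

    sys.settrace(tracer)
    try:
        exec(code, {"__builtins__": {}})
    finally:
        sys.settrace(None)

run_with_budget("x = 1\ny = x + 1")           # runs to completion
try:
    run_with_budget("while True:\n    pass")  # halted by the budget
except BudgetExceeded as e:
    print("stopped:", e)
```

This sidesteps the halting problem the same way EVM gas does: don't predict termination, just meter execution and abort at the limit.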
I heard it was actually impossible to sandbox Python (and most or all other languages) with itself?
/? python sandbox RestrictedPython https://google.com/search?q=python+sandbox+RestrictedPython
> But then there is still no oracle to decide whether user-supplied [RestrictedPython] code halts; so, without resource quotas from VMs or containers, there is unmitigated risk of resource exhaustion from user-supplied and developer-signed code
This is actually something I was interested in a while ago, but in Scheme. For example GNU Guile has such a facility built in [0], though with a caveat that if you're not careful what you expose you could still make yourself vulnerable to DoS attack (the memory exhaustion kind). But if you don't expose any facility that allows the untrusted user to make large allocations at a time (in a single call) you should be fine (fingers crossed). I don't see a reason why the same mechanics couldn't be implemented in Python.
[0] https://www.gnu.org/software/guile/manual/html_node/Sandboxe...
Edit: in some sense this restricted Python stuff also reminds me of Safe Haskell (extension) [1], which came out a bit ahead of its time and is by this point almost forgotten. Might become relevant again in the future.
[1] https://begriffs.com/posts/2015-05-24-safe-haskell.html - better overview than the wiki page
Java implementations have run-time heap allocator limits configurable with the -Xms and -Xmx options for minimum and maximum heap size, and -Xss for per-thread stack size:
java -jar -Xms1024M -Xmx2048M -Xss1M example.jar
RAM and CPU and network IO cgroups are harder limits than a process's attempts to bound its own allocation with the VM.
TIL about hardened_malloc. Python doesn't have hardened_malloc, and IDK how configurable hardened_malloc is in terms of self-imposed process resource limits. FWIU hardened_malloc groups and thereby contains allocations by size. https://github.com/GrapheneOS/hardened_malloc
There is a reason that EVM and eWASM have costed opcodes and do not have socket libraries (blocking or async).
The default sys.setrecursionlimit() is 1000; so, at most 1000 stack frames, each with unbounded per-frame size: https://docs.python.org/3/library/sys.html#sys.setrecursionl...
Most (all?) algorithms can be rewritten with an explicit stack instead of recursion, thereby avoiding per-frame stack overhead in languages without TCO (Tail-Call Optimization), like Python.
Kata Containers or gVisor, or just RestrictedPython that doesn't run until it's checked into git [and optionally signed]?
setup.py could or should execute in a RestrictedPython sandbox if run as root (even in a container).
Starlark (formerly Skylark) is also a restricted subset of Python, for build configuration with Blaze, Bazel, Buck2, Caddy, ...
"Starlark implementations, tools, and users" https://github.com/bazelbuild/starlark/blob/master/users.md
Ask HN: Am I crazy or is Android development awful?
TL;DR - what I can do in 10 minutes on a desktop python app (windows or Linux, both worked fine) seems near impossible as an Android app.
I have a simple application to prototype: take a wired USB webcam, display it on the screen on a computer device (ideally a small device/screen), and draw a few GUI elements on top of it.
Using Python scripting and OpenCV, I had a working cross-compatible script in 10 minutes, for both Linux and Windows.
Then I realized I'd love this to work on an Android phone. I have devices with USB OTG, and a USB-C hub for the webcam. I confirmed the hardware setup working using someone else's closed source app.
However the development process has been awful. Android Studio has so much going on for a 'Hello World', and trying to integrate various USB webcam libraries has been impossible, even with AI assistants and Google guiding me. Things to do with Gradle versions or Kotlin versions being wrong between the libraries and my Android Studio; my project not being able to include external repos via dependencies or toml files, etc.
In frustration I then tried a few python-to-android solutions, which promise to take a python script and make an APK. I tried: Kivy + python for android, and then Beeswax or Briefcase (may have butchered names slightly). Neither would build without extremely esoteric errors that neither me nor GPT had any chance to fix.
Well, looks like modern mobile phones are not a great hacker's playground, huh?
I guess I will go for a raspberry pi equivalent. In fact I already have tested my script on a RPi and it's just fine. But what a waste, needing to use a new computer module, screen display, and battery, when the smartphone has all 3 components nicely set up already in one sleek package.
Anyways that's my rant, wanted to get others' takes on Android (or smartphone in general) dev these days, or even some project advice in case anyone has done something similar connecting a wired webcam to a smartphone.
/? termux USB webcam: https://www.google.com/search?q=termux+usb+webcam
Termux was F-droid only, but 4 years later is back on the Play Store: https://github.com/termux-play-store#current-status-for-user...
Termux has both glibc and musl libc. Android has bionic libc.
One time I got JupyterLab to run on Android in Termux with `proot` and pip. And then the mobile UI needed work, in a WebView app or just a browser tab. Maybe things would port back from Colab to JupyterLab.
conda-forge's Linux arm64 packages don't work on arm64 Android devices, so the only option is to install the *-dev dependencies and wait for compilation to finish on the Android device.
Waydroid is one way to work with Android APKs in a guest container on a Linux host.
And Android Studio doesn't work on Android or ChromiumOS without containers (which students can't have either).
When you get X on Termux working, you see what mobile development could have been and weep for the fallen world we live in.
containers/podman > [Feature]: Android support: https://github.com/containers/podman/discussions/17717 :
> There are docker and containerd in termux-packages. https://github.com/termux/termux-packages/tree/master/root-p...
But Android 13+ supports rootless pKVM VMs, which podman-machine should be able to run containers in; (but only APK-installed binaries are blessed with the necessary extended filesystem attributes to exec on Android 4.4+ with SELinux in enforcing mode.)
- Android pKVM: https://source.android.com/docs/core/virtualization/architec... :
> qemu + pKVM + podman-machine:
> The protected kernel-based virtual machine (pKVM) is built upon the Linux KVM hypervisor, which has been extended with the ability to restrict access to the payloads running in guest virtual machines marked ‘protected’ at the time of creation.
> KVM/arm64 supports different execution modes depending on the availability of certain CPU features, namely, the Virtualization Host Extensions (VHE) (ARMv8.1 and later).
- "Android 13 virtualization lets [Pixel >= 6] run Windows 11, Linux distributions" (2022) https://news.ycombinator.com/item?id=30328692
It's faster to keep a minimal container hosting VM updated.
So, podman-machine for Android in Termux might help solve for development UX on Android (and e.g. Android Studio on Android).
podman-machine: https://docs.podman.io/en/latest/markdown/podman-machine.1.h...
Wait so you're telling me I can run podman on android?
Looks like it's almost possible to run podman-machine on Android; and it's already possible to manually create your qcow for the qemu on Android and then run containers in that VM: https://github.com/cyberkernelofficial/docker-in-termux
The Puzzle of How Large-Scale Order Emerges in Complex Systems
This has been a topic of interest for me for almost a decade so far.
See also: non-linear dynamics, dynamical systems
The study of Complexity in the US Sciences is fairly recent, with the notable kickoff of the Santa Fe Institute [1]. They have all the resources there, from classes to scientific conferences.
A few book recommendations I've picked up over the years:
- Lewin, Complexity: Life at the Edge of Chaos [2] (recommended for a gentle introduction with history)
- Gleick, Chaos: Making a New Science [3]
- Holland, Emergence: From Chaos To Order [4]
- Holland, Signals and Boundaries: Building Blocks for Complex Adaptive Systems [5]
- Holland, Hidden Order: How Adaptation Builds Complexity [6] (unread but part of the series)
- Miller, Complex Adaptive Systems: An Introduction to Computational Models of Social Life [7]
- Strogatz, Nonlinear Dynamics And Chaos: With Applications To Physics, Biology, Chemistry, And Engineering [11] (edit: forgot i had this on my desk)
Also a shoutout to Stephen Wolfram, reading one of his articles [8] about underlying structures while stuck at an airport proved to be a pivotal moment in my life.
- Wolfram's New Kind of Science, as an exploration of simple rules computed to the point of complex behavior. [9]
- Wolfram's Physics Project, a later iteration on NKS ideas with some significant and thorough publication of work. [10]
Links:
[2] https://www.amazon.com/gp/product/0226476553
[3] https://www.amazon.com/gp/product/0143113453
[4] https://www.amazon.com/gp/product/0738201421
[5] https://www.amazon.com/gp/product/0262525933
[6] https://www.amazon.com/Hidden-Order-Adaptation-Builds-Comple...
[7] https://www.amazon.com/gp/product/0691127026
[8] https://writings.stephenwolfram.com/2015/12/what-is-spacetim...
[9] https://www.wolframscience.com/nks/
[10] https://wolframphysics.org/
[11] https://www.amazon.com/Nonlinear-Dynamics-Chaos-Applications...
About 30 years for me. As my username might suggest.
To this already excellent list I would add that the Santa Fe Institute (mentioned above) is continually putting out new work and papers on this topic.
Examples here: https://www.santafe.edu/research/results/books
Complex, nonlinear - possibly adaptive - systems, with or without convergence, but with emergence and structure!
Convergence: https://en.wikipedia.org/wiki/Convergence > STEM
Emergence: https://en.wikipedia.org/wiki/Emergence
Stability: https://en.wikipedia.org/wiki/Stability
Stability theory: https://en.wikipedia.org/wiki/Stability_theory
Glossary of systems theory: https://en.wikipedia.org/wiki/Glossary_of_systems_theory
Ask HN: Does collapse of the wave function occur with an AI observer?
Pnut: A C to POSIX shell compiler you can trust
If you are wondering how it handles C-only functions.. it does not.
open(..., O_RDWR | O_EXCL) -> runtime error, "echo "Unknow file mode" ; exit 1"
lseek(fd, 1, SEEK_HOLE); -> invalid code (uses undefined _lseek)
socket(AF_UNIX, SOCK_STREAM, 0); -> same (uses undefined _socket)
Looking closer at the "cp" and "cat" examples, the write() call does not handle errors at all. Forget about partial writes; it does not even return -1 on failures.
"Compiler you can Trust", indeed... maybe you can trust it to get all the details wrong?
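For reference, a correct write loop is small; here's a Python sketch of the semantics a faithful cp/cat needs (at this level os.write can return a short count, and raises OSError rather than returning -1):

```python
import os

def write_all(fd: int, data: bytes) -> None:
    # Loop until every byte is written: os.write may perform a partial
    # write (returning fewer bytes than requested), and failures surface
    # as OSError, which callers must not silently ignore.
    view = memoryview(data)
    while view:
        written = os.write(fd, view)
        view = view[written:]
```

The C equivalent is the classic while-loop around write(2), retrying from buf + n after each short write.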
There seems to be libc in the repo but many functions are TODO https://github.com/udem-dlteam/pnut/tree/main/portable_libc
Otherwise the builtins seems to be here https://github.com/udem-dlteam/pnut/blob/main/runtime.sh
FYI all your functions are not "C functions", but rather POSIX functions. I did not expect it to be complete, but it's still impressive for what it is.
There are Linux ports of the plan9 `syscall` binary, which is presumably necessary to implement parts of libc with shell scripts: https://stackoverflow.com/questions/10196395/os-system-calls...
I don't remember there being a way to keep a server listening on a /dev/tcp/$ip/$port port, for sockets from shell scripts (with shellcheck, at least).
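The client side does work from bash (/dev/tcp is a bash-only feature, not POSIX sh, which is presumably why shellcheck complains). A self-contained sketch, assuming python3 is available to play a throwaway server and that port 8099 is free:

```shell
#!/usr/bin/env bash
# Start a disposable local HTTP server to talk to, since bash's /dev/tcp
# can only *connect* outward; it cannot listen.
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

exec 3<>/dev/tcp/127.0.0.1/8099      # open a TCP connection as fd 3
printf 'GET / HTTP/1.0\r\n\r\n' >&3  # send a minimal request
read -r status <&3                   # first response line, e.g. HTTP/1.0 200 OK
exec 3<&-                            # close the connection

kill "$srv" 2>/dev/null
echo "$status"
```

For an actual listener you need an external helper (nc -l, socat, or a real language), which is the gap the parent comment is pointing at.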
How an 1803 Jacquard Loom Led to Computer Technology [video]
From "Calling your computer a Turing Machine is controversial" https://news.ycombinator.com/item?id=37709009 :
> Turing was familiar with the Jacquard loom, which wove patterns from punch cards, and with Babbage's work, and was then tasked with brute-forcing a classical cipher.
And before the Jacquard loom was the music box.
Music box > Timeline: https://en.wikipedia.org/wiki/Music_box#Timeline
Jacquard machine > Importance in computing: https://en.wikipedia.org/wiki/Jacquard_machine#Importance_in...
Mechanical computer > Examples: https://en.wikipedia.org/wiki/Mechanical_computer#Examples :
> Antikythera: c. 100BC (a mechanical astronomical clock)
SQLite-jiff: SQLite extension for timezones and complex durations
What I'd find really interesting in this space, would be coalescing around a "standard" storage format and API.
This definitely fills a need [1], and I'd easily port something like this to my Go bindings as needed (no Rust necessary).
It's encouraging that the storage format is a (recently published) RFC 9557 [2]. Not so sure about having jiff on all the function names.
Making SQLite easy to extend (Alex's Rust FFI, the APSW Python bindings, etc) really is a treasure trove.
"RFC 9557: Date and Time on the Internet: Timestamps with Additional Information" https://www.rfc-editor.org/rfc/rfc9557.html
1996-12-19T16:39:57-08:00[America/Los_Angeles]
> The offset -00:00 is provided as a way to express that "the time in UTC is known, but the offset to local time is unknown" 1996-12-19T16:39:57-00:00
1996-12-19T16:39:57Z
> Furthermore, applications might want to attach even more information to the timestamp, including but not limited to the calendar system in which it should be represented. 1996-12-19T16:39:57-08:00[America/Los_Angeles][u-ca=hebrew]
1996-12-19T16:39:57-08:00[_foo=bar][_baz=bat]
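A minimal Python sketch of pulling the bracketed time-zone tag off such a string (assumption: a single leading [TimeZoneName] suffix, as in the examples above; a real RFC 9557 parser must also handle multiple [key=value] tags and the critical "!" flag):

```python
import re
from datetime import datetime
from zoneinfo import ZoneInfo

# Split "1996-12-19T16:39:57-08:00[America/Los_Angeles]" into the RFC 3339
# timestamp and the RFC 9557 time-zone suffix tag.
TAGGED = re.compile(r"(?P<ts>[^\[]+)\[(?P<tz>[^\]=]+)\]")

def parse_tagged(s: str):
    m = TAGGED.match(s)
    if m is None:
        return datetime.fromisoformat(s), None  # plain RFC 3339, no tag
    dt = datetime.fromisoformat(m["ts"])
    return dt.astimezone(ZoneInfo(m["tz"])), m["tz"]

dt, tz = parse_tagged("1996-12-19T16:39:57-08:00[America/Los_Angeles]")
```

Attaching the named zone (not just the fixed offset) is what lets later arithmetic cross DST transitions correctly.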
Astropy supports the Julian calendar – introduced under Julius Caesar (~45 BC; said to have been born by Caesarean section, an Eastern procedure) – and also astronomical year numbering, which has a year zero. [1] There is still no zero in Roman numerals; there's "nulla" but no zero. Modern zero is notated with the Arabic numeral 0.
[1] Year Zero, calendaring systems: https://wrdrd.github.io/docs/consulting/knowledge-engineerin...
astropy.time > Time Formats: https://docs.astropy.org/en/latest/time/#time-format
Which calendar is the oldest?
List of calendars: https://en.wikipedia.org/wiki/List_of_calendars
Epoch > Calendar eras: https://en.wikipedia.org/wiki/Epoch#Calendar_eras
--
Are string tags at the end of the datetime readily indexable?
Shouldn't there be #LinkedData URLs or URIs in an RDFS vocabulary for IANA/olsen and also for calendaring systems?
E.g. Schema.org/dateCreated has a range of schema.org/DateTime, which supports ISO8601, which also specifies timezone Z (but not -00:00, as the RFC mentions).
Astounding that there's been no solution for calendar year date offsets on computers. Are there notations for indicating which system, or has everyone on earth also always assumed that bare integer years are relative to their preferred system?
Somewhere there's a chart of how recorded human history is only like 10K out of 900K (?) years of hominids on earth, through ice ages and interglacials like the Holocene.
Systemd Talks Up Automatic Boot Assessment in Light of the CrowdStrike Outage
> The only problem? Major Linux distributions aren't yet onboard with using the Automatic Boot Assessment feature.
systemd.io/AUTOMATIC_BOOT_ASSESSMENT/ : https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/
From https://news.ycombinator.com/item?id=29995566 :
> Which distro has the best out-of-the-box output for:?
systemd-analyze security
> Is there a tool like `audit2allow` for systemd units?
And also automatic variance in boot sequences with timeouts.
Where does it explain that a systemd service unit is always failing at boot?
From https://news.ycombinator.com/item?id=41022664 re: potential boot features: mandatory dead man's switch failover to the next kernel+initrd unless [...]:
> For EFI you could probably set BootNext to something else early on, in combination with some restarting watchdog. GRUB can store state between boots https://www.gnu.org/software/grub/manual/grub/html_node/Envi... and has "default" and "fallback" vars.
Rrweb – record and replay debugger for the web
RR's trick is to record any sources of nondeterminism, but otherwise execute code. One consequence is that it must record the results of syscalls.
Does Rrweb do the same for browser APIs and web requests?
The page mentions pixel-perfect replays, but does that require running on the same browser, exact same version, with the exact same experiments/feature flags enabled?
RRWeb only records changes to the DOM; it doesn't actually replay the JavaScript that makes those changes happen. So you see exactly what the user sees, but you're not able to inspect memory or anything like that.
There are a few caveats since not everything is captured in the DOM, such as media playback state and content in canvases. The user may also have some configurations that change their media queries, such as dark mode or prefers reduced motion.
Edit: and yes, to your point, browser differences would also render differently.
What about debugging and recording stack traces too?
"DevTools Protocol API docs—its domains, methods, and events": https://github.com/ChromeDevTools/debugger-protocol-viewer .. https://chromedevtools.github.io/devtools-protocol/
ChromeDevTools/awesome-chrome-devtools > Chrome Debugger integration with Editors: https://github.com/ChromeDevTools/awesome-chrome-devtools#ch...
DAP: Debug Adapter Protocol > Implementations: https://microsoft.github.io/debug-adapter-protocol/implement... :
- Microsoft/vscode-js-debug: https://github.com/microsoft/vscode-js-debug :
> This is a DAP-based JavaScript debugger. It debugs Node.js, Chrome, Edge, WebView2, VS Code extensions, and more. It has been the default JavaScript debugger in Visual Studio Code since 1.46, and is gradually rolling out in Visual Studio proper.
- awto/effectfuljs: https://github.com/awto/effectfuljs/tree/main/packages/vscod... :
> EffectfulJS Debugger: VSCode debugger for JavaScript/TypeScript. Besides the typical debugger's features it offers: Time-traveling, Persistent state, Platform independence, Programmable API, Hot mocking of functions or even parts of a function, Hot code swapping, Data breakpoints. This works by instrumenting JavaScript/TypeScript code and injecting necessary debugging API calls into it. It is implemented using EffectfulJS.
https://github.com/awto/effectfuljs : @effectful/debugger , @effectful/es-persist: https://github.com/awto/effectfuljs/tree/main/packages/es-pe...
I wish we had an rr for nodejs.
Do you mean a time-travel debugger? I believe it would be an awesome feature to be able to record & replay program execution. I imagine the recordings would be huge, as there are many more degrees of freedom on the backend than on the frontend.
Today I found EffectfulJS Debugger, which is a DAP debugger with time travel and state persistence for JS: https://news.ycombinator.com/item?id=41036985
Don't snipe me in space-intentional flash corruption for STM32 microcontrollers
> Let’s first take a look at what the bootloader actually does. It manages 3 slots for operating system images, with each having around 500 KB reserved for it. Additionally, 2 redundant metadata structs are stored on different flash pages. During an update, one slot is overwritten, and then metadata is adjusted. We are resilient against power failures at any point, and as long as at least one image slot contains an operating system image, we can boot. [...]
> In our bootloader we can now enable a custom boot mode that ensures that if at least one image is bootable, it is booted, which will enable us to fix this problem remotely
"OneFileLinux: A 20MB Alpine metadistro that fits into the ESP" https://news.ycombinator.com/item?id=40915199 :
- eaut/efistub generates signed boot images containing kernels and initrd ramdisks in one file to ensure the verification of the entire initial boot process.
Are there already ways to fail to the next kernel+initrd for uboot, [ventoy, yumi, rufus], grub, systemd-boot, or efistub?
For EFI you could probably set BootNext to something else early on, in combination with some restarting watchdog. GRUB can store state between boots https://www.gnu.org/software/grub/manual/grub/html_node/Envi... and has "default" and "fallback" vars.
So the bootloader would always advance nextboot= to the next boot entry, and a process after a successful boot must update nextboot=$thisbootentry?
Is there another way with fewer write() calls?
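One pattern from the GRUB environment-block docs quoted above: a grub.cfg fragment (variable names boot_ok and fallback are illustrative, and grubenv must live on a filesystem GRUB can write to):

```
# grub.cfg sketch: boot the fallback entry unless userspace marked the
# previous boot good, then clear the flag so a hang falls back next cycle.
load_env
if [ "${boot_ok}" = "1" ]; then
  set default="0"
else
  set default="${fallback}"
fi
set boot_ok=0
save_env boot_ok
```

After a successful boot, a oneshot service runs `grub-editenv /boot/grub/grubenv set boot_ok=1`. That's one small environment-block write at boot (save_env) and one after boot, rather than rewriting boot entries each cycle.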
[deleted]
Scan HTML even faster with SIMD instructions (C++ and C#)
Nice. Is that microdata or RDFa?
- "SIMD-accelerated computer vision on a $2 microcontroller" (2024-06) https://news.ycombinator.com/item?id=40794553 ; https://github.com/topics/simd , SimSIMD, SIMDe
- From "Show HN: SimSIMD vs SciPy: How AVX-512 and SVE make SIMD nicer and ML 10x faster" https://news.ycombinator.com/item?id=37808036 :
> simdjson-feedstock: https://github.com/conda-forge/simdjson-feedstock/blob/main/...
> pysimdjson-feedstock: https://github.com/conda-forge/pysimdjson-feedstock/blob/mai...
- https://github.com/simd-lite/simd-json :
> [simd-lite/simd-json is a] Rust port of extremely fast simdjson JSON parser with serde compatibility
> Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.
Breakthrough bioplastic decomposes in 2 months
> A plastic alternative being developed in Denmark, made from barley starch and sugar beet waste, could soon be holding your leftovers.
> The best part about the University of Copenhagen-designed material is that it decomposes in nature in about two months, according to a lab summary. Standard plastic takes around 20 to 500 years (or more) to break down, according to the UN.
- "Researchers invent 100% biodegradable 'barley plastic'" https://news.ycombinator.com/item?id=40783777
Viewing Fast Vortex Motion in a Superconductor
> Nakamura and her colleagues have now developed their technique to measure vortex motion in two dimensions, this time in the iron-based superconductor FeSe0.5Te0.5, or FST. The researchers fabricated a 38-nm-thick film of FST and cooled it below its superconducting transition temperature (16.5 K). A coil generated a magnetic field, which produced vortices in the film. Additionally, the magnetic field induced a shielding current, which traced out a several-millimeter-diameter circle on the film.
> The researchers irradiated the film with a 20-picosecond infrared (0.3-terahertz) pulse and detected a second harmonic in the spectrum of the pulse that emerged from the sample, as in the previous experiments. But here they detected both polarizations of the emitted light, parallel and perpendicular to the incident pulse’s polarization.
> Using a roughly 1-mm-wide beam, they probed a region near the ring of the field-induced shielding current and then analyzed the transmitted waveforms in order to reconstruct the motion of a typical vortex residing in that location. The team found an oscillating, roughly parabolic trajectory rather than a straight line. This shape resulted from the interaction between the vortex, which is magnetic, and the shielding current. Discovering this motion was the most exciting part of the work, Nakamura says. “It felt like we were looking directly into the 2D motion of the vortex.”
"Picosecond trajectory of two-dimensional vortex motion in FeSe0.5Te0.5 visualized by terahertz second harmonic generation" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
"Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691
Automerge: A library of data structures for building collaborative applications
In practice most projects seem to use Yjs rather than Automerge. Is there an up-to-date comparison of the two? Has anyone here chosen Automerge over Yjs?
I’m quite familiar with both, having spent some time building a crdt library of my own. The authors of both projects are lovely humans. There are quite a lot of small differences that might matter to some people:
- Yjs is mostly made by a single author (Kevin Jahns). It does not store the full document history, but it does support arbitrarily many checkpoints which you can rewind a document to. Yjs is written in JavaScript. There’s a rust rewrite (Yrs) but it’s significantly slower than the JavaScript version for some reason. (5-10x slower last I checked).
- Automerge was started by Martin Kleppmann, Cambridge professor and author of Designing Data Intensive Applications. They have some funding now and as I understand it there are people working on it full time. To me it feels a bit more researchy - for example the team has been working on Byzantine fault tolerance features, rich text and other interesting but novel stuff. These days it’s written in rust, with wasm builds for the web. Automerge stores the entire history of a document, so unlike Yjs, deleted items are stored forever - with the costs and benefits that brings. Automerge is also significantly slower and less memory efficient than Yjs for large text documents. (It takes ~10 seconds & 200mb of ram to load a 100 page document in my tests.) I’m assured the team is working on optimisations; which is good because I would very much like to see more attention in that area.
They’re both good projects, but honestly both could use a lot of love. I’d love to have a “SQLite of local first software”. I think we’re close, but not quite there yet.
(There are some much faster text-based CRDTs around if that's your jam. Aside from my own work, Cola is also very impressive and clean - and orders of magnitude faster than Yjs and Automerge.)
> I’d love to have a “SQLite of local first software”
We have recently published a new research paper on replicating SQLite [1] in a local-first manner. We think it goes a step closer to that goal.
It looks very similar to Evolu (https://github.com/evoluhq/evolu)
cr-sqlite https://github.com/vlcn-io/cr-sqlite :
> Convergent, Replicated SQLite. Multi-writer and CRDT support for SQLite
From "SQLedge: Replicate Postgres to SQLite on the Edge" (2023) https://news.ycombinator.com/item?id=37063238#37067980 :
>> In technical terms: cr-sqlite adds multi-master replication and partition tolerance to SQLite via conflict free replicated data types (CRDTs) and/or causally ordered event logs
Mangrove trees are on the move, taking the tropics with them
Mangrove forest restoration > Mangrove loss and degradation: https://en.wikipedia.org/wiki/Mangrove_restoration#Mangrove_...
A new way to control the magnetic properties of rare earth elements
Article: "Optical control of 4f orbital state in rare-earth metals" (2024) https://www.science.org/doi/10.1126/sciadv.adk9522 :
> [Terbium, and] generally for the excitation of localized electronic states in correlated materials
Import and Export Markdown in Google Docs
Hey folks, I'm the engineer who implemented the new feature. Just clearing up some confusion.
A lot of you are noticing the preexisting automatic detection feature from 2022 [1], which I also worked on. That's NOT what this newly announced feature is. The new feature supports full import/export, but it's still rolling out so you're likely not seeing it yet!
Hope you like it once it reaches you :)
[1] https://workspaceupdates.googleblog.com/2022/03/compose-with...
Thanks!
Google Colab also supports Markdown input cells as "Text" with a preview.
Does this work with Google Sites?
How to create a Google Sites page from a Markdown doc, with MyST Markdown YAML front matter
NotebookLM can generate Python, LaTeX, and Markdown.
How to Markdown and Git diff on a Chromebook without containers or Inspect Element because [...]
How to auto-grade Jupyter Notebooks with Markdown prose, with OtterGrader
How to build a jupyter-book from .rst, MyST Markdown .md, and .ipynb jupyter/nbformat notebooks containing MyST Markdown
Laser nanofabrication inside silicon with spatial beam modulation and seeding
"Laser nanofabrication inside silicon with spatial beam modulation and anisotropic seeding" (2024) https://www.nature.com/articles/s41467-024-49303-z :
> Abstract: Nanofabrication in silicon, arguably the most important material for modern technology, has been limited exclusively to its surface. Existing lithography methods cannot penetrate the wafer surface without altering it, whereas emerging laser-based subsurface or in-chip fabrication remains at greater than 1 μm resolution. In addition, available methods do not allow positioning or modulation with sub-micron precision deep inside the wafer. The fundamental difficulty of breaking these dimensional barriers is two-fold, i.e., complex nonlinear effects inside the wafer and the inherent diffraction limit for laser light. Here, we overcome these challenges by exploiting spatially-modulated laser beams and anisotropic feedback from preformed subsurface structures, to establish controlled nanofabrication capability inside silicon. We demonstrate buried nanostructures of feature sizes down to 100 ± 20 nm, with subwavelength and multi-dimensional control; thereby improving the state-of-the-art by an order-of-magnitude. In order to showcase the emerging capabilities, we fabricate nanophotonics elements deep inside Si, exemplified by nanogratings with record diffraction efficiency and spectral control. The reported advance is an important step towards 3D nanophotonics systems, micro/nanofluidics, and 3D electronic-photonic integrated systems.
- "Researchers achieve unprecedented nanostructuring inside silicon" https://phys.org/news/2024-07-unprecedented-nanostructuring-...
Film 'built atom-by-atom' moves electrons 7x faster than semiconductors
"Quantum microscopy study makes electrons visible in slow motion" https://phys.org/news/2024-07-quantum-microscopy-electrons-v... .. https://news.ycombinator.com/item?id=40981039 :
> If you change a few atoms at the atomic level, the macroscopic properties remain unchanged. For example, metals modified in this way are still electrically conductive, whereas insulators are not.
> However, the situation is different in more advanced materials, which can only be produced in the laboratory—minimal changes at the atomic level cause new macroscopic behavior. For example, some of these materials suddenly change from insulators to superconductors, i.e., they conduct electricity without heat loss.
> These changes [to superconductivity, for example] can happen extremely quickly, within picoseconds, as they influence the movement of electrons through the material directly at the atomic scale. A picosecond is extremely short, just a trillionth of a second. It is in the same proportion to the blink of an eye as the blink of an eye is to a period of more than 3,000 years.
> [...] The researchers were able to optimize their microscope in such a way that it repeats the experiment 41 million times per second and thus achieves a particularly high signal quality.
> The [crystalline ternary tetradymite] film — measuring just 100 nanometers wide, or about one-thousandth of the thickness of a human hair — was created through a process called molecular beam epitaxy, which involves precisely controlling beams of molecules to build a material atom-by-atom. This process allows materials to be constructed with minimal flaws or defects, enabling greater electron mobility, a measure of how easily electrons move through a material under an electric field.
> When the scientists applied an electric current to the film, they recorded electrons moving at record-breaking speeds of 10,000 centimeters squared per volt-second (cm^2/V-s). By comparison, electrons typically move at about 1,400 cm^2/V-s in standard silicon semiconductors, and considerably slower in traditional copper wiring.
MBE: Molecular-Beam Epitaxy: https://en.wikipedia.org/wiki/Molecular-beam_epitaxy
Quantum microscopy study makes electrons visible in slow motion
"Terahertz spectroscopy of collective charge density wave dynamics at the atomic scale" (2024) https://www.nature.com/articles/s41567-024-02552-7
Tlsd: Generate (message) sequence diagrams from TLA+ state traces
this looks pretty sweet. tla+ looks scary and mathy at first, but it's really a pretty simple concept: define some state machines, explore the state space, see if you run into trouble. with this you get a nice classic sequence diagram to show how things went down.
TLAplus: https://en.wikipedia.org/wiki/TLA%2B
A learnxinyminutes for TLA+ might be helpful: https://learnxinyminutes.com/
awesome-tlaplus > Books, (University) courses teaching (with) TLA+: https://github.com/tlaplus/awesome-tlaplus#books
Have you looked at using mermaidjs instead of SVG as an output? It might translate easier and plug into other solutions easier.
blockdiag > seqdiag is another syntax for Unicode sequence diagrams, optionally in ReST or MyST in Sphinx docs: http://blockdiag.com/en/seqdiag/examples.html#edge-types
blockdiag > nwdiag > rackdiag does server rack charts: http://blockdiag.com/en/nwdiag/
Otherwise mermaidjs probably has advantages including Jupyter Notebook support.
E.g. Gephi supports JSON of some sort. Supported graph formats: https://gephi.org/users/supported-graph-formats/
More graph and edge layout options might help with larger traces
yEd has many graph layout algorithms and parameters and supports GraphML XML; yEd: https://en.wikipedia.org/wiki/YEd
MermaidJS docs > Syntax > sequenceDiagram: https://mermaid.js.org/syntax/sequenceDiagram.html
It doesn't seem any of these would naturally be able to express the example case https://raw.githubusercontent.com/eras/tlsd/master/doc/seque... . But maybe it's possible?
In addition there's also the case of messages that are never processed, but I suppose that could be its own "never handled" box.
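For reference, a minimal MermaidJS sequenceDiagram in that syntax (participant names are illustrative); the cross-ended `-x` arrow is one way to mark a message that is sent but never handled:

```
sequenceDiagram
    participant W as Writer
    participant S as Store
    W->>S: Write(key, v1)
    activate S
    S-->>W: Ack
    deactivate S
    S-xW: Notify (never handled)
```

Whether this scales to a full TLA+ state trace with overlapping message lifetimes is the open question raised above.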
Lasers Could Solve the Plastic Problem
"Light-driven C–H activation mediated by 2D transition metal dichalcogenides" (2024) https://www.nature.com/articles/s41467-024-49783-z
> The researchers used low-power light to break the chemical bonding of the plastics and create new chemical bonds that turned the materials into luminescent carbon dots. Carbon-based nanomaterials are in high demand because of their many capabilities, and these dots could potentially be used as memory storage devices in next-generation computer devices. [...]
> Potential for Broader Applications: The specific reaction is called C-H activation, where carbon-hydrogen bonds in an organic molecule are selectively broken and transformed into a new chemical bond. In this research, the two-dimensional materials catalyzed this reaction that led to hydrogen molecules morphing into gas. That cleared the way for carbon molecules to bond with each other to form the information-storing dots.
> Further research and development are needed to optimize the light-driven C-H activation process and scale it up for industrial applications. However, this study represents a significant step forward in the quest for sustainable solutions to plastic waste management.
Nanorobot with hidden weapon kills cancer cells
Looks like gold nanoparticles kill glioblastoma and colon cancer cells: https://news.ycombinator.com/item?id=40819864 :
>> He emphasizes, "Any scientist can already use our model at the design stage of their own research to instantly narrow down the number of nanoparticle variants requiring experimental verification."
>> "Modeling Absorption Dynamics of Differently Shaped Gold Glioblastoma and Colon Cells Based on Refractive Index Distribution in Holotomographic Imaging" (2024) https://onlinelibrary.wiley.com/doi/10.1002/smll.202400778
> Holotomography: https://en.wikipedia.org/wiki/Holotomography
>>> Could part of the cancer-killing [200nm nanoparticle gold stars] be more magnetic than Gold (Au), for targeting treatment?
Magnets, nano robots, NIRS targeting,
"Whole-body magnetic resonance imaging at 0.05 Tesla" [1800W] https://www.science.org/doi/10.1126/science.adm7168 .. https://news.ycombinator.com/item?id=40335170
Article: ”A DNA Robotic Switch with Regulated Autonomous Display of Cytotoxic Ligand Nanopatterns” (2024) https://www.nature.com/articles/s41565-024-01676-4
Show HN: Kaskade – A text user interface for Kafka
Celery Flower TUI doesn't yet support Kafka in Python, though Kombu (the protocol wrapper for celery) does. It looks like Kaskade has non-kombu support for consumers and ? https://github.com/celery/celery/issues/7674#issuecomment-12... .. https://news.ycombinator.com/item?id=40842365
(Edit)
Kaskade models.py, consumer.py https://github.com/sauljabin/kaskade/blob/main/kaskade/model...
kombu/transport/confluentkafka.py: https://github.com/celery/kombu/blob/main/kombu/transport/co...
confluent-kafka-python wraps librdkafka with binary wheels: https://github.com/confluentinc/confluent-kafka-python
librdkafka: https://github.com/edenhill/librdkafka
Researchers demonstrate how to build 'time-traveling' quantum sensors
"Agnostic Phase Estimation" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... :
> Abstract: The goal of quantum metrology is to improve measurements’ sensitivities by harnessing quantum resources. Metrologists often aim to maximize the quantum Fisher information, which bounds the measurement setup’s sensitivity. In studies of fundamental limits on metrology, a paradigmatic setup features a qubit (spin-half system) subject to an unknown rotation. One obtains the maximal quantum Fisher information about the rotation if the spin begins in a state that maximizes the variance of the rotation-inducing operator. If the rotation axis is unknown, however, no optimal single-qubit sensor can be prepared. Inspired by simulations of closed timelike curves, we circumvent this limitation. We obtain the maximum quantum Fisher information about a rotation angle, regardless of the unknown rotation axis. To achieve this result, we initially entangle the probe qubit with an ancilla qubit. Then, we measure the pair in an entangled basis, obtaining more information about the rotation angle than any single-qubit sensor can achieve. We demonstrate this metrological advantage using a two-qubit superconducting quantum processor. Our measurement approach achieves a quantum advantage, outperforming every entanglement-free strategy.
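For context (a standard result, not from the paper): for a pure probe state $|\psi\rangle$ evolving under $U(\theta) = e^{-i\theta H}$, the quantum Fisher information is

$$F_Q = 4\,\mathrm{Var}_{\psi}(H) = 4\left(\langle\psi|H^2|\psi\rangle - \langle\psi|H|\psi\rangle^2\right),$$

which is why, as the abstract says, a state maximizing the variance of the rotation-inducing operator maximizes sensitivity, via the quantum Cramér–Rao bound $\Delta\theta \geq 1/\sqrt{\nu F_Q}$ for $\nu$ trials.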
- "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 .. https://news.ycombinator.com/item?id=40396171
- "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... .. https://news.ycombinator.com/item?id=40492160 re photonic phase
Free-threaded CPython is ready to experiment with
Will there be an effort to encourage devs to add support for free-threaded Python like for Python 3 [1] and for Wheels [2]?
Is there a cibuildwheel / CI check for free-threaded Python support?
Is there already a reason not to have Platform compatibility tags for free-threaded cpython support? https://packaging.python.org/en/latest/specifications/platfo...
Is there a hame - a hashtaggable name - for this feature to help devs find resources to help add support?
Can an LLM almost port in support for free-threading in Python, and how should we expect the tests to be insufficient?
"Porting Extension Modules to Support Free-Threading" https://py-free-threading.github.io/porting/
[1] "Python 3 "Wall of Shame" Becomes "Wall of Superpowers" Today" https://news.ycombinator.com/item?id=4907755
(Edit)
Compatibility status tracking: https://py-free-threading.github.io/tracking/
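A quick runtime check for whether an interpreter is a free-threaded build; a minimal sketch, assuming only the stdlib (`Py_GIL_DISABLED` is the CPython build-config flag, and `sys._is_gil_enabled()` is present on 3.13+ only, so older builds are treated as GIL-on):

```python
import sys
import sysconfig

def is_free_threaded_build() -> bool:
    # Py_GIL_DISABLED is set for CPython builds compiled with --disable-gil
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

def gil_currently_enabled() -> bool:
    # sys._is_gil_enabled() exists on 3.13+; assume the GIL is on elsewhere
    fn = getattr(sys, "_is_gil_enabled", None)
    return fn() if fn is not None else True

print(is_free_threaded_build(), gil_currently_enabled())
```

A check like this is how a package could branch its behavior (or skip tests) on free-threaded builds.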
(2021) https://news.ycombinator.com/item?id=29005573#29009072 :
python-feedstock / recipe / meta.yml: https://github.com/conda-forge/python-feedstock/blob/master/...
pypy-meta-feedstock can be installed in the same env as python-feedstock; https://github.com/conda-forge/pypy-meta-feedstock/blob/main...
Install commands from https://py-free-threading.github.io/installing_cpython/ :
sudo dnf install python3.13-freethreading
sudo add-apt-repository ppa:deadsnakes
sudo apt-get update
sudo apt-get install python3.13-nogil
conda create -n nogil -c defaults -c ad-testing/label/py313_nogil python=3.13
mamba create -n nogil -c defaults -c ad-testing/label/py313_nogil python=3.13
TODO: conda-forge ?, pixi
Student uses black soldier flies to grow pea plants in simulated Martian soil
Hydroponics shows that it's possible to grow most plants in water, no soil at all. So soil is by no means a requirement for plants to grow.
Source: Several Kratky method veggies growing on my balcony.
But don’t you also need to bring fertilizer with you? Also, from what I remember, you can’t really grow plants with woody stalks, or at least not as easily as ones with all-green stalks.
You would need some, but a lot less than you might think. For my plants, about 3-5 ml per litre of water is all that's needed.
Is there a KNF/JADAM hydro / aqua solution that would work?
NPK: Nitrogen, Phosphorous, Potassium
From https://www.space.com/16903-mars-atmosphere-climate-weather.... :
> According to ESA, Mars' atmosphere is composed of 95.32% carbon dioxide, 2.7% nitrogen, 1.6% argon and 0.13% oxygen. The atmospheric pressure at the surface is 6.35 mbar, which is over 100 times less than Earth's. Humans therefore cannot breathe Martian air.
Lichen might grow in Martian soil and atmosphere.
"‘Fixed’ nitrogen found in martian soil" (2015) https://www.science.org/doi/10.1126/science.347.6229.1403-a :
> Now, in a study published this week in the Proceedings of the National Academy of Sciences, the NASA Curiosity rover team reports detecting nitrates on Mars. ; nitric oxides
/? do plants consume protein? https://www.google.com/search?q=do+plants+consume+protein
Plants can use protein as a source of nitrogen.
"Plants can use protein as a nitrogen source without assistance from other organisms" (2008) https://www.pnas.org/doi/abs/10.1073/pnas.0712078105
"Solein® transforms ancient microbes into the future of food" https://www.solein.com/blog/solein-transforms-ancient-microb... :
> In Solein's case, the bacteria oxidise hydrogen – a process involving the removal of electrons from the hydrogen molecules. This reaction releases energy, which the bacteria use to fix carbon dioxide, transforming it into organic compounds, including proteins.
"Scientists Just Accidentally Discovered a Process That Turns CO2 Directly Into Ethanol" (2024) https://www.sciencealert.com/scientists-just-accidentally-di... :
> [...] Rondinone [@ORNL] and his colleagues had put together a catalyst using carbon, copper, and nitrogen, by embedding copper nanoparticles into nitrogen-laced carbon spikes measuring just 50-80 nanometres tall. (1 nanometre = one-millionth of a millimetre.)
> When they applied an electric current of just 1.2 volts, the catalyst converted a solution of CO2 dissolved in water into ethanol, with a yield of 63 percent.
Algae produce amino acids and proteins. Algae also use CO2 and Hydrogen to produce Omega-3 PUFAs, which are precursors to endocannabinoids.
FWIU Hydrogen Peroxide can flush aquarium tanks and Kratky systems. From watching YouTube, IDK about pool noodle polyethylene foam in the sun (solar radiation) instead of rockwool though
Narwhals and scikit-Lego came together to achieve dataframe-agnosticism
Narwhals: https://narwhals-dev.github.io/narwhals/ :
> Extremely lightweight compatibility layer between [pandas, Polars, cuDF, Modin]
Lancedb/lance works with [Pandas, DuckDB, Polars, Pyarrow,]; https://github.com/lancedb/lance
It may already be supported through pandas?
PHET Simulations: Balancing Act: LHS, Relation Operator, RHS
This PHET simulation says it only teaches ratios ("6.RP.A.3",), but I think it also teaches algebra!
The balance beam is not balanced when the equality relation does not hold: there is then an inequality relation between the masses given the position of the fulcrum.
Adding the same amount of mass to each side does not change whether the balance remains balanced (and subtraction is the inverse of addition, and integer multiplication is repeated addition).
Given a different amount of mass on each side of the balance, what is the mathematical analogue of balancing the equation by just moving the fulcrum?
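Moving the fulcrum is the analogue of division: the beam balances when the torques m1·d1 and m2·d2 are equal, so the fulcrum splits the beam in inverse proportion to the masses. A small sketch (function name is my own):

```python
# Torque balance: the beam balances when m1*d1 == m2*d2,
# with d1, d2 the distances of each mass from the fulcrum.
def fulcrum_position(m1: float, m2: float, length: float) -> float:
    """Distance of the fulcrum from mass m1 so that m1*d1 == m2*(length - d1)."""
    return m2 * length / (m1 + m2)

d1 = fulcrum_position(2.0, 4.0, 3.0)  # the heavier side gets the shorter arm
print(d1)  # 2.0: torque check is 2.0*2.0 == 4.0*(3.0 - 2.0)
```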
Pyxel: A retro game engine for Python
These retro game engines are so much fun. Takes me back to the days of mode 13h.
Pyxel is (I think) unique among Python game engines in that it can run on the web.
Some others I’ve played with are PyGame and Arcade, mostly geared toward 2D, but you can see some impressive 3D examples on the youtube channel DaFluffyPotato.
Ursina is another that’s more 3D, fairly expressive, and runs fairly well for being Python.
I do feel like I’m going to be forced to cross over into something more powerful to build a real game though. Either Godot or Unity.
Panda3D [0] (which is what Ursina uses under the hood) and Pygame can both run on the web due to PygBag [1].
Truth be told you can build a game on any tool; obviously the tool you choose will help shape the game you make, but it's more about keeping at it than the underlying technology.
Personally I really like Panda3D and feel like it doesn't get enough attention. Its scene graph [3] is interesting because it splits it into nodes and nodepaths. A node is what gets stored in the graph, but you manipulate them using the nodepaths, which simplifies programming.
It also has a really amazing async task manager [2] which makes game programming no problem. You can just pause in a task (or even an event), which sounds simple, but you'd be surprised by how many engines won't let you do that.
It also has a multiplayer solution [6] that was battle-tested for two MMOs.
Finally, I really like its interval [4] system, which is like Unreal or Unity's timeline but code-based.
It's also on pypi [5] so super easy to install.
[1] https://pypi.org/project/pygbag/
[2] https://docs.panda3d.org/1.10/python/programming/tasks-and-e...
[3] https://docs.panda3d.org/1.10/python/more-resources/cheat-sh...
[4] https://docs.panda3d.org/1.10/python/programming/intervals/i...
[5] https://pypi.org/project/Panda3D/
[6] https://docs.panda3d.org/1.10/python/programming/networking/...
harfang-wasm is a fork of pygbag.
harfang-wasm: https://github.com/harfang3d/harfang-wasm
pygbag: https://github.com/pygame-web/pygbag
https://news.ycombinator.com/item?id=38772400 :
> FWIU e.g. panda3d does not have a react or rxpy-like API, but probably does have a component tree model?
Is there a react-like api over panda3d, or are there only traditional events?
The redux DevTools extension also works with various non-react+redux JS frameworks.
Manim has a useful API for teaching. Is there a good way to do panda3d with a manim-like interface, for scripting instructional design? https://github.com/ManimCommunity/manim/issues/3362#issuecom...
I haven't used React so I'm not sure but you could check their discourse. Someone may have created one.
I believe using intervals and async tasks/events you could do something similar to Manim.
Tunable quantum emitters on large-scale foundry silicon photonics
"Tunable quantum emitters on large-scale foundry silicon photonics" (2024) https://www.nature.com/articles/s41467-024-50208-0 :
> Abstract: Controlling large-scale many-body quantum systems at the level of single photons and single atomic systems is a central goal in quantum information science and technology. Intensive research and development has propelled foundry-based silicon-on-insulator photonic integrated circuits to a leading platform for large-scale optical control with individual mode programmability. However, integrating atomic quantum systems with single-emitter tunability remains an open challenge. Here, we overcome this barrier through the hybrid integration of multiple InAs/InP microchiplets containing high-brightness infrared semiconductor quantum dot single photon emitters into advanced silicon-on-insulator photonic integrated circuits fabricated in a 300 mm foundry process. With this platform, we achieve single-photon emission via resonance fluorescence and scalable emission wavelength tunability. The combined control of photonic and quantum systems opens the door to programmable quantum information processors manufactured in leading semiconductor foundries.
Show HN: I built GithubReader.org for better learning experience
Hey Guys, I created GithubReader.org (open-source, non-profit, most of the code is written by ChatGPT).
My motivation:
I wanted an easy way to keep a list of repositories I want to learn from and access them quickly. So, I built a search page where I can find repositories by name, like Google.
Also, I wanted a nicer way to read Markdown files on my laptop without the GitHub UI, like reading a book. So, I made a custom UI with a back button that takes you to the previous state without jumping pages.
The markdown links are sharable, e.g. https://githubreader.org/render?url=https%3A%2F%2Fgithub.com...
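Building a sharable link is just percent-encoding the target URL into the `url` query parameter; a sketch using the stdlib (the helper name and the example target are my own, the endpoint is from the link above):

```python
from urllib.parse import urlencode

def reader_link(github_url: str) -> str:
    # Percent-encode the target URL as the `url` query parameter
    return "https://githubreader.org/render?" + urlencode({"url": github_url})

print(reader_link("https://github.com/agdaily/github-reader/blob/main/README.md"))
```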
Repository URL: https://github.com/agdaily/github-reader/
The auto complete search is nice. I just found a repo (for linked-data) I wouldn't have otherwise.
ENH: search github labels with query patterns
glad you liked it. thanks for the suggestion, added it here https://github.com/agdaily/github-reader/issues/3
Show HN: A modern Jupyter client for macOS
I love Jupyter – it's how I learned to code back when I was working as a scientist. But I was always frustrated that there wasn't a simple and elegant app that I could use with my Mac. I made do by wrapping JupyterLab in a chrome app, and then more recently switching to VS Code to make use of Copilot. I've always craved a more focused and lighter-weight experience when working in a notebook. That's why I created Satyrn.
It starts up really fast (faster time-to-execution than VS Code or JupyterLab), you can launch notebooks right from the Finder, and the design is super minimalist. It's got an OpenAI integration (use your own API key) for multi-cell generation with your notebook as context (I'll add other LLMs soon). And many more useful features like a virtual environment management UI, Black code formatting, and easy image/table copy buttons.
Full disclosure: it's built with Electron. I originally wrote it in Swift but couldn't get the editor experience to where I wanted it. Now it supports autocomplete, multi-cursor editing, and moving the cursor between cells just like you'd expect from JupyterLab or VS Code.
Satyrn sits on top of the jupyter-server, so it works with all your existing python kernels, Jupyter configuration, and ipynb files. It only works with local files at the moment, but I'm planning to extend it to support remote servers as well.
I'm an indie developer, and I will try to monetize at some point, but it's free while in alpha. If you're interested, please try it out!
I'd love your feedback in the comments, or you can contact me at jack-at-satyrn-dot-app.
Cool!
Surprised to hear you started with a native UI and pivoted to electron. What was the major blocker there?
I recently got frustrated with OpenSCAD and decided to try CadQuery and Build123d. The modeling backend is a big step forward, but the GUI is not nearly as good as OpenSCAD. I managed to get it working via VSCode with a plugin, but I’m dreaming of embedding everything in a dedicated MacOS app so I can jump into CAD work without hacking through dev setup.
There aren't many great production-ready open-source frameworks for code-editor components in Swift. I assessed quite a few but found that the feature completeness was far from what I needed. I tried to fork [CodeEditSourceEditor](https://github.com/CodeEditApp/CodeEditSourceEditor) and add the extra features I wanted, but I think it would have taken me 6-12 months to get it to an acceptable state, meanwhile not spending any time focusing on the rest of the product experience.
I decided to play around with Typescript and Electron over a weekend and ended up getting a really solid prototype so I made the heart wrenching decision to move over.
I'm messing around with writing my own text editor component in Swift now, but it's quite a big endeavour to get the standard expected for a production ready product.
I'm assuming a pure-swift CAD UI would be equally difficult. Would be really cool to see that tho.
FWIW I've had good experiences with Tauri so far. It's great being able to call back easily into Rust code and I haven't had any real issues with the wrapped web view.
At this point I'd start with Tauri first and only switch to Electron if I really need something only Chrome supports or if I need to support Linux. I guess the webview story there is not so good.
Tauri supports linux.
tauri-apps/tauri: https://github.com/tauri-apps/tauri :
> The user interface in Tauri apps currently leverages tao as a window handling library on macOS, Windows, Linux, Android and iOS. To render your application, Tauri uses WRY, a library which provides a unified interface to the system webview, leveraging WKWebView on macOS & iOS, WebView2 on Windows, WebKitGTK on Linux and Android System WebView on Android. ... Tauri GitHub action: https://tauri.app/v1/guides/building/cross-platform/#tauri-g...
WebView: https://en.wikipedia.org/wiki/WebView
CEF: Chromium Embedded Framework > External Projects: https://github.com/chromiumembedded/cef#external-projects : cefglue, cefpython,: https://github.com/cztomczak/cefpython
(edit)
tauri-apps/wry > Bundle Chromium Renderer; Chromium WebView like WebView2 (Edge (Chromium)) https://github.com/tauri-apps/wry/issues/1064#issuecomment-2...
Gravity alters the dynamics of a phase transition
> An experiment uncovers the role played by gravity in Ostwald ripening, a spontaneous thermodynamic process responsible for many effects such as the recrystallization of ice cream.
Ostwald ripening: https://en.wikipedia.org/wiki/Ostwald_ripening
And yet another parameter to phase transition diagrams:
Mpemba effect: https://en.wikipedia.org/wiki/Mpemba_effect :
> The Mpemba effect is the name given to the observation that a liquid (typically water) which is initially hot can freeze faster than the same liquid which begins cold, under otherwise similar conditions
From "Exploring Quantum Mpemba Effects" (2024) https://physics.aps.org/articles/v17/105 :
> Under certain conditions, warm water can freeze faster than cold water. This phenomenon was named the Mpemba effect after Erasto Mpemba, a Tanzanian high schooler who described the effect in the 1960s [1]. The phenomenon has sparked intense debates for more than two millennia and continues to do so [2]. Similar processes, in which a system relaxes to equilibrium more quickly if it is initially further away from equilibrium, are being intensely explored in the microscopic world. Now three research teams provide distinct perspectives on quantum versions of Mpemba-like effects, emphasizing the impact of strong interparticle correlations, minuscule quantum fluctuations, and initial conditions on these relaxation processes [3–5]. The teams’ findings advance quantum thermodynamics and have potential implications for technologies, ranging from information processors to engines, powered by quantum resources.
Revealing the complex phases of rhombohedral trilayer graphene
> Rhombohedral graphene is an emerging material with a rich correlated-electron phenomenology, including superconductivity. The magnetism of symmetry-broken trilayer graphene has now been explored,
"Revealing the complex phases of rhombohedral trilayer graphene" (2024) https://www.nature.com/articles/s41567-024-02561-6
"Physicists create five-lane superhighway for electrons" https://news.ycombinator.com/item?id=40417929 :
> [ rhombohedral ]
> "Correlated insulator and Chern insulators in pentalayer rhombohedral-stacked graphene" (2023) https://www.nature.com/articles/s41565-023-01520-1 :
> [...] Our results establish rhombohedral multilayer graphene as a suitable system for exploring intertwined electron correlation and topology phenomena in natural graphitic materials without the need for moiré superlattice engineering.
And there's semiconductivity in graphene, too:
- "Researchers create first functional graphene semiconductor" (2024) https://news.ycombinator.com/item?id=38869171
- "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" https://news.ycombinator.com/item?id=40719725
Electric Vehicle Batteries Surprising New Source of 'Forever Chemical' Pollution
This is more fear mongering, a giant nothing burger. There will be huge advantages to recycling these batteries as the technologies come out, and dealing with the PFAS will be on the recycler.
> There will be
> will be on the recycler
Given the current state of recycling in general, I doubt this is a nothing burger.
Lead-acid batteries have very good recycling rates.
Yes, that is because in many places it is a law, plus recycling them is a money making industry.
I predict that it will be the same for lithium in the near future.
Based on what? Where are the economics to back that feeling up?
The first pilot plants are being built, e.g. https://electrek.co/2023/03/02/tesla-cofounders-redwood-show...
Redwood Materials: https://en.wikipedia.org/wiki/Redwood_Materials :
> In March 2023 Redwood claimed to have recovered more than 95% of important metals (incl. lithium, cobalt, nickel and copper) from 500,000 lb (230,000 kg) of old NiMH and Li-Ion packs. [10]
That's without $10K humanoid robots sorting through electronics and removing batteries IIUC.
Recovering rare earths from electronics waste should be even more profitable now. Just look at Copenhill; how many inputs and how many outputs?
"Turning waste into gold" (2024) https://www.sciencedaily.com/releases/2024/02/240229124612.h... :
> Researchers have recovered gold from electronic waste. Their highly sustainable new method is based on a protein fibril sponge, which the scientists derive from whey, a food industry byproduct. ETH Zurich researchers have recovered the precious metal from electronic waste.
"[Star-shaped] Gold nanoparticles kill cancer – but not as thought" (2024) https://news.ycombinator.com/item?id=40819864
Butter made from CO2 could pave the way for food without farming
I can't find any info.
Last time I read a similar "groundbreaking" result, they were using bacteria instead of cows.
With cows, you have CO2 + sunlight -> grass -> butter.
With bacteria it's a similar process, but you skip the grass (if they are photosynthetic), or you just replace the grass with another intermediate step.
Violation of Bell inequality by photon scattering on a two-level emitter
"Violation of Bell inequality by photon scattering on a two-level emitter" (2024) https://www.nature.com/articles/s41567-024-02543-8
- "Quantum dot photon emitters violate Bell inequality in new study" https://phys.org/news/2024-07-quantum-dot-photon-emitters-vi...
Bell's Theorem: https://en.wikipedia.org/wiki/Bell%27s_theorem
From "Scientists show that there is indeed an 'entropy' of quantum entanglement" (2024) https://news.ycombinator.com/item?id=40396001#40396211 :
> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)
"Reversibility of quantum resources through probabilistic protocols" (2024) https://www.nature.com/articles/s41467-024-47243-2
"Thermodynamics of Computations with Absolute Irreversibility, Unidirectional Transitions, and Stochastic Computation Times" (2024) https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.02...
End-to-end congestion control cannot avoid latency spikes (2022)
- "MOOC: Reducing Internet Latency: Why and How" (2023) https://news.ycombinator.com/item?id=37285586#37285733 ; sqm-autorate, netperf, iperf, flent, dslreports_8dn
Bufferbloat > Solutions and mitigations: https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_miti...
How I've Learned to Live with a Nonexistent Working Memory
Can't read without an account.
Also "working memory" is short-term storage, like registers, the details of what you're thinking about right now. Not memory of past events, like your wedding day.
Yeah I have no idea what definition the OP had in mind. Nonexistent working memory must be devastating:
Working memory is a cognitive system with a limited capacity that can hold information temporarily.
It is important for reasoning and the guidance of decision-making and behavior.
https://en.m.wikipedia.org/wiki/Working_memory
Memory improvement > Strategies: https://en.wikipedia.org/wiki/Memory_improvement#Strategies
Hippocampal prosthesis, Cortical implant: https://en.wikipedia.org/wiki/Hippocampal_prosthesis
Cortical implant: https://en.wikipedia.org/wiki/Cortical_implant
Digital immortality: https://en.wikipedia.org/wiki/Digital_immortality
External memory: https://en.wikipedia.org/wiki/External_memory_(psychology)
Interesting. I’m bipolar and I can absolutely say core training does not work for me. My working memory has a hard limit on the number of tokens it can store.
What does work is encoding complex ideas into longer term memory so they become one token instead of many. I typically shove them into visual memory.
The best way I can explain it is a terrain map. The map has trees which are flow charts, lakes which are distillations of many ideas into a small pools of concepts. Valleys of questions. Eventually everything is in my “mind’s eye” and I can figuratively fly over the data.
Now take that image and remove the visuals. My brain “sees” the map without needing to describe it. It exists as pure, untranslated thought. Intermediate steps, like language, would only get in the way.
Linguistic relativity; does language construct cognition or vice-versa? https://en.wikipedia.org/wiki/Linguistic_relativity
"List of concept- and mind-mapping software" https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapp...
- markmap: markdown + mindmap https://markmap.js.org/ , markmap-vs-code
- vim-voom: https://vim-voom.github.io/
- org-mode, nvim-org-mode
- vscode markdown outline: https://code.visualstudio.com/docs/languages/markdown#_docum...
Many a time back in the day I wondered why Word's outline editor mode imposed stylesheet styles on node levels at all.
Not a fan of the Sapir–Whorf hypothesis. For myself my brain clearly functions at a lower level of abstraction. Language is an imperfect translation of my actual ideas.
If the hypothesis has any truth at all, it cannot be universal. It doesn’t apply to me.
If someone has thoughts they cannot express in words, are they truly limited by the language they speak?
OneFileLinux: A 20MB Alpine metadistro that fits into the ESP
I made this exact thing like 2 years ago except with a custom busybox based environment, huh
It worked by building the system into the initramfs, then using a kernel feature to basically bake the initramfs into the kernel image, and then using efistub to make the kernel bootable
Efistub
Kernel docs > admin-guide/efistub: https://docs.kernel.org/admin-guide/efi-stub.html :
> Like most boot loaders, the EFI stub allows the user to specify multiple initrd files using the “initrd=” option. This is the only EFI stub-specific command line parameter, everything else is passed to the kernel when it boots
Arch Wiki > EFISTUB: https://wiki.archlinux.org/title/EFISTUB :
> The option is enabled by default on Arch Linux kernels, or if compiling the kernel one can activate it by setting CONFIG_EFI_STUB=y in the Kernel configuration.
eaut/efistub is a script for managing signed efistubs: https://github.com/eaut/efistub
... Systemrescuecd can be PXE booted (netbooted) from tftpd managed by Cobbler or Foreman.
From "Booting Linux off of Google Drive" https://news.ycombinator.com/item?id=40853770#40859485 re: signing and iPXE
Ventoy boots via grub/efi but there are blobs: https://news.ycombinator.com/item?id=40619822#40620279
"Ask HN: Where can I find a primer on how computers boot?" https://news.ycombinator.com/item?id=35230852
systemd-boot and grub support booting to efistubs
"Migrate from systemd-boot to EFISTUB" https://www.reddit.com/r/archlinux/comments/14jt8yv/migrate_... :
> You don't "disable" systemd-boot, you just don't run it. Or you can delete the boot entry for it with efibootmgr if you want. You also don't need to "set up" EFISTUB, the stub is already built into the Arch kernels, so they're already bootable. So all you need to do is make a boot entry in your UEFI using efibootmgr with the correct kernel parameters, most likely the same ones you used for systemd-boot.
Fedora > Changes/Unified_Kernel_Support_Phase_2 > Switch_an_existing_install_to_use_UKIs: https://fedoraproject.org/wiki/Changes/Unified_Kernel_Suppor... :
dnf install virt-firmware uki-direct kernel-uki-virt
kernel-bootcfg --show
https://github.com/spxak1/weywot/blob/main/guides/fedora_sys... : sudo dracut -fvM --uefi --hostonly-cmdline --kernel-cmdline "root=UUID=b6b8fa59-92cc-4d03-8d8f-d66dab76d433 ro rootflags=subvol=root resume=UUID=fb661671-97dc-45db-b720-062acdcf095e rhgb quiet mitigations=off"
- "No more boot loader: Please use the kernel instead" https://news.ycombinator.com/item?id=40907933Gentoo wiki > EFI_stub https://wiki.gentoo.org/wiki/EFI_stub :
> It is recommended to always have a backup kernel. If a bootmanager like grub is already installed, it should not be uninstalled, because grub can boot a stub kernel just like a normal kernel. A second possibility is to work with an additional UEFI entry. Before installing a new kernel, the current one can be copied from /efi/EFI/example/ to /efi/EFI/backup. [And then efibootmgr]
Circularly Polarized Light Unlocks Ultrafast Magnetic Storage and Spintronics
"Ultrafast opto-magnetic effects in the extreme ultraviolet spectral range" (2024) https://www.nature.com/articles/s42005-024-01686-7 :
> Coherent light-matter interactions mediated by opto-magnetic phenomena like the inverse Faraday effect (IFE) are expected to provide a non-thermal pathway for ultrafast manipulation of magnetism on timescales as short as the excitation pulse itself. As the IFE scales with the spin-orbit coupling strength of the involved electronic states, photo-exciting the strongly spin-orbit coupled core-level electrons in magnetic materials appears as an appealing method to transiently generate large opto-magnetic moments. Here, we investigate this scenario in a ferrimagnetic GdFeCo alloy by using intense and circularly polarized pulses of extreme ultraviolet radiation. Our results reveal ultrafast and strong helicity-dependent magnetic effects which are in line with the characteristic fingerprints of an IFE, corroborated by ab initio opto-magnetic IFE theory and atomistic spin dynamics simulations.
New shapes of photons open doors to advanced optical technologies
"Symmetries and wave functions of photons confined in three-dimensional photonic band gap superlattices" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.109.2...
PostgreSQL and UUID as Primary Key
The best advice I can give you is to use bigserial for B-tree friendly primary keys and consider a string-encoded UUID as one of your external record locator options. Consider other simple options like PNR-style (airline booking) locators first, especially if nontechnical users will quote them. It may even be OK if they’re reused every few years.
Do not mix PK types within the schema for a service or application, especially a line-of-business application. Use UUIDv7 only as an identifier for data that is inherently timecoded, otherwise it leaks information (even if timeshifted). Do not use hashids - they have no cryptographic qualities and are less friendly to everyday humans than the integers they represent; you may as well just use the sequence ID.
As for the encoding, do not use base64 or other hyphenated alphabets, nor any identifier scheme that can produce a leading ‘0’ (zero) or ‘+’ (plus) when encoded (for the day your stuff is pasted via Excel).
Generally, the principles of separation of concerns and mechanical sympathy should be top of mind when designing a lasting and purposeful database schema.
Finally, since folks often say “I like stripe’s typed random IDs” in these kinds of threads: Stripe are lying when they say their IDs are random. They have some random parts but when analyzed in sets, a large chunk of the binary layout is clearly metadata, including embedded timestamps, shard and reference keys, and versioning, in varying combinations depending on the service. I estimate they typically have 48-64 bits of randomness. That’s still plenty for most systems; you can do the same.

Personally I am very fond of base58-encoded AES-encrypted bigserial+HMAC locators with a leading type prefix and a trailing metadata digit, and you can in a pinch even do this inside the database with plv8.
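The prefixed-locator idea can be sketched without the AES/HMAC layers the commenter describes; this hypothetical `typed_id` helper just base58-encodes ~64 random bits behind a type prefix (base58 also avoids the leading-‘0’/‘+’ pitfalls mentioned above):

```python
import secrets

# Base58 alphabet: no 0/O/I/l, no '+', so Excel-paste hazards are avoided.
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def typed_id(prefix: str, random_bits: int = 64) -> str:
    """Stripe-style locator: type prefix + base58-encoded random payload."""
    n = secrets.randbits(random_bits)
    digits = ""
    while n:
        n, r = divmod(n, 58)
        digits = B58[r] + digits
    return f"{prefix}_{digits or B58[0]}"
```

`typed_id("cus")` yields something shaped like Stripe's customer IDs; the scheme described above additionally encrypts a sequence ID and appends an HMAC check digit, which this sketch omits.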
For postgres, you want to use “bigint generated always as identity” instead of bigserial.
Why “bigint generated always as identity” instead of bigserial, instead of Postgres' uuid data type?
Postgres' UUID datatype: https://www.postgresql.org/docs/current/datatype-uuid.html#D...
django.db.models.fields.UUIDField: https://docs.djangoproject.com/en/5.0/ref/models/fields/#uui... :
> class UUIDField: A field for storing universally unique identifiers. Uses Python’s UUID class. When used on PostgreSQL and MariaDB 10.7+, this stores in a uuid datatype, otherwise in a char(32)
> [...] Lookups on PostgreSQL and MariaDB 10.7+: Using iexact, contains, icontains, startswith, istartswith, endswith, or iendswith lookups on PostgreSQL don’t work for values without hyphens, because PostgreSQL and MariaDB 10.7+ store them in a hyphenated uuid datatype.
From the sqlalachemy.types.Uuid docs: https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqla... :
> Represent a database agnostic UUID datatype.
> For backends that have no “native” UUID datatype, the value will make use of CHAR(32) and store the UUID as a 32-character alphanumeric hex string.
> For backends which are known to support UUID directly or a similar uuid-storing datatype such as SQL Server’s UNIQUEIDENTIFIER, a “native” mode enabled by default allows these types will be used on those backends.
> In its default mode of use, the Uuid datatype expects Python uuid objects, from the Python uuid module
From the docs for the uuid Python module: https://docs.python.org/3/library/uuid.html :
> class uuid.SafeUUID: Added in version 3.7.
> safe: The UUID was generated by the platform in a multiprocessing-safe way
And there's not yet a uuid.uuid7() in the uuid Python module.
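Until the stdlib grows one, a minimal UUIDv7 can be assembled by hand; this sketch follows the RFC 9562 layout (48-bit Unix-millisecond timestamp, then version, 12 random bits, variant, and 62 random bits):

```python
import secrets
import time
import uuid

def uuid7() -> uuid.UUID:
    """Minimal UUIDv7 (RFC 9562 layout): 48-bit ms timestamp + 74 random bits."""
    ms = time.time_ns() // 1_000_000
    value = (ms & 0xFFFFFFFFFFFF) << 80   # unix_ts_ms in the top 48 bits
    value |= 0x7 << 76                    # version = 7
    value |= secrets.randbits(12) << 64   # rand_a
    value |= 0b10 << 62                   # variant = RFC 4122/9562
    value |= secrets.randbits(62)         # rand_b
    return uuid.UUID(int=value)
```

The embedded timestamp is also the leak being discussed here: `u.int >> 80` recovers the creation time in milliseconds from any such ID.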
UUIDv7 leaks timing information ( https://news.ycombinator.com/item?id=40886496 ); which is ironic because uuids are usually used to avoid the "guess an autoincrement integer key" issue
Just noting, the commenter you replied to said:
> use “bigint generated always as identity” instead of bigserial.
The commenter you are replying to was not saying anything about whether to use UUIDs or not; they just said "if you are going to use bigserial, you should use bigint generated always as identity instead".
The question is the same; why would you use bigint instead of the native UUID type?
Why does OT compare text and UUID instead of char(32) and UUID?
What advantage would there be for database abstraction libraries like SQLalchemy and Django to implement the UUID type with bigint or bigserial instead of the native pg UUID type?
Best practice in Postgres is to always use the text data type and combine it with check constraints when you need an exact length or max length.
See: https://wiki.postgresql.org/wiki/Don't_Do_This#Text_storage
Also, I think you're misunderstanding the article. They aren't talking about storing a uuid in a bigint. They're talking about have two different id's. An incrementing bigint is used internally within the db for PK and FK's. A separate uuid is used as an external identifier that's exposed by your API.
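That two-identifier pattern can be sketched in a few lines; this uses sqlite3 as a stand-in (in Postgres the PK column would be `bigint generated always as identity` and `public_id` a native `uuid`):

```python
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE account (
        id        INTEGER PRIMARY KEY,   -- internal PK; FKs point here
        public_id TEXT NOT NULL UNIQUE,  -- external locator exposed by the API
        name      TEXT NOT NULL
    )""")
db.execute("INSERT INTO account (public_id, name) VALUES (?, ?)",
           (str(uuid.uuid4()), "alice"))
internal_id, public_id = db.execute(
    "SELECT id, public_id FROM account WHERE name = 'alice'").fetchone()
```

Joins stay on the small, B-tree-friendly integer; only `public_id` ever leaves the service.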
What needs to be stored as text if there is a native uuid type?
Chapter 8. Data Types > Table 8.2. Numeric Types: https://www.postgresql.org/docs/current/datatype-numeric.htm... :
> bigint: -9223372036854775808 to +9223372036854775807
> bigserial: 1 to 9223372036854775807
2**63 - 1 == 9223372036854775807 (bigint is a signed 64-bit integer)
/? postgres bigint uuid: https://www.google.com/search?q=postgres+bigint+uuid :
- UUIDs are 128 bits, and they're unsigned, so: 0 to 2**128 - 1
- "UUID vs Bigint Battle!!! | Scaling Postgres 302" https://www.scalingpostgres.com/episodes/302-uuid-vs-bigint-...
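A quick check of the widths quoted above:

```python
# bigint is a signed 64-bit integer; a UUID is 128 unsigned bits.
BIGINT_MAX = 2**63 - 1    # 9223372036854775807, matching the Postgres docs
BIGINT_MIN = -(2**63)     # -9223372036854775808
UUID_COUNT = 2**128       # number of distinct UUID bit patterns
```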
"Reddit's photo albums broke due to Integer overflow of Signed Int32" https://news.ycombinator.com/item?id=33976355#33977924 re: IPv6 addresses having 64+64=128 bits
FWIW networkx has an in-memory Graph.relabel_nodes() method that assigns ints to unique node names in order to reduce RAM utilization for graph algorithms: https://networkx.org/documentation/stable/reference/generate...
Many people store UUIDs as text in the database. Needless to say, this is bad. TFA starts by proposing that it's bad, then does some tests to show why.
I'm not quite sure what all the links have to do with the topic at hand.
Which link are you concerned about the topicality of, in specific?
Shouldn't we then link to the docs on how many bits wide db datatypes are; whether a datatype is prefix- or suffix-searchable; whether there's data leakage in UUID namespacing with the primary NIC's MAC address and UUIDv7; and whether there would be overflow with a datatype less wasteful than text for UUIDs, when there is already a UUID datatype that one could argue to improve if there is a potential performance benefit?
Teaching general problem-solving skills is not a substitute for teaching math [pdf] (2010)
This comes down to the old saying "everything is memorization at the end of the day".
Some people literally memorize answers. Other folks memorize algorithms. Yet other folks memorize general collections of axioms/proofs and key ideas. And perhaps at the very top of this hierarchy is memorizing just generic problem solving strategies/learning strategies.
And while naively we might believe that "understanding is everything", it really isn't. Consider if you are in the middle of a calculus exam and need to evaluate $7 \times 8$ by calculating $7+7+7+7...$ and then proceed to count on your fingers up to 56 because even $7+7$ wasn't memorized. You're almost certainly not going to make it past the first problem on your exam even though you really do understand exactly what's going on.
Similar things are true for software engineering. If you have to stackoverflow every single line of code that you are attempting to write, all the way down to each individual print statement and array access, it doesn't fucking matter HOW well you understand what's going on or how clear your mental models are. You are simply not going to be a productive/useful person on a team.
At some point in order to be effective in any field you need to eventually just KNOW the field, meaning have memorized shortcuts and paths so that you only spend time working on the "real problem".
To really drive the point home. This is the difference between being "intelligent" versus "experienced".
> Consider if you are in the middle of a calculus exam and need to evaluate $7 \times 8$ by calculating $7+7+7+7...$ and then proceed to count on your fingers up to 56 because even $7+7$ wasn't memorized. You're almost certainly not going to make it past the first problem on your exam even though you really do understand exactly what's going on.
This is not a counterexample because exams aren't an end goal. The process of filling out exams isn't an activity that provides value to society.
If an exam poorly grades a student who would do great solving actual real-world problems, the exam is wrong. No ifs. No buts. The exam is wrong because it's failing the ultimate goal: school is supposed to increase people's value to society and help figure out where their unique abilities may be of most use.
> Similar things are true for software engineering. If you have to stackoverflow every single line of code that you are attempting to write all the way down to each individual print statement and array access it doesn't fucking matter HOW well you understand what's going on/how clear your mental models are. You are simply not going to be a productive/useful person on a team.
If their mental models are truly so amazing, they'd make a great (systems) architect without having to personally code much.
To know something includes speed of regurgitation. Consider a trauma surgeon. You want them to know, off the top of their head, lots of stuff. You don’t want them taking their time and looking things up. You don’t want them redefining everything from first principles each time they perform surgery.
Knowing a topic includes instant recall of a key body of knowledge.
Maybe survey engineers with a first order derivative question and a PDE question n years after graduation with credential?
CAS and automated tests wins again.
A robosurgeon tech that knows to stop and read the docs and write test assertions may have more total impact.
I’m ABD in math. It was 30 years ago that I decided to not get a Ph.D. because I realized that I was never going to be decent at research. In the last 30 years I have forgotten a great of mathematics. It is no longer true that I know Galois Theory. I used to know it and I know the basic idea behind and I believe I can fairly easily relearn it. But right now I don’t know it.
That's wild; we all use AES ciphers with TLS/HTTPS every day - and Galois fields are essential to AES - but few people understand how HTTPS works.
The field is probably onto post-AES, PQ algos where Galois Theory is less relevant; but back then, it seemed like everyone needed to learn Galois Theory, which may or may not be prerequisite to future study.
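Concretely, AES's byte arithmetic lives in the Galois field GF(2^8) modulo the polynomial x^8 + x^4 + x^3 + x + 1; a minimal multiplication routine, with the worked example from the AES spec (FIPS-197):

```python
def gmul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B),
    the field arithmetic AES uses in MixColumns."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B          # reduce modulo the AES polynomial
        b >>= 1
    return p

# Worked example from FIPS-197: {57} x {83} = {c1}
assert gmul(0x57, 0x83) == 0xC1
```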
The problem-solving skills are what's still useful.
Perhaps problem-solving skills cannot be developed without such rote exercises; and perhaps the content of such rote exercises is not relevant to predicting career success.
First anode-free sodium solid-state battery
Na4MnCr(PO4)3
Chromium is 5 times more abundant than Lithium in the Earth's crust (0.01% vs 0.002%). Better, but not that much?
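Abundance aside, chromium is only a modest fraction of the cathode material by mass; a back-of-envelope check using standard atomic weights:

```python
# Standard atomic weights (g/mol), rounded to 3 decimals.
W = {"Na": 22.990, "Mn": 54.938, "Cr": 51.996, "P": 30.974, "O": 15.999}

# Na4MnCr(PO4)3:
molar_mass = 4 * W["Na"] + W["Mn"] + W["Cr"] + 3 * (W["P"] + 4 * W["O"])
cr_mass_fraction = W["Cr"] / molar_mass   # roughly 11% of the cathode mass
```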
"Regular" sodium-ion batteries with Prussian blue have, it seems, the great advantage of not using any scarce elements. It would be nice to have a comparison between this solid-state chemistry and the regular one.
What are the processes for recovering Chromium (Cr) when recycling batteries after a few hundred cycles?
Is it Trivalent, Hexavalent, or another form of Chromium?
Chromium > Precautions: https://en.wikipedia.org/wiki/Chromium#Precautions
Erin Brockovich: https://en.wikipedia.org/wiki/Erin_Brockovich :
> [average Cr-6 levels in wells; public health] In October 2022, even though the EPA announced Cr-6 was likely carcinogenic if consumed in drinking water, The American Chemistry Council, an industry lobby group, disputed their finding. [18]
Hopefully it's dietary chromium, not Hexavalent chromium (Cr-6).
(My comment on this is unexplainedly downvoted to 0?)
Again, would battery recycling processes affect the molecular form of Chromium?
Using copper to convert CO₂ to methane
Actual title: "Using copper to convert CO₂ to methane could be game changer in mitigating climate change".
Is there demand for methane? Why are there so many methane flares at oil wells in Texas, then?
At the rate solar, wind, and batteries are coming along, carbon capture is a waste of time and resources. Price alone is going to eliminate most demand for carbon based fuels. This is happening much faster than expected. See last week's Economist.
How are they planning to produce methane for return flights from Mars? There is Copper on Mars.
"Copper nanoclusters: Selective CO2 to methane conversion beyond 1 A/cm²" (2024) https://www.sciencedirect.com/science/article/pii/S092633732...
Ask HN: Solutions for Sustainable Tires before 2050?
Almost all automotive tires are made of synthetic "rubber".
Synthetic rubber tires pollute microplastics which harm aquatic life including salmon.
So, we need to make tires out of different materials in order to spare the environment, society and public health, and economy.
What are some potential solutions for sustainable tire products and production?
"Here’s how Michelin plans to make its tires more renewable: The tire company wants a completely sustainable tire by 2050" (2024) https://arstechnica.com/cars/2024/07/heres-how-michelin-plan...
"Michelin Sustainable Tires Use a Rice Byproduct" https://www.motortrend.com/features/michelin-sustainable-ric...
OTOH, Dandelion rubber (Taraxagum), Hemp
Dandelion Rubber; Taraxagum.
- "Tire dust makes up the majority of ocean microplastics" https://news.ycombinator.com/item?id=37728005
- Taraxagum is extractable with just water FWIU
- Would GMO dandelions be easier to scale up with traditional and vertical gardening? How could dandelions be genetically engineered for use in sustainable tires? Larger stems, and fewer or no seeds to avoid clogging up vertical gardening.
- What are some uses for the non-stem waste part of dandelion flowers?
Hemp "rubber"
/? hemp rubber https://www.google.com/search?q=hemp+rubber
"New Green Polymeric Composites Based on Hemp and Natural Rubber Processed by Electron Beam Irradiation" (2014) https://onlinelibrary.wiley.com/doi/10.1155/2014/684047 51 citations : https://scholar.google.com/scholar?cites=1043509048813828783... :
> Abstract: A new polymeric composite based on natural rubber reinforced with hemp has been processed by electron beam irradiation and characterized by several methods. The mechanical characteristics: gel fraction, crosslink density, water uptake, swelling parameters, and FTIR of natural rubber/hemp fiber composites have been investigated as a function of the hemp content and absorbed dose. Physical and mechanical properties present a significant improvement as a result of adding hemp fibres in blends. Our experiments showed that the hemp fibers have a reinforcing effect on natural rubber similar to mineral fillers (chalk, carbon black, silica). The crosslinking rates of samples, measured using the Flory-Rehner equation, increase as a result of the amount of hemp in blends and the electron beam irradiation dose increasing. The swelling parameters of samples significantly depend on the amount of hemp in blends, because the latter have hydrophilic characteristics.
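The Flory-Rehner equation the abstract mentions relates equilibrium swelling to crosslink density; a sketch of the standard form (the parameter values below are illustrative only, not from the paper):

```python
import math

def crosslink_density(v2: float, chi: float, V1: float) -> float:
    """Flory-Rehner estimate of crosslink density n (mol/cm^3):
        n = -[ln(1 - v2) + v2 + chi * v2^2] / [V1 * (v2^(1/3) - v2/2)]
    v2:  polymer volume fraction at equilibrium swelling
    chi: polymer-solvent interaction parameter
    V1:  molar volume of the solvent (cm^3/mol)
    """
    return -(math.log(1 - v2) + v2 + chi * v2 ** 2) / (
        V1 * (v2 ** (1 / 3) - v2 / 2))

# Illustrative numbers (toluene solvent, chi ~ 0.39 for natural rubber):
n = crosslink_density(v2=0.2, chi=0.39, V1=106.3)
```

A composite that swells less (higher v2 at equilibrium) comes out with a higher n, which is the "crosslinking rates increase with hemp content and dose" result the abstract reports.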
Is there a dandelion hemp blend that's good for tires?
Michelin has rice byproduct in their current Most Sustainable Tire.
Maybe dandelion stems, some part of the industrial hemp plant, and rice byproduct?
With processing by IDK cheaper Electron Beam Irradiation too?
"Goodyear's new prototype tires are made of soybeans — and they could majorly improve your gas mileage: Tire manufacturer Goodyear has made a pledge to create a tire made from 100% sustainable materials by 2030." https://www.thecooldown.com/green-tech/goodyear-sustainable-...
We can't even be assed to mandate repairable smartphones in 2024. People don't care about our environment, they care about affordable technology that does what they want while marketing the values they respect to them. Tire manufacturers, big tech, and most sizable businesses have figured this out by now. Even if you invented a miracle rubber alternative, it was cheaper than synthetic rubber, and you made it freely available, tire manufacturers would still undercut each other until they had the highest-margin product. Any way you slice it, our miraculous free market exclusively incentivizes manufacturers to destroy the one planet our human race has.
My advice? Give up on the environment. You're on Hacker News, the majority of this website would happily destroy the earth for decades to come if it meant they could be king for a year. Even outside HN, who consciously considers the environment when making purchasing decisions anymore? Who stands up to the lobbyists that stop pro-environmental change from happening? Nobody. We suffer because people with money insist it's the only way we can live, and biodegradable tires aren't going to change that political status-quo.
Well, good luck with that approach, its outcomes, the costs and losses including poor food quality, and the gosh darn heat.
It's not my approach, it's our approach. I don't like it either, but I also lack the determinate capacity to change it. It's easier to just acknowledge that nobody values anything besides greed and resource competition unless the monopoly on violence demands otherwise.
A: Looking to share solutions for [XYZ]
B: Don't even try, it's hopeless, just do hedonism without consideration like everyone else
A: Use market incentives to change production and consumption behavior to head off costs and loss.
It's less of a chicken-and-egg problem and more of a cart-and-horse one. You fundamentally cannot promote a new solution in a market that won't respect the values you're trying to promote. Hedonism doesn't drive synthetic rubber sales, price-sensitive customers do.
> price-sensitive customers do
So we should fight poverty.
Fixing poverty doesn't fix the free market. Customers will be price-sensitive no matter how rich they are because tires are a utility we don't benefit from splurging on. In other market segments, it's convenience or proprietary lock-in that separates them from positive alternatives.
The only way to "fix" it in America is to legislate the tire manufacturers into compliance. And when you do that people will kick and scream and say you've ruined their free market, so you've got to pick your poison and stand by it.
Anxious Generation – How Safetyism and Social Media Are Damaging the Kids
Does anxiety correlate to inflammatory ultra-processed diets?
Is there an incentive to self-report anxiety?
How does this connect to the article at all?
There is an epidemiological uptick in anxiety diagnosis rate; hence the article title: "Anxious Generation."
Is that due to broader trends in public health like lack of exercise and poor diet, specifically inflammatory foods like ultra-processed foods, exposure to food packaging, plastics, waterproofing chemicals, or other environmental contaminants?
What correlations in social research and human behavior can we identify?
Is there an increase in anxiety diagnoses in states with medical discounts for anxiety treatments?
YouTube's eraser tool removes copyrighted music without impacting other audio
I guess it's good that there's a new solution to this problem, but I wish it just wasn't a problem in the first place. It's ridiculous that someone can't just walk around outside, making a video of what they're doing, and post it, without having to worry that some copyrighted music is playing in the background for some portion of it.
That sort of situation should be considered fair use, and copyright owners who attempt to make trouble for people caught up in this situation should be slapped down, hard, and fined.
> walk around outside, making a video of what they're doing, and post it, without having to worry that some copyrighted music is playing in the background for some portion of it.
You can; this is called "incidental use" and is an exception to copyright in the USA. If you're filming yourself going down the street and someone starts playing a song, and you're not going out of your way to capture the copyrighted content, it is legal.
YouTube's copyright strike system is more strict than the law, probably to kowtow to the music companies serving the YouTube Music product in exchange for not having to defend against a lawsuit.
If we weren't subject to their monopoly on online video, someone could just start telling copyright trolls to pound sand when a frivolous complaint comes up.
In addition to incidental use and law-enforcement/investigative purposes, copyrighted music in a YouTube video could also be academic use, journalism/criticism, sufficiently transformative, or a mixtape-style compilation.
Someday hopefully the musical copyright folks on YouTube will share revenue with Visual Artists on there, too.
Txtai – A Strong Alternative to ChromaDB and LangChain for Vector Search and RAG
https://news.ycombinator.com/item?id=40115099 :
> lancedb/lance is faster than pandas with dtype_backend="arrow" and has a vector index
From https://github.com/lancedb/lance :
> Modern columnar data format for ML and LLMs implemented in Rust. Convert from parquet in 2 lines of code for 100x faster random access, vector index, and data versioning. Compatible with Pandas, DuckDB, Polars, Pyarrow,
> [...] Vector Search
> Comparison of different data formats in each stage of ML development cycle: Lance, Parquet & ORC, JSON & XML, TFRecord, Database, Warehouse
What can ChromaDB do that lancedb can't, and will either work in WASM?
From https://github.com/chroma-core/chroma :
> By default, Chroma uses Sentence Transformers to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
Hydrothermal environment discovered deep beneath the ocean
Nature article: https://www.nature.com/articles/s41598-024-60802-3
"Discovery of the first hydrothermal field along the 500-km-long Knipovich Ridge offshore Svalbard (the Jøtul field)" (2024) https://www.nature.com/articles/s41598-024-60802-3 :
> "The newly discovered hydrothermal field, named Jøtul hydrothermal field, is associated with the eastern bounding fault of the rift valley rather than with an axial volcanic ridge. Guided by physico-chemical anomalies in the water column, ROV investigations on the seafloor showed a wide variety of fluid escape sites, inactive and active mounds with abundant hydrothermal precipitates, and chemosynthetic organisms. Fluids with temperatures between 8 and 316 °C as well as precipitates were sampled at four vent sites. High methane, carbon dioxide, and ammonium concentrations, as well as high [87/86] Sr isotope ratios of the vent fluids indicate strong interaction between magma and sediments from the Svalbard continental margin. Such interactions are important for carbon mobilization at the seafloor and the carbon cycle in the ocean."
Does that help confirm or reject this?:
"Dehydration melting at the top of the lower mantle" (2014) https://www.science.org/doi/10.1126/science.1253358 :
> They conclude that the mantle transition zone — 410 to 660 km below Earth's surface — acts as a large reservoir of water.
I don't think it has anything to do with that. The water in these zones is seawater convecting down, then up, through the hot rock.
Mimicking the cells that carry hemoglobin as a blood substitute
Another solution; from "Scientists Find a Surprising Way to Transform A and B Blood Types Into Universal Blood" (2024) https://singularityhub.com/2024/04/29/scientists-find-a-surp... :
> Let’s Talk ABO+: When tested in clinical trials, converted blood has raised safety concerns. Even when removing A or B antigens completely from donated blood, small hints from earlier studies found an immune mismatch between the transformed donor blood and the recipient. In other words, the engineered O blood sometimes still triggered an immune response.
The sad state of property-based testing libraries
With the advent of coverage based fuzzing, and how well supported it is in Go, what am I missing from not using one of the property based testing libraries?
https://www.tedinski.com/2018/12/11/fuzzing-and-property-tes...
Like, with the below fuzz test, and the corresponding invariant checks, isn't this all but equivalent to property tests?
https://github.com/ncruces/aa/blob/505cbbf94973042cc7af4d6be...
https://github.com/ncruces/aa/blob/505cbbf94973042cc7af4d6be...
The distinction between property based testing and fuzzing is basically just a rough cluster of vibes. It describes a real difference, but the borders are pretty vague and precisely deciding which things are fuzzing and which are PBT isn’t really that critical.
- Quick running tests, detailed assertions —> PBT
- Longer tests, just looking for a crash —> fuzzing.
- In between, who knows?
https://hypothesis.works/articles/what-is-property-based-tes...
Interesting read. I guess I'm mostly interested in number 2 here:
Under this point of view, a property-based testing library is really two parts:
1. A fuzzer.
2. A library of tools for making it easy to construct property-based tests using that fuzzer.
What should I (we?) be building on top of Go's fuzzer to make it easier to construct property-based tests? My current strategy is to interpret a byte stream as a sequence of "commands", let those be transitions that drive my "state machine", then test all invariants I can think of at every step.
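That byte-stream-as-commands strategy looks like this in outline (sketched in Python rather than Go; the system under test here is a toy sorted-set invented for illustration, with its invariants checked after every transition):

```python
def run_commands(data: bytes) -> None:
    """Interpret each fuzzer-supplied byte as a transition on a toy
    sorted-set, checking every invariant after every step."""
    model: list[int] = []
    for b in data:
        op, arg = b >> 7, b & 0x7F       # 1 command bit, 7 argument bits
        if op == 0:                      # insert, keeping the list sorted
            i = 0
            while i < len(model) and model[i] < arg:
                i += 1
            if i == len(model) or model[i] != arg:
                model.insert(i, arg)
        elif arg in model:               # delete if present
            model.remove(arg)
        # Invariants, checked at every step:
        assert model == sorted(model), "stays sorted"
        assert len(model) == len(set(model)), "no duplicates"

# A fuzz harness would call run_commands(fuzz_input), e.g.:
run_commands(bytes([5, 3, 3, 0x85, 7]))
```

In a Go `FuzzXxx(f *testing.F)` target the fuzzer supplies `data []byte` and the body is the same shape; coverage guidance then steers the fuzzer toward command sequences that reach new states.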
I don’t know much about that specific fuzzer; but all you need is a/ consistent and controlled randomness, b/ combinators, c/ reducers.
What you describe (commands) is commonly used with model based testing and distributed systems.
TLA+, Formal Methods in Python: FizzBee, Nagini, Deal-solver, Dafny: https://news.ycombinator.com/item?id=39938759 :
> Python is Turing complete, but does [TLA,] need to be? Is there an in-Python syntax that can be expanded in place by tooling for pretty diffs; How much overlap between existing runtime check DbC decorators and these modeling primitives and feature extraction transforms should there be? (In order to: minimize cognitive overload for human review; sufficiently describe the domains, ranges, complexity costs, inconstant timings, and the necessary and also the possible outcomes given concurrency,)
From "S2n-TLS – A C99 implementation of the TLS/SSL protocol" https://news.ycombinator.com/item?id=38510025 :
> But formal methods (and TLA+ for distributed computation) don't eliminate side channels. [in CPUs e.g. with branch prediction, GPUs, TPUs/NPUs, Hypervisors, OS schedulers, IPC,]
Still though, coverage-based fuzzing;
From https://news.ycombinator.com/item?id=30786239 :
> OSS-Fuzz runs CloudFuzz[Lite?] for many open source repos and feeds OSV OpenSSF Vulnerability Format: https://github.com/google/osv#current-data-sources
From "Automated Unit Test Improvement using Large Language Models at Meta" https://news.ycombinator.com/item?id=39416628 :
> "Fuzz target generation using LLMs" (2023), OSSF//fuzz-introspector*
> gh topic: https://github.com/topics/coverage-guided-fuzzing
The Fuzzing computational task is similar to the Genetic Algorithm computational task, in that both explore combinatorial Hilbert spaces of potentially infinite degree, and thus there is need for parallelism and for partitioning for distributed computation. (But there is no computational oracle to predict that any particular sequence of input combinations under test will deterministically halt on any of the distributed workers, so second-order methods like gradient descent help to skip over apparently desolate territory when the error hasn't changed in a while.)
The Fuzzing computational task: partition the set of all combinations of inputs for distributed execution with execute-once or consensus to resolve redundant results.
DbC Design-By-Contract patterns include Preconditions and Postconditions (which include tests of Invariance)
We test Preconditions to exclude Inputs that do not meet the specified Ranges, and we verify the Ranges of Outputs in Postconditions.
We test Invariance to verify that there haven't been side-effects in other scopes; that variables and their attributes haven't changed after the function - the Command - returns.
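A minimal sketch of those Precondition/Postcondition checks as a Python decorator (the `contract` helper and `isqrt_floor` example are illustrative, not from any of the linked libraries):

```python
import functools

def contract(pre=None, post=None):
    """Check a precondition on the inputs and a postcondition on the result."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed: {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed: {fn.__name__}"
            return result
        return wrapper
    return deco

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt_floor(x: int) -> int:
    """Floor of the square root, by linear search (illustrative only)."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```

The precondition rejects inputs outside the specified Range (negative x raises AssertionError before the body runs); the postcondition verifies the Range of the Output.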
DbC: https://en.wikipedia.org/wiki/Design_by_contract :
> Design by contract has its roots in work on formal verification, formal specification and Hoare logic.
TLA+ > Language: https://en.wikipedia.org/wiki/TLA%2B#Language
Formal verification: https://en.wikipedia.org/wiki/Formal_verification
From https://news.ycombinator.com/item?id=38138319 :
> Property testing: https://en.wikipedia.org/wiki/Property_testing
> awesome-python-testing#property-based-testing: https://github.com/cleder/awesome-python-testing#property-ba...
> Fuzzing: https://en.wikipedia.org/wiki/Fuzzing
Software testing > Categorization > [..., Property testing, Metamorphic testing] https://en.wikipedia.org/wiki/Software_testing#Categorizatio...
--
From https://news.ycombinator.com/item?id=28494885#28513982 :
> https://github.com/dafny-lang/dafny #read-more
> Dafny Cheat Sheet: https://docs.google.com/document/d/1kz5_yqzhrEyXII96eCF1YoHZ...
> Looks like there's a Haskell-to-Dafny converter.
haskell2dafny: https://gitlab.doc.ic.ac.uk/dcw/haskell-subset-to-dafny-tran...
--
Controlled randomness: tests of randomness, random uniform not random norm, rngd, tests of randomness:
From https://news.ycombinator.com/item?id=40630177 :
> google/paranoid_crypto.lib.randomness_tests: https://github.com/google/paranoid_crypto/tree/main/paranoid... docs: https://github.com/google/paranoid_crypto/blob/main/docs/ran...
'Chemical recycling': 15-minute reaction turns old clothes into useful molecules
> The researchers instead turned to chemical recycling to break down some synthetic components of fabrics into reusable building blocks. They used a chemical reaction called microwave-assisted glycolysis, which can break up large chains of molecules — polymers — into smaller units, with the help of heat and a catalyst. They used this to process fabrics with different compositions, including 100% polyester and 50/50 polycotton, which is made up of polyester and cotton.
> For pure polyester fabric, the reaction converted 90% of the polyester into a molecule called BHET, which can be directly recycled to create more polyester textiles. The researchers found that the reaction didn’t affect cotton, so in polyester–cotton fabrics, it was possible to both break down the polyester and recover the cotton. Crucially, the team was able to optimise the reaction conditions so that the process took just 15 minutes, making it extremely cost-effective.
1. Flash heating plastic yields Hydrogen and Graphene; "Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763 https://news.ycombinator.com/item?id=37886982
2. This produces microwave (and THz) with Copper as the dielectric; though I don't think they were trying to make a microwave oven; "Pulsed THz radiation under ultrafast optical discharge of vacuum photodiode" https://news.ycombinator.com/item?id=40858955
3. Metasurface arrays could probably also waste less energy on microwaving clothing and textiles in order to recycle.
4. Are SPPs distinct from Cherenkov radiation, and photon-electron-phonon vorticity?
From "Electrons turn piece of wire into laser-like light source" https://news.ycombinator.com/item?id=33493885 :
Surface plasmon polaritons : https://en.wikipedia.org/wiki/Surface_plasmon_polaritons :
> [...] SPPs enable subwavelength optics in microscopy and photolithography beyond the diffraction limit. SPPs also enable the first steady-state micro-mechanical measurement of a fundamental property of light itself: the momentum of a photon in a dielectric medium. Other applications are photonic data storage, light generation, and bio-photonics.
But that's IR and visible light EM, not microwave EM.
Scientists discover way to 'grow' sub-nanometer sized transistors
"Integrated 1D epitaxial mirror twin boundaries for ultra-scaled 2D MoS2 field-effect transistors" (2024) https://www.nature.com/articles/s41565-024-01706-1
A similar process for graphene-based transistors would change the world.
"Ultrahigh-mobility semiconducting epitaxial graphene on silicon carbide" (2024) https://www.nature.com/articles/s41586-023-06811-0 .. "Researchers create first functional graphene semiconductor" https://news.ycombinator.com/item?id=38869171
GPUs can now use PCIe-attached memory or SSDs to boost VRAM capacity
I never understood why gpus can't use pluggable memories, like what the cpus can?
Cost originally, more recently probably so vendors can segment the market better for different products, i.e. you now have to get the fastest processor to get the most ram, regardless of your workload.
GPUs can now use PCIe-attached memory or SSDs to boost VRAM capacity
Good job decreasing latency.
Now work on the bandwidth.
A single HBM3 module has the bandwidth of half-a-dozen data center grade PCIe 5.0 x16 NVME drives.
A single DDR5 DIMM has the bandwidth of a pair of PCIe 5.0 x4 NVME drives.
"New RISC-V microprocessor can run CPU, GPU, and NPU workloads simultaneously" https://news.ycombinator.com/item?id=39938538
Show HN: Jb / json.bash – Command-line tool (and bash library) that creates JSON
jb is a UNIX tool that creates JSON, for shell scripts or interactive use. Its "one thing" is to get shell-native data (environment variables, files, program output) to somewhere else, using JSON to encapsulate it robustly.
I wrote this because I wanted a robust and ergonomic way to create ad-hoc JSON data from the command line and scripts. I wanted errors to not pass silently, not coerce data types, not put secrets into argv. I wanted to leverage shell features/patterns like process substitution, environment variables, reading/streaming from files and null-terminated data.
If you know of the jo program, jb is similar, but type-safe by default and more flexible. jo coerces types, using flags like -n to coerce to a specific type (number for -n), without failing if the input is invalid. jb encodes values as strings by default, requiring type annotations to parse & encode values as a specific type (failing if the value is invalid).
If you know jq, jb is complementary in that jq is great at transforming data already in JSON format, but it's fiddly to get non-JSON data into jq. In contrast, jb is good at getting unstructured data from arguments, environment variables and files into JSON (so that jq could use it), but jb cannot do any transformation of data, only parsing & encoding into JSON types.
I feel rather guilty about having written this in bash. It's something of a boiled frog story. I started out just wanting to encode JSON strings from a shell script, without dependencies, with the intention of piping them into jq. After a few trials I was able to encode JSON strings in bash with surprising performance, using array operations to encode multiple strings at once. It grew from there into a complete tool. I'd certainly not choose bash if I was starting from scratch now...
jshn.sh: https://openwrt.org/docs/guide-developer/jshn src: https://git.openwrt.org/?p=project/libubox.git;a=blob;f=sh/j... :
> jshn (JSON SHell Notation), a small utility and shell library for parsing and generating JSON data
The Hunt for the Most Efficient Heat Pump in the World
> https://HeatPumpMonitor.org/ is a unique resource at present. It can be difficult to locate data on live heat pump performance. WIRED spent weeks scouring the web and speaking to experts to find examples of heat pump systems that could beat Rob Ritchie’s current SCOP and found only a handful of possible contenders.
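For context on the figure being chased: SCOP (Seasonal Coefficient of Performance) is just a season's total heat delivered divided by the electricity consumed. A minimal sketch with made-up figures (not data from HeatPumpMonitor.org):

```python
def scop(heat_delivered_kwh: float, electricity_used_kwh: float) -> float:
    """Seasonal Coefficient of Performance: heat out / electricity in over a season."""
    return heat_delivered_kwh / electricity_used_kwh

# Hypothetical annual totals for a well-tuned system:
print(scop(12000, 2400))  # → 5.0
```

A system beating a SCOP of 5 turns each kWh of electricity into more than 5 kWh of delivered heat over the season.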
Neuroscientists must not be afraid to study religion
- /? religion out of body experiences and epilepsy neuroimaging: https://www.google.com/search?q=religion+out+of+body+experie...
- /? auditory cortex verbal hallucinations: https://www.google.com/search?q=auditory%20cortex%20verbal%2...
- /? auditory cortex for schizophrenia treatment: https://www.google.com/search?q=auditory%20cortex%20for%20sc...
- Humanism > Varieties of humanism: https://en.wikipedia.org/wiki/Humanism#Varieties_of_humanism re: the many ways of bootstrapping a sufficient morality, because the Golden Rule and the Three Laws of Robotics are insufficient in comparison to e.g. statutes printed anew every year; and because LLMs lack reasoning, inference, and critical thinking, and thus also ethics.
Sunmaxx PVT, Oxford PV launch perovskite-silicon TPV with 80% overall efficiency
> The new photovoltaic-thermal module, presented for the first time on the first day of Intersolar 2024, has a record electrical efficiency of 26.6% and thermal efficiency of 53.4%. The electrical output of the module with 6cm x 10cm M6 cells is 433 W
- Intersolar 2024 Liveblog: https://www.pv-magazine.com/2024/06/19/intersolar-2024-day-1...
- /?gnews Intersolar 2024: https://www.google.com/search?q=intersolar%202024&tbm=nws
- "French startup offers concrete solar carport" (2024-05) https://www.pv-magazine.com/2024/05/24/french-startup-offers...
- Concrete superconductor batteries that would work in datacenters with controlled relative humidity, but don't work unless kept dry: "Low-cost additive turns concrete slabs into super-fast energy storage" (2023-08) https://news.ycombinator.com/item?id=36964735
Google's carbon emissions surge nearly 50% due to AI energy demand
A few solutions for this:
- "Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
- "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024) https://news.ycombinator.com/item?id=40719725
(Edit)
- "Desalination system could produce freshwater that is cheaper than tap water" (2023) : "Extreme salt-resisting multistage solar distilation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39999225 ... "AI Is Accelerating the Loss of Our Scarcest Natural Resource: Water" https://news.ycombinator.com/item?id=40783690
- "Ask HN: Does mounting servers parallel with the temperature gradient trap heat?" (2020) https://news.ycombinator.com/item?id=23033210 :
> Would mounting servers sideways (vertically) allow heat to transfer out of the rack? https://news.ycombinator.com/item?id=39555606
- Concrete parking lot TPV Thermal Solar canopies, And concrete supercapacitor batteries: https://news.ycombinator.com/item?id=40862190
Ladybird Web Browser becomes a non-profit with $1M from GitHub Founder
OTOH feature ideas: Formal Verification, Process Isolation, secure coding in Rust,
- Quark is written in Coq and is formally verified. What can be learned from the design of Quark and other, larger formally verified apps?
From "Why Don't People Use Formal Methods?" (2019) https://news.ycombinator.com/item?id=18965964 :
> - "Quark : A Web Browser with a Formally Verified Kernel" (2012) (Coq, Haskell) http://goto.ucsd.edu/quark/
From https://news.ycombinator.com/item?id=37451147 :
> - "How to Discover and Prevent Linux Kernel Zero-day Exploit using Formal Verification" (2021) [w/ Coq] http://digamma.ai/blog/discover-prevent-linux-kernel-zero-da... https://news.ycombinator.com/item?id=31617335
- Rootless containers require /etc/subuids to remap uids. Browsers could run subprocesses like rootless containers in addition to namespaces and application-level sandboxing.
- Chrome and Firefox use the same pwn2own'd sandbox.
- Container-selinux and rootless containers and browser tab processes
- "Memory Sealing "Mseal" System Call Merged for Linux 6.10" (2024) https://news.ycombinator.com/item?id=40474551
- Endokernel process isolation: From "The Docker+WASM Technical Preview" https://news.ycombinator.com/item?id=33324934
- QubesOS isolates processes with VMs.
- Gvisor and Kata containers further isolate container processes
- W3C Web Worker API and W3C Service Worker API and process isolation, and resource utilization
- From "WebGPU is now available on Android" https://news.ycombinator.com/item?id=39046787 :
>> What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?
>> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
>> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
>> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU
- What can be learned from the methods and patterns of Rust rewrites, again of larger applications?
"MotorOS: a Rust-first operating system for x64 VMs" https://news.ycombinator.com/item?id=38907876 :
> "Maestro: A Linux-compatible kernel in Rust" (2023) https://news.ycombinator.com/item?id=38852360#38857185 ; redox-os, cosmic-de , Motūrus OS; MotorOS
From "Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024) https://news.ycombinator.com/item?id=34743393 :
> - "The Rust Implementation of GNU Coreutils Is Becoming Remarkably Robust" https://news.ycombinator.com/item?id=34743393
> [Rust Secure Coding Guidelines, awesome-safety-critical,]
Pulsed THz radiation under ultrafast optical discharge of vacuum photodiode
"Pulsed THz radiation under ultrafast optical discharge of vacuum photodiode" (2024) https://link.springer.com/article/10.1007/s12200-024-00123-5 :
> Abstract: In this paper, we first present an experimental demonstration of terahertz radiation pulse generation with energy up to 5 pJ under the electron emission during ultrafast optical discharge of a vacuum photodiode. We use a femtosecond optical excitation of metallic copper photocathode for the generation of ultrashort electron bunch and up to 45 kV/cm external electric field for the photo-emitted electron acceleration. Measurements of terahertz pulses energy as a function of emitted charge density, incidence angle of optical radiation and applied electric field have been provided. Spectral and polarization characteristics of generated terahertz pulses have also been studied. The proposed semi-analytical model and simulations in COMSOL Multiphysics prove the experimental data and allow for the optimization of experimental conditions aimed at flexible control of radiation parameters.
- "When ultrashort electron bunch accelerates and drastically stops, it can generate terahertz radiation" (2024) https://phys.org/news/2024-07-ultrashort-electron-bunch-dras...
Photocathode > Photocathode materials doesn't list Cu (copper): https://en.wikipedia.org/wiki/Photocathode
"Earth-abundant Cu-based metal oxide photocathodes for photoelectrochemical water splitting" (2020) https://pubs.rsc.org/en/content/articlelanding/2020/ee/d0ee0...
LightSlinger antennae are FTL within the dielectric, but the EMR is not FTL; from https://news.ycombinator.com/item?id=37342016 :
> "Smaller, more versatile antenna could be a communications game-changer" (2022) https://news.ycombinator.com/item?id=37337628 :
> LightSlingers use volume-distributed polarization currents, animated within a dielectric to faster-than-light [FTL] speeds, to emit electromagnetic waves. (By contrast, traditional antennas employ surface currents of subluminally moving massive particles on localized metallic elements such as dipoles.) Owing to the superluminal motion of the radiation source, LightSlingers are capable of “slinging” tightly focused wave packets with high precision toward a location of choice. This gives them potential advantages over phased arrays in secure communications such as 4G and 5G local networks
From https://news.ycombinator.com/item?id=39365579 :
> "Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 ; "Researchers solve a foundational problem in transmitting quantum information" (2024) [w/ AlGaAs/GaAs]
Though the [qubit wave function transmission range] on that is only tens to a hundred micrometers. With 5 pJ microwave/THz pulses like in the featured article, what is the range?
Ancient lingams had Copper (Cu) and Gold (Au), and crystal FWIU.
From "Can you pump water without any electricity?" https://news.ycombinator.com/item?id=40619745 :
> - /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
- "High-sensitivity terahertz detection by 2D plasmons in transistors" (2024) https://news.ycombinator.com/item?id=38800406
America's startup boom around remote work and technology is still going strong
Nuclear spectroscopy breakthrough could rewrite fundamental constants of nature
> When trapped in a transparent, fluorine-rich crystal, scientists can use a laser to excite the nucleus of a thorium-229 atom.
> [...] This accomplishment means that measurements of time, gravity and other fields that are currently performed using atomic electrons can be made with orders of magnitude higher accuracy
[deleted]
[deleted]
Show HN: Drop-in SQS replacement based on SQLite
Hi! I wanted to share an open source API-compatible replacement for SQS. It's written in Go, distributes as a single binary, and uses SQLite for underlying storage.
I wrote this because I wanted a queue with all the bells and whistles - searching, scheduling into the future, observability, and rate limiting - all the things that many modern task queue systems have.
But I didn't want to rewrite my app, which was already using SQS. And I was frustrated that many of the best solutions out there (BullMQ, Oban, Sidekiq) were language-specific.
So I made an SQS-compatible replacement. All you have to do is replace the endpoint using AWS' native library in your language of choice.
For example, the queue works with Celery - you just change the connection string. From there, you can see all of your messages and their status, which is hard today in the SQS console (and flower doesn't support SQS.)
It is written to be pluggable. The queue implementation uses SQLite, but I've been experimenting with RocksDB as a backend and you could even write one that uses Postgres. Similarly, you could implement multiple protocols (AMQP, PubSub, etc) on top of the underlying queue. I started with SQS because it is simple and I use it a lot.
It is written to be as easy to deploy as possible - a single go binary. I'm working on adding distributed and autoscale functionality as the next layer.
Today I have search, observability (via prometheus), unlimited message sizes, and the ability to schedule messages arbitrarily in the future.
In terms of monetization, the goal is to just have a hosted queue system. I believe this can be cheaper than SQS without sacrificing performance. Just as Backblaze and Minio have had success competing in the S3 space, I wanted to take a crack at queues.
I'd love your feedback!
Could you elaborate why you didn't choose e.g. RabbitMQ? I mean, you talk about AMQP and such; it seems to me that a client-side abstraction would be much more efficient in providing an exit strategy for SQS than creating an SQS "compatible" broker. For example, in the Java ecosystem there is https://smallrye.io/smallrye-reactive-messaging/latest/ that serves a similar purpose.
Where AWS is the likely migration path for an app if it needs to be scaled beyond dev containers, already having tested with an SQS workalike prevents rework.
The celery Backends and Brokers docs compare SQS and RabbitMQ AMQP: https://docs.celeryq.dev/en/stable/getting-started/backends-...
Celery's flower utility doesn't work with SQS or GCP's {Cloud Tasks, Cloud Pub/Sub, Firebase Cloud Messaging FWIU} but does work with AMQP, which is a reliable messaging protocol.
RabbitMQ is backed by mnesia, an Erlang/OTP library for distributed, durable data storage. Mnesia: https://en.wikipedia.org/wiki/Mnesia
SQLite is written in C and has extensive tests, because of its use in aerospace IIUC.
There are many extensions of SQLite; rqlite, cr-sqlite, postlite, electricsql, sqledge, and also WASM: sqlite-wasm, sqlite-wasm-http
celery/kombu > Transport brokers support / comparison table: https://github.com/celery/kombu?tab=readme-ov-file#transport...
Kombu has supported Apache Kafka since 2022, but celery doesn't yet support Kafka: https://github.com/celery/celery/issues/7674#issuecomment-12...
I don't get what you are pointing at.
RabbitMQ and other MOMs like Kafka are very versatile. What is the use case for not using SQS right now, but maybe later?
And if there is a use case (e.g. production-grade on-premise deployment), why not a client-side facade for a production-grade MOM (e.g. in Celery instead of sqs: amqp:)? Most MOMs should be more feature-rich than SQS. At-least-once delivery without pub/sub is usually the baseline and easy to configure.
I mean if this project reaches its goal to provide an SQS compatible replacement, that is nice, but I wonder if such a maturity comes with the complexity this project originally wants to avoid.
When you are developing an information system but don't want to pay for SQS in development or in production; it's akin to running tests with SQLite instead of MySQL/Postgres.
SQS and heavier ESBs are overkill for some applications, and underkill for others where an HA configuration for the MQ / task queue is necessary.
Tests might be a valid use case, but doesn't seem to be the goal here and there are some solutions out there if you dont want to start a lightweight broker in a container.
Why would you want to use the SQS protocol in production without targeting the SQS "broker" as well? The timing and the AWS imposed quotas are bound to be different.
There are plenty of brokers that fit different needs. I don't see the SQS protocol, especially with security and so on in mind, as a good fit in this case.
The switching cost from local almost-SQS to expensive HA SQS for scale and/or the client.
SQS is not a reliable exactly-once messaging protocol like AMQP, and it doesn't do task-level accounting or result storage (which SQLite also solves for).
Apache Kafka > See also: https://en.wikipedia.org/wiki/Apache_Kafka
SQS is not particularly expensive.
I don't know what you are getting at. I work on a project with SQS and I have worked with RabbitMQ, ArtemisMQ and other MOM technologies as well. At-least-once delivery is something you can achieve easily. This would be the common ground that SQS can also provide. The same is true for Kafka.
Booting Linux off of Google Drive
Back in the day it was possible to boot Sun Solaris over HTTP. This was called wanboot. This article reminded me of that.
This was basically an option of the OpenBoot PROM firmware of the SPARC machines.
It looked like this (ok is the forth prompt of the firmware):
ok setenv network-boot-arguments dhcp,hostname=myclient,file=https://192.168.1.1/cgi-bin/wanboot-cgi
ok boot net
This not only loads the initramfs over the (inter)network but also the kernel. https://docs.oracle.com/cd/E26505_01/html/E28037/wanboottask...
https://docs.oracle.com/cd/E19253-01/821-0439/wanboottasks2-...
Booting over HTTP would be interesting for devices like the Raspberry Pi. Then you could run without a memory card and have fewer things to break.
I would also prefer HTTP, but Pis can use PXE boot and mount their root filesystem over NFS already:) Official docs are https://www.raspberrypi.com/documentation/computers/raspberr... and they have a tutorial at https://www.raspberrypi.com/documentation/computers/remote-a...
Once you have PXE you can do all the things -- NFS boot, HTTP boot, iSCSI boot, and so on. There are several open source projects that support this. I think the most recent iteration is iPXE.
iPXE: https://en.wikipedia.org/wiki/IPXE :
> While standard PXE clients use only TFTP to load parameters and programs from the server, iPXE client software can use additional protocols, including HTTP, iSCSI, ATA over Ethernet (AoE), and Fibre Channel over Ethernet (FCoE). Also, on certain hardware, iPXE client software can use a Wi-Fi link, as opposed to the wired connection required by the PXE standard.
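For a sense of what that looks like in practice, a minimal iPXE script that fetches a kernel and initrd over HTTP might be (the hostname, paths, and kernel arguments are placeholders, not from any real deployment):

```
#!ipxe
dhcp
kernel http://boot.example.com/vmlinuz root=/dev/nfs
initrd http://boot.example.com/initrd.img
boot
```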
Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time, how does SecureBoot work with iPXE?
> Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time
For HTTPS booting, yes.
> how does SecureBoot work with iPXE?
It doesn't, unless you manage to get your iPXE (along with everything else in the chain of control) signed.
Show HN: Adding Mistral Codestral and GPT-4o to Jupyter Notebooks
Hey HN! We’ve forked Jupyter Lab and added AI code generation features that feel native and have all the context about your notebook. You can see a demo video (2 min) here: https://www.tella.tv/video/clxt7ei4v00rr09i5gt1laop6/view
Try a hosted version here: https://pretzelai.app
Jupyter is by far the most used Data Science tool. Despite its popularity, it still lacks good code-generation extensions. The flagship AI extension jupyter-ai lags far behind in features and UX compared to modern AI code generation and understanding tools (like https://www.continue.dev and https://www.cursor.com). Also, GitHub Copilot still isn’t supported in Jupyter, more than 2 years after its launch. We’re solving this with Pretzel.
Pretzel is a free and open-source fork of Jupyter. You can install it locally with “pip install pretzelai” and launch it with “pretzel lab”. We recommend creating a new python environment if you already have jupyter lab installed. Our GitHub README has more information: https://github.com/pretzelai/pretzelai
For our first iteration, we’ve shipped 3 features:
1. Inline Tab autocomplete: This works similarly to GitHub Copilot. You can choose between Mistral Codestral or GPT-4o in the settings
2. Cell level code generation: Click Ask AI or press Cmd+K / Ctrl+K to instruct AI to generate code in the active Jupyter Cell. We provide relevant context from the current notebook to the LLM with RAG. You can refer to existing variables in the notebook using the @variable syntax (for dataframes, it will pass the column names to the LLM)
3. Sidebar chat: Clicking the blue Pretzel Icon on the right sidebar opens this chat (Ctrl+Cmd+B / Ctrl+Alt+B). This chat always has context of your current cell or any selected text. Here too, we use RAG to send any relevant context from the current notebook to the LLM
All of these features work out-of-the-box via our “AI Server” but you have the option of using your own OpenAI API Key. This can be configured in the settings (Menu Bar > Settings > Settings Editor > Search for Pretzel). If you use your own OpenAI API Key but don’t have a Mistral API key, be sure to select OpenAI as the inline code completion model in the settings.
These features are just a start. We're building a modern version of Jupyter. Our roadmap includes frictionless, realtime collaboration (think pair-programming, comments, version history), full-fledged SQL support (both in code cells and as a standalone SQL IDE), a visual analysis builder, a VSCode-like coding experience powered by Monaco, and 1-click dashboard creation and sharing straight from your notebooks.
We’d love for you to try Pretzel and send us any feedback, no matter how minor (see my bio for contact info, or file a GitHub issue here: https://github.com/pretzelai/pretzelai/issues)
Ramon here, the other cofounder of Pretzel! Quick update: Based on some early feedback, we're already working on adding support for local LLMs and Claude Sonnet 3.5. Happy to answer any questions!
braintrust-proxy: https://github.com/braintrustdata/braintrust-proxy
LocalAI: https://github.com/mudler/LocalAI
E.g. promptfoo and chainforge have multi-LLM workflows.
Promptfoo has a YAML configuration for prompts, providers, etc.: https://www.promptfoo.dev/docs/configuration/guide/
What is the system prompt, and how does a system prompt also bias an analysis?
/? "system prompt" https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Thank you for the links! We'll take a look.
At the moment, the system prompt is hardcoded in code. I don't think it should bias analyses, because our goal is usually returning code that does something specific (e.g. "calculate normalized rates for column X grouped by column Y") as opposed to something generic ("why is our churn rate going up"). So, the idea would be that the operator is responsible for asking the right questions - the AI merely acts as a facilitator to generate and understand code.
Also, tangentially: we do want to allow users some kind of local prompt management with easy access, potentially via slash commands. In our testing so far, the existing prompt largely works (it took a fair bit of trial and error to find something that mostly works for common data use cases), but we're hitting the limitations of a fixed system prompt, so these links will be useful.
Supreme Court rules ex-presidents have immunity for official acts
The Constitution reads:
Judgment in Cases of Impeachment shall not extend further than to removal from Office,
[...]
The President, Vice President and all Civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
How does the comma matter in contracts? This:
"Impeachment for, and Conviction of"
Is distinct in meaning from this: "Impeachment for and Conviction of"
Furthermore: "Judgement in cases of Impeachment"
Is not: "Conviction in cases of Impeachment"
Doesn't this then imply that "Judgement in cases of Impeachment" (i.e. by the Senate) is distinct from "Conviction"? Such would imply that presidents can be Impeached and Judged, and Convicted.
(Furthermore, it is clear that the founders' intent was not to create an immune King.)
The Constitution reads:
Judgment in Cases of Impeachment shall not extend further than to removal from Office,
[...]
The President, Vice President and all Civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
> Such would imply that presidents can be Impeached and Judged, and Convicted.
Convicted just as other citizens, with Limited Privileges and Immunities.
OPINION: In the US Constitution, removal upon "Impeachment for" is distinct from removal for "Conviction of". Thereby there is removal from office for both: a) Impeachment by the Judgement of the House and Senate, and also by b) Conviction by implied existing criminal procedure for non-immune acts including "Treason, Bribery, or other high Crimes and Misdemeanors."
Conviction is not wrought through Impeachment by the House & Senate, who can only remove from office.
Neither is Arrest Removal from Office, nor is Removal from Office Arrest.
Thereby, a President (like all other citizens) can be Convicted and then Impeached.
That the Executive's own DOJ doesn't prosecute a sitting President is simply a courtesy.
Structure and Interpretation of Classical Mechanics (2015)
I've finally gotten around to reading SICM, but I can't get MIT Scheme to install on the current OSX 14.2. Autoconf requires an old version of the OSX SDK. Has someone else already solved that problem?
I was surprised by how many ways Sussman and Wisdom found to modernise the Landau and Lifshitz treatment. There is the obvious change, where the first time they solve some equations of motion, it's done numerically, and the solution is chaotic.
There is also a more subtle change, where they keep sneaking in the concepts of differential geometry. The word "manifold" is reserved for a footnote, but if you know what tangent spaces and sprays are, it's straightforward to translate the "local tuples" and see what they're actually talking about.
I think this is a good idea. If physics undergraduates were exposed to manifolds and tangent spaces in their analytical mechanics course, then saw some exterior calculus in their first electromagnetism course, they might be ready for curvature and geodesics when they study general relativity.
I don't know if this is useful to you. I haven't tried to install MIT Scheme directly on macOS. Instead, I'm using the sample project at [1] to run a linux VM with Fedora on macOS x86-64. I was able to install MIT Scheme and scmutils on the VM by following the instructions at [2][3][4][5] for CentOS. It appears to have succeeded but I haven't really worked with the system yet.
I haven't tried the instructions for ARM-based macOS at [6] because I'm still on Intel.
It's less convenient to run a VM than to run on macOS directly. But I prefer to sandbox "random" software that way, and some things support linux better than macOS.
[1] https://developer.apple.com/documentation/virtualization/run...
[2] http://groups.csail.mit.edu/mac/users/gjs/6946/
[3] http://groups.csail.mit.edu/mac/users/gjs/6946/installation....
[4] http://groups.csail.mit.edu/mac/users/gjs/6946/mechanics-sys...
[5] https://www.gnu.org/software/mit-scheme/documentation/stable...
[6] http://groups.csail.mit.edu/mac/users/gjs/6946/mechanics-sys...
(You have to install texinfo for the online documentation part of [5]. Skip the 'install-pdf' documentation target if you don't want to depend on tex/latex.)
Does `brew install mit-scheme` work? The homebrew formula says : https://github.com/Homebrew/homebrew-core/blob/8c0bb91eea9c8... :
# Does not build: https://savannah.gnu.org/bugs/?64611
deprecate! date: "2023-11-20", because: :does_not_build
Podman runs Linux containers in a VM in QEMU on MacOS: https://podman.io/docs/installation
Ask HN: Built a DB of over 50M+ Org Names for API use. Should it be made public?
TLDR: Built an Internal API that maps over 50M+ Organization Names to their Domains and Logos. Planning to make this API Endpoint Publicly available for other devs to use. Good Idea or Bad???
I run an AI startup where we fine-tune open source LLama and Falcon models to turn them into enterprise-grade models with longer context windows and better reasoning capabilities. We ended up collecting data on over 50 million organizations. Recently we came across a use case from one of our customers: they want to use AI to create an auto-populating CRM. So right now we have an internal API that maps all organization domains to their official names and their logos - useful for all those devs who want to fetch organization logos and names from domains, or vice versa. Should I convert the API into a publicly accessible one for people to use in their projects?
Yeah, how do you indicate uncertainty in the AI-generated correspondences? W3C CSVW supports dataset-, column-, and cell-level metadata. E.g. the OpenCog AtomSpace hypergraph supports an Attention Value and a Truth Value.
Are there surprising regional and temporal trends in the names?
RDFS specifies a standard vocabulary for classes and subclasses, and properties and sub properties; rdfs:Class , rdfs:Property .
There are schema.org properties on the schema:LocalBusiness class for various business identifiers and other attributes ;
https://schema.org/url : domain
https://schema.org/identifier and subproperties : https://schema.org/duns , https://schema.org/taxID ,
https://schema.org/brand r: https://schema.org/Brand , https://schema.org/Organization
Maybe, a https://schema.org/Dataset :isPartOf a https://schema.org/ScholarlyArticle
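Putting the properties above together, one record in such a dataset might be expressed as schema.org JSON-LD roughly like this (all values hypothetical):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "identifier": {
    "@type": "PropertyValue",
    "propertyID": "duns",
    "value": "123456789"
  }
}
```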
SIMD-accelerated computer vision on a $2 microcontroller
> As I've been really interested in computer vision lately, I decided on writing a SIMD-accelerated implementation of the FAST feature detector for the ESP32-S3 [...]
> In the end, I was able to improve the throughput of the FAST feature detector by about 220%, from 5.1MP/s to 11.2MP/s in my testing. This is well within the acceptable range of performance for realtime computer vision tasks, enabling the ESP32-S3 to easily process a 30fps VGA stream.
What are some use cases for FAST?
Features from accelerated segment test: https://en.wikipedia.org/wiki/Features_from_accelerated_segm...
Is there TPU-like functionality in anything in this price range of chips yet?
Neon is an optional SIMD instruction set extension for ARMv7 and ARMv8; so Pi Zero 2 and larger have SIMD extensions.
Orin Nano has 40 TOPS, which is sufficient for Copilot+ AFAIU. "A PCIe Coral TPU Finally Works on Raspberry Pi 5" https://news.ycombinator.com/item?id=38310063
From https://phys.org/news/2024-06-infrared-visible-device-2d-mat... :
> Using this method, they were able to up-convert infrared light of wavelength around 1550 nm to 622 nm visible light. The output light wave can be detected using traditional silicon-based cameras.
> "This process is coherent—the properties of the input beam are preserved at the output. This means that if one imprints a particular pattern in the input infrared frequency, it automatically gets transferred to the new output frequency," explains Varun Raghunathan, Associate Professor in the Department of Electrical Communication Engineering (ECE) and corresponding author of the study published in Laser & Photonics Reviews.
"Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico" https://news.ycombinator.com/item?id=35117847#35120403 https://news.ycombinator.com/item?id=40275530
"Designing a SIMD Algorithm from Scratch" https://news.ycombinator.com/item?id=38450374
Thanks for reading!
> What are some use cases for FAST?
The FAST feature detector is an algorithm for finding regions of an image that are visually distinctive, which can be used as a first step in motion tracking and SLAM (simultaneous localization and mapping) algorithms typically seen in XR, robotics, etc.
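For the curious, the segment test at the heart of FAST is simple enough to sketch in a few lines. This is an unoptimized pure-Python illustration (not the article's SIMD code), using the standard 16-pixel radius-3 circle:

```python
# Unoptimized pure-Python sketch of the FAST segment test (the scalar
# baseline that SIMD implementations accelerate). A pixel is a corner
# if n contiguous pixels on a 16-pixel Bresenham circle of radius 3
# are all brighter or all darker than the center by a threshold t.

CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
          (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
          (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    center = img[y][x]
    # Label each circle pixel: +1 brighter, -1 darker, 0 similar.
    ring = []
    for dx, dy in CIRCLE:
        v = img[y + dy][x + dx]
        ring.append(1 if v > center + t else (-1 if v < center - t else 0))
    # Look for n contiguous equal nonzero labels (circular, so doubled).
    doubled = ring + ring
    run = 0
    for i, v in enumerate(doubled):
        if v != 0 and i > 0 and doubled[i - 1] == v:
            run += 1
        else:
            run = 1 if v != 0 else 0
        if run >= n:
            return True
    return False

# Synthetic 12x12 image: a bright square on a dark background.
img = [[0] * 12 for _ in range(12)]
for yy in range(4, 12):
    for xx in range(4, 12):
        img[yy][xx] = 255

print(is_fast_corner(img, 4, 4))  # True: corner of the bright square
print(is_fast_corner(img, 7, 7))  # False: uniform interior
```

Production detectors add non-maximum suppression and the quick 4-pixel rejection test on top of this; the inner loop above is exactly the part that vectorizes well.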
> Is there TPU-like functionality in anything in this price range of chips yet?
I think that in the case of the ESP32-S3, its SIMD instructions are designed to accelerate the inference of quantized AI models (see: https://github.com/espressif/esp-dl), and also some signal processing like FFTs. I guess you could call the SIMD instructions TPU-like, in the sense that the chip has specific instructions that facilitate ML inference (EE.VRELU.Sx performs the ReLU operation). Using these instructions still takes CPU time, whereas TPUs are typically their own processing core, operating asynchronously. I’d say this is closer to ARM NEON.
SimSIMD https://github.com/ashvardanian/SimSIMD :
> Up to 200x Faster Inner Products and Vector Similarity — for Python, JavaScript, Rust, C, and Swift, supporting f64, f32, f16 real & complex, i8, and binary vectors using SIMD for both x86 AVX2 & AVX-512 and Arm NEON & SVE
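For context, these are the scalar kernels that such libraries vectorize. A plain-Python baseline (illustrative only; these function names are not the SimSIMD API):

```python
# Scalar baseline for the inner-product and cosine-distance kernels
# that SIMD libraries accelerate with AVX2/AVX-512/NEON/SVE.
import math

def inner(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    """1 - cos(angle between a and b); 0 = identical direction."""
    return 1.0 - inner(a, b) / math.sqrt(inner(a, a) * inner(b, b))

print(inner([1, 2, 3], [4, 5, 6]))                # 32
print(round(cosine_distance([1, 0], [0, 1]), 3))  # 1.0 (orthogonal)
```

The speedups come from processing many lanes of the multiply-accumulate loop per instruction, and from lower-precision types (f16, i8, binary) packing more lanes per register.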
github.com/topics/simd: https://github.com/topics/simd
https://news.ycombinator.com/item?id=37805810#37808036
From gh-topics/SIMD:
SIMDe: SIMD everywhere: https://github.com/simd-everywhere/simde :
> The SIMDe header-only library provides fast, portable implementations of SIMD intrinsics on hardware which doesn't natively support them, such as calling SSE functions on ARM. There is no performance penalty if the hardware supports the native implementation (e.g., SSE/AVX runs at full speed on x86, NEON on ARM, etc.).
> This makes porting code to other architectures much easier in a few key ways:
Gold nanoparticles kill cancer – but not as thought
Star-shaped gold nanoparticles up to 200 nm kill at least some forms of cancer:
> Spherical nanoparticles of 10 nanometers in size, produced in Cracow, turned out to be practically harmless to the glioma cell line studied. However, high mortality was observed in cells exposed to nanoparticles as large as 200 nanometers, but with a star-shaped structure. [...]
> "We used the data from the Cracow experiments to build a theoretical model of the process of nanoparticle deposition inside the cells under study. The final result is a differential equation into which suitably processed parameters can be substituted—for the time being only describing the shape and size of nanoparticles—to quickly determine how the uptake of the analyzed particles by cancer cells will proceed over a given period of time," says Dr. Pawel Jakubczyk, professor at the UR and co-author of the model.
> He emphasizes, "Any scientist can already use our model at the design stage of their own research to instantly narrow down the number of nanoparticle variants requiring experimental verification."
> The ability to easily reduce the number of potential experiments to be carried out means a reduction in the costs associated with the purchase of cell lines and reagents, as well as a marked reduction in research time (it typically takes around two weeks just to culture a commercially available cell line). In addition, the model can be used to design better-targeted therapies than before—ones in which the nanoparticles will be particularly well absorbed by selected cancer cells, while maintaining relatively low or even zero toxicity to healthy cells in the patient's other organs.
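The article doesn't reproduce the differential equation itself, so as a placeholder here is a generic first-order uptake model of the kind such papers fit, where the rate constant would encode the shape and size parameters the authors mention (hypothetical stand-in, not the published model):

```python
# Hypothetical stand-in for the paper's uptake model: first-order
# kinetics dN/dt = k * (N_max - N), whose closed-form solution is
# N(t) = N_max * (1 - exp(-k t)). In the real model, k and N_max
# would be fit from nanoparticle shape/size and holotomography data.
import math

def uptake(t_hours, k=0.1, n_max=1000.0):
    """Particles absorbed after t_hours of exposure (toy parameters)."""
    return n_max * (1.0 - math.exp(-k * t_hours))

# Uptake saturates toward n_max as exposure time grows.
print(round(uptake(1)))    # ~95 particles after 1 h
print(round(uptake(48)))   # ~992 particles after 48 h
```

The practical point from the article stands regardless of the exact form: once fitted, evaluating a formula like this is instant, versus ~two weeks to culture a cell line for each experimental variant.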
"Modeling Absorption Dynamics of Differently Shaped Gold Glioblastoma and Colon Cells Based on Refractive Index Distribution in Holotomographic Imaging" (2024) https://onlinelibrary.wiley.com/doi/10.1002/smll.202400778
Holotomography: https://en.wikipedia.org/wiki/Holotomography :
> Holotomography (HT) is a laser technique to measure the three-dimensional refractive index (RI) tomogram of a microscopic sample such as biological cells and tissues. [...] In order to measure 3-D RI tomogram of samples, HT employs the principle of holographic imaging and inverse scattering. Typically, multiple 2D holographic images of a sample are measured at various illumination angles, employing the principle of interferometric imaging. Then, a 3D RI tomogram of the sample is reconstructed from these multiple 2D holographic images by inversely solving light scattering in the sample.
Could part of the cancer-killing nanoparticle star be more magnetic than Gold (Au), for targeting treatment?
Is it only gold stars that do it?
Gold; Au; #79: https://en.m.wikipedia.org/wiki/Gold :
> Chemically, gold is a transition metal, a group 11 element, and one of the noble metals. It is one of the least reactive chemical elements, being the second-lowest in the reactivity series.
Is the effect entirely mechanical stress to the cell and cell membrane and/or oxidative stress?
(What were oxygen levels on Earth earlier, during the evolution of mammals?)
Do highly-unreactive gold nanoparticles carry electronic, photonic, and phononic charge?
Is there additional treatment value in [NIRS-targeted] ultrasound to charge the [magnetic] gold nanoparticles?
How does the resistivity of the cell membrane change with and without star-shaped gold nanoparticles?
> Is the effect entirely mechanical stress to the cell and cell membrane and/or oxidative stress?
The article appears to indicate the "tips" of the star shape are mechanically responsible for causing damage widespread enough to trigger cell-death.-
P.S. This article needs to be widely read.-
P.P.S. Incidentally, the form of microscopy used by the researchers appears to be equally novel and a *gamechanger*.-
Chrome is adding `window.ai` – a Gemini Nano AI model right inside the browser
This doesn't seem useful unless it's something standardized across browsers. Otherwise I'd still need to use a plugin to support safari, etc.
It seems like it could be nice for something like a bookmarklet or a one-off script, but I don't think it'll really reduce friction in engaging with Gemini for serious web apps.
"WebNN: Web Neural Network API" https://news.ycombinator.com/item?id=36158663 :
> - Src: https://github.com/webmachinelearning/webnn
W3C Candidate Recommendation Draft:
> - Spec: https://www.w3.org/TR/webnn/
> WebNN API: https://www.w3.org/TR/webnn/#api :
>> 7.1. The `navigator.ml` interface
>> webnn-polyfill
E.g. Promptfoo, ChainForge, and LocalAI all have abstractions over many models; also re: Google Desktop and GNU Tracker and NVIDIA's pdfgpt: https://news.ycombinator.com/item?id=39363115
promptfoo: https://github.com/promptfoo/promptfoo
ChainForge: https://github.com/ianarawjo/ChainForge
LocalAI: https://github.com/go-skynet/LocalAI
Supreme Court overturns 40-year-old "Chevron deference" doctrine
Congress can actually legislate the right of agencies to interpret the gaps in the laws back into effect - by passing a law that explicitly gives agencies this power.
Just like congress can legislate abortion laws rather than leaving it to judicial precedent.
Fundamentally there’s nothing wrong with the position of supreme court to push the responsibility of lawmaking back on congress.
Where I'm from, our "supreme court" can overthrow congress legislation for not following the constitution. Is this the case here AND is this the case in the US (generally speaking)?
Yes, that's exactly how it works. The US Supreme Court can rule a law unconstitutional, and that's that.
Little known fact: Congress can actually legislate around that by removing the possibility of judicial review from the law itself.
Little known, I suspect, on account of being entirely false.
> on account of being entirely false
Congress can absolutely limit judicial review by statute. (It can’t remove it entirely.)
Shouldn't that require a Constitutional Amendment?
Such a law would bypass Constitutional Separation of Powers (with limited privileges and immunities) i.e. checks and balances.
Why isn't the investigative/prosecutorial branch distinct from the executive and judicial branches though?
> Shouldn't that require a Constitutional Amendment?
No, Article III § 1 explicitly vests judicial power “in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish” [1].
> Why isn't the investigative/prosecutorial branch distinct from the executive and judicial branches though?
What do you think executing laws means?
[1] https://constitution.congress.gov/constitution/article-3/#ar...
No, to change the separation of powers they need a constitutional amendment because that's a change to the Constitution, and amendments are the process for changing the Constitution.
To interpret what was meant by Liberty and Equality as values, as a strict constructionist.
> to change the separation of powers they need a constitutional amendment because that's a change to the Constitution
The Constitution literally says the Congress has the power to establish inferior courts. Congress setting what is justiciable is highly precedented.
The words “separation of powers” never appear in the Constitution. It’s a phrase used to describe the system that document establishes.
Can Congress grant rights? No, because persons have natural rights, inalienable rights; and such enumeration of the rights of persons occurs only in the Declaration, which - along with the Articles of Confederation - frames the intent, spirit, and letter of the Constitution; which itself very specifically limits the powers of the government and affords a process of amendment, with a quorum, for changes to such limits of the government in law.
Congress may not delegate right-granting privileges because the legislature hasn't right-granting privileges itself.
The Constitution is very clear that there are to be separate branches; each with limited privileges and immunities, and none with the total immunity of a Tyrant king.
A system of courts to hear offenses per the law determined by the federal and state legislatures with a Federal Constitutional Supremacy Clause, a small federal government, a federal minarchy, and a state divorce from British case law precedent but not common law or Natural Rights.
And so the Constitution limits the powers of each branch of government, and to amend the Constitution requires an amendment.
Why shouldn't we all filibuster court nominations?
Without an independent prosecutor, can the - e.g. foreign-installed or otherwise fraudulent - executive obstruct DOJ investigations of themselves that conclude prior to the end of their term by terminating a nominated and confirmed director of an executive DOJ department, install justices with his signature, and then pardon themselves and their associates?
The Court can or will only hear matters of law. Congress can impugn and impeach, but they're not trained as prosecutors either; so which competent court will hear such charges? Did any escape charges for war crimes, torture without due process, terror and fear? Who's former counsel on the court now?
What delegations of power, duties, and immunities can occur without constitutional amendment?
Who's acting president today? Where's your birth certificate? You're not even American.
What amendments could we have?
1. You cannot pardon yourself, even as President. Presidents are not granted total immunity (as was recently claimed before the court), they are granted limited Privileges and Immunities.
2. Term limits for legislators, judges, and what about distinguished public/civil servants who pick expensive fights for the rest of us to fight and pay for? You sold us to the banks. Term limits all around.
3. Your plan must specify investment success and failure criteria. (Plan: policy, legislative bill, program, schedule)
Can Congress just delegate privileges - for example, un-equal right-granting privileges - without an Amendment, because there is to be a system of lower courts?
Additional things that the Constitution, written in the 1770s, doesn't quite get, handle, or address:
US contractors operating abroad on behalf of the US government must obey US government laws while operating abroad. This includes "torture interrogation contractors" hired by an illegally-renditioning executive.
The Federal and State governments have contracted personal defense services to a privately-owned firm. Are they best legally positioned to defend, and why are they better funded than the military?
What prevents citizens from running a debtor blackmail-able fool - who is 35 and an American citizen - for president and puppeting them remotely?
Too dangerous to gamble.
Executive security clearance polices are determined by the actual installed executive; standard procedure was: tax return, arrest record, level of foreign debt.
Would a president be immune for slaving or otherwise aggravatedly human trafficking a vengeful, resentful prisoner on release who intentionally increases expenses and cuts revenue?
Did their regional accent change after college?
Can it be proven that nobody was remoting through anybody? No, it cannot.
And what about installs ostensibly to protect children in the past being used misappropriatingly for political harassment, intimidation, and blackmail? How should the court address such a hypothetical "yesterday" capability which could be used to investigate but also to tamper with and obstruct? Why haven't such capabilities been used to defend America from all threats foreign and domestic, why are there no countermeasure programs for such for chambers of justice and lawmaking and healthcare at least.
And what about US Marshals or other protective services with witness protection reidentification authorization saboteurially "covering" for actual Candidate-elects?
Can a president be witness protected - i.e. someone else assumes their identity and assets - one day before or one day after an election? Are Justices protected from such fraud and identity theft either?
You're not even American.
And what about when persons are assailed while reviewing private, sensitive, confidential, or classified evidence; does such assault exfiltrate evidence to otherwise not-closed-door hearings and investigations?
Which are entitled to a private hearing?
Shouldn't we prosecute tortious obstruction? Or should we weakly refuse the writ; and is there thus no competent authority (if nobody prosecutes torture and other war crimes)?
Let's all pay for healthcare for one another! Let's all pay for mental healthcare in the United States. A War on Healthcare!
Are branches of government prohibited from installing into, prosecuting, or investigating other branches of government; are there any specific immunities for any officials in any branch in such regard?
Sorry, it's not your fault either.
New ways to catch gravitational waves
- "Kerr-enhanced optical spring for next-generation gravitational wave detectors" (2024) https://news.ycombinator.com/item?id=39957123
- "Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale" with a superconducting magnetic trap made out of Tantalum (2024) https://news.ycombinator.com/item?id=39495482
Ask HN: How to reuse waste heat and water from AI datacenters?
> Ask HN: How to reuse waste heat and water from AI datacenters?
- Steam Turbine
- Thermoelectrics (solid state)
- Gravel Battery, Sand Battery (solid state)
- OTEC: Ocean Thermal Energy Conversion (needs a 20C gradient)
- Water Sterilization
- Heat Pipe contract
- Commercial and Residential HVAC Heat
- Air Filter Production (CO2, Graphene, Heat)
- Water Filter Production (CO2, Graphene, Heat)
- Onsite manufacturing with Carbon & Heat (See: Datacenter Inputs)
- Carbon-based Chip Production (2D Graphene and other forms of Carbon)
- Thermoforming products (e.g. out of now easily-captured CO2 to Graphene and other forms of Carbon)
- Geopolymer production: pads to mount racks on, bricks, radially-curved outwall blocks
- Open-Source CFD: Computational Fluid Dynamics; use the waste heat to pay for CFD HPC time to reduce the waste heat with better materials, better facility design with passive thermofluidics and energy reclamation
- Datacenter Inputs: [Earthmoving, Infrastructure], Water, Air, Concrete, Beams, Electricity, Turbines, Bolts, Racks, Fans, PCs, Copper power Wires and data Cabling, Bandwidth, Fiber Cabling
- Datacenter Outputs: Boiled Water, Warm Air, Data, recyclable building materials, Electricity
> Ask HN: How to reuse waste heat and water from AI datacenters?
> - Steam Turbine
> - Thermoelectrics
> - Gravel Battery, Sand Battery
> - OTEC: Ocean Thermal Energy Conversion (needs a 20C gradient)
"Ask HN: Does OTEC work with datacenter heat, or thermoelectrics?" https://news.ycombinator.com/item?id=40821522
> - Water Sterilization
FWIU most datacenters waste the sterilized water as steam, which has some evaporative cooling value?
In which markets is it already possible for datacenters [and other industrial production facilities] to sell or give sterilized water to the water supply utility?
"Ask HN: Methods for removing parts per trillions of PFAS from water?" (2024) https://news.ycombinator.com/item?id=40036764 ; RO: Reverse Osmosis, Zwitterionic hydrogels (that are reusable if washed with ethanol)
> - Heat Pipe contract
E.g. Copenhagen Atomics wants to sell a heat pipe contract promise and manage the (Thorium+ waste burner) heat production.
It takes extra heat to pump heat through a tube underground to a building next door, but it is worth it.
> - Commercial and Residential HVAC Heat
The OTEC Wikipedia article mentions ammonia as a working fluid (a thermofluid), but ammonia is toxic and hazardous to handle.
Sustainable thermofluids for heat pumps would be a good use for materials design AI.
"AI Designs Magnet Free of Rare-Earth Metals in Just 3 Months" (2024) https://news.ycombinator.com/item?id=40775959
> - Air Filter Production (CO2, Graphene, Heat)
> - Water Filter Production (CO2, Graphene, Heat)
> - Onsite manufacturing with Carbon & Heat (See: Datacenter Inputs)
Also: meet the ASTM spec for whichever products are produced onsite.
> - Carbon-based Chip Production (2D Graphene and other forms of Carbon)
"Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024) https://news.ycombinator.com/item?id=40719725
> - Thermoforming products (e.g. out of now easily-captured CO2 to Graphene and other forms of Carbon)
> - Geopolymer production: pads to mount racks on, bricks, radially-curved outwall blocks
Is this new peptide glass a functional substitute for "water glass" in other geopolymer formulas? "Self-healing glass from a simple peptide – just add water" (2024) https://news.ycombinator.com/item?id=40677095
> - Open-Source CFD: Computational Fluid Dynamics; use the waste heat to pay for CFD HPC time to reduce the waste heat with better materials, better facility design with passive thermofluidics and energy reclamation
Quantum CFD. https://westurner.github.io/hnlog/#comment-39954180 Ctrl-F CFD, SQS
> - Datacenter Inputs: [Earthmoving, Infrastructure], Water, Air, Concrete, Beams, Electricity, Turbines, Bolts, Racks, Fans, PCs, Copper power Wires and data Cabling, Bandwidth, Fiber Cabling
Biocomposites and Titanium are strong enough for beams and fan blades; though on corrosion due to water, titanium probably wins. "Cheap yet ultrapure titanium metal might enable widespread use in industry" (2024) https://news.ycombinator.com/item?id=40768549
Is there a carbon-based copper alternative for cabling that's flexible enough to work as CAT-5/6/7/8 Ethernet cable; 40GBase-T at 2000 MHz?
There appear to be electron vortices and superconductivity in carbon at room temperature.
Can we replace copper with carbon in cables and computers?
> - Datacenter Outputs: Boiled Water, Warm Air, Data, recyclable building materials, Electricity
- "Ask HN: Why don't datacenters have passive rooflines like Net Zero homes?" https://news.ycombinator.com/item?id=37579505
Ask HN: Does OTEC work with datacenter heat, or thermoelectrics?
OTEC > Thermodynamic efficiency: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio... :
> A heat engine gives greater efficiency when run with a large temperature difference. In the oceans the temperature difference between surface and deep water is greatest in the tropics, although still a modest 20 to 25 °C. It is therefore in the tropics that OTEC offers the greatest possibilities.[4] OTEC has the potential to offer global amounts of energy that are 10 to 100 times greater than other ocean energy options such as wave power. [44][45]
> OTEC plants can operate continuously providing a base load supply for an electrical power generation system. [4]
> The main technical challenge of OTEC is to generate significant amounts of power efficiently from small temperature differences. It is still considered an emerging technology. Early OTEC systems were 1 to 3 percent thermally efficient, well below the theoretical maximum 6 and 7 percent for this temperature difference.[46] Modern designs allow performance approaching the theoretical maximum Carnot efficiency.
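The quoted efficiency ceiling is just the Carnot limit, which only needs the two absolute temperatures. A quick check for the temperatures mentioned (the 60 °C figure below is an assumed datacenter coolant temperature, not from the article):

```python
# Carnot limit for small temperature gradients, matching the quoted
# "6 and 7 percent" OTEC ceiling: eta = 1 - T_cold / T_hot, with
# temperatures in kelvin.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum heat-engine efficiency between two Celsius temperatures."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

# Tropical surface water ~25 C vs deep water ~5 C (20 C gradient):
print(round(carnot_efficiency(25.0, 5.0) * 100, 1))   # 6.7 %
# Assumed datacenter warm-water loop ~60 C vs ~20 C ambient:
print(round(carnot_efficiency(60.0, 20.0) * 100, 1))  # 12.0 %
```

So even a warm-water datacenter loop roughly doubles the theoretical ceiling relative to ocean gradients, but both are low; that is why direct heat reuse usually beats converting waste heat back to electricity.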
Thermoelectric power: https://en.wikipedia.org/wiki/Thermoelectric_power
Google's new quantum computer may help us understand how magnets work
- "AI Designs Magnet Free of Rare-Earth Metals in Just 3 Months" (2024) https://news.ycombinator.com/item?id=40775959
Tunable entangled photon-pair generation in a liquid crystal
From https://news.ycombinator.com/item?id=40589972 :
> "Observation of Bose–Einstein condensation of dipolar molecules" (2024) https://www.nature.com/articles/s41586-024-07492-z :
> Abstract: [...] This work opens the door to the exploration of dipolar quantum matter in regimes that have been inaccessible so far, promising the creation of exotic dipolar droplets, self-organized crystal phases and dipolar spin liquids in optical lattices.
> Given that each molecule is in an identical, known state, they could be separated to form quantum bits, or qubits, the units of information in a quantum computer. The molecules’ quantum rotational states — which can be used to store information — can remain robust for perhaps minutes at a time, allowing for long and complex calculations
Microsoft breached antitrust rules by bundling Teams and Office, EU says
Did they prevent OEMs from tampering with the system install image by adding apps that compete with Teams or from removing Teams?
Are they fascistly on notice for market cap, not antitrust offense?
Is the remedy that EU dominates MS until their market share is acceptable, or did MS prevent others from installing competing apps on the OS they and others distribute?
In the US, no bundling applies to ski resorts working in concert with other ski resorts. Materially, did MS prevent OEMs or users from installing competing apps?
HyperCard Simulator
As someone who’s too young to have been around for HyperCard, what was the main draw? Was it the accessibility of the tech or was it just really well executed?
https://en.wikipedia.org/wiki/HyperCard :
> It is among the first successful hypermedia systems predating the World Wide Web.
But HyperCard was not ported to OS X (now macOS), which has Slides and TextEdit for HTML, without MacPorts or Brew.
HTML is Hypertext because it has edges and scripting IIUC.
Instead of the DOM and JS addEventListener, with React/Preact you call setState(attr, val) to update the application state dict/object, so that components which register for keys in the state dict with useState() are re-rendered when those keys change: https://react.dev/learn/adding-interactivity#responding-to-e...
The HyperTalk language has a fairly simple grammar compared to creating a static SPA (Single Page Application) with all of JS and e.g React and a router to support the browser back button and deeplink bookmarks with URI fragments that "don't break the web": https://github.com/antlr/grammars-v4/blob/master/hypertalk/H...
TodoMVC or HyperTalk? False dilemma.
More Memory Safety for Let's Encrypt: Deploying ntpd-rs
Unlike, say, coreutils, NTP is very far from being a solved problem, and the memory safety of the solution is unfortunately going to play second fiddle to its efficacy.
For example, we only use chrony because it’s so much better than whatever came with your system (especially on virtual machines). ntpd-rs would have to come at least within spitting distance of chrony’s time keeping abilities to even be up for consideration.
(And I say this as a massive rust aficionado using it for both work and pleasure.)
The biggest danger in NTP isn't memory safety (though good on this project for tackling it), it's
(a) the inherent risks in implementing a protocol based on trivially spoofable UDP that can be used to do amplification and reflection
and
(b) emergent resonant behavior from your implementation that will inadvertently DDOS critical infrastructure when all 100m installed copies of your daemon decide to send a packet to NIST in the same microsecond.
I'm happy to see more ntpd implementations but always a little worried.
I really wish more internet infrastructure would switch to using NTS. It addresses these kinds of issues.
Never heard of it. Shockingly little on wikipedia for example.
Yeah. Seems it doesn’t even have its own article there.
Only a short mention in the main article about NTP itself:
> Network Time Security (NTS) is a secure version of NTPv4 with TLS and AEAD. The main improvement over previous attempts is that a separate "key establishment" server handles the heavy asymmetric cryptography, which needs to be done only once. If the server goes down, previous users would still be able to fetch time without fear of MITM. NTS is currently supported by several time servers, including Cloudflare. It is supported by NTPSec and chrony.
"RFC 8915: Network Time Security for the Network Time Protocol" (2020) https://www.rfc-editor.org/rfc/rfc8915.html
"NTS RFC Published: New Standard to Ensure Secure Time on the Internet" (2020) https://www.internetsociety.org/blog/2020/10/nts-rfc-publish... :
> NTS is basically two loosely coupled sub-protocols that together add security to NTP. NTS Key Exchange (NTS-KE) is based on TLS 1.3 and performs the initial authentication of the server and exchanges security tokens with the client. The NTP client then uses these tokens in NTP extension fields for authentication and integrity checking of the NTP protocol messages that exchange time information.
From "Simple Precision Time Protocol at Meta" https://news.ycombinator.com/item?id=39306209 :
> How does SPTP compare to CERN's WhiteRabbit, which is built on PTP [and NTP NTS]?
White Rabbit Project: https://en.wikipedia.org/wiki/White_Rabbit_Project
New device uses 2D material to up-convert infrared light to visible light
Use case: NIRS Near-Infrared Spectroscopy.
https://news.ycombinator.com/item?id=40332064 :
> predict e.g. [portable] low-field MRI with NIRS Infrared and/or Ultrasound?
> Idea: Do sensor fusion with all available sensors timecoded with landmarks, and then predict the expensive MRI/CT from low cost sensors [such as this coherent upconverter]
> Are there implied molecular structures that can be inferred from low-cost {NIRS, Light field, [...]} sensor data?
> Task: Learn a function f() such that f(lowcost_sensor_data) -> expensive_sensor_data
> FWIU OpenWater has moved to NIRS+Ultrasound for ~ live in surgery MRI-level imaging and now treatment?
> FWIU certain Infrared light wavelengths cause neuronal growth; and Blue and Green inhibit neuronal growth.
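The "learn f(lowcost_sensor_data) -> expensive_sensor_data" framing above is ordinary supervised regression. A toy 1-D least-squares version (synthetic numbers, purely illustrative; real NIRS/ultrasound-to-MRI fusion would need far richer models):

```python
# Toy instance of "learn f(lowcost) -> expensive": fit a 1-D linear
# map by ordinary least squares from paired sensor readings.

def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic paired readings: cheap sensor vs. expensive reference.
cheap = [1.0, 2.0, 3.0, 4.0]
expensive = [2.1, 3.9, 6.1, 7.9]   # roughly y = 2x
slope, intercept = fit_linear(cheap, expensive)
print(round(slope, 2), round(intercept, 2))  # 1.96 0.1
```

The hard part in practice is not the fit but collecting time-aligned, landmark-registered pairs of cheap and expensive sensor data, which is exactly the sensor-fusion step the comment proposes.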
AI Is Accelerating the Loss of Our Scarcest Natural Resource: Water
Sounds like the first problem super-intelligence (derived of this earth) should resolve is how to keep this system in dynamic equilibrium. Or, it should unwind itself if the solution is the problem.
"The Implications of Ending Groundwater Overdraft for Global Food Security" https://news.ycombinator.com/item?id=40686383
Computers made out of superconductors and Graphene instead of Copper would waste much less heat and water. "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" https://news.ycombinator.com/item?id=40719725
Passive thermal design of data centers could reduce waste, too. Would heat vent out of server racks better if the servers were mounted vertically instead of like pizza boxes trapping heat?
> - "Desalination system could produce freshwater that is cheaper than tap water" (2023) : "Extreme salt-resisting multistage solar distilation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39999225
Most datacenters are not yet able to return their steamed and boiled water back to the water supply for treatment. Datacenters that process and steam-sanitize water would rely upon the water utility company to then also remove PFAS.
Does OTEC work in datacenters; would that eat some of the heat? https://news.ycombinator.com/item?id=38222695
AI Designs Magnet Free of Rare-Earth Metals in Just 3 Months
> According to the makers of MagNex, compared with conventional magnets, the material costs are 20 percent what they would otherwise be, and there's also a 70 percent reduction in material carbon emissions.
> "In the electric vehicle industry alone, the demand for rare-earth magnets is expected to be ten times the current level by 2030."
Electric fields boost graphene's potential, study shows
SSH as a Sudo Replacement
My main objection to this is just the added complexity. Instead of a single suid binary that reads a config file and calls exec(), now you have one binary that runs as root and listens on a UNIX socket, and another that talks to a UNIX socket; both of them have to do asymmetric crypto stuff.
It seems like the main argument against sudo/doas being presented is that you have a suid binary accessible to any user, and if there's a bug in it, an unauthorized user might be able to use it for privilege escalation. If that's really the main issue, then you can:
chgrp wheel /usr/bin/sudo
chmod o-rwx /usr/bin/sudo
Add any sudoers to the wheel group, and there you go: only users that can sudo are allowed to even read the bytes of the file off disk, let alone execute them. This essentially gives you the same access-related security as the sshd approach (the UNIX socket there is set up to be only accessible to users in wheel), with much, much less complexity. And since the sshd approach doesn't allow you to restrict root access to only certain commands (like sudo does), even if there is a bug in sudo that allows a user to bypass the command restrictions, that still gives no more access than the sshd approach.
If you are worried about your system package manager messing up the permissions on /usr/bin/sudo, you can put something in cron to fix them up that runs every hour or whatever you're comfortable with. Or you can uninstall sudo entirely, and manually install it from source to some other location. Then you have to maintain and upgrade it, manually, of course, unfortunately.
If you could configure your Linux kernel without suid support, that would be a huge benefit for security, IMO. The suid feature is a huge security hole.
Whether fighting one particular suid binary is worth it is questionable indeed. But this is a good direction. Another modern approach to this problem is run0 from systemd.
"List of Unix binaries that can be used to bypass local security restrictions" (2023) https://news.ycombinator.com/item?id=36637980
"Fedora 40 Plans To Unify /usr/bin and /usr/sbin" (2024) https://news.ycombinator.com/item?id=38757975 ; a find expression to locate files with the setuid and setgid bits, setcap,
man run0: https://www.freedesktop.org/software/systemd/man/devel/run0....
Asynchronous Consensus Without Trusted Setup or Public-Key Cryptography
Their protocol only uses cryptographic hash functions, which means that it's post-quantum secure. One reason why this is significant is because existing post-quantum public key algorithms such as SPHINCS+ use much larger keys than classic public key algorithms such as RSA or ECDH.
*Edit: As others have pointed out, for SPHINCS+ it's the signature size and not the key size that's significantly larger.
Only PQ hash functions are PQ FWIU. Other cryptographic hash functions are not NIST PQ standardization finalists; I don't think any were even submitted with "just double the key/hash size for the foreseeable future" as a parameter as is suggested here. https://news.ycombinator.com/item?id=25009925
NIST Post-Quantum Cryptography Standardization > Round 3 > Selected Algorithms 2022 > Hash based > SPHINCS+: https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
SPHINCS+ is a hash-based PQ (post-quantum, i.e. quantum-resistant) cryptographic signature algorithm.
SPHINCS+: https://github.com/sphincs/sphincsplus
For those here who don't know:
SPHINCS is essentially Lamport signatures, and SPHINCS+ essentially removes the need to store "state" by using chains of hashes and a random oracle.
https://crypto.stackexchange.com/questions/54304/difference-...
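To make the Lamport connection concrete, here is a toy Lamport one-time signature using only hashlib (illustrative only: a key pair must never sign two messages, and SPHINCS/SPHINCS+ layer many such few-time structures under hash trees to remove that statefulness):

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Secret key: 256 pairs of random 32-byte strings; public key: their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg, sk):
    # Reveal one preimage per bit of H(msg). One key pair, one message, ever.
    z = int.from_bytes(H(msg), "big")
    return [sk[i][(z >> i) & 1] for i in range(256)]

def verify(msg, sig, pk):
    # Each revealed value must hash to the public-key entry selected by that bit.
    z = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(z >> i) & 1] for i in range(256))
```

Note how security rests only on the hash function's preimage resistance, which is why schemes in this family are considered post-quantum.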
I prefer them to lattice-based methods because it seems to me that cryptographic hashes (and other one-way functions) are more likely to be quantum-resistant than lattices (for which an algorithm like Shor’s merely hasn’t been found yet).
In case you haven't seen, there was actually a quantum algorithm on preprint recently for solving some class of learning with errors problems in polynomial time. https://eprint.iacr.org/2024/555
Framework: A new RISC-V Mainboard from DeepComputing
What is the battery life like; maybe in ops/kwhr? https://news.ycombinator.com/item?id=40719630
The JH7110 CPU and 8 GB RAM on e.g. the VisionFive 2 use about 5 W of power at full load.
Battery life is going to be determined by your screen brightness, not the CPU.
From the reddit link in this flagged (?) comment:
> About three years ago, I came across information that the RISC-V U74 core had roughly 1.8 times lower performance per clock compared to the ARM Cortex-A53.
Are there other advantages?
What is the immediate marginal cost of an ARM license per core(s)?
Likely to do with such information being unsourced, and contradicting every documented benchmark.
The U74 very easily outdoes the A53 in performance, power, and area.
Versioning FreeCAD Files with Git (2021)
As text/code-based formats, e.g. cadquery and the newer build123d work with text-based VCS systems like git.
Is there a diff tool for FreeCAD?
nbdime: https://nbdime.readthedocs.io/en/latest/#git-integration-qui... :
> Git doesn’t handle diffing and merging notebooks very well by default, but you can configure git to use nbdime:
nbdime config-git --enable --global
Oh, zippey, for zip archives of text files in git: https://bitbucket.org/sippey/zippey/src/master/

Are there tools to test/validate a FreeCAD model to check for regressions over a range of commits?
git bisect might then be useful with FreeCAD models; https://www.git-scm.com/docs/git-bisect :
> This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a "bad" commit that is known to contain the bug, and a "good" commit that is known to be before the bug was introduced. Then git bisect picks a commit between those two endpoints and asks you whether the selected commit is "good" or "bad". It continues narrowing down the range until it finds the exact commit that introduced the change.
FWIW, LEGO Bricktales does automated design validation as part of the game. Similarly, what CI job(s) run when commits are pushed to a Pull Request git branch?
yes, i built this to solve that problem: https://github.com/khimaros/freecad-build
Which visual differences are occluded depending on the design and CAD software parameters?
If [FreeCAD] files are text-diffable, is there a way to topologically sort (with secondary insertion order) before git committing with e.g. precommit?
python's json serializer has a sort_keys parameter that alphabetically sorts sibling keys in nested data structures, which makes it easier to diff JSON documents that have different key orderings but otherwise equal content.
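A minimal sketch of that sort_keys behavior:

```python
import json

# Same content, different insertion order:
a = {"b": 1, "a": {"y": 2, "x": 3}}
b = {"a": {"x": 3, "y": 2}, "b": 1}

# sort_keys=True sorts sibling keys at every level of nesting, so equal
# content always serializes to identical text, keeping git diffs minimal.
assert json.dumps(a, sort_keys=True) == json.dumps(b, sort_keys=True)
assert json.dumps(a) != json.dumps(b)  # without sorting, order leaks through
```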
martinvonz/jj: https://github.com/martinvonz/jj :
> Working-copy-as-a-commit: Changes to files are recorded automatically as normal commits, and amended on every subsequent change. This "snapshot" design simplifies the user-facing data model (commits are the only visible object), simplifies internal algorithms, and completely subsumes features like Git's stashes or the index/staging-area.
You'll need to put down ~35%, or almost $128,000, to afford a typical home
States That Have Passed Universal Free School Meals (So Far)
"Healthy, Hunger-Free Kids Act of 2010": https://en.wikipedia.org/wiki/Healthy,_Hunger-Free_Kids_Act_...
School meal programs in the United States: https://en.wikipedia.org/wiki/School_meal_programs_in_the_Un...
Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?
For transferring data it would in theory be possible. I haven't heard anything about semiconducting properties in graphene or other forms of carbon.
Minimally, a graphene based classical (or quantum) computer would require transistors and traces on a carbon wafer, from which NAND gates and better could be constructed.
FWIU there is certainly semiconductivity in graphene and there is also superconductivity in graphene at room temperature.
And traces can be lased into clothing, fruit peels, and probably carboniferous wafers.
Which electronic components cannot be made from graphene or other forms of carbon?
[deleted]
> semiconductivity in graphene
"Ultrahigh-mobility semiconducting epitaxial graphene on silicon carbide" (2024-01) https://www.nature.com/articles/s41586-023-06811-0 https://research.gatech.edu/feature/researchers-create-first...
> superconductivity in graphene at room temperature
From "Why There's a Copper Shortage" [...] https://news.ycombinator.com/item?id=40542826 re: the IEF report "Copper Mining and Vehicle Electrification" (2024) https://www.ief.org/focus/ief-reports/copper-mining-and-vehi... :
>> Graphene is an alternative to copper
>> In graphene, there is electronic topology: current, semiconductivity, room temperature superconductivity
>> - "Global Room-Temperature Superconductivity in Graphite" (2023) https://onlinelibrary.wiley.com/doi/10.1002/qute.202300230
>> - "Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691
- [Superconducting nanowire] "Photon Detectors Rewrite the Rules of Quantum Computing" https://news.ycombinator.com/item?id=39537329 ; functionalized carbon nanotubes work as single photon sources
- From "High-sensitivity terahertz detection by 2D plasmons in transistors" https://news.ycombinator.com/item?id=38800406 .. "Electrons turn piece of wire into laser-like light source" (2023) https://news.ycombinator.com/item?id=33490730 :
> "Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2023) https://doi.org/10.21203/rs.3.rs-1572967/v1 ; Surface plasmons and photolithography and imaging electrons within dielectrics, emitter but probably not yet with graphene or other carbon?
I think it's not a problem of if, but of how. Graphene is delicate and can easily break. The manufacturing process is so hard that it's still cheaper to use silicon to make chips.
Nanoassembled multilayer graphene plus structure is probably already doable?
- "Towards Polymer-Free, Femto-Second Laser-Welded Glass/Glass Solar Modules" (2024) https://ieeexplore.ieee.org/document/10443029 .. "Femtosecond Lasers Solve Solar Panels' Recycling Issue" https://news.ycombinator.com/item?id=40325433
- "A self-healing multispectral transparent adhesive peptide glass" (2024) https://www.nature.com/articles/s41586-024-07408-x .. "Self-healing glass from a simple peptide – just add water" https://news.ycombinator.com/item?id=40677095
- "High-strength, lightweight nano-architected silica" (2023) https://www.sciencedirect.com/science/article/pii/S266638642... .. "A New Wonder Material Is 5x Lighter—and 4x Stronger—Than Steel" https://www.popularmechanics.com/science/a44725449/new-mater... ..
From https://news.ycombinator.com/item?id=38253573 :
> maybe glass on quantum dots in synthetic DNA, and then still wave function storage and transmission; scale the quantum interconnect
From https://news.ycombinator.com/item?id=38608859 re: 2DPA-1 polyamide :
>> However, in the new study, Strano and his colleagues came up with a new polymerization process that allows them to generate a two-dimensional sheet called a polyaramide. For the monomer building blocks, they use a compound called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can grow in two dimensions, forming disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.
If not glass or polyamide 2DPA-1, maybe aerogels? Can high-carbon aerogels be lased into multilayer graphene circuits?
In addition to nanolithography and nanoassembly, there is 3d printing with graphene.
From https://all3dp.com/2/graphene-3d-printing-can-it-be-done/ :
> However, because the atomic bonds are only in the lateral direction, it can only exist in 2D. Once multiple layers of graphene are stacked one onto each other, you get graphite, which has relatively poor properties. This means 3D printing pure graphene is impossible. A 3D structure is only possible when graphene is mixed with a binder.
There are so many non-graphene forms of carbon.
Is peptide glass a suitable binder for multilayered graphene for semiconductor and superconductor computing?
Do or can aerogels already contain binder?
Reconstructing Public Keys from Signatures
Fun fact: an Ethereum transaction does not include the sender's address or pubkey.
It is calculated from the signature.
I'm not sure if Bitcoin can use this trick; at least the classic transaction types explicitly included the pubkey.
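The recovery trick described above can be sketched in pure Python over secp256k1. Given a signature (r, s) on message hash z, plus one parity bit v, the verifier rebuilds the nonce point R from r and computes Q = r⁻¹(sR − zG). This is a toy (fixed nonce, no RFC 6979, and it ignores the rare r ≥ n case); Ethereum's actual scheme additionally uses Keccak-256 and folds a chain id into its v value:

```python
# secp256k1 domain parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    """Point addition on y^2 = x^3 + 7 over GF(p); None is the identity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(z, priv, k):
    """ECDSA sign hash z with fixed nonce k (toy only!).
    v records the parity of R.y so a verifier can rebuild R from r."""
    R = mul(k, G)
    r = R[0] % n
    s = pow(k, -1, n) * (z + r * priv) % n
    return r, s, R[1] & 1

def recover(z, r, s, v):
    """Recover the public key: Q = r^-1 * (s*R - z*G)."""
    y = pow((pow(r, 3, p) + 7) % p, (p + 1) // 4, p)  # sqrt works since p % 4 == 3
    if y & 1 != v:
        y = p - y
    zG = mul(z, G)
    minus_zG = (zG[0], p - zG[1])
    return mul(pow(r, -1, n), add(mul(s, (r, y)), minus_zG))
```

Since sR = (z + r·priv)·G, subtracting zG and scaling by r⁻¹ leaves exactly priv·G, i.e. the public key, which is why it never needs to travel with the transaction.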
Electromechanical Lunar Lander
New algorithm discovers language just by watching videos
It's a stretch to call this discovering language. It's learning the correlations between sounds, spoken words and visual features. That's a long way from learning language.
I don’t think that you are wrong, but what do you mean by language that can’t be covered by p(y|x)?
I think he's wrong with the same logic. If sound, visual and meaning aren't correlated, what can language be? This looks like an elegant insight by Mr Hamilton and very much in line with my personal prediction that we're going to start encroaching on a lot of human-like AI abilities when it is usual practice to feed them video data.
There is a chance that abstract concepts can't be figured out visually ... although that seems outrageously unlikely since all mathematical concepts we know of are communicated visually and audibly. If it gets more abstract than math that'll be a big shock.
Linguistic relativity: https://en.m.wikipedia.org/wiki/Linguistic_relativity :
> The idea of linguistic relativity, known also as the Whorf hypothesis, [the Sapir–Whorf hypothesis], or Whorfianism, is a principle suggesting that the structure of a language influences its speakers' worldview or cognition, and thus individuals' languages determine or influence their perceptions of the world.
Does language fail to describe the quantum regime with which we could have little intuition? Verbally and/or visually, sufficiently describe the outcome of a double-slit photonic experiment onto a fluid?
Describe the operator product of (qubit) wave probability distributions and also fluid boundary waves with words? Verbally or visually?
I'll try: "There is diffraction in the light off of it and it's wavy, like <metaphor> but also like a <metaphor>"
> If it gets more abstract than math
There is a symbolic mathematical description of a [double slit experiment onto a fluid], but then sample each point in a CFD simulation and we're back to a frequentist sampling (and not yet a sufficiently predictive description of a continuum of complex reals)
Even without quantum or fluids to challenge language as a sufficient abstraction, mathematical syntax is already known to be insufficient to describe all Church-Turing programs even.
Church-Turing-Deutsch extends Church-Turing to cover quantum logical computers: any qubit/qudit/qutrit/qnbit system is sufficient to simulate any other such system, but there is no claim of sufficiency for universal quantum simulation. Restricted to the operators defined in modern-day quantum logic, such devices are sufficient to simulate (or emulate) any other such devices; but it is observed that real quantum physical systems do not operate as closed systems with intentional reversibility like QC.
For example, there is a continuum of random in the quantum foam that is not predictable with and thus is not describeable by any Church-Turing-Deutsch program.
Gödel's incompleteness theorems: https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th... :
> Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
ASM (Assembly Language) is still not the lowest level representation of code before electrons that don't split 0.5/0.5 at a junction without diode(s) and error correction; translate ASM to mathematical syntax (LaTeX and ACM algorithmic publishing style) and see if there's added value
> Even without quantum or fluids to challenge language as a sufficient abstraction, mathematical syntax is already known to be insufficient to describe all Church-Turing programs even.
"When CAN'T Math Be Generalized? | The Limits of Analytic Continuation" by Morphocular https://www.youtube.com/watch?v=krtf-v19TJg
Analytic continuation > Applications: https://en.wikipedia.org/wiki/Analytic_continuation#Applicat... :
> In practice, this [complex analytic continuation of arbitrary ~wave functions] is often done by first establishing some functional equation on the small domain and then using this equation to extend the domain. Examples are the Riemann zeta function and the gamma function.
> The concept of a universal cover was first developed to define a natural domain for the analytic continuation of an analytic function. The idea of finding the maximal analytic continuation of a function in turn led to the development of the idea of Riemann surfaces.
> Analytic continuation is used in Riemannian manifolds, solutions of Einstein's [GR] equations. For example, the analytic continuation of Schwarzschild coordinates into Kruskal–Szekeres coordinates. [1]
But Schwarzschild's regular boundary does not appear to correlate with limited modern observations of such "Planck relics in the quantum foam", which could have [stable flow through braided convergencies in an attractor system and/or] superfluidic vortical dynamics in a superhydrodynamic theory. (Also note: Dirac sea (with no antimatter); Gödel's dust solutions; Fedi's unified SQS (superfluid quantum space): "Fluid quantum gravity and relativity", with Bernoulli, Navier-Stokes, and Gross-Pitaevskii to model vortical dynamics.)
Ostrowski–Hadamard gap theorem: https://en.wikipedia.org/wiki/Ostrowski%E2%80%93Hadamard_gap...
> For example, there is a continuum of random in the quantum foam that is not predictable with and thus is not describeable [sic]
From https://news.ycombinator.com/item?id=37712506 :
>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
> The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
If there cannot be a sufficient set of axioms for all mathematics, can there be a Unified field theory?
Unified field theory: https://en.wikipedia.org/wiki/Unified_field_theory
> translate ASM to mathematical syntax
On the utility of a syntax and typesetting, and whether it gains fidelity at lower levels of description
latexify_py looks neat; compared to sympy's often-unfortunately-reordered latex output: https://github.com/google/latexify_py/blob/main/docs/paramet...
"systemd-tmpfiles –purge" will delete /home in systemd 256
Why?
man systemd-tmpfiles https://www.freedesktop.org/software/systemd/man/latest/syst...
man tmpfiles.d https://www.freedesktop.org/software/systemd/man/latest/tmpf...
/home is specified as a path to create if it doesn't already exist.[1] Despite the name "tmpfiles", there is nothing necessarily temporary about /home being created.[2] Think of a system such as a mobile phone that has a read-only system partition always present: on first boot, /home is automatically created. /home is effectively considered "temporary" until the next "factory reset". The --purge command is deliberately meant to revert this temporarily-created state by destroying /home, effectively returning the system to just a read-only system partition.[3]
[1] https://github.com/systemd/systemd/blob/main/tmpfiles.d/home...
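For reference, the upstream tmpfiles.d entry that creates /home looks roughly like this (the `q` type creates the directory if absent and, on btrfs, as a subvolume with a quota group; exact contents may differ by systemd version):

```
# /usr/lib/tmpfiles.d/home.conf
q /home 0755 - - -
q /srv 0755 - - -
```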
Why doesn't it delete /mnt, and /run/media, too?
Mountpoints are explicitly hardcoded to never be unmountable or removable via the tmpfiles.d specification.[1] Possibly because such mountpoints are generally expected to be systemd .mount units, and as such unmounting and removal of paths would be tied in with systemd unit handling rather than systemd tmpfiles.d handling? I suppose one could still create a tmpfiles.d configuration file that handles files within a fixed mountpoint like /mnt/alwaysmounted/tmp/, as I couldn't see anything in tmpfiles.c[1] at a quick glance that would prevent this.
[1] https://github.com/systemd/systemd/blob/main/src/tmpfiles/tm...
[2] https://www.freedesktop.org/software/systemd/man/latest/syst...
Quantum entangled photons react to Earth's spin
> "The core of the matter lies in establishing a reference point for our measurement, where light remains unaffected by Earth's rotational effect. Given our inability to halt Earth from spinning, we devised a workaround: splitting the optical fiber into two equal-length coils and connecting them via an optical switch," explains lead author Raffaele Silvestri.
> By toggling the switch on and off, the researchers could effectively cancel the rotation signal at will, which also allowed them to extend the stability of their large apparatus. "We have basically tricked the light into thinking it's in a non-rotating universe," says Silvestri.
"Experimental observation of Earth's rotation with quantum entanglement" (2024) https://www.science.org/doi/10.1126/sciadv.ado0215
Sagnac interferometer: https://en.wikipedia.org/wiki/Sagnac_effect
So almost like an optical wheatstone bridge ?
Wheatstone bridge > Modifications of the basic bridge: https://en.wikipedia.org/wiki/Wheatstone_bridge#Modification... :
> Carey Foster bridge, for measuring small resistances; Kelvin bridge, for measuring small four-terminal resistances; Maxwell bridge, and Wien bridge for measuring reactive components; Anderson's bridge, for measuring the self-inductance of the circuit, an advanced form of Maxwell's bridge
Is there a rectifier for gravitational waves, and what would that do?
Diode bridge > Current flow: https://en.wikipedia.org/wiki/Diode_bridge#Current_flow
And actually again, electron flow is fluidic:
- "How does electricity find the "Path of Least Resistance"?" (and the other paths) https://www.youtube.com/watch?v=C3gnNpYK3lo
- "Observation of current whirlpools in graphene at room temperature" (2024) https://news.ycombinator.com/item?id=40360684 :
> How are spin currents and vorticity in electron vortices related?
But, back to photons from electrons; like in a wheatstone bridge.
Are photonic fields best described with rays, waves, or as fluids in gravitational vortices like in SPH and SQS and CFD? (Superhydrodynamic, Superfluid Quantum Space, Computational Fluid Dynamics)
Actually, photons do interact with photons; as phonons in matter: "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 https://news.ycombinator.com/item?id=40600762
Perhaps there's an even simpler sensor apparatus for this experiment?
Recycling plastic is a dangerous waste of time
From https://news.ycombinator.com/item?id=40335733 :
> "Making hydrogen from waste plastic could pay for itself" (2024) https://news.ycombinator.com/item?id=37886955 ; flash heating plastic yields Graphene and Hydrogen
> "Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763
"Fungus breaks down ocean plastic" (2024) https://news.ycombinator.com/item?id=40676239
What does "Zero Net Cost" mean and how does that relate to margin?
Voyager 1 is back online: NASA spacecraft returns data from all 4 instruments
I always joke that NASA should win the Nobel prize in engineering for their work on the Mars rovers, where the punchline is that there is no Nobel prize for engineering... I didn't say it was a good joke.
But the voyager missions... wow. NASA should totally win the nobel prize in engineering for them. What an accomplishment.
WARC-GPT: An open-source tool for exploring web archives using AI
Looking into this, could be very useful for working with personal Zotero archives of webpages
As previously seen on HN:
paper-qa: https://github.com/whitead/paper-qa :
> LLM Chain querying documents with citations [e.g. a scientific Zotero library]
> This is a minimal package for doing question and answering from PDFs or text files (which can be raw HTML). It strives to give very good answers, with no hallucinations, by grounding responses with in-text citations.
pip install paper-qa
> If you use Zotero to organize your personal bibliography, you can use the paperqa.contrib.ZoteroDB to query papers from your library, which relies on pyzotero. Install pyzotero to use this feature: pip install pyzotero
> If you want to use [ paperqa with pyzotero ] in an jupyter notebook or colab, you need to run the following command: import nest_asyncio
nest_asyncio.apply()
... there is also neuml/paperai w/ paperetl: https://news.ycombinator.com/item?id=39363115 ; gh topics: pdfgpt, chatpdf

neuml/paperai has a YAML report definition schema: https://github.com/neuml/paperai :
> Semantic search and workflows for medical/scientific paper
python -m paperai.report report.yml 50 md <path to model directory>
> The following [columns and answers] report [output] formats are supported: [Markdown, CSV], Annotation - Columns and answers are extracted from articles with the results annotated over the original PDF files. Requires passing in a path with the original PDF files.

The Implications of Ending Groundwater Overdraft for Global Food Security
"The Implications of Ending Groundwater Overdraft for Global Food Security" Nature Sustainability (2024) https://www.nature.com/articles/s41893-024-01376-w
"Study emphasizes trade-offs between arresting groundwater depletion and food security" (2024) by International Food Policy Research Institute https://phys.org/news/2024-06-emphasizes-offs-groundwater-de... :
> [...] a transdisciplinary approach combining regulatory, financial, technological, and awareness measures across water and food systems is essential to achieve sustainable groundwater management while preventing increased food insecurity.
Overdrafting: https://en.wikipedia.org/wiki/Overdrafting :
> Overdrafting is the process of extracting groundwater beyond the equilibrium yield of an aquifer. Groundwater is one of the largest sources of fresh water and is found underground. The primary cause of groundwater depletion is the excessive pumping of groundwater up from underground aquifers. Insufficient recharge can lead to depletion, reducing the usefulness of the aquifer for humans. Depletion can also have impacts on the environment around the aquifer, such as soil compression and land subsidence, local climatic change, soil chemistry changes, and other deterioration of the local environment.
From "How a Solar Revolution in Farming Is Depleting Groundwater" (2024) https://news.ycombinator.com/item?id=39731290 :
> - "Desalination system could produce freshwater that is cheaper than tap water" (2023) : "Extreme salt-resisting multistage solar distilation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39507692
ChromeOS will soon be developed on large portions of the Android stack
Android Binder is already rewritten in Rust.
"Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024) https://news.ycombinator.com/item?id=40680722
ChromiumOS, ChromeOS Flex,
https://news.ycombinator.com/item?id=36998964 :
> Students can't run `git --help`, `python`, or even `adb` on Chromebooks.
> (Students can't run `git` on a Chromebook without transpiling to WASM (on a different machine) and running all apps as the same user account without SELinux or `ls -Z` on the host).
> Students can run `bash`, `git`, and `python` on Linux / Mac / or Windows computers.
> There should be a "Computers for STEM" spec.
Docker containers can be run on Android with root with termux:
- https://gist.github.com/FreddieOliveira/efe850df7ff3951cb62d...
Podman and podman-desktop run rootless containers (without having root).
containers/container-selinux: https://github.com/containers/container-selinux/tree/main .spec: https://src.fedoraproject.org/rpms/container-selinux/blob/ra...
"The best WebAssembly runtime may be no runtime" https://news.ycombinator.com/item?id=38609105 : google/minijail, SyscallFilter=,
E.g. https://vscode.dev/ runs in a browser tab on a Chromebook and supports Pyodide and IIUC hopefully more native Python in WASM support soon?
Hopefully Bash, Git, and e.g. Python will work offline on the kids' Chromebooks someday.
Nvidia Warp: A Python framework for high performance GPU simulation and graphics
I love how many python to native/gpu code projects there are now. It's nice to see a lot of competition in the space. An alternative to this one could be Taichi Lang [0] it can use your gpu through Vulkan so you don't have to own Nvidia hardware. Numba [1] is another alternative that's very popular. I'm still waiting on a Python project that compiles to pure C (unlike Cython [2] which is hard to port) so you can write homebrew games or other embedded applications.
CuPy is also great – makes it trivial to port existing numerical code from NumPy/SciPy to CUDA, or to write code than can run either on CPU or on GPU.
I recently saw a 2-3 orders of magnitude speed-up of some physics code when I got a mid-range nVidia card and replaced a few NumPy and SciPy calls with CuPy.
Don’t forget JAX! It’s my preferred library for “i want to write numpy but want it to run on gpu/tpu with auto diff etc”
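The drop-in nature of CuPy mentioned above can be sketched; this runs on CPU with NumPy, and (assuming CuPy is installed with a CUDA GPU available) changing the import is, for array code like this, the only edit needed:

```python
import numpy as np  # with CuPy: `import cupy as np` runs the same code on the GPU

# Mean energy of one period of a sine wave, sampled at 1000 points;
# analytically, the mean of sin^2 over a full period is 0.5.
x = np.linspace(0.0, 1.0, 1000)
y = np.sin(2.0 * np.pi * x)
energy = float((y ** 2).mean())
```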
From https://news.ycombinator.com/item?id=37686351 :
>> sympy.utilities.lambdify.lambdify() https://github.com/sympy/sympy/blob/a76b02fcd3a8b7f79b3a88df... :
>> """Convert a SymPy expression into a function that allows for fast numeric evaluation""" [e.g. the CPython math module, mpmath, NumPy, SciPy, CuPy, JAX, TensorFlow, SymPy, numexpr,]
sympy#20516: "re-implementation of torch-lambdify" https://github.com/sympy/sympy/pull/20516
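A minimal sketch of the lambdify flow quoted above (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols("x")
expr = sp.sin(x) ** 2 + sp.cos(x) ** 2

# lambdify compiles the symbolic expression into a plain numeric function;
# the third argument picks the backend ("math" = CPython stdlib; "numpy",
# "mpmath", etc. also work, per the list quoted above).
f = sp.lambdify(x, expr, "math")
```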
Fungus breaks down ocean plastic
Microplastic accumulation in the body makes me wonder if natural biopolymers could have the same problem. We cannot break down cellulose; what happens to micro-cellulose in the body? Or lignin, which is even more refractory to decomposition?
Do plant microfibers accumulate in the body over time, like plastic or asbestos fibers? Do we end up loaded with this stuff in old age?
One of the most lethal professions of old was baker, because of all the flour dust inhaled.
The breakdown of lignin by fungi shows the lengths organisms have to go to decompose refractory organic materials. A whole suite of enzymes and associated compounds are released into the extracellular environment by these fungi, including hydrogen peroxide, some of which is decomposed to highly oxidizing hydroxyl radicals. This also shows why it should not be surprising fungi are capable of attacking plastics, at least to some extent: they are already blasting biopolymers with a barrage of non-specific highly reactive oxidants.
I wonder if there’s a type of fungus humans could eat that would break down unwanted fibers and microplastics?
I don't think I'd want my internals exposed to the stuff fungi have to excrete to break down the fibers. Fenton chemistry is used to clean lab glassware, I think.
https://en.wikipedia.org/wiki/Fenton%27s_reagent
Review article on fungal degradation of lignin:
Are microplastics fat soluble and/or bound with the cholesterols and sugar alcohols that cake the arteries?
- "Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming" https://www.science.org/doi/10.1126/scitranslmed.aad6100 https://www.popsci.com/compound-in-powdered-alcohol-can-also...
Cellulose and Lignin are dietary fiber:
- "Dietary cellulose induces anti-inflammatory immunity and transcriptional programs via maturation of the intestinal microbiota" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583510/ :
> Dietary cellulose is an insoluble fiber and consists exclusively of unbranched β-1,4-linked glucose monomers. It is the major component of plant cell walls and thus a prominent fiber in grains, vegetables and fruits. Whereas the importance of cellulolytic bacteria for ruminants was described already in the 1960s, it still remains enigmatic whether the fermentation of cellulose has physiological effects in monogastric mammals. [6–11] Under experimental conditions, it has been shown that the amount of dietary cellulose influences the richness of the colonic microbiota, the intestinal architecture, metabolic functions and susceptibility to colitis. [12,13] Moreover, mice fed a cellulose-enriched diet were protected from experimental autoimmune encephalomyelitis (EAE) through changes in their microbial and metabolic profiles and reduced numbers of pro-inflammatory T cells.
But what about fungi in the body and diet?
What about lignin; Is lignin dietary fiber?
From "Dietary fibre in foods: a review" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583510/ :
> Dietary fibre includes polysaccharides, oligosaccharides, lignin and associated plant substances.
From https://news.ycombinator.com/item?id=40649844 re: sustainable food packaging solutions :
> Cellulose and algae are considered safe for human consumption and are also biodegradable; but is that an RCT study?
> CO2 + Lignin is not edible but is biodegradable and could replace plastics.
>> "CO2 and Lignin-Based Sustainable Polymers with Closed-Loop Chemical Recycling" (2024) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202403035
> What incentives would incentivize the market to change over to sustainable biodegradable food-safe packaging?
Industry forms consortium to drive adoption of Rust in safety-critical systems
> The consortium aims to develop guidelines, tools, libraries, and language subsets to meet industrial and legal requirements for safety-critical systems.
> Moreover, the initiative seeks to incorporate lessons learned from years of development in the open source ecosystem to make Rust a valuable component of safety toolkits across various industries and severity levels
Resources and opportunities for a safety critical Rust initiative:
- "The First Rust-Written Network PHY Driver Set to Land in Linux 6.8" https://news.ycombinator.com/item?id=38677600
- awesome-safety-critical > Software safety standards: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
- rust smart pointers: https://news.ycombinator.com/item?id=33563857 ; LLVM signed pointers for pointer authentication: https://news.ycombinator.com/item?id=40307180
From https://news.ycombinator.com/item?id=33563857 :
> - Secure Rust Guidelines > Memory management, > Checklist > Memory management: https://anssi-fr.github.io/rust-guide/05_memory.html
Rust OS projects to safety critical with the forthcoming new guidelines: Redox, Cosmic, MotorOS, Maestro, Aerugo
- "MotorOS: a Rust-first operating system for x64 VMs" https://news.ycombinator.com/item?id=38907876: "Maestro: A Linux-compatible kernel in Rust" (2023) https://news.ycombinator.com/item?id=38852360#38857185 ; redox-os, cosmic-de , Motūrus OS; MotorOS
- https://news.ycombinator.com/item?id=38861799 : > COSMIC DE (Rust-based) supports rust-windowing/winit apps, which compile to a <canvas> tag in WASM.
> winit: https://github.com/rust-windowing/winit
- "Aerugo – RTOS for aerospace uses written in Rust" https://news.ycombinator.com/item?id=39245897
- "The Rust Implementation of GNU Coreutils Is Becoming Remarkably Robust" https://news.ycombinator.com/item?id=34743393
From a previous Ctrl-F for "rust"; "Rust in the Linux kernel" (2021) https://news.ycombinator.com/item?id=35783214 :
- > Is this the source for the rust port of the Android binder kernel module?: https://android.googlesource.com/platform/frameworks/native/...
> This guide with unsafe rust that calls into the C, and then with next gen much safer rust right next to it would be a helpful resource too.
From https://news.ycombinator.com/item?id=34744433 ... From "Are software engineering “best practices” just developer preferences?" https://news.ycombinator.com/item?id=28709239 :
>>>>> Which universities teach formal methods?
/?hnlog "TLA" and "side channel"
Land value tax in online games and virtual worlds (2022)
> Land Crisis [on the internet, too]
Land Value Tax: https://en.wikipedia.org/wiki/Land_value_tax
Virtual economies: https://en.wikipedia.org/wiki/Virtual_economies
Category:Virtual economies: https://en.wikipedia.org/wiki/Category:Virtual_economies
Gold farming > Secondary effects on in-game economy; inflation,: https://en.wikipedia.org/wiki/Gold_farming#Secondary_effects...
WSE: World Stock Exchange (2007-2008) https://en.wikipedia.org/wiki/World_Stock_Exchange
The opportunity costs of spending GPU hours on various alternatives.
Economy of Second Life > Monetary policy: https://en.wikipedia.org/wiki/Economy_of_Second_Life
Circle, Centre > History: https://en.wikipedia.org/wiki/Circle_(company)
Stablecoin > Reserve-backed stablecoins: https://en.wikipedia.org/wiki/Stablecoin
Economic stability: https://en.wikipedia.org/wiki/Economic_stability
Stabilization policy > Crisis stabilization: https://en.wikipedia.org/wiki/Stabilization_policy#Crisis_st...
Macroeconomics > Macroeconomic policy; a "policy wonk" may also be a "quant": https://en.wikipedia.org/wiki/Macroeconomics#Macroeconomic_p...
Clearing house (finance): https://en.wikipedia.org/wiki/Clearing_house_(finance) :
> Impact: A 2019 study in the Journal of Political Economy found that the establishment of the New York Stock Exchange (NYSE) clearinghouse in 1892 "substantially reduced volatility of NYSE returns caused by settlement risk and increased asset values", indicating "that a clearinghouse can improve market stability and value through a reduction in network contagion and counterparty risk."
From https://news.ycombinator.com/item?id=40649612 :
> Interledger > Peering, Clearing and Settling: https://interledger.org/developers/rfcs/peering-clearing-set...
But the problem is this land bubble, and then unfortunately unrealized losses; and how does that happen?
ADB tips and tricks: Commands that every power user should know about (2023)
ADB: Android Debug Bridge: https://en.wikipedia.org/wiki/Android_Debug_Bridge
- Scrcpy does remote GUI over ADB: https://en.wikipedia.org/wiki/Scrcpy
New theory links quantum geometry to electron-phonon coupling
Also referenced in a comment the other day on "Physicists Puzzle over Emergence of Electron Aggregates" https://news.ycombinator.com/item?id=40524238#40547258
21.2× faster than llama.cpp? plus 40% memory usage reduction
Project GitHub link: https://github.com/SJTU-IPADS/PowerInfer
The video/litepaper they linked is also a great read: https://powerinfer.ai/v2/
Truly mind-blowing to see Mixtral 47B running on a smartphone.
> TL;DR PowerInfer is a CPU/GPU LLM inference engine leveraging activation locality for your device.
So is it running on the phone or not?
From the abstract https://arxiv.org/abs/2406.06282 :
> Notably, PowerInfer-2 is the first system to serve the TurboSparse-Mixtral-47B model with a generation rate of 11.68 tokens per second on a smartphone. For models that fit entirely within the memory, PowerInfer-2 can achieve approximately a 40% reduction in memory usage while maintaining inference speeds comparable to llama.cpp and MLC-LLM
What does this do with a 40 TOPS+ NPU/TPU?
Python wheel filenames have no canonical form
There’s also no package caching across venvs(?). Every time I have to install something AI it’s the same 2GB torch + few hundred megabytes libs + cuda update of downloads again and again. On top of that, python projects love to start downloading large resources without asking, and if you ctrl-c it, the setup process is usually non-restartable. You’re just left with a broken folder that doesn’t run anymore. Feels like everything about python’s “install” is an afterthought at best.
It would be nice if all packages (and each version) were cached somewhere global and then just referred to by a local environment, to deduplicate. I think Maven does this.
The downloaded packages themselves are shared in pip's cache. Once installed though they cannot be shared, for a number of reasons:
- Compiled python code is (can be) generated and stored alongside the installed source files, sharing that installed package between envs with different python versions would lead to weird bugs
- Installed packages are free to modify their own files (especially when they embed data files)
- Virtualenv are essentially non relocatable and installed package manifests use absolute paths
Really that shouldn't be a problem though. Downloaded packages are cached by pip, as well as locally built wheels of source packages, meaning once you've installed a bunch of packages already, installing new ones in new environments is a mere file extraction.
The real issue IMHO causing install slowness is that package maintainers are not in the habit of being mindful of the size of their packages. If you install multiple data science packages, for instance, you can quickly get to multi-GB virtualenvs, for the sole reason that package maintainers didn't bother to remove (or separate as "extras") their test datasets, language translation files, static assets, etc.
They can be, pip developers just have to care about this. Nothing you described precludes file-based deduplication, which is what pnpm does for JS projects: it stores all library files in a global content-addressable directory, and creates {sym,hard,ref}links depending on what your filesystem supports.
Being able to mutate files requires reflinks, but they're supported by e.g. XFS; you don't have to go to CoW filesystems that have their own disadvantages.
You can do something like that manually for virtualenvs too, by running rmlint on all of them in one go. It will pick an "original" for each duplicate file, and will replace duplicates with reflinks to it (or other types of links if needed). The downside is obvious: it has to be repeated as files change, but I've saved a lot of space this way.
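The pnpm-style approach described above can be sketched in a few lines: hash every file across a set of virtualenvs into a content-addressable store, and replace duplicates with hardlinks. This is a toy illustration, not pnpm's actual implementation; the `dedupe` function, the `~/.venv-store` layout, and all paths are hypothetical, and hardlinks only work within one filesystem:

```python
# Sketch of file-level deduplication across virtualenvs, in the spirit
# of pnpm's content-addressable store: identical files are replaced
# with hardlinks to a single canonical copy.
import hashlib
import os
from pathlib import Path

def dedupe(envs, store="~/.venv-store"):
    """Hardlink duplicate files in `envs` to a shared store; return bytes saved."""
    store = Path(store).expanduser()
    store.mkdir(parents=True, exist_ok=True)
    saved = 0
    for env in envs:
        for path in Path(env).rglob("*"):
            if not path.is_file() or path.is_symlink():
                continue
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            canonical = store / digest
            if canonical.exists():
                if not path.samefile(canonical):
                    saved += path.stat().st_size
                    path.unlink()
                    os.link(canonical, path)  # point at the stored copy
            else:
                os.link(path, canonical)  # first occurrence becomes canonical
    return saved
```

Like rmlint, this would have to be re-run as files change, and it breaks the "packages may mutate their own files" expectation unless reflinks are used instead of hardlinks.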
Or just use a filesystem that supports deduplication natively like btrfs/zfs.
This is unreasonably dismissive: the `pip` authors care immensely about maintaining a Python package installer that successfully installs billions of distributions across disparate OSes and architectures each day. Adopting deduplication techniques that only work on some platforms, some of the time means a more complicated codebase and harder-to-reproduce user-surfaced bugs.
It can be worth it, but it's not a matter of "care": it's a matter of bandwidth and relative priorities.
"Having other priorities" uses different words to say exactly the same thing. I'm guessing you did not look at pnpm. It works on all major operating systems; deduplication works everywhere too, which shows that it can be solved if needed. As far as I know, it has been developed by one guy in Ukraine.
Send a PR!
Are there package name and version disclosure considerations when sharing packages between envs with hardlinks and does that matter for this application?
Practically, caching ~/.cache/pip should save resources. From "What to do about GPU packages on PyPI?" https://news.ycombinator.com/item?id=27228963 :
> "[Discussions on Python.org] [Packaging] Draft PEP: PyPI cost solutions: CI, mirrors, containers, and caching to scale" [...]
> How to persist ~/.cache/pip between builds with e.g. Docker (BuildKit cache mounts) in order to minimize unnecessary GPU package re-downloads:

  # as root:
  RUN --mount=type=cache,target=/root/.cache/pip \
      pip install -r requirements.txt

  # as a non-root user:
  RUN --mount=type=cache,target=/home/appuser/.cache/pip \
      pip install -r requirements.txt
Researchers make a supercapacitor from water, cement, and carbon black
I'm happy this invention is still in the news cycle after the initial announcement 10 months ago...
[0] https://news.ycombinator.com/item?id=36958531
[1] https://news.ycombinator.com/item?id=36993411
[2] https://news.ycombinator.com/item?id=36951089
While I'm fairly dubious about the proposed dual-purpose structural implementation of this material, if this works at scale it would be a boon for low-cost DIY local energy storage in the developing world and remote areas in other places.
The idea that someone with minimal education/training can construct a durable electrical storage solution using commonly available materials and techniques is an absolute game changer!!
- "Low-cost additive turns concrete slabs into super-fast energy storage" (2023) https://news.ycombinator.com/item?id=36964735
Google shuts down GPay app and P2P payments in the US
I am exhausted switching services. Please can we pick one and be done with it?
How about adversarial integration? Why can't I send money from cashapp to venmo? Is the technology not there yet?
I wish the post office was a bank and they hosted their own pay apps.
FedNow is being implemented now across the industry. It is the "SEPA" for the USA. It will replace the 3rd-party hacks (covering up the slow and tedious nature of ACH) that currently exist via Venmo, Cashapp, Zelle[0], etc.
[0]: Zelle is run by a consortium of FIs. I don't know for certain what will happen to it, but I suspect that if it stays around in name it would switch to the FedNow rail behind the scenes.
/? fednow API: https://www.google.com/search?q=fednow+api [...]
- From https://explore.fednow.org/resources?id=67#_Who_should_downl... :
> Who should download the FedNow ISO 20022 message specifications and how can I access them?
> The Federal Reserve is using the MyStandards® platform to provide access to the FedNow ISO 20022 message specifications and accompanying implementation guide. You can access these on the Federal Reserve Financial Services portal (Off-site) under the FedNow Service. Users need a MyStandards account, which you can create on the SWIFT website (Off-site). View our step-by-step guide (PDF, Off-site) for tips on accessing the specifications.
FedNow charges a $0.045/tx fee. (edit: And a waived $25/mo fee [...] https://explore.fednow.org/explore-the-city?id=3&building=&p... )
From "X starts experimenting with a $1 per year fee for new users" https://news.ycombinator.com/item?id=37933366 ... From "Anatomy of an ACH transaction" (2023) https://news.ycombinator.com/item?id=36503888 :
> The WebMonetization spec [4] and docs [5] specifies the `monetization` <meta> tag for indicating where supporting browsers can send payments and micropayments:
  <meta name="monetization" content="$wallet.example.com/alice">
And from "Paying Netflix $0.53/H, etc." https://news.ycombinator.com/item?id=38685348 :> W3C Web Monetization does micropayments with a <$0.01/tx fee, but not KYC or AML.
> [Web Monetization] is an open spec: https://webmonetization.org/specification/
Built on W3C Interledger: https://interledger.org/interledger :
> Built on modern technologies, Interledger can handle up to 1 million transactions per second per participant
> ILP is not tied to a single company, payment network, or currency.
Interledger > Peering, Clearing and Settling: https://interledger.org/developers/rfcs/peering-clearing-set...
Can FedNow handle micropayment volumes? Micropayment: https://en.wikipedia.org/wiki/Micropayment
Does FedNow integrate with Interledger; i.e. is there support for "ILP Addresses"?
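The micropayments question above comes down to arithmetic: a flat per-transaction fee is negligible on ordinary payments but dominates sub-cent payments. A back-of-the-envelope check, using only the $0.045/tx figure quoted earlier (the `fee_share` helper is hypothetical):

```python
# Back-of-the-envelope: what fraction of a payment does a flat
# per-transaction fee consume?
FEDNOW_FEE = 0.045  # USD per credit transfer, per the fee quoted above

def fee_share(payment_usd, fee_usd=FEDNOW_FEE):
    """Fraction of the payment consumed by the flat fee."""
    return fee_usd / payment_usd

print(fee_share(100.00))  # tiny fraction of a $100 payment
print(fee_share(0.01))    # several times the value of a one-cent payment
```

A one-cent payment would cost 450% of its own value in fees, which is why micropayment rails like Web Monetization target sub-cent fees rather than flat per-transaction pricing.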
Some of these sentences seem sort of related. You're just linking to Google searches and providing random snippets from linked sites. Overall this comment and many of the comments in your history look like spam.
Compared to aigen with citations found for whatever was generated without consideration for truth and ethical reasoning?
The heuristics should be improved, then; we shouldn't have to put these into PDFs, which lack typed edges to URLs and schema.org/Datasets, to support a schema.org/Claim or a ClaimReview in a SocialMediaPosting about a NewsArticle about a ScholarlyArticle :about a Thing skos:Concept with a :url.
Would you have deep linked your way to those resources (which are useful enough to be re-cited here for later reference) without the researcher doing it for you?
Technical writing courses for example advise to include the searches that you used in the databases that you found oracles in, in order to help indicate the background research bias prior to the hypothesis at least.
Transparency and Accountability: https://westurner.github.io/hnlog/
So, this sort of cited input is valuable and should be valued not as link spam but as linked, edge-dense research sans generatable prose.
> Technical writing courses for example advise to include the searches that you used in the databases that you found oracles in, in order to help indicate the background research bias prior to the hypothesis at least.
I'm not writing a doctoral thesis. These comments are not my livelihood. I'm just sharing industry knowledge on a topic I know about.
Your random links without context, some of the links to literal Google searches, are not in any way useful or interesting.
It's the style of comments you get on a lot of conspiracy sites. I see comment styles like that on a lot of articles linked from places like rense.com.
No, these are all hand prepared.
This is called research, and it does require citations.
Please do not harass overachievers who do the research work instead of just talking to talk.
#areyouarobotorsimilar
I certainly didn't think the comments weren't human-generated. I was actually defending you. I just think your style of commenting comes across as unusual here, which is why it attracts attention. On other parts of the Internet the style you use is more common and would not raise eyebrows.
FDA denies petition against use of phthalates in food packaging
I was surprised to learn that some of the highest concentrations of phthalates can be found in alcoholic beverages in glass bottles (Table 3: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7460375/ )
This occurs due to the plastics used in producing the alcoholic beverages, combined with ethanol (alcohol) acting as a solvent that liberates the phthalates.
I was equally surprised to learn that the food industry was already moving toward using fewer phthalate plasticizers. More recent sampling efforts have actually found fewer, or sometimes no, phthalates at all in tubing, whereas decades ago they were ubiquitous.
Which plastics could and should be replaced with other materials?
"How Do You Know If Buckets Are Food Grade" https://epackagesupply.com/blog/how-do-you-know-if-buckets-a... :
> Buckets made of HDPE (number 2) are generally considered the best material for food storage, especially over the long term. A vast majority of plastic buckets that are sold for food storage purposes will be made of HDPE. It’s important to note that not all HDPE buckets are food grade; to be sure, you’ll want to look for the cup and fork logo (described below) or other indication of “food safe” or “food grade” materials.
> Cup and Fork: Elsewhere on the bottom of the bucket, some food-grade buckets will have a symbol consisting of a cup and fork on them. There may also be markings like “USDA approved” or “FDA approved.”
A bunch of comments here are roughly "there's not great evidence that these are harmful" -- but is that the right standard? Or should there be an obligation to provide evidence that something is "safe" in order to use it in some consumer applications (like food packaging)? It's tempting to say that we should conclusively establish that something isn't harmful before we use it absolutely everywhere ... but what standard of evidence would be both achievable and ethical?
Because generally people aren't getting acutely ill after eating food from a plastic package, we're left with the possibility that accumulative impacts over years might be harmful -- but it doesn't seem feasible to run long term studies where a treatment group is exposed to plastics for decades and a control group is not. It hardly seems achievable to do correlation studies, because you often don't know what's been in the packaging for all the food you've consumed, which may not even be in your control.
Your second paragraph is a very good argument against requiring that everything used in food packaging be shown to be safe before it can be used.
Can we reasonably run such a study to prove that wax paper is safe? What about plain paper? Do we just require all food producers to use no packaging at all until these controlled longitudinal studies are completed? If we allow them to use packaging, how do we define in a principled way what packaging is allowed while there are still unknowns?
One possibility would be to say that if something is currently widely used in some significant (think 5%) portion of the industry then we allow it, but that has two problems: First, it wouldn't exclude phthalates anyway, so it doesn't address the current concern. Second, it might exclude future packaging materials that we think might be safer than our current materials but which have yet to be tested.
Cellulose and algae are considered safe for human consumption and are also biodegradable; but is that an RCT study?
CO2 + Lignin is not edible but is biodegradable and could replace plastics. "CO2 and Lignin-Based Sustainable Polymers with Closed-Loop Chemical Recycling" https://onlinelibrary.wiley.com/doi/10.1002/adfm.202403035
What incentives would incentivize the market to change over to sustainable biodegradable food-safe packaging?
Possible exposure of Earth to dense interstellar medium 2-3M years ago
Are there fossil records that correspond to higher levels of dense interstellar medium radiation 2-3M years ago?
Punctuated equilibrium: https://en.wikipedia.org/wiki/Punctuated_equilibrium
Punctuated equilibrium is not about higher mutation rates or even faster evolution, but about long periods of _stasis_ interrupted by "normal rate" evolution due to an upset in the equilibrium such as a sudden geographic isolation of two populations.
Was there a higher rate of cladogenesis 2-3M years ago?
Higher levels of radiation do correlate with higher rates of genetic mutation, presumably because radiation damages DNA and so affects transcription.
Does the punctuated equilibrium theory model radiation level in estimating rate of evolution?
Rate of evolution: https://en.wikipedia.org/wiki/Rate_of_evolution
Show HN: Thread – AI-powered Jupyter Notebook built using React
Hey HN, we're building Thread (https://thread.dev/), an open-source Jupyter Notebook that has a bunch of AI features built in. The easiest way to think of Thread is as if the chat interface of OpenAI's code interpreter were fused into a Jupyter Notebook development environment where you could still edit code or re-run cells. To check it out, you can see a video demo here: https://www.youtube.com/watch?v=Jq1_eoO6w-c
We initially got the idea when building Vizly (https://vizly.fyi/), a tool that lets non-technical users ask questions of their data. While Vizly is powerful at performing data transformations, as engineers we often felt that natural language didn't give us enough freedom to edit the code that was generated or to explore the data further for ourselves. That is what gave us the inspiration to start Thread.
We made Thread a pip package (`pip install thread-dev`) because we wanted to make Thread as easily accessible as possible. While there are a lot of notebooks that improve on the notebook development experience, they are often cloud hosted tools that are hard to access as an individual contributor unless your company has signed an enterprise agreement.
With Thread, we are hoping to bring the power of LLMs to the local notebook development environment while blending the editing experience that you can get in a cloud hosted notebook. We have many ideas on the roadmap but instead of building in a vacuum (which we have made the mistake of before) our hope was to get some initial feedback to see if others are as interested in a tool like this as we are.
Would love to hear your feedback and see what you think!
This seems cool! Is there a way to try it locally with an open LLM? If you provide a way to set the OpenAI server URL and other parameters, that would be enough. Is the API_URL server documented, so that a local mock can be created?
Thanks.
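For context, most local LLM servers (e.g. LocalAI or llama.cpp's server) expose an OpenAI-compatible endpoint, so swapping the base URL is usually all that is needed. A stdlib-only sketch of building such a request; the port, model name, and token here are placeholders, not Thread's actual API:

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer local-placeholder"},
    )

# Point at a hypothetical local server instead of api.openai.com:
req = chat_request("http://localhost:8080", "llama3", "Say hello")
```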
Definitely, it is something we are super focused on, as it seems to be a use case that is important for folks. Opening up the proxy server and adding local LLM support is my main focus for today; I will hopefully update this comment when it is done :)
From https://news.ycombinator.com/item?id=39363115 :
> https://news.ycombinator.com/item?id=38355385 : LocalAI, braintrust-proxy; [and promptfoo, chainforge]
(Edit)
From "Show HN: IPython-GPT, a Jupyter/IPython Interface to Chat GPT" https://news.ycombinator.com/item?id=35580959#35584069
I built an ROV to solve missing person cases
/? underwater infrared camera: https://www.google.com/search?q=underwater+infrared+camera
r/rov: https://www.reddit.com/r/rov/
Bioradiolocation: https://en.wikipedia.org/wiki/Bioradiolocation
FMCW: https://en.wikipedia.org/wiki/Continuous-wave_radar
mmWave (60 GHz) can do heartbeat detection above water, FWIU. As can WiFi.
mmwave (millimeter wave), UWA (Underwater Acoustic)
Citations of "Analysis and estimation of the underwater acoustic millimeter-wave communication channel" (2016) https://scholar.google.com/scholar?cites=8297460493079369585...
Citations of "Wi-Fi signal analysis for heartbeat and metal detection: a comparative study of reliable contactless systems" https://scholar.google.com/scholar?cites=3926358377223165726...
/? does WiFi work underwater? https://www.google.com/search?q=does+wifi+work+underwater
https://scholar.google.com/scholar?cites=1308760257416493671... ... "Environment-independent textile fiber identification using Wi-Fi channel state information", "Measurement of construction materials properties using Wi-Fi and convolutional neural networks"
"Underwater target detection by measuring water-surface vibration with millimeter-wave radar" https://scholar.google.com/scholar?cites=1710768155624387794... :
> UWSN (Underwater Sensor Network)
I'm reminded of Baywatch S09E01; but those aren't actual trained lifeguards. The film Armageddon works as a training film because of all of the [safety] mistakes: https://www.google.com/search?q=baywatch+s09e01
Can you pump water without any electricity?
Observed that the Second Law of Thermodynamics holds with water pumps, too
A third way: evaporation due to relative humidity
A fourth way to raise water without electricity: solar concentration to produce steam in a loop
"Using solar energy to generate heat at 1050°C high temperatures" https://news.ycombinator.com/item?id=40419617
> A fourth way to raise water without electricity: solar concentration to produce steam in a loop
Nice. Perhaps some sort of system leveraging this application could be used to charge batteries on a local, distributed grid? Or simultaneously charge your electric car and heat/cool a home on a sunny day.
Is solar concentration more or less efficient than loud electric pumps at raising water for a water tower, which stores gravitational potential energy and pressurizes the tubes?
Could heat a water tank full of gravel or sand.
"Engineers develop 90-95% efficient electricity storage with piles of gravel" https://news.ycombinator.com/item?id=39927280
How efficiently does a solar concentrator fill a tank of water at a higher altitude?
(How) Does relative humidity at the top and bottom of the tube significantly affect fill/lift rate?
Is PV to batteries to pumps more or less efficient than solar concentrated steam?
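For comparing these methods, the target quantity is the same: the gravitational potential energy of the lifted water, E = mgh. A small sketch with textbook physics; the efficiencies are assumed placeholders, not measured values:

```python
def lift_energy_joules(volume_m3: float, height_m: float,
                       density: float = 1000.0, g: float = 9.81) -> float:
    """Potential energy E = m*g*h of water raised into a tower."""
    return volume_m3 * density * g * height_m

def input_energy_joules(e_out: float, efficiency: float) -> float:
    """Energy the lifting system must consume at a given efficiency."""
    return e_out / efficiency

e = lift_energy_joules(10.0, 30.0)       # 10 m^3 raised 30 m: ~2.94 MJ
electric = input_energy_joules(e, 0.75)  # assumed 75%-efficient electric pump
steam = input_energy_joules(e, 0.15)     # assumed 15%-efficient solar-steam loop
```

Whichever path (PV + pump, concentrated steam, MHD) delivers E with the least input energy wins; the stored energy itself is identical.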
A fifth way to raise water: n-body gravity; wait for the moon and the tides
A sixth way to raise water through a pipe possibly at an angle possibly into a water tank: MHD: Magnetohydrodynamic drive.
- "Designing a Futuristic Magnetic Turbine (MHD drive)" @PlasmaChannel https://youtube.com/watch?v=WgAIPOSc4TA&"
- "A Compact Electrodynamic Pump using Copper and TPU" https://news.ycombinator.com/item?id=40635173
- Is MHD more or less efficient than the other methods?
(Edit) It wouldn't matter whether the pipe to lift water through is at an angle; is my intuition bad on this?
Shallower orbital trajectories are preferred over ones perpendicular to the gravitational field of the greatest local mass because: is it just Max Q, or also total energy?
But the downward force of a column of water in a tube at an angle is no greater than that of one pointing straight up, is it?
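Hydrostatics supports that intuition: the pressure at the bottom of a water column depends only on the vertical height, p = ρgh, not on the pipe's length or angle. A quick check (textbook formula, not from the thread):

```python
import math

def bottom_pressure_pa(height_m: float, density: float = 1000.0,
                       g: float = 9.81) -> float:
    """Hydrostatic pressure depends only on vertical height h."""
    return density * g * height_m

# A 10 m pipe inclined at 30 degrees reaches the same vertical height
# as a vertical 5 m pipe, so the bottom pressure is the same:
angled = bottom_pressure_pa(10.0 * math.sin(math.radians(30)))
vertical = bottom_pressure_pa(5.0)
assert abs(angled - vertical) < 1e-6   # friction losses differ, statics don't
```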
- Herodotus > Life > Early Travels: https://en.wikipedia.org/wiki/Herodotus#Early_travels ; https://www.secretofthepyramids.com/herodotus:
> For this, they said, the ten years were spent, and for the underground chambers on the hill upon which the pyramids stand, which he caused to be made as sepulchral chambers for himself in an island, having conducted thither a channel from the Nile
- From https://news.ycombinator.com/item?id=39246398:
> According to the video series on "How the pyramids were built" at https://thepump.org/ by The Pharaoh's Pump Society, the later pyramids had a pool of water at the topmost layer of construction such that they could place and set water-tight blocks of carved stone using a crane barge that everybody walked to the side of to lift.
- "Egypt's pyramids may have been built on a long-lost branch of the Nile" (2024) https://news.ycombinator.com/item?id=40410572 :
- Edward Kunkel - "The Pharaoh's Pump" (1967,1977) https://www.google.com/books/edition/_/Vq04nQEACAAJ?hl=en https://search.worldcat.org/formats-editions/4049868
- Stephen Myers: https://www.amazon.com/stores/Steven-Myers/author/B003GNIQ4K : "How the Great Pyramid Operated as a Water Pump" https://www.youtube.com/playlist?list=PLPD2zVdMCA-gPa6RqoEHX...
- Chris Massey: https://www.amazon.com/stores/Chris-Massey/author/B00DWXY3FU : "How were the pyramids of egypt really built - Part 1" https://www.youtube.com/watch?v=TJcp13hAO3U&t=401s
- Lingams, though, produce electricity; and there may be electrode marks on some ancient prehistoric possibly geopolymer masonry stones. AI suggested that low level electricity in geopolymer would more quickly remove water from forming "blocks".
- Praveen Mohan: https://www.youtube.com/channel/UCe3OmUXohXrXnNZSRl5Z9kA
- /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
- Was the Osireion a hydraulic lift facility without electricity? FWIU the Osireion is a floating megalithic stone-block island over a different water source, which flows when pressure is applied to it? [citation: one of the following yt videos IIRC]
/?youtube osireon https://www.youtube.com/results?search_query=osireion :
- "Secrets of the Osirion | Who Built Egypt's Biggest Megalithic Temple? | Megalithomania" https://www.youtube.com/watch?v=sx-zRw8HEAo&t=271s
> [...] it could be much older with its huge megalithic, perfectly cut blocks resembling those in the 4th Dynasty Valley Temple at Giza. Strabo, who visited the Osirion in the first century BC, said that it was constructed by Ismandes, or Mandes (Amenemhet III), the same builder as the Labyrinth at Hawara. John Anthony West says there were huge floods that occurred in the distant past and the river silt is evident at the site, so could easily be dated. The Osirion at Abydos in Egypt could have had therapeutic effects, according to Dorothy Eady (Omm Sety) who says she got healed by its waters, as did many others. It also has beautiful geometric 'flower of life' carvings on one of the uprights which have yet to be explained, and nubs, polygonal masonry and intricate, subtle striations and smoothing of the stones, which would not look out of place in ancient Peru. Whether it was 'discovered' by Seti, or whether he built it during his reign is discussed in this video.
- "Pre-Egyptian Technology Left By an Advanced Civilization That Disappeared" https://www.youtube.com/watch?v=K3uiMsqptOs
- "The Megalithic Osirion of Egypt: Live Walkthrough and New Observations!" (2024) https://www.youtube.com/watch?v=-TtsKKYLxPM
- Osireion: https://en.wikipedia.org/wiki/Osireion#Purpose
- Interglacial > Specific interglacials: https://en.wikipedia.org/wiki/Interglacial#Specific_intergla...
- Lake Agassiz > Formation of beaches: https://en.wikipedia.org/wiki/Lake_Agassiz#Formation_of_beac...
A Compact Electrodynamic Pump using Copper and TPU
"Fiber pumps for wearable fluidic systems" (2024) https://www.science.org/doi/10.1126/science.ade8654
Electromagnetic pump: https://en.wikipedia.org/wiki/Electromagnetic_pump
MHD: Magnetohydrodynamic drive: https://en.wikipedia.org/wiki/Magnetohydrodynamic_drive
New "X-Ray Vision" Technique Sees Inside Crystals
Wouldn't this be useful if it were possible to match lattice defects to wave operators and wave functions for QC?
- "Single atom defect in 2D material can hold quantum information at room temp" (2024) https://news.ycombinator.com/item?id=40461325 https://news.ycombinator.com/item?id=40478219
- "GAN semiconductor defects could boost quantum technology" (2024) https://news.ycombinator.com/item?id=39365467
- "Reversible optical data storage below the diffraction limit" (2023) https://news.ycombinator.com/item?id=38528844
"Enabling three-dimensional real-space analysis of ionic colloidal crystallization” (2024) https://www.nature.com/articles/s41563-024-01917-w
Zero Tolerance for Bias
Python has random.shuffle() and random.sample(), backed by the Mersenne Twister (MT19937) PRNG in the random module: https://docs.python.org/3/library/random.html#random.shuffle ; Modules/_randommodule.c: https://github.com/python/cpython/blob/main/Modules/_randomm... ; Lib/random.py: https://github.com/python/cpython/blob/main/Lib/random.py#L3...
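A quick demonstration of the two functions (standard library only); seeding a Random instance makes the Mersenne Twister output reproducible:

```python
import random

# random.shuffle / random.sample draw from the Mersenne Twister (MT19937).
rng = random.Random(42)                 # seeded instance for reproducibility
deck = list(range(10))
rng.shuffle(deck)                       # in-place permutation of the list
hand = rng.sample(range(100), k=5)      # 5 distinct values, without replacement
print(deck, hand)
```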
From "Uniting the Linux random-number devices" (2022) https://news.ycombinator.com/item?id=30377944 :
> > In 2020, the Linux kernel version 5.6 /dev/random only blocks when the CPRNG hasn't initialized. Once initialized, /dev/random and /dev/urandom behave the same. [17]
From https://news.ycombinator.com/item?id=37712506 :
> "lock-free concurrency" [...] "Ask HN: Why don't PCs have better entropy sources?" [for generating txids/uuids] https://news.ycombinator.com/item?id=30877296
> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
google/paranoid_crypto.lib.randomness_tests: https://github.com/google/paranoid_crypto/tree/main/paranoid...
Google Mesop: Build web apps in Python
This tool fills a very narrow use case for demos of AI chat.
If you know Python and can run inference models on a colab then you can quickly whip up a demo UI with builtin components like chat with text and images. Arguably everyone is testing these apps so making them easy is cool for demo and prototyping.
For anything more than a demo you don't use this.
What other use cases is this tool insufficient for?
BentoML already wins at model hosting in Python IMHO.
What limits this to demos?
IIRC there are a few ways to do ~ipywidgets with react patterns in notebooks; and then it's necessary to host notebooks for users with no online kernel, one kernel for all users (not safe), or a container/VM per user (Voila, JupyterHub, BinderHub, jupyter-repo2docker); or you can build a WASM app and host it statically (repo2jupyterlite), so that users run their own code in their own browsers.
xoogler/ML researcher here. Everyone at Google used colab because the bazel build process is like 1-2 minutes and even just firing up a fully bazel-built script usually takes several seconds. Frontend is also near impossible to get started with if you aren't in a large established team. Colab is the best solution to a self-inflicted problem, and colab widgets are super popular internally for building hacky apps. This library makes perfect sense in this context...
How could their monorepo build system with distributed build component caching be improved? What is faster at that scale?
FWIU Blaze was rewritten as Bazel without the Omega scheduler integration?
gn wraps Ninja build to build Chromium and Fuchsia: https://gn.googlesource.com/gn
reminds me of GWT (google web toolkit)
And Wt and JWt, which also handle the server side in C++ and Java. https://en.wikipedia.org/wiki/Wt_(web_toolkit) :
> The only server-side framework implementing the strategy of progressive enhancement automatically
Is this still true?
Ask HN: Should Python web frameworks care about ASGI?
Hey Everyone
I am the author of a Python framework called Robyn. Robyn is one of the fastest Python web frameworks with a Rust runtime.
Robyn offers a variety of features designed to enhance your web development experience. However, one topic that has sparked mixed feelings within the community is Robyn's choice of not supporting ASGI. I'd love to hear your thoughts on this. Specifically, what specific features of ASGI do you miss in Robyn?
You can find Robyn's documentation here (https://robyn.tech/documentation). We're aiming for a v1.0 release soon, and your feedback will be invaluable in determining whether introducing ASGI support should be a priority.
Please avoid generic responses like "ASGI is a standard and should be supported." Instead, I request some tangible insights and evidence-based arguments to help me understand the tangible benefits ASGI could bring to Robyn or the lack of a specific ASGI feature that will hinder you from using Robyn.
Looking forward to your input!
Robyn - https://github.com/sparckles/robyn Docs - https://robyn.tech/documentation
If Robyn is not an instrumented SSL terminating load balancer with HTTP/3 support, there must still be an upstream HTTP server that could support ASGI.
ASGI (like WSGI) makes it possible to compose an application with reusable, tested "middleware" because there's an IInterface there.
awesome-asgi: https://github.com/florimondmanca/awesome-asgi
ASGI docs > Implementations: https://asgi.readthedocs.io/en/latest/implementations.html
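To make the middleware point concrete: an ASGI app is just an async callable of (scope, receive, send), so middleware is plain function wrapping. A minimal sketch, exercised with fake callables rather than a real server:

```python
import asyncio

# A minimal ASGI application: one async callable is the whole interface.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"hello"})

def add_server_header(inner):
    """ASGI middleware: patch the send callable, then delegate to the inner app."""
    async def wrapped(scope, receive, send):
        async def send_patched(message):
            if message["type"] == "http.response.start":
                message["headers"] = list(message["headers"]) + [
                    (b"x-served-by", b"demo")]
            await send(message)
        await inner(scope, receive, send_patched)
    return wrapped

wrapped_app = add_server_header(app)

# Drive the stack with fake receive/send callables; no HTTP server needed.
messages = []
async def fake_receive():
    return {"type": "http.request", "body": b""}
async def fake_send(message):
    messages.append(message)

asyncio.run(wrapped_app({"type": "http"}, fake_receive, fake_send))
```

Because every layer speaks the same interface, any ASGI middleware (auth, logging, CORS) composes with any ASGI app, which is the reuse argument for supporting the spec.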
> If Robyn is not an instrumented SSL terminating load balancer with HTTP/3 support, there must still be an upstream HTTP server that could support ASGI.
See, I'd like to believe that. However, there is no ASGI-compliant server that supports this properly (at least in the Python world).
And when the community demands/needs the feature, I see no reason why I wouldn't bake the above features into Robyn itself.
> reusable, tested "middleware" because there's an IInterface there.
Robyn has a drop-in replacement for these. So, these should not be a problem.
Having said that, I feel I could document these facts a bit more boldly to clear up the doubts.
Ventoy – Bootable USB Solution
> No need to update Ventoy when a new distro is released
> Most types of OS supported, 1200+ iso files tested ... 90%+ distros in distrowatch.com supported
Physicists Propose a Doppler Telegraph Wave-Based Theory of Heat Transport
"The generalized telegraph equation with moving harmonic source: Solvability using the integral decomposition technique and wave aspects" (2024) https://www.sciencedirect.com/science/article/pii/S001793102... :
> Highlights: [...] It is shown that the Cattaneo-Vernotte equation may be used to describe the wave properties of the heat transport.
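For reference, the Cattaneo-Vernotte (telegraph) heat equation adds a relaxation time τ to Fourier diffusion, which is what gives heat transport its wave character:

```latex
\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha \nabla^2 T,
\qquad c = \sqrt{\alpha / \tau}
```

Taking τ → 0 recovers the ordinary diffusive heat equation; a finite τ gives heat a finite propagation speed c, hence the Doppler-like wave effects studied in the paper.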
Study of photons in QC reveals photon collisions in matter create vortices
"Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315
Python notebooks for fundamentals of music processing
this one can also be helpful: https://allendowney.github.io/ThinkDSP/
Additional Open Source Music and Sound Production tools:
"Raspberry Pi for Dummies" has chapters on SonicPi and PyGame.
SonicPi is a live coding environment for music creation and instruction. Like ChucK IIUC
It looks like PyGame has pygame.mixer, which is built on SDL's mixer.
The Godot game engine supports Audio Effects: https://docs.godotengine.org/en/stable/tutorials/audio/audio...
BespokeSynth has a "script" module that supports Python for synthesizing notes and chords and also for transforming audio streams: https://github.com/BespokeSynth/BespokeSynth/blob/main/resou... ( https://github.com/BespokeSynth/BespokeSynth/blob/main/bespo... )
AllenDowney/ThinkDSP notebooks reference NumPy, freesound,: https://github.com/AllenDowney/ThinkDSP
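As a stdlib-only taste of the synthesis these tools build on, a 440 Hz sine tone written with Python's wave module (the output path is a temp-file placeholder):

```python
import math
import os
import struct
import tempfile
import wave

def write_tone(path, freq=440.0, seconds=1.0, rate=44100):
    """Write a mono 16-bit sine tone: the 'hello world' of audio synthesis."""
    n = int(rate * seconds)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate)))
        for i in range(n))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(rate)
        f.writeframes(frames)

path = os.path.join(tempfile.gettempdir(), "a440.wav")
write_tone(path)
```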
My most recommended method for beginners has always been PD (https://puredata.info/) combined with The Theory and Technique of Electronic Music: (https://msp.ucsd.edu/techniques/latest/book.pdf) and this book (https://mitpress.mit.edu/9780262014410/designing-sound/).
Eli's tutorials on SuperCollider are also very helpful: https://www.youtube.com/@elifieldsteel
Of course, you can have a look at my project Glicol that can also be helpful for people to get some intuition on live coding in browsers immediately: https://glicol.org/
"Show HN: Polymath: Convert any music-library into a sample-library with ML" https://news.ycombinator.com/item?id=34756949#34782253
Looks like there's a Hooktheory Classroom product, in addition to Hookpad and their music theory book which features the music of Avicii.
/r/musictheory/wiki suggests that "Open Music Theory" is the best text: https://www.reddit.com/r/musictheory/wiki/index/
The search keyword "Melodic"
Do Windows VST plugins work with Glicol on Linux with Yabridge, LinVst, Carla, or WineASIO?
Glicol is written in Rust and has its own non-Rust syntax?
Is there a syntax/grammar API doc?
> Show HN: Polymath: Convert any music-library into a sample-library with ML
I tried this the other day. There are several issues, regardless of using the pinned versions in requirements.txt and trying to use Debian bullseye and Python 3.10.10 as implied by the provided Dockerfile. At least these I noted:
- fix deprecated np.bool usage
- module 'scipy.signal' has no attribute 'hann' --> change to scipy.signal.windows.hann
- librosa.feature.melspectrogram(y, ...) --> pass y as a keyword argument: y=y
- needs CUDA 11
I got it to work without GPU acceleration because of the last point. If I come back to this I'll probably just look at using demucs, librosa, etc. directly (polymath.py is only ~600 lines gluing things together) and hopefully get it working with the latest nvidia drivers.
Also if I have days to let this compile maybe I could get CUDA 11 from this https://aur.archlinux.org/packages/cuda-11.7
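For reference, the fixes listed above map to these API changes (sketched; the SciPy and librosa lines are shown as comments so the snippet stays self-contained):

```python
import numpy as np

# Fix 1: np.bool was removed in NumPy 1.24; the builtin bool works as a dtype:
mask = np.zeros(4, dtype=bool)              # was: dtype=np.bool

# Fix 2: scipy.signal.hann moved; import from scipy.signal.windows instead:
#   from scipy.signal.windows import hann   # was: scipy.signal.hann

# Fix 3: librosa 0.10+ made melspectrogram's arguments keyword-only:
#   librosa.feature.melspectrogram(y=y, sr=sr)   # was: (y, sr)

print(mask.dtype)  # bool
```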
Installing conda-forge/tensorflow-gpu installs conda-forge/cudatoolkit-feedstock; https://conda-forge.org/blog/2021/11/03/tensorflow-gpu/ :
conda install -c conda-forge tensorflow-gpu  # pulls in cudatoolkit
mamba install tensorflow-gpu
pixi add tensorflow-gpu
FWICS the polymath README mentions the reference docker container, which should install all of the dependencies:
docker build -t polymath ./
docker run \
-v "$(pwd)"/processed:/polymath/processed \
-v "$(pwd)"/separated:/polymath/separated \
-v "$(pwd)"/library:/polymath/library \
-v "$(pwd)"/input:/polymath/input \
polymath python /polymath/polymath.py -a ./input/song1.wav
There's a music utility called "time travel" IIRC, that records all recent audio and you can then save a recent window of that.
(Edit)jellyjampreserve: https://github.com/betodealmeida/jellyjampreserve :
> JellyJamPreserve is a Raspberry Pi project that uses jack_capture to record audio on the toggle of a switch. It continuously records audio into a 5 minute circular buffer, and when the switch is turned on the buffer is dumped to a file and it starts recording audio to the file until the switch is turned off. This way you can preserve any cool improvisations by recording sounds from the past!
swh/timemachine; "JACK timemachine" https://github.com/swh/timemachine manpage: https://manpages.org/timemachine
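The rolling-buffer idea is easy to sketch with a bounded deque (illustrative only; jack_capture and timemachine operate on JACK audio blocks, not Python lists):

```python
from collections import deque

SAMPLE_RATE = 48_000
WINDOW_SECONDS = 5 * 60                    # 5-minute rolling window
ring = deque(maxlen=SAMPLE_RATE * WINDOW_SECONDS)

def on_audio_block(samples):
    """Feed incoming samples; once full, the oldest fall off the left."""
    ring.extend(samples)

def dump_window():
    """On switch toggle: snapshot everything currently buffered."""
    return list(ring)

on_audio_block([0.0] * SAMPLE_RATE)        # one second of audio
assert len(dump_window()) == SAMPLE_RATE
```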
Is there a better way to do this with Pipewire instead of Pipewire-jack?
Physicists coax molecules into exotic quantum state – ending decades-long quest
"Observation of Bose–Einstein condensation of dipolar molecules" (2024) https://www.nature.com/articles/s41586-024-07492-z :
> Abstract: [...] This work opens the door to the exploration of dipolar quantum matter in regimes that have been inaccessible so far, promising the creation of exotic dipolar droplets [14], self-organized crystal phases [15] and dipolar spin liquids in optical lattices [16].
From TA:
> by tuning the microwave fields to allow some interaction between molecules, the team expects to see the system separate into quantum droplets, a new phase of matter. By confining the condensate in two dimensions using lasers, the team also hopes to watch while the molecules arrange themselves, under a microscope, to form a kind of crystal. “That’s something that has never been possible,” says Will.
> The condensate molecules could also form the basis of a new kind of quantum computer, adds Will. Given that each molecule is in an identical, known state, they could be separated to form quantum bits, or qubits, the units of information in a quantum computer. The molecules’ quantum rotational states — which can be used to store information — can remain robust for perhaps minutes at a time, allowing for long and complex calculations.
Coherent for perhaps minutes?
An animated introduction to Fourier series
Seeing an epicycles animation years ago did wonders for my understanding of the complex representation of Fourier series, and I’ve been looking for that animation ever since to share with others getting into signals for the first time. This post far surpasses that page though! Excellent work, excited to share it with people in the future.
There's quite a few of those out there, e.g. https://youtu.be/k8FXF1KjzY0.
My favorite is https://youtu.be/n-43Uje-OuQ
"The more general uncertainty principle, regarding Fourier transforms" by 3blue1brown https://youtube.com/watch?v=MBnnXbOM5S4
From https://news.ycombinator.com/item?id=25190770#25194040 :
> Convolution is in fact multiplication in Fourier space (this is the convolution theorem [1]), which says that Fourier transforms convert convolutions to products. [1] https://en.wikipedia.org/wiki/Convolution_theorem :
>> In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
"QFT and iQFT; Inverted Quantum Fourier Transform" https://en.wikipedia.org/wiki/Quantum_Fourier_transform
MicroPython 1.23 Brings Custom USB Devices, OpenAMP, Much More
mcauser/awesome-micropython: https://github.com/mcauser/awesome-micropython
pfalcon/awesome-micropython: https://github.com/pfalcon/awesome-micropython
Physicists Puzzle over Emergence of Electron Aggregates
The behavior of electrons in a material depends on the material. Crystals and sheets of graphene are special because of their periodicity, i.e. the inability to distinguish one location in the lattice from another over long distances. The wavefunctions of electrons are, to first order, due to this periodic potential.
Crystals are also interesting mathematically because there is a finite number of possible lattices that can exist, and it isn't even that many. All the useful and interesting effects of doping semiconductors are the result of throwing some contaminants in to disrupt the perfect periodicity of the silicon crystal and thereby make junctions that can ultimately be used for computation, etc. "Holes" are sort of emergent quasiparticles with charge +1.
I think a key term in the article is "moire". When two regular patterns are overlaid and then twisted or offset, they interfere to produce a new emergent pattern, sometimes with radically different periodicity than the originals. The electrons must now inhabit this very artificial pattern of potentials that is neither a crystal nor a doped crystal and may not occur in nature at all. Plus there are probably effects caused by the material being nearly 2D.
Very exciting stuff, emergent complexity seems to be a major theme in physics so far this century.
"Catalog of topological phonon materials" (2024) https://www.science.org/doi/10.1126/science.adf8458
"Non-trivial quantum geometry and the strength of electron–phonon coupling" (2024) https://www.nature.com/articles/s41567-024-02486-0 :
> Here, we devise a theory that incorporates the quantum geometry of the electron bands into the electron–phonon coupling, demonstrating the crucial contributions of the Fubini–Study metric or its orbital selective version to the dimensionless electron–phonon coupling constant. We apply the theory to two materials, that is, graphene and MgB2, where the geometric contributions account for approximately 50% and 90% of the total electron–phonon coupling constant, respectively. The quantum geometric contributions in the two systems are further bounded from below by topological contributions
There is emergence in complex fluid attractor systems.
Superhydrodynamics models fluid attractor systems; for example with vorticity in "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/ .. https://news.ycombinator.com/item?id=38871054
Vorticity in Electron-Electron interactions: https://news.ycombinator.com/item?id=40360691
Is there emergence from just vorticity?
How to copy a file from a 30-year-old laptop
> I try a bunch of different OCR programs, but can't find any that can transcribe the document with 100% accuracy. They often confuse certain letters or numbers (like 0 and C, 9 and 4, 0 and D). Sometimes they omit characters, sometimes they introduce new ones. I try different font sizes and different fonts, but it doesn't matter.
I decided to OCR a hex dump from an old computer magazine a while back and fixed this problem by writing a tool to help verify the OCR result. Basically you input the OCR'd result and segment the numbers. It'll display the original segmented characters ordered by their class, and the human eye will very quickly find any chars that do not belong, e.g. 3s sorted under "8" etc.
https://blog.qiqitori.com/2023/03/ocring-hex-dumps-or-other-...
https://blog.qiqitori.com/2023/03/ai-day-in-retroland-prolog...
I wrote two blog posts about this, and the tools are also linked from the blog posts. Note: the tools are just slightly more user-friendly than sendmail.
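The grouping trick described above can be sketched in a few lines; `segments` and its labels here are hypothetical placeholders for whatever the segmentation step produces:

```python
from collections import defaultdict

def group_by_ocr_class(segments):
    """segments: iterable of (glyph_crop, ocr_label) pairs.

    Bucket glyph crops under the label OCR assigned them, so a human
    can scan each bucket and spot impostors at a glance
    (e.g. a '3' filed under '8')."""
    groups = defaultdict(list)
    for glyph, label in segments:
        groups[label].append(glyph)
    return dict(groups)

# e.g. group_by_ocr_class([(img1, "8"), (img2, "8"), (img3, "3")])
```

Rendering each bucket as a strip of images is then a display problem; the verification itself is just this sort.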
That said, I don't know if these old Apple laptops came with anything resembling a programming environment (or at least that ancient version of Microsoft Word?), but even if not... There must be a better way (even without hardware hacking)!
Programming environments were available of course, but none were built-in. The fact that the external floppy doesn't work is the main problem here.
I think a better solution would have been to use the serial port rather than the modem port to send a fax. Serial-to-USB adapters are easy to find.
>I think a better solution would have been to use the serial port rather than the modem port to send a fax.
From what I could Google, that laptop doesn't have a traditional RS-232 serial port but a DIN-8 RS-422 serial port. Still, it's serial, it's standard and very well documented, easy to acquire and easy to hook up.
I definitely would have used that instead to transfer data, especially since USB-to-RS-422 converters are easy and cheap to find online[1]. I'm not sure why the author didn't try this; maybe he did, but had no app to handle file transfers over serial?
Feels like a missing lead the article doesn't touch on.
[1] https://www.amazon.com/Serial-Converter-Adapter-Supports-Win...
I think if there was another "bridge" mac it would've been easier (setup LocalTalk over the serial port, could theoretically send the required TCP/IP libraries, to make it easier to go straight to a modern computer). I'd guess that the comment about "but no networking software installed." includes not having something like ZTerm installed already for direct serial communication, and also not having MacTCP or OpenTransport installed on the old mac (not to mention the missing hardware to set that up)
https://en.wikipedia.org/wiki/ZTerm https://lowendmac.com/1998/tcpip-over-localtalk/
I have a few old macs that I play with from time to time, and it's certainly possible to network them to modern computers, just becoming increasingly hard with the number of old software and hardware items you have to keep around to do it.
>I think if there was another "bridge" mac it would've been easier (setup LocalTalk over the serial port
But serial communication doesn't require another Mac to receive the data since it's just raw HEX/ASCII bytes over the standard serial protocol. You can pick up the data with an Arduino.
Nor does serial mandate any handshake or master-slave sync to work. You're free to shoot those bytes down the empty wire for all it cares; if someone picks them up, good, and if not, no harm done.
Sure, but IDK how you'd send them on a mac without the software to do it?
Is there even an environment you can program in? Can Applescript on classic macos do serial stuff in any form?
I wonder if you could convince the mac there was a postscript printer on the other end, then retrieve the text from the PS commands that were sent.
You could use LocalTalk and another Mac with Ethernet or a floppy drive. Mac file sharing is built in with System 7 and later.
https://apple.fandom.com/wiki/Serial_port links to "Macintosh Serial Throughput" (1998) https://lowendmac.com/1998/macintosh-serial-throughput/ which mentions a Farallon Mac serial to Ethernet adapter.
/? farallon Mac serial port Ethernet https://www.google.com/search?q=farallon+mac+serial+port+eth... : $50 and no FreePPP or Zmodem or AppleScript serial port to a [soldered] cable to an Arduino/RP2040W/ESP32 (which is probably faster than the old Mac).
https://unix.stackexchange.com/a/548343/44804 mentions minicom and GNU screen, and the need to implement e.g. file-level checksums yourself, because raw serial transfers (as in ROM flashing) have no built-in error detection.
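A minimal sketch of the kind of framing-plus-checksum layer a raw serial transfer needs; the frame layout here is invented for illustration, not any standard protocol:

```python
import binascii

def frame(payload: bytes, seq: int) -> bytes:
    """2-byte big-endian length, 1-byte sequence number,
    payload, then CRC-32 over everything before it."""
    header = len(payload).to_bytes(2, "big") + bytes([seq & 0xFF])
    crc = binascii.crc32(header + payload).to_bytes(4, "big")
    return header + payload + crc

def unframe(raw: bytes):
    """Parse a frame; raise on corruption so the receiver can NAK."""
    length = int.from_bytes(raw[:2], "big")
    seq, payload = raw[2], raw[3:3 + length]
    crc = int.from_bytes(raw[3 + length:7 + length], "big")
    if binascii.crc32(raw[:3 + length]) != crc:
        raise ValueError("CRC mismatch; request retransmit")
    return seq, payload
```

On the receiving side (Arduino or otherwise) the same layout lets you detect dropped or flipped bytes and ask for a resend, which raw serial won't do for you.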
Why There's a Copper Shortage
From https://news.ycombinator.com/item?id=40542826 re: the IEF report "Copper Mining and Vehicle Electrification" (2024) https://www.ief.org/focus/ief-reports/copper-mining-and-vehi... :
> Graphene is an alternative to copper
In graphene, there is electronic topology: current, semiconductivity, room temperature superconductivity
Copper Mining and Vehicle Electrification
Graphene is an alternative to copper for very many applications:
- "Researchers create first functional graphene semiconductor" https://news.ycombinator.com/item?id=38870695
- "Shattering the 'copper or optics' paradigm: humble plastic waveguides outperform" https://news.ycombinator.com/item?id=39493373 :
>> Point2’s E-Tube provides a low-cost, low-loss broadband dielectric waveguide solution that could serve as an advanced alternative to existing electrical and optical interconnects in high-speed, short-reach communication links. It’s 80% lighter and 50% less bulky than copper cables, and could reduce power consumption and the cost of optical cables by 50%, with picosecond latencies that are three orders of magnitude better
- /? Can graphene replace copper? https://www.google.com/search?q=Can+graphene+replace+copper
- Graphene can be made from trash. Flash heating waste plastic hydrocarbons yields Hydrogen and Graphene.
- Graphene can be made by lasering a fruit peel
- "Global Room-Temperature Superconductivity in Graphite" (2023) https://onlinelibrary.wiley.com/doi/10.1002/qute.202300230
- "Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691
FTA:
> Just to meet business-as-usual trends, 115% more copper must be mined in the next 30 years than has been mined historically until now. To electrify the global vehicle fleet requires bringing into production 55% more new mines than would otherwise be needed. On the other hand, hybrid electric vehicle manufacture would require negligible extra copper mining.
Presumably that's because the reference EV model(s) contain more copper than the reference hybrid model(s)?
Which automotive designers have yet designed with copper reduction as an optimization objective?
Why can't we build with graphene instead of copper today?
Are there comparable ~nanolithography tools and methods for building computer chips out of graphene?
Can power cables be made from graphene?
Researchers create dispersion-assisted photodetector to decipher high-dim light
From "Physicists use [Huygens' mechanical oscillation] 350-year-old theorem to reveal new properties of light waves" https://news.ycombinator.com/item?id=37226160 regarding "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.
Which contradicts the article; though, high-fidelity polarization measurement should then also be useful.
See also; "All-optical complex field imaging using diffractive processors" (2024) https://news.ycombinator.com/item?id=40541785
In an optical (photonic) system, polarization (like phase) is expressed as an angular quantity in radians or degrees.
Angular quantities "wrap around" such that `divmod(angle_theta, 2*pi) == n_rotations, angular displacement from zero`; and IIRC all phase spaces must too?
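In Python, `divmod` performs exactly that unwrap:

```python
import math

def unwrap(theta):
    """Split a cumulative angle into whole rotations plus a
    residual angular displacement in [0, 2*pi)."""
    n, residual = divmod(theta, 2 * math.pi)
    return int(n), residual

# unwrap(5 * math.pi) -> (2, pi): two full turns plus pi radians
```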
Optical phase space: https://en.wikipedia.org/wiki/Optical_phase_space
If it is possible to estimate photonic phase given intensity, how much more useful is a phase sensor than the existing intensity sensors?
From "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation":
> A generic complementary identity relation is obtained for arbitrary light fields. More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
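The Huygens-Steiner (parallel-axis) theorem invoked there is easy to verify numerically for a cloud of point masses: the moment of inertia about any axis equals the moment about the parallel axis through the center of mass plus M*d^2.

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.random(6)              # point masses
x = rng.random((6, 2))         # positions in the plane

com = (m[:, None] * x).sum(axis=0) / m.sum()
I_com = (m * ((x - com) ** 2).sum(axis=1)).sum()

d = np.array([3.0, -1.0])      # offset of a parallel axis from the com
I_axis = (m * ((x - (com + d)) ** 2).sum(axis=1)).sum()

# Huygens-Steiner: I_axis = I_com + M * |d|^2
assert np.isclose(I_axis, I_com + m.sum() * (d ** 2).sum())
```

The cross term vanishes because positions are measured relative to the center of mass, which is the whole content of the theorem.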
There are various patterns of polarization, relative to a signal origin IIUC:
Polarization (waves) https://en.wikipedia.org/wiki/Polarization_(waves)
Types of polarization: Linear, Elliptical, and Circular; and also Radial and Azimuthal polarization
Radial polarization: https://en.wikipedia.org/wiki/Radial_polarization :
> Beams with radial and azimuthal polarization are included in the class of cylindrical vector beams. [7]
> A radially polarized beam can be used to produce a smaller focused spot than a more conventional linearly or circularly polarized beam, [8] and has uses in optical trapping. [9]
> It has been shown that a radially polarized beam can be used to increase the information capacity of free space optical communication via mode division multiplexing, [10] and radial polarization can "self-heal" when obstructed. [11]
Vector fields in cylindrical and spherical coordinates: Todo
Laser beam profiler: https://en.wikipedia.org/wiki/Laser_beam_profiler
Gaussian beam ... Quasioptics || Geometrical optics with rays and arcs: https://en.wikipedia.org/wiki/Gaussian_beam :
> Fundamentally, the Gaussian is a solution of the axial Helmholtz equation, the wave equation for an electromagnetic field. Although there exist other solutions, the Gaussian families of solutions are useful for problems involving compact beams.
Photonic integrated circuit: https://en.wikipedia.org/wiki/Photonic_integrated_circuit :
> A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components that form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits utilize photons (or particles of light) as opposed to electrons that are utilized by electronic integrated circuits.
> Types of polarization: Linear, Elliptical, and Circular; and also Radial and Azimuthal polarization
From https://news.ycombinator.com/item?id=39132365 :
> Astrophysical jets produce helically and circularly-polarized emissions, too FWIU.
All-optical complex field imaging using diffractive processors
"All-optical complex field imaging using diffractive processors" (2024) https://www.nature.com/articles/s41377-024-01482-6 :
> Abstract: Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. [...]
- "New imager acquires amplitude and phase information without digital processing" (2024) https://phys.org/news/2024-05-imager-amplitude-phase-digital...
From "Researchers create dispersion-assisted photodetector to decipher high-dim light" (2024-05) https://news.ycombinator.com/context?id=40492160 :
> From "Physicists use [Huygens' mechanical oscillation] 350-year-old theorem to reveal new properties of light waves" https://news.ycombinator.com/item?id=37226160 regarding "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
>>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.
> Which contradicts the article; though, high-fidelity polarization measurement should then also be useful.
Why scientists say we need to send clocks to the moon
"Private moon lander will carry Nokia's 4G cell network to the lunar surface this year" https://www.space.com/nokia-4g-cell-network-on-the-moon
Doesn't 4G include a time synchronization service for receivers with no GPS; and is the SI second unit (which is currently defined in terms of the cesium hyperfine transition) hardcoded as a constant in the new system?
> Doesn't 4G include a time synchronization service for receivers with no GPS
Yes, but I believe accuracy is on the order of minutes.
> is the SI second unit (which is currently defined in terms of the cesium hyperfine transition) hardcoded as a constant in the new system?
Are you saying that a moon second should be defined to be slightly longer/shorter than an earth second to compensate for the relativistic difference? That wouldn't help, since relativity affects all physical processes; if you do that, all your other measures would be off too, including e.g. length (defined as the distance light travels in a given unit of time).
"Matrix Theory: Relativity Without Relative Space or Time" https://youtube.com/watch?v=B94o-P93ExU&
Does the time offset due to relativistic time dilation change suddenly upon orbiting or landing in another planet's spherical reference frame?
Not suddenly, but gradually as you enter/leave the gravitational well.
The dominating relativistic factor is the gravitational potential (ie general relativity), not the orbital velocity (special relativity), as far as I understand.
GPS compensates for both by offsetting the frequency standard of the satellites’ atomic clocks accordingly.
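Both effects can be estimated from textbook weak-field formulas (constants rounded; a back-of-envelope sketch, not the full GPS correction model, which also accounts for orbital eccentricity and Earth's rotation):

```python
import math

G, c = 6.674e-11, 2.998e8
M, R_earth = 5.972e24, 6.371e6     # Earth mass and radius
r_gps = 2.6571e7                   # GPS orbital radius, m

v = math.sqrt(G * M / r_gps)       # circular orbital speed

# General relativity: clocks higher in the potential run faster.
gr_gain = (G * M / c**2) * (1 / R_earth - 1 / r_gps)

# Special relativity: moving clocks run slower.
sr_loss = v**2 / (2 * c**2)

net_us_per_day = (gr_gain - sr_loss) * 86400 * 1e6
# roughly +46 us/day (GR) - 7 us/day (SR) ~ +38 us/day net
```

That net ~38 microseconds per day is why the satellites' frequency standards are deliberately offset before launch.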
New attention mechanisms that outperform standard multi-head attention
Because self-attention can be replaced with FFT for a loss in accuracy and a reduction in kWh [1], I suspect that the Quantum Fourier Transform can also be substituted for attention in LLMs.
[1] https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-ba...
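The FNet-style substitution is strikingly simple: replace the self-attention sublayer with the real part of a 2D DFT over the sequence and hidden dimensions, with no learned parameters at all. A minimal NumPy sketch of that mixing step:

```python
import numpy as np

def fnet_token_mixing(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) token embeddings.

    FNet-style mixing: a 2D DFT (over tokens, then features),
    keeping only the real part. Parameter-free, O(n log n)."""
    return np.real(np.fft.fft2(x))

x = np.random.default_rng(0).random((8, 4))
mixed = fnet_token_mixing(x)
assert mixed.shape == x.shape
```

Because the DFT is linear, this mixes information across all token positions in one shot, which is what lets it stand in (with some accuracy loss) for attention's token-to-token routing.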
"You Need to Pay Better Attention" (2024) https://arxiv.org/abs/2403.01643 :
> Our first contribution is Optimised Attention, which performs similarly to standard attention, but has 3/4 as many parameters and one matrix multiplication fewer per head. Next, we introduce Efficient Attention, which performs on par with standard attention with only 1/2 as many parameters and two matrix multiplications fewer per head and is up to twice as fast as standard attention. Lastly, we introduce Super Attention, which surpasses standard attention by a significant margin in both vision and natural language processing tasks while having fewer parameters and matrix multiplications.
"Leave No Context Behind: Efficient Infinite Context Transformers" (2024) https://arxiv.org/abs/2404.07143 :
> A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the effectiveness of our approach on long-context language modeling benchmarks, 1M sequence length passkey context block retrieval and 500K length book summarization tasks with 1B and 8B LLMs. Our approach introduces minimal bounded memory parameters and enables fast streaming inference for LLMs.
Can't believe that FNet paper flew under my radar all this time—what a cool idea! It's remarkable to me that it works so well considering I haven't heard anyone mention it before! Do you know if any follow-up work was done?
This is the paper, which appears to be cited in hundreds of others, some of which appear to be about efficiency gains. https://arxiv.org/abs/2105.03824
"Fnet: Mixing tokens with fourier transforms" (2021) https://arxiv.org/abs/2105.03824
https://scholar.google.com/scholar?cites=1423699627588508486...
Fourier Transform and convolution.
Shouldn't a deconvolvable NN be more explainable? #XAI
Deconvolution: https://en.wikipedia.org/wiki/Deconvolution
The t-test was invented at the Guinness brewery
The Student's t distribution has a symmetric PDF (probability density function) with no skew, and thus using it assumes that the sample and/or population distribution is similarly symmetric.
t statistic > History: https://en.wikipedia.org/wiki/T-statistic#History
Student's t distribution: https://en.wikipedia.org/wiki/Student%27s_t-distribution
"What are some alternatives to sample mean and t-test when comparing highly skewed distributions" https://www.quora.com/What-are-some-alternatives-to-sample-m... :
>> the Kolmogorov-Smirnov two-sample test, which essentially compares the empirical distribution functions of the two samples without implicitly assuming normality. http://en.wikipedia.org/wiki/Kolmogorov–Smirnov_test
> You may also be interested in the Wald-Wolfowitz runs test (http://en.wikipedia.org/wiki/Wald–Wolfowitz_runs_test ) and the Mann-Whitney test (http://en.wikipedia.org/wiki/Mann–Whitney_U ).
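The two-sample Kolmogorov-Smirnov statistic is just the largest vertical gap between the two empirical CDFs; a minimal NumPy sketch (SciPy's `ks_2samp` adds the p-value on top of this):

```python
import numpy as np

def ks_2samp_stat(a, b):
    """Max vertical distance between the empirical CDFs of a and b."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    grid = np.concatenate([a, b])          # evaluate at all sample points
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

# Identical samples -> 0; fully separated samples -> 1
assert ks_2samp_stat([1, 2, 3], [1, 2, 3]) == 0.0
assert ks_2samp_stat([1, 2, 3], [10, 11, 12]) == 1.0
```

Since it compares whole distribution functions rather than means, no symmetry assumption is needed, which is the point of reaching for it on skewed data.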
Statistical Significance > Limitations, Challenges: https://en.wikipedia.org/wiki/Statistical_significance#Limit...
Statistical hypothesis test > Criticism, Alternatives: https://en.wikipedia.org/wiki/Statistical_hypothesis_test#Cr...
There are multivariate Student's t distributions: https://en.wikipedia.org/wiki/Multivariate_t-distribution
Matrix t distribution: https://en.wikipedia.org/wiki/Matrix_t-distribution :
> The generalized matrix t-distribution is the compound distribution that results from an infinite mixture of a matrix normal distribution with an inverse multivariate gamma distribution placed over either of its covariance matrices.
But does a matrix t-distribution describe nonlinear variance in complex wave functions?
Quantum statistical mechanics: https://en.wikipedia.org/wiki/Quantum_statistical_mechanics :
> In quantum mechanics a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system.
A Q12 question: How frequently are quantum density operators described by a parametric t distribution?
Single atom defect in 2D material can hold quantum information at room temp
To provide some context:
- Usually the problem with room temperature is that the various thermal mechanical excitations and thermal radiation surrounding the qubit lead to interactions with the qubit, and the information stored in it "leaks"; that is the main reason a lot of quantum hardware needs cryogenics.
- There are already a couple of similar types of "defects" that are known as a promising way to retain superposition states at room temperature. They work because the defect happens to be mechanically and/or optically isolated from the rest of the environment due to some idiosyncrasies in its physical composition.
- You might have heard of "trapped ion" or "trapped neutral atom" quantum hardware. In those the qubits are atoms (or the electrons of atoms) and they are trapped (with optical or RF tweezers) so they are kept accessible. The type of "defects" discussed here are "simply" atoms trapped in a crystal lattice instead of trapped in optical tweezers -- there are various tradeoffs for that choice.
- While discoveries like this are extremely exciting, there is a completely separate set of issues around scalability, reliability, longevity, controllability, and reproducible fabrication of devices like this. That is true for any quantum hardware, and while there have been great improvements over the last 15 years, truly exponential progress over that time, we are still below the threshold at which this technology is useful in engineering and economic terms.
Lastly, the usual reminder: there are a few problems in computing, communication, and sensing, where quantum devices can do things that are impossible classically, but these are very restricted and niche problems. For most things a classical device is not only practically better today, but also is in-principle better even if quantum computers become trivial to build.
>Usually the problem with room temperature is that the various thermal mechanical excitation and thermal radiation surrounding the qubit lead to interaction with the qubit and the information stored in it "leaks"; that is the main reason a lot of quantum hardware needs cryogenics.
This issue, as I understand it, is quantum decoherence, when quantum properties are lost as an object transitions into the behavior of classical physics.
The article describes a period of a millisecond before quantum decoherence occurs. This is (relatively) a long time, and perhaps could be exploited in part to build a quantum computer.
Decoherence is what GP means with information leakage. The quantum state of the object becomes correlated with the environment.
The article is saying this can be avoided in their system for a _microsecond_, still long enough to be potentially useful.
From "Computer-generated holography with ordinary display" (2024) https://opg.optica.org/viewmedia.cfm?r=1&rwjcode=ol&uri=ol-4... :
> The hologram cascade is synthesized by solving an inverse problem with respect to the propagation of incoherent light.
From "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 .. https://news.ycombinator.com/item?id=40396171 :
>> The relaxation technique is well known in the field of approximation algorithms, but it had never been tried in quantum learning.
Relaxation (iterative method) https://en.wikipedia.org/wiki/Relaxation_(iterative_method)
Perturbation theory > History > Beginnings in the study of planetary motion ... QFT > History: https://en.wikipedia.org/wiki/Perturbation_theory#Beginnings...
QFT > History: https://en.wikipedia.org/wiki/Quantum_field_theory#History :
> Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory.
But the Standard Model Lagrangian doesn't describe n-body gravity, n-body quantum gravity, photons in Bose-Einstein condensates, liquid light in superfluids and superconductors, black hole thermodynamics and external or internal topology, irreversibility (or not), or even fluids with vortices or curl that certainly affect particles interacting in multiple fields.
TIL again about Relaxation theory for solving quantum Hamiltonians.
OTOH, other things on this topic:
- "Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 :
> By illuminating the system with THz radiation [a wave function (Hamiltonian) is coherently transmitted over a small chip-scale distance]
- "Room Temperature Optically Detected Magnetic Resonance of Single Spins in GaN" (2024) https://www.nature.com/articles/s41563-024-01803-5 ... "GAN semiconductor defects could boost quantum technology" https://news.ycombinator.com/item?id=39365467
- "Reversible non-volatile electronic switching in a near-room-temperature van der Waals ferromagnet" (2024) https://www.nature.com/articles/s41467-024-46862-z ... "Nonvolatile quantum memory: Discovery points path to flash-like qubit storage" (2024) https://news.ycombinator.com/item?id=39956368 :
>> "That's the key finding," she said of the material's switchable vacancy order. "The idea of using vacancy order to control topology is the important thing. That just hasn't really been explored. People have generally only been looking at materials from a fully stoichiometric perspective, meaning everything's occupied with a fixed set of symmetries that lead to one kind of electronic topology. Changes in vacancy order change the lattice symmetry. This work shows how that can change the electronic topology. And it seems likely that vacancy order could be used to induce topological changes in other materials as well."
- "Catalog of topological phonon materials" (2024) https://www.science.org/doi/10.1126/science.adf8458 ... "Topological Phonons Discovered and Catalogued in Crystal Lattices" (2024-05) https://news.ycombinator.com/item?id=40410475
- "Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. "Electron Vortices in Graphene Detected" https://news.ycombinator.com/item?id=40360691 :
> re: the fractional quantum hall effect, and decoherence: How are spin currents and vorticity in electron vortices related?
Memory Sealing "Mseal" System Call Merged for Linux 6.10
The mseal patch: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
Userspace API > System Calls: https://www.kernel.org/doc/html/next/userspace-api/index.htm...
kernel.org/doc/html/next/userspace-api/mseal.html: https://www.kernel.org/doc/html/next/userspace-api/mseal.htm... :
> Modern CPUs support memory permissions such as RW and NX bits. The memory permission feature improves security stance on memory corruption bugs, i.e. the attacker can’t just write to arbitrary memory and point the code to it, the memory has to be marked with X bit, or else an exception will happen.
> Memory sealing additionally protects the mapping itself against modifications. This is useful to mitigate memory corruption issues where a corrupted pointer is passed to a memory management system. For example, such an attacker primitive can break control-flow integrity guarantees since read-only memory that is supposed to be trusted can become writable or .text pages can get remapped. Memory sealing can automatically be applied by the runtime loader to seal .text and .rodata pages and applications can additionally seal security critical data at runtime.
> A similar feature already exists in the XNU kernel with the VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
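A hedged sketch of invoking mseal from userspace Python via ctypes; the syscall number 462 is the Linux 6.10 assignment (check your asm/unistd.h), and on older kernels the call simply reports ENOSYS:

```python
import ctypes
import errno
import mmap
import os

libc = ctypes.CDLL(None, use_errno=True)
SYS_mseal = 462        # Linux >= 6.10 (assumption; verify against your headers)

def try_mseal():
    """Map an anonymous RW page, then attempt to seal it."""
    buf = mmap.mmap(-1, mmap.PAGESIZE)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    rc = libc.syscall(SYS_mseal, ctypes.c_void_p(addr),
                      ctypes.c_size_t(mmap.PAGESIZE), ctypes.c_ulong(0))
    if rc != 0:
        err = ctypes.get_errno()
        if err == errno.ENOSYS:
            return "mseal not available (kernel < 6.10)"
        return "mseal failed: " + os.strerror(err)
    # Once sealed, mprotect/munmap/mremap on this range fail with EPERM
    # until the process exits or execs.
    return "sealed"

print(try_mseal())
```

Note that a sealed page can never be unmapped by the process itself, so sealing from a managed runtime like this is only a demo; real use belongs in the loader or early process setup, as the docs describe.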
NX bit, Memory tagging, Modified Harvard architecture: https://news.ycombinator.com/item?id=36726077#36740262
TEE, SGX, .data, .code: https://news.ycombinator.com/item?id=33584502
Thanks! Another important bit:
> sealing changes the lifetime of a mapping, i.e. the sealed mapping won’t be unmapped till the process terminates or the exec system call is invoked. Applications can apply sealing to any virtual memory region from userspace, but it is crucial to thoroughly analyze the mapping’s lifetime prior to apply the sealing.
Making my dumb A/C smart with Elixir and Nerves
Only yesterday I successfully exposed the CPU temps of all my machines as custom sensors in Home Assistant by creating a 30 line python webserver. The Home Assistant configuration took a few minutes more than the webserver portion, but it was mostly a matter of locating and reading the appropriate documentation. Entire job done in 45 minutes (from idea to completion, including SystemD'ification, deployment to 3 snowflakes, and git'ification for reproducibility).
Erlang, Elixir, and the BEAM are super cool space tech for sure, and are also overkill sometimes. For example, for a simple LAN service where a minimalist solution will get the entire job done in 20-30 minutes (or in the case of TFA, a few hours).
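For reference, a minimal sensor endpoint of the sort the parent describes; the sysfs path and port are assumptions that vary by machine, and Home Assistant's RESTful sensor can then poll the JSON:

```python
import json
import pathlib
from http.server import BaseHTTPRequestHandler, HTTPServer

THERMAL = pathlib.Path("/sys/class/thermal/thermal_zone0/temp")

def read_temp_c(path=THERMAL) -> float:
    # sysfs reports millidegrees Celsius as plain text
    return int(path.read_text().strip()) / 1000.0

class TempHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"cpu_temp_c": read_temp_c()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("0.0.0.0", 8099), TempHandler).serve_forever()
```

The corresponding Home Assistant side is a `rest` sensor pointed at the host's port with `value_template: "{{ value_json.cpu_temp_c }}"`.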
Other things with temperature stats probably worth monitoring and/or orchestrating with e.g. HA: attic fans, crawlspace fans, vents, motorized blinds, bath fans, inline duct fans, etc.
There are recycled mining rig space heaters (that may or may not need Internet access to keep heating), pool heating immersion mining rigs, and probably also water heaters with temperature stats that might be useful in a smart home interface instead of something like Grafana.
Show HN: We open sourced our entire text-to-SQL product
Long story short: We (Dataherald) just open-sourced our entire codebase, including the core engine, the clients that interact with it and the backend application layer for authentication and RBAC. You can now use the full solution to build text-to-SQL into your product.
The Problem: modern LLMs write syntactically correct SQL, but they struggle with real-world relational data. This is because real-world data and schemas are messy, natural language is often ambiguous, and LLMs are not trained on your specific dataset.
Solution: The core NL-to-SQL engine in Dataherald is an LLM based agent which uses Chain of Thought (CoT) reasoning and a number of different tools to generate high accuracy SQL from a given user prompt. The engine achieves this by:
- Collecting context at configuration time from the database and from sources such as data dictionaries and unstructured documents, which are stored in a data store or a vector DB and injected when relevant
- Allowing users to upload sample NL <> SQL pairs (golden SQL) which can be used in few shot prompting or to fine-tune an NL-to-SQL LLM for that specific dataset
- Executing the SQL against the DB to get a few sample rows and recover from errors
- Using an evaluator to assign a confidence score to the generated SQL
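The loop described above (few-shot prompting from golden SQL, execute against the DB, recover from errors, score with an evaluator) can be sketched roughly as follows. All names here are hypothetical, the "LLM" is a stub, and SQLite stands in for the real database; this is not Dataherald's actual code:

```python
# Toy sketch of a generate -> execute -> recover -> score loop.
# fake_llm() stands in for a real LLM call; SQLite stands in for the
# target database. All function names are hypothetical.

import sqlite3

def fake_llm(prompt: str) -> str:
    # Stand-in for an LLM: always proposes the same query.
    return "SELECT name FROM users WHERE active = 1"

def run_sql(conn, sql: str):
    try:
        return conn.execute(sql).fetchall(), None
    except sqlite3.Error as e:
        return None, str(e)

def generate_sql(conn, question: str, golden_pairs, max_retries: int = 2):
    # Build a few-shot prompt from "golden SQL" pairs plus the question.
    shots = "\n".join(f"Q: {q}\nSQL: {s}" for q, s in golden_pairs)
    prompt = f"{shots}\nQ: {question}\nSQL:"
    for _ in range(max_retries + 1):
        sql = fake_llm(prompt)
        rows, err = run_sql(conn, sql)
        if err is None:
            confidence = 1.0 if rows else 0.5  # toy evaluator
            return sql, rows, confidence
        # Feed the DB error back into the prompt and retry.
        prompt += f"\nPrevious attempt failed with: {err}\nSQL:"
    return None, None, 0.0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("ada", 1), ("bob", 0)])
sql, rows, conf = generate_sql(conn, "Which users are active?",
                               [("How many users?", "SELECT COUNT(*) FROM users")])
print(sql, rows, conf)
```

The key point the sketch illustrates is that executing the candidate SQL, rather than trusting the LLM's text, is what lets the agent both recover from errors and ground its confidence score.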
The repo includes four services https://github.com/Dataherald/dataherald/tree/main/services:
1- Engine: The core service which includes the LLM agent, vector stores and DB connectors.
2- Admin Console: a NextJS front-end for configuring the engine and observability.
3- Enterprise Backend: Wraps the core engine, adding authentication, caching, and APIs for the frontend.
4- Slackbot: Integrate Dataherald directly into your Slack workflow for on-the-fly data exploration.
Would love to hear from the community on building natural language interfaces to relational data. Anyone live in production without a human in the loop? Thoughts on how to improve performance without spending weeks on model training?
/? awesome "sql" llm site:github.com https://www.google.com/search?q=awesome+%22sql%22+llm+site%3... :
- awesome-Text2SQL: https://github.com/eosphoros-ai/Awesome-Text2SQL :
> Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL、Text2API、Text2Vis and more.
- Awesome-code-llm > Benchmarks > Text to SQL: https://github.com/codefuse-ai/Awesome-Code-LLM#text-to-sql
- underlines/awesome-ml//llm-tools.md > RAG > OpenAI > dataherald,: https://github.com/underlines/awesome-ml/blob/master/llm-too...
- underlines/awesome-ml//llm-tools.md > Benchmarking > Benchmark Suites, Leaderboards: https://github.com/underlines/awesome-ml/blob/master/llm-too... :
- sql-eval: https://github.com/defog-ai/sql-eval :
> This repository contains the code that Defog uses for the evaluation of generated SQL. It's based off the schema from the Spider, but with a new set of hand-selected questions and queries grouped by query category. For an in-depth look into our process of creating this evaluation approach, see this.
> Our testing procedure comprises the following steps. For each question/query pair: 1. We generate a SQL query (possibly from an LLM). 2. We run both the "gold" query and the generated query on their respective database to obtain 2 dataframes with the results. 3. We compare the 2 dataframes using an "exact" and a "subset" match. TODO add link to blogpost. 4. We log these alongside other metrics of interest (e.g. tokens used, latency) and aggregate the results for reporting
- dataherald/services/engine/dataherald/tests/sql_generator/test_generator.py: https://github.com/Dataherald/dataherald/blob/main/services/...
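The "exact" and "subset" comparisons described in the sql-eval quote above can be sketched with plain row tuples in place of dataframes. This is a simplification (real result comparison also has to handle column order, NULLs, and floating-point values), not sql-eval's actual implementation:

```python
# Sketch of the "exact" and "subset" result-set comparisons from the
# sql-eval quote above, using lists of row tuples instead of dataframes.

def exact_match(gold_rows, generated_rows) -> bool:
    # Same multiset of rows; row order usually doesn't matter in SQL.
    return sorted(gold_rows) == sorted(generated_rows)

def subset_match(gold_rows, generated_rows) -> bool:
    # Every gold row appears somewhere in the generated result.
    return all(row in generated_rows for row in gold_rows)

gold = [("ada", 1), ("bob", 2)]
gen = [("bob", 2), ("ada", 1), ("eve", 3)]
print(exact_match(gold, gen))   # False: extra row in gen
print(subset_match(gold, gen))  # True: gold rows all present
```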
Three New Supercomputers Reach Top of Green500 List
Top500 > Green500 > 2024/06: https://www.top500.org/lists/green500/2024/06/ :
> Energy Efficiency (GFlops/watts)
What about TOps (Tensor Operations per second) and NPUs; is there a new energy efficiency stat like TOps/watts?
Copilot+ specs specify an NPU that does 40 (int8) TOPS, but not watts?
Moore's Law describes growth in transistor density per area at cost, but not energy efficiency (operations per watt).
Costed opcodes incentivize energy efficiency.
eWASM has costed opcodes, for example.
/?hnlog "costed opcodes"
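The Green500 metric and a hypothetical NPU analogue are the same ratio, operations per watt. A minimal sketch: the 40 TOPS figure is the Copilot+ NPU requirement cited above, but the 10 W power draw is an assumed illustrative number, not a published spec:

```python
# Energy efficiency as operations per watt. Green500 ranks by GFlops/W;
# the analogous NPU metric would be TOPS/W. The wattage below is an
# assumption for illustration only, not a published figure.

def ops_per_watt(ops_per_second: float, watts: float) -> float:
    return ops_per_second / watts

npu_tops = 40.0        # int8 TOPS (Copilot+ NPU requirement)
assumed_watts = 10.0   # assumed, illustrative only
print(ops_per_watt(npu_tops, assumed_watts), "TOPS/W")  # 4.0 TOPS/W
```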
Toward electrochemical synthesis of cement–An electrolyzer-based process
... Learned of this via "Electrochemistry Might Help Solve Cement’s Carbon Problem" which describes Portland cement production methods in general and also Sublime Systems' electrochemical synthesis efficiency and sustainability advantages: https://cleantechnica.com/2024/05/23/electrochemistry-might-...
> In fact, all of the heat can be electrified too, and has been both historically and in demonstration units today. Electric heat for the limestone kiln is a relatively trivial thing to build into new kilns, although more challenging to retrofit to existing ones.
And from the paper,
"Toward electrochemical synthesis of cement—An electrolyzer-based process for decarbonating CaCO3 while producing useful gas streams" (2019) https://www.pnas.org/doi/10.1073/pnas.1821673116 :
> Electrochemical calcination produces concentrated gas streams from which CO2 may be readily separated and sequestered, H2 and/or O2 may be used to generate electric power via fuel cells or combustors, O2 may be used as a component of oxyfuel in the cement kiln to improve efficiency and lower CO2 emissions, or the output gases may be used for other value-added processes such as liquid fuel production. Analysis shows that if the hydrogen produced by the reactor were combusted to heat the high-temperature kiln, the electrochemical cement process could be powered solely by renewable electricity.
Could probably get it started with sunlight:
> "Solar thermal trapping at 1,000°C and above" (2024) https://www.cell.com/device/fulltext/S2666-9986(24)00235-7 https://news.ycombinator.com/item?id=40419777
... Concrete can be more sustainable and efficient. Concrete may or may not be a better thermal battery than gravel.
Other things concrete fwiw:
- Roman Concrete was made with volcanic ash
- Uncoated steel rebar rusts in less than 100 years FWIU?
- Concrete block is easier to repair than large, interlocking slabs
- Plant-based blocks can be cut with a handsaw
- (Catalan) brick vault roofs do not require forms or support; and round structures handle wind and storms best
- Hempcrete is made with Lime, too
- Sugarcrete is made with Sugarcane and sand
- Bio-based block breathes with relative humidity
- Higher R-Value, higher strength construction materials reduce HVAC and post-disaster reconstruction costs
- (sinusoidal) wavy walls need only be one brick wide
- Dry stone walls, e.g. in Peru (which may be made with geopolymers), have irregularly shaped blocks that lock together and settle into V's for earthquake resilience
- There are newly discovered aperiodic tile shapes that may or may not be as easy to form as bricks
One thing is the energy required; another is the chemical reaction itself.
Writing a Unix clone in about a month
From "Linux System Call Table – Chromiumos" https://www.chromium.org/chromium-os/developer-library/refer... https://news.ycombinator.com/item?id=33395777 :
> google/syzkalleR
> Fuschia / Zircon syscalls: https://fuchsia.dev/fuchsia-src/reference/syscalls
And a new one, a new syscall this year: mseal()
"Memory Sealing "Mseal" System Call Merged for Linux 6.10" https://news.ycombinator.com/context?id=40474551
First topological light-matter quantum simulator device at room temperatures
"Topological valley Hall polariton condensation" (2024) https://www.nature.com/articles/s41565-024-01674-6
[deleted]
Researchers use 'smart' rubber structures to carry out computational tasks
"Controlled pathways and sequential information processing in serially coupled mechanical hysterons" (2024) https://www.pnas.org/doi/10.1073/pnas.2308414121 :
> Abstract: The complex sequential response of frustrated materials results from the interactions between material bits called hysterons. Hence, a central challenge is to understand and control these interactions, so that materials with targeted pathways and functionalities can be realized. Here, we show that hysterons in serial configurations experience geometrically controllable antiferromagnetic-like interactions. We create hysteron-based metamaterials that leverage these interactions to realize targeted pathways, including those that break the return point memory property, characteristic of independent or weakly interacting hysterons. We uncover that the complex response to sequential driving of such strongly interacting hysteron-based materials can be described by finite state machines. We realize information processing operations such as string parsing in materia, and outline a general framework to uncover and characterize the FSMs for a given physical system. Our work provides a general strategy to understand and control hysteron interactions, and opens a broad avenue toward material-based information processing.
So good, it works on barbed wire (2001)
> In summary, the barbed wire had zero impact on signal quality. The signals went through perfectly undistorted. The only thing the barbed wire did was impress the heck out of Broadcom's customers.
Next time you look at a transmission line, I hope you'll focus on the big four properties: characteristic impedance, high-frequency loss, delay, and crosstalk. These properties determine how well a transmission structure functions, regardless of the physical appearance or configuration of its conductors.
FWIU from "The Information" by Gleick, Shannon (of Shannon entropy) started out with digital two-state modulation over wire fences.
[deleted]
Combined cement and steel recycling could cut CO2 emissions
Interesting. But paywalled.
This may be the study discussed therein:
"Electric recycling of Portland cement at scale" (2024) https://www.nature.com/articles/s41586-024-07338-8
Microsoft, Khan Academy Partner to Make Khanmigo Teaching Tool Free
Electric recycling of Portland cement at scale
There was some discussion yesterday [0] (16 points, 3 comments)
Dual antibacterial properties of copper-coated nanotextured stainless steel
As someone who has built and operated large multipractice and acute care facilities: some of the most vicious fights I've had with medical entities and boards were about changing hardware back to older brass/copper, which is inherently antibacterial and only needs cleaning with soap and water to self-disinfect, versus stainless steel, which needs cleaning with bleach and can foster or even lead to the creation of resistant and other problematic bacteria. The objections were mostly about visual appearance. I am not sure I see the advantage of this new material over good old brass/copper. There is a perceived cost premium to it, but it isn't meaningful in practice. This paper alleges the new process is cheaper, but I would be incredibly dubious of that. At ClearHealth we were successful in substantially reducing hospital-acquired infection (HAI) rates at managed facilities to industry best, in part because of our "reversion" to copper/brass hardware.
One of the most obvious signs you are in a well managed health system is seeing copper/brassy touch surfaces instead of stainless.
https://www.copper.org/publications/newsletters/ba-news/2010... https://en.wikipedia.org/wiki/Antimicrobial_properties_of_co... https://www.statnews.com/2020/09/24/as-hospitals-look-to-pre...
Do the useful properties of copper change with oxidation?
Silver is also antimicrobial, and also expensive.
FWIU nanospikes in silicon achieved virucidal outcomes in small trials as well.
Is copper virucidal without nanospikes?
(On this topic, Hemp textiles are antimicrobial / bactericidal: https://news.ycombinator.com/item?id=39196781#39197019 )
"Scientists create virucidal silicon surface without any chemicals" https://pubs.acs.org/doi/10.1021/acsnano.3c07099 https://news.ycombinator.com/item?id=39196822
Oxidation is a problem as is physical debris but touch surfaces are cleaned at least daily in a hospital setting anyway. Plain old Copper/brass that has been used since forever is virucidal without any special treatment.
Is the claim then that copper nanospikes on stainless steel are of no greater value than copper for healthcare applications?
Is that claim supported by evidence?
The claim is that spikes kill faster, but probably not fast enough to matter since spikes are destroyed with shear cleaning.
What happens if you breathe the coating in?
If you ever spend time within a few hundred meters of other cars, such as while driving around or walking along a road, you are inhaling large amounts of aerosolized metal particles from brake dust. If you want a large sample to verify this phenomenon, pull up behind a semi tractor-trailer as it brakes to a sudden stop, such as when slowing down on the freeway upon hitting heavy traffic, and take a good whiff.
Copper toxicity > Signs and symptoms: https://en.wikipedia.org/wiki/Copper_toxicity#Signs_and_symp... :
> The US EPA lists copper as a micronutrient and a toxin. [11] Toxicity in mammals includes a wide range of animals and effects such as liver cirrhosis, necrosis in kidneys and the brain, gastrointestinal distress, lesions, low blood pressure, and fetal mortality. [12][13][14] The Occupational Safety and Health Administration (OSHA) has set a limit of 0.1 mg/m3 for copper fumes (vapor generated from heating copper) and 1 mg/m3 for copper dusts (fine metallic copper particles) and mists (aerosol of soluble copper) in workroom air during an eight-hour work shift, 40-hour work week. [15] Toxicity to other species of plants and animals is noted to varying levels. [11]
A reasonable production process would need to contain, and could probably reuse, copper emissions.
"Dual Antibacterial Properties of Copper-Coated Nanotextured Stainless Steel" (2024) https://onlinelibrary.wiley.com/doi/10.1002/smll.202311546 :
> Abstract: Bacterial adhesion to stainless steel, an alloy commonly used in shared settings, numerous medical devices, and food and beverage sectors, can give rise to serious infections, ultimately leading to morbidity, mortality, and significant healthcare expenses. In this study, Cu-coated nanotextured stainless steel (nSS) fabrication have been demonstrated using electrochemical technique and its potential as an antibiotic-free biocidal surface against Gram-positive and negative bacteria. As nanotexture and Cu combine for dual methods of killing, this material should not contribute to drug-resistant bacteria as antibiotic use does. This approach involves applying a Cu coating on nanotextured stainless steel, resulting in an antibacterial activity within 30 min. Comprehensive characterization of the surface revealing that the Cu coating consists of metallic Cu and oxidized states (Cu2+ and Cu+), has been performed by this study. Cu-coated nSS induces a remarkable reduction of 97% in Gram-negative Escherichia coli and 99% Gram-positive Staphylococcus epidermidis bacteria. This material has potential to be used to create effective, scalable, and sustainable solutions to prevent bacterial infections caused by surface contamination without contributing to antibiotic resistance.
- "This modified stainless steel could kill bacteria without antibiotics or chemicals" (2024) https://phys.org/news/2024-05-stainless-steel-bacteria-antib...
"Scientists create virucidal silicon surface without any chemicals" (2023) https://news.ycombinator.com/item?id=39196781#39196822 :
"Piercing of the Human Parainfluenza Virus by Nanostructured Surfaces" (2024) https://pubs.acs.org/doi/10.1021/acsnano.3c07099 :
> We used reactive ion etching to fabricate silicon (Si) surfaces featuring an array of sharp nanospikes with an approximate tip diameter of 2 nm and a height of 290 nm. The nanospike surfaces exhibited a 1.5 log reduction in infectivity of human parainfluenza virus type 3 (hPIV-3) after 6 h, a substantially enhanced efficiency, compared to that of smooth Si.
Show HN: Optigraph – optimum graph network generator
I've created a tool that helps plan graph networks for the best possible connections between nodes. The idea is for it to be used as a kind of underground system planner. I am still working on improving the algorithms it uses, but please consider checking it out for new ideas/bug catching.
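"Best possible connections between nodes" often reduces to a minimum spanning tree (minimum total edge length connecting all nodes). A minimal Prim's-algorithm sketch with toy coordinates, which is a classical baseline and not necessarily Optigraph's algorithm:

```python
# Minimum spanning tree via Prim's algorithm: one classical notion of
# "best possible connections between nodes". The station coordinates
# are toy data; this is not necessarily how Optigraph works.

import heapq, math

def mst(points):
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    visited = {0}
    edges = [(dist(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(edges)
    tree, total = [], 0.0
    while len(visited) < n:
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue  # stale edge into an already-connected node
        visited.add(v)
        tree.append((u, v))
        total += w
        for j in range(n):
            if j not in visited:
                heapq.heappush(edges, (dist(v, j), v, j))
    return tree, total

stations = [(0, 0), (0, 3), (4, 0), (4, 3)]
tree, total = mst(stations)
print(tree, round(total, 2))  # 3 edges, total length 10.0
```

An underground-system planner would add real-world constraints on top of this (capacity, demand, curvature), but MST gives the cheapest fully connected baseline to compare against.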
https://news.ycombinator.com/item?id=36942305#36946741 :
> Is re-planning routes for regenerative braking solvable with the Modified Snow Plow Problem (variation on TSP Traveling Salesman Problem), on a QC Quantum Computer; with Quantum Algorithmic advantage due to the complexity of the problem?
FWIU the Modified Snow Plow Problem is a variant of the Traveling Salesman Problem (TSP) that takes topographical grade into account: only plow downhill.
Regenerative braking charges on downhills.
TSP can be implemented with quantum algorithms for a quantum computer.
There could be a call for and/or an ml competition for QC algos for TSP and similar:
> - QISkit tutorials > Max-Cut and Traveling Salesman Problem: docs/tutorials/06_examples_max_cut_and_tsp.ipynb: https://qiskit.org/ecosystem/optimization/tutorials/06_examp...
Quantum Algorithm Zoo probably lists existing quantum algorithms that might be useful for this application
The statement that this can be implemented with a quantum algorithm is a bit ambiguous. If you look in detail, the problem is only formulated on the quantum computer, while the optimization routine, which essentially solves the problem, is left to a classical computer. There are some notions of quantum gradients, but I wouldn't know how they apply to such problems.
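Worth noting as context for the hybrid point above: the instance sizes current quantum devices can formulate are tiny enough that an exact classical brute force also solves them. A sketch with a toy symmetric distance matrix:

```python
# Exact brute-force TSP for tiny instances. Problem sizes that current
# QAOA-style hybrid approaches can encode are also solvable exactly
# like this, which is why the classical optimizer in the hybrid loop is
# doing the heavy lifting. The distance matrix is toy data.

from itertools import permutations

def tsp_bruteforce(dist):
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):  # fix city 0 as the start
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_bruteforce(dist))  # optimal tour length is 18
```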
A promising 3-terminal diode for wireless comm. and optically driven computing
https://news.ycombinator.com/item?id=36310594#36356444 :
> What is the maximum presumed distance over which photon emissions from blue [sapphire,] LEDs can be entangled? What about with [time-synchronized] applied magnetic fields? Could newer waveguide approaches - for example, dual beams - improve the distance and efficiency of transceivers operating with such a quantum communication channel?
> [...]
>> Physicists at the University of Konstanz have generated one of the shortest signals ever produced by humans: Using paired laser pulses, they succeeded in compressing a series of electron pulses to a numerically analyzed duration of only 0.000000000000000005 seconds [...]
>> For Peter Baum, physics professor and head of the Light and Matter Group at the University of Konstanz, these results are still clearly basic research, but he emphasizes the great potential for future research: "If a material is hit by two of our short pulses at a variable time interval, the first pulse can trigger a change and the second pulse can be used for observation—similar to the flash of a camera"
- "Fiber-optic data transfer speeds hit a rapid 301 Tbps" (2024-03) https://news.ycombinator.com/item?id=39864107
Can Nvidia or AMD GPU's Be Used with Qualcomm's New ARM CPU's?
> 8+4 lanes of PCIe 4.0, 2+2 lanes of PCIe 3.0
I’d be surprised if you can’t attach a gpu to that eventually somehow
https://www.androidauthority.com/snapdragon-x-models-3429369...
- "Meta seeks ASIC designers for ML accelerators and datacenter SoCs" (2024) https://news.ycombinator.com/item?id=39485115#39518293 :
> With some newer architectures, the GPU(s) are directly connected to [HBM] RAM; which somewhat eliminates the CPU and Bus performance bottlenecks that ASICs and FPGAs are used to accelerate beyond.
- "Jim Keller suggests Nvidia should have used Ethernet to link Blackwell GPUs" (2024) https://news.ycombinator.com/item?id=40027933 :
> The HBM3E Wikipedia article says 1.2TB/s.
> Fiber optics: 301 Tbps
- "New RISC-V microprocessor can run CPU, GPU, and NPU workloads simultaneously" (2024) https://news.ycombinator.com/item?id=39938538
- eGPU enclosure with USB A and USB C ports, too
- iGPU + dGPU
Using Postgres for Everything
Postgres notes:
ElectricSQL, SQLedge, pglite: https://news.ycombinator.com/item?id=38690588
pgreplay, pgkit: https://news.ycombinator.com/item?id=37959945
pg_timeseries: https://news.ycombinator.com/item?id=40417347
pgvector, postgresml: https://news.ycombinator.com/item?id=37812219
cmu ottertune, pg_auto_tune, postgresqltuner,
timescaledb-tune: https://github.com/timescale/timescaledb-tune
awesome-postgres: https://github.com/dhamaniasad/awesome-postgres
Using solar energy to generate heat at 1050°C high temperatures
"Solar thermal trapping at 1,000°C and above" (2024) https://www.cell.com/device/fulltext/S2666-9986(24)00235-7 :
> Summary: Common semi-transparent materials like quartz and water feature spectral optical properties characterized by low attenuation of visible radiation and strong absorption of infrared (IR) radiation emitted by hot surfaces. This leads to the thermal trap effect, exploitable in solar-concentrating applications to attain higher absorber temperatures and thermal efficiencies. Here, we demonstrate this effect at high temperatures with a quartz rod attached to an opaque absorber plate reaching 1,050°C when exposed to 135 suns of concentration, while the quartz front face remains at 450°C. A 3D heat transfer model, validated against the experimental data, is applied to determine the performance map of solar receivers exploiting thermal trapping. These are shown to achieve the target temperature with higher efficiency and/or needing a lower concentration than the reference unshielded absorber. Solar process heat at above 1,000°C can decarbonize key industrial applications such as cement manufacturing and metallurgical extraction.
Flash-heating waste plastic at 2827 C yields Hydrogen, CO2, and Graphene;
"Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763 PDF: https://chemrxiv.org/engage/api-gateway/chemrxiv/assets/orp/...
"Making hydrogen [and graphene] from waste plastic could pay for itself" (2023) https://news.rice.edu/news/2023/making-hydrogen-waste-plasti...
> 3100 Kelvin [ = 2827 C ]
gscholar citations for: https://scholar.google.com/scholar?cites=1338501696868292077...
What are the fundamental frequencies of plastic?
Is it possible to yield Hydrogen and Graphene from plastic at lower than 3100K / 2827C?
Is it possible to generate 3100K / 2827C with this solar thermal trap design?
(edit) Other solar concentrator specs:
NREL's High-Flux Solar Furnace (HFSF): https://www.nrel.gov/csp/facility-hfsf.html
> The solar furnace can quickly concentrate solar radiation to 10 kilowatts over a 10-cm diameter (2,500 "suns"), achieving temperatures of 1,800°C—and up to peak solar fluxes of 20,000 suns with specialized secondary optics to produce temperatures of up to 3,000°C. Secondary concentrators can modify the focal point and tailor flux levels and distributions to suit the needs of each research activity.
When the sun shines brightly, a concentrator with specialized secondary optics should thus be enough to flash heat plastic to yield hydrogen and graphene.
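The temperature comparison behind that conclusion is a quick sanity check. The kelvin/Celsius conversion is exact; comparing the NREL furnace's quoted peak to the flash-heating target says nothing about heat transfer or dwell time, only that the peak temperature is in range:

```python
# Back-of-envelope check on the temperatures quoted above. The K<->C
# conversion is exact; comparing peak furnace temperature to the target
# is only a sanity check, not a heat-transfer model.

def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

target_k = 3100                 # flash-heating temperature from the paper
target_c = kelvin_to_celsius(target_k)
print(round(target_c))          # ~2827 C, matching the quoted figure

nrel_peak_c = 3000              # NREL HFSF peak with secondary optics
print(nrel_peak_c >= target_c)  # True: peak temperature is sufficient
```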
The original topic's thermal trap is quartz with 135 suns of concentration.
Quartz (SiO2) melting points, according to wikipedia: https://en.wikipedia.org/wiki/Quartz :
> 1670 °C (β tridymite); 1713 °C (β cristobalite)
Granite melting points too due to a recent geopolymer research interest: https://en.wikipedia.org/wiki/Granite :
> 1215–1260 °C [at ambient pressure]
> Composition: SiO2 72.04% (silica), Al2O3 14.42% (alumina), K2O ~4%
Granite and Quartz and Silicic magmas.
Physicists create five-lane superhighway for electrons
"Interlayer electron-phonon coupling in Moire superlattices" (2021) https://ui.adsabs.harvard.edu/abs/2021APS..MARR42010H/abstra... https://scholar.google.com/citations?view_op=view_citation&h... :
> Electron-phonon coupling is lying at the heart of many condensed matter phenomena. The van der Waals stacking of two-dimensional materials and the twisting-angle-controlled Moire superlattice offer new opportunities for studying and engineering interlayer electron-phonon coupling. I will present our infrared spectroscopy study of several graphene-based Moire superlattices, showing strong features of interlayer electron-phonon coupling. This coupling can be further tuned through controlling the Moire wavelength and gate electric fields.
Is this potentially relevant to solving for ancient Geopolymer methods with electricity and possibly acoustic notes and/or chords?
Anyways, the original exercise:
"A scattering-type scanning nearfield optical microscope probes materials at the nanoscale" (2021) https://phys.org/news/2021-07-scattering-type-scanning-nearf... :
> The new tool is based on atomic force microscopy (AFM), in which an extremely sharp metallic tip with a radius of only 20 nanometers, or billionths of a meter, is scanned across the surface of a material. AFM creates a map of the physical features, or topography, of a surface, of such high resolution that it can identify "mountains" or "valleys" less than a nanometer in height or depth.
> Ju is adding light to the equation. Focusing an infrared laser on the AFM tip turns that tip into an antenna "just like the antenna on a television that's used to receive signals," he says. And that, in turn, greatly enhances interactions between the light and the material beneath the tip. The back-scattered light collected from those interactions can be analyzed to reveal much more about the surface than would be possible with a conventional AFM.
"Correlated insulator and Chern insulators in pentalayer rhombohedral-stacked graphene" (2023) https://www.nature.com/articles/s41565-023-01520-1 :
> [...] Our results establish rhombohedral multilayer graphene as a suitable system for exploring intertwined electron correlation and topology phenomena in natural graphitic materials without the need for moiré superlattice engineering.
"Fractional quantum anomalous Hall effect in a graphene moire superlattice" (2023)
"Fractional quantum anomalous Hall effect in multilayer graphene" (2024)
"Electron correlation and topology in rhombohedral multilayer graphene" (2024)
"Large quantum anomalous Hall effect in spin-orbit proximitized rhombohedral graphene" (2024) https://www.science.org/doi/10.1126/science.adk9749
Ask HN: Video streaming is expensive yet YouTube "seems" to do it for free. How?
Can anyone help me understand the economics of video streaming platforms?
Streaming, encoding, and storage demand enormous costs -- especially at scale (e.g., a 4K video with close to 1 million views). Yet YouTube seems to charge no money for it.
I know advertisements are a thing for YT, but is it enough?
If tomorrow I want to start a platform that is supported with Advert revenues, I know I will likely fail. However, maybe at YT scale (or more specifically Google Advert scale) the economics works?
ps: I would like this discussion to focus on the absolute necessary elements (e.g., storing, encoding, streaming) and not on other factors contributing to latency/cost like running view count algorithms.
Disclaimer: I used to work at a live video streaming company as a financial analyst, so I'm quite familiar with this.
The biggest cost is, as you imagine, the streaming: getting the video to the viewer. It was a large part of our variable cost, and we had a (literal) mad genius devops person holed up in his own office cave who managed the whole operation.
I've long forgotten the special optimizations he did, but he kept finding ways to improve margin/efficiency.
Encoding is a cost, but I don't recall it being significant.
Storage isn't generally expensive. Think about how cheaply you as a consumer can get 2 TB of storage, and extrapolate.
The other big expense: people! All those engineers to build back- and front-end systems. That's what ruined us: too many people were needed and not enough money was coming in, so we were burning cash.
I'm guessing live video looks a lot different from a more static video site. I think encoding and storage are both quite expensive. You want to encode videos that are likely to be watched in the most efficient ways possible to reduce network bandwidth usage, and every video needs at least some encoding.
Based on some power laws etc., I would guess most videos have only a handful of views, so storing them forever and the cost to encode them initially is probably significant.
Encoding and storage aren't significant, relative to the bandwidth costs. Bandwidth is the high order bit.
The primary difference between live and static video is the bursts -- get to a certain scale as a static video provider, and you can roughly estimate your bandwidth 95th percentiles. But one big live event can blow you out of the water, and push you over into very expensive tiers that will kill your economics.
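A back-of-envelope sketch of why delivery bandwidth dwarfs storage for a popular video. All unit prices and sizes below are assumptions for illustration, not quotes from any provider (hyperscalers pay far less than list-style rates):

```python
# Back-of-envelope: delivery bandwidth vs. storage cost for one popular
# video. Every number here is an assumption for illustration only.

video_gb = 10.0              # assumed size of one long 4K encode
views = 1_000_000
avg_watch_fraction = 0.5     # assumed: viewers watch half on average

egress_per_gb = 0.02         # assumed $/GB delivered
storage_per_gb_month = 0.02  # assumed $/GB-month at rest

delivery_cost = video_gb * views * avg_watch_fraction * egress_per_gb
storage_cost_year = video_gb * storage_per_gb_month * 12

print(f"delivery: ${delivery_cost:,.0f}")       # ~$100,000
print(f"storage/yr: ${storage_cost_year:.2f}")  # ~$2.40
```

Even if every assumed price is off by an order of magnitude, delivery remains several orders of magnitude larger than storage for a heavily watched video, which matches the comments above.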
If they're not significant, then why does youtube build ASICs for doing video encoding? See e.g., https://arstechnica.com/gadgets/2021/04/youtube-is-now-build...
VA-API, NVENC,
nvenc > See also: https://en.wikipedia.org/wiki/Nvidia_NVENC#See_also
NVIDIA Video Codec SDK v12.1 > NVENC Application Note: https://docs.nvidia.com/video-technologies/video-codec-sdk/1... :
> NVENC Capabilities: encoding for H.264, HEVC 8-bit, HEVC 10-bit, AV1 8-bit and AV1 10-bit. This includes motion estimation and mode decision, motion compensation and residual coding, and entropy coding. It can also be used to generate motion vectors between two frames, which are useful for applications such as depth estimation, frame interpolation, encoding using other codecs not supported by NVENC, or hybrid encoding wherein motion estimation is performed by NVENC and the rest of the encoding is handled elsewhere in the system. These operations are hardware accelerated by a dedicated block on GPU silicon die. NVENCODE APIs provide the necessary knobs to utilize the hardware encoding capabilities.
FFMPEG > Platform [hw video encoder] API Availability table: https://trac.ffmpeg.org/wiki/HWAccelIntro#PlatformAPIAvailab... :
> AMF, NVENC/NVDEC/CUVID (CUDA, cuda-nvcc and libnpp) (NVIDIA), VCE (AMD), libmfx (Intel), MediaCodec, Media Foundation, MMAL, OpenMAX, RockChip MPP, V4L2 M2M, VA-API (Intel), Video Toolbox, Vulkan
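As a sketch of how selecting between a hardware and software encoder might look: the snippet below only constructs an ffmpeg argv and does not run ffmpeg. `h264_nvenc` and `libx264` are standard ffmpeg encoder names; the boolean probe is a simplification (real code would parse `ffmpeg -encoders` output or attempt a test encode):

```python
# Construct (but do not execute) an ffmpeg command line that uses the
# NVENC hardware encoder, falling back to software x264 when no NVIDIA
# GPU is available. The have_nvenc flag is a stand-in for real probing.

def encode_cmd(src: str, dst: str, have_nvenc: bool) -> list[str]:
    codec = "h264_nvenc" if have_nvenc else "libx264"
    return ["ffmpeg", "-i", src, "-c:v", codec, "-b:v", "8M", dst]

print(encode_cmd("in.mp4", "out.mp4", have_nvenc=True))
print(encode_cmd("in.mp4", "out.mp4", have_nvenc=False))
```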
Egypt's pyramids may have been built on a long-lost branch of the Nile
It makes a lot of sense, because obviously having a river there makes the transport of materials a lot easier, but I do wonder how nobody noticed this before.
IIRC it's been well-known for a while how they moved the vast majority of materials by land (similar to how the Stonehenge megaliths were moved, highly dissimilar to how the Rapa Nui moai were).
How? Last I heard, it seemed either "rolling logs" or "powerful aliens" were equally plausible...
Were there wheels before the stone pillars of the Osireion were created; and if so, how were those stonemasonry methods lost?
The Great Pyramid Grand Gallery locks have spots for things that roll or that rope rigging may have pulled around.
Hydrological engineering pyramid construction methods: Herodotus, Strabo, Edward J. Kunkel "The Pharaohs Pump", Steven Meyers, Chris Massey
Herodotus > Life > Early Travels: https://en.wikipedia.org/wiki/Herodotus#Early_travels
https://www.secretofthepyramids.com/herodotus :
> For this, they said, the ten years were spent, and for the underground chambers on the hill upon which the pyramids stand, which he caused to be made as sepulchral chambers for himself in an island, having conducted thither a channel from the Nile.
Sounds like the same story for the Osireion, according to Strabo.
There's a speculative map in "Water Transportation During Khufu Time" https://www.secretofthepyramids.com/projects/project-three-w... :
"Probable look at waterways of Giza during Khufu time" https://images.squarespace-cdn.com/content/v1/600615bf2f57da...
https://www.secretofthepyramids.com/latestnews references:
"Nile waterscapes facilitated the construction of the Giza pyramids during the 3rd millennium BCE" (2022) https://www.pnas.org/doi/full/10.1073/pnas.2202530119
Timeline of Glaciation; the last ice age ended around 11,700 years ago: https://en.wikipedia.org/wiki/Timeline_of_glaciation
"Rock art indicates cows once grazed a lush, green Sahara" (2024) https://newatlas.com/science/cattle-rock-art-sahara-desert/
Ancient megalithic Geopolymer masonry made with electrodes and [Lingam,] electricity is apparently lost to modern day as well.
FWIU, in the Great Pyramid, there were/are copper rods in the shafts out from the King's Chamber, and the conductive gold at the top of the pyramid was added after construction over top of a perhaps more ancient well shaft (that is not as geomagnetically-aligned) that may have been a hydraulic/hydrologic water tunnel given the water erosion in the subterranean chamber.
To be fair: demonstrate moving and then placing an 80-ton granite stone with ancient materials and tools: copper, gold, limestone, granite, probably fulgurite (sand glass formed by lightning) and/or volcanic glass, obsidian, grain dust, papyrus rope, papyrus boats, barges, [variable-buoyancy] crane machines, masonry forms and jigs, chemistry in jars, large sceptre tuning forks, sand, porous cliffs by the sea
What are the dates on the outer structure, and on the oldest largest object within the structure?
Well: Cyprus (8400 BC),
Wheel: Pillars, Potter's wheel (4500–3300 BCE), chariot (2200–1550 BCE), stone saw
Gears: Antikythera (200 BC), Watchmaking c. 1300 AD
But the boat, and things that float due to ballast or not; how old is that?
Aliens of similar height, from the tunnel and stair heights and sarcophagi.
Such as the [presumed] Sarcophagus of Senusret II - which has a pyramid built around it with perhaps newer and less precise masonry methods - which one might've hoped had contained instructions on how to produce spec granite at those tolerances back then; [1]
[1] "The MOST precisely made granite object of Ancient Egypt - and why it's NOT Geopolymer" https://youtu.be/d8Ejf5etV5U?si=4GfDTL-QSGE5mO05?t=11m30s
There is little evidence of advanced mechanical masonry tools at the time, except for the remaining megalithic stonework that later cultures built upon.
FWIU there are only a few examples of circular polishing, and the core drilling method leaves different signatures than known methods in modern day.
Topological Phonons Discovered and Catalogued in Crystal Lattices
"Catalog of topological phonon materials" (2024) https://www.science.org/doi/10.1126/science.adf8458
Scientists show that there is indeed an 'entropy' of quantum entanglement
"Reversibility of quantum resources through probabilistic protocols" (2024) https://www.nature.com/articles/s41467-024-47243-2 :
> Abstract: Among the most fundamental questions in the manipulation of quantum resources such as entanglement is the possibility of reversibly transforming all resource states. The key consequence of this would be the identification of a unique entropic resource measure that exactly quantifies the limits of achievable transformation rates. Remarkably, previous results claimed that such asymptotic reversibility holds true in very general settings; however, recently those findings have been found to be incomplete, casting doubt on the conjecture. Here we show that it is indeed possible to reversibly interconvert all states in general quantum resource theories, as long as one allows protocols that may only succeed probabilistically. Although such transformations have some chance of failure, we show that their success probability can be ensured to be bounded away from zero, even in the asymptotic limit of infinitely many manipulated copies. As in previously conjectured approaches, the achievability here is realised through operations that are asymptotically resource non-generating, and we show that this choice is optimal: smaller sets of transformations cannot lead to reversibility. Our methods are based on connecting the transformation rates under probabilistic protocols with strong converse rates for deterministic transformations, which we strengthen into an exact equivalence in the case of entanglement distillation.
> The key consequence of this would be the identification of a unique entropic resource measure that exactly quantifies the limits of achievable transformation rates
IIRC I read on Wikipedia one day that Bell tests actually have something like a 60% error rate?(!)
Also I don't understand QKD 'repeaters'; to 'store and forward' a wave function implies that the transmitted wave state is collapsed, but that's not a MITM if it's keyed and time-synchronized? Is there no good way to amplify a quantum photonic signal without distorting / transforming it?
> smaller sets of transformations cannot lead to reversibility
"Thermodynamics of Computations with Absolute Irreversibility, Unidirectional Transitions, and Stochastic Computation Times" (2024) https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.02... :
- "New work extends the thermodynamic theory of computation" https://phys.org/news/2024-05-thermodynamic-theory.amp https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.02...
'Quantum discord'
From https://news.ycombinator.com/context?id=39954180 and [2] https://en.wikipedia.org/wiki/Quantum_discord :
> FWIU, there are various types of quantum entropy; entanglement and non-entanglement entropy. Quantum discord [2]:
>> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
'Nonquantum entanglement'
From https://news.ycombinator.com/context?id=40302396 :
> "Nonquantum Entanglement Resolves a Basic Issue in Polarization Optics"
'Multi-particle entanglement'
From https://news.ycombinator.com/context?id=40281756 :
> "Designing three-way entangled and nonlocal two-way entangled single particle states via alternate quantum walks" (2024) https://arxiv.org/abs/2402.05080 :
>> Herein, we generate three-way entanglement from an initially separable state involving three degrees of freedom of a quantum particle, which evolves via a 2D alternate quantum walk employing a resource-saving single-qubit coin. We achieve maximum possible values for the three-way entanglement quantified by the π-tangle between the three degrees of freedom. We also generate optimal two-way nonlocal entanglement, quantified by the negativity between the nonlocal position degrees of freedom of the particle. This prepared architecture using quantum walks can be experimentally realized with a photon.
Isn't there more total entropy if there is n-way entanglement?
Learning quantum Hamiltonians at any temperature in polynomial time
From "Scientists Find a Fast Way to Describe Quantum Systems" (2024) https://www.quantamagazine.org/scientists-find-a-fast-way-to... :
> So that’s what the team behind the new paper ended up doing: They ported an optimization tool from mathematics into their field of quantum learning. First they reformulated the problem of calculating a system’s Hamiltonian into a family of polynomial equations. Now the goal was to prove that they could solve these equations reasonably quickly — which seemed like an equally hard goal. “In general, if I have an arbitrary polynomial system, I cannot hope to solve it efficiently,” Bakshi said. Even simple polynomial systems are just too hard.
> But theoretical computer scientists are good at finding workarounds in such situations, by using what’s called a relaxation technique. This approach converts problems that are hard to optimize — they have too many solutions that appear right but aren’t valid everywhere — into simpler ones with a unique global solution. By approximating difficult problems via simpler ones, the relaxation technique helps find solutions closer to the true solution.
> The relaxation technique is well known in the field of approximation algorithms, but it had never been tried in quantum learning.
Scientists show how to treat burns with susty plant-based Vitamin C bandage
- "Blessed Thistle Enhances Nerve Regeneration" (2024) https://news.ycombinator.com/item?id=40105779
"susty"? lol
GPT-4o's Memory Breakthrough – Needle in a Needlestack
I'd like to see this for Gemini Pro 1.5 -- I threw the entirety of Moby Dick at it last week, and at one point all the books Byung-Chul Han has ever published, and in both cases it was able to return the single part of a sentence that mentioned or answered my question verbatim, every single time, without any hallucinations.
A number of people in my lab do research into long context evaluation of LLMs for works of fiction. The likelihood is very high that Moby Dick is in the training data. Instead the people in my lab have explored recently published books to avoid these issues.
See BooookScore (https://openreview.net/forum?id=7Ttk3RzDeu) which was just presented at ICLR last week and FABLES (https://arxiv.org/abs/2404.01261) a recent preprint.
HN post re: FABLES: https://news.ycombinator.com/item?id=39982362
FABLES/booklist.md: https://github.com/mungg/FABLES/blob/main/booklist.md
/gscholar_related? FABLES: https://scholar.google.com/scholar?q=related:Y-Hx-kplbEUJ:sc...
/gscholar_citations? BooookScore: https://scholar.google.com/scholar?cites=1796862036168524911...
...
From that one day awhile ago: https://news.ycombinator.com/item?id=38347868#38354679 :
> "LLMs cannot find reasoning errors, but can correct them" [ https://arxiv.org/abs/2311.08516 ] https://news.ycombinator.com/item?id=38353285
Show HN: Pico: An open-source Ngrok alternative built for production traffic
Pico is an open-source alternative to Ngrok. Unlike most other open-source tunnelling solutions, Pico is designed to serve production traffic and be simple to host (particularly on Kubernetes).
Upstream services connect to Pico and register endpoints. Pico will then route requests for an endpoint to a registered upstream service via its outbound-only connection. This means you can expose your services without opening a public port.
Pico runs as a cluster of nodes in order to be fault tolerant, scale horizontally and support zero-downtime deployments. It is also easy to host, e.g. as a Kubernetes Deployment or StatefulSet behind an HTTP load balancer.
Is there a helm chart? Didn’t see one off the bat.
Given a Helm chart, you can also generate podman kube Kubernetes YAML:
podman-kube-play: https://docs.podman.io/en/latest/markdown/podman-kube-play.1...
helm template: https://helm.sh/docs/helm/helm_template/
"RFE Allow podman to run Helm charts" https://github.com/containers/podman/issues/15098#issuecomme...
helm template --dry-run --debug . --generate-name \
--values values.yaml | tee kube42.yml && \
podman kube play kube42.yml;
The new APT 3.0 solver
> Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies. [...]
> If you have studied SAT solver design, you’ll find that essentially this is a DPLL solver without pure literal elimination
DPLL algorithm: https://en.wikipedia.org/wiki/DPLL_algorithm
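For anyone who hasn't studied SAT solver design: the backtracking core of DPLL is small. A minimal Python sketch (illustrative only, not APT's solver3; the encoding of clauses as sets of signed integers is my own):

```python
# Formula: a list of clauses; each clause is a set of literals, where a
# positive int is a variable and a negative int is its negation.

def unit_propagate(clauses, assignment):
    """Assign variables forced by unit clauses; return None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                      # clause already satisfied
            open_lits = [lit for lit in clause if -lit not in assignment]
            if not open_lits:
                return None                   # every literal falsified
            if len(open_lits) == 1:
                assignment.add(open_lits[0])  # forced choice
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None                           # conflict: backtrack
    variables = {abs(lit) for clause in clauses for lit in clause}
    free = [v for v in variables
            if v not in assignment and -v not in assignment]
    if not free:
        return assignment                     # all clauses satisfied
    v = free[0]                               # defer choices as late as possible
    for lit in (v, -v):
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None
```

Pure literal elimination, which the quote notes solver3 omits, would be one extra pass here: assign any literal whose negation appears in no clause.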
prefix-dev/pixi is a successor to Mamba and Conda which wraps libsolv like dnf. https://github.com/prefix-dev/pixi
OpenSUSE/libsolv: https://github.com/openSUSE/libsolv
openSUSE's zypper also uses libsolv (via libzypp), FWIU.
TIL libsolv also supports arch .apk and .deb; there's a deb2solv utility to create libsolv .solv files: https://github.com/openSUSE/libsolv/blob/master/tools/deb2so...
"[mamba-org/rattler:] A new memory-safe SAT solver for package management in Rust (port of libsolv)" (2024) https://prefix.dev/blog/the_new_rattler_resolver https://news.ycombinator.com/item?id=37101862
Pixi is integrating uv: https://github.com/astral-sh/uv/issues/1572#issuecomment-194...
astral-sh/uv [1] solves versions with PubGrub [2], which was written for Dart: [1] https://github.com/astral-sh/uv [2] https://github.com/pubgrub-rs/pubgrub
uv's benchmarks are run with hyperfine, a CLI benchmarking tool: https://github.com/sharkdp/hyperfine
astral-sh/rye was mitsuhiko/rye, and rye wraps uv with "fallback to unearth [resolvelib] and pip-tools": https://github.com/astral-sh/rye https://lucumr.pocoo.org/2024/2/15/rye-grows-with-uv/
So, TIL it looks like rye for python and pixi for conda (python, r, rust, go,) are the latest tools for python packaging, not apt packaging.
fpm is one way to create deb packages from python virtualenvs for install with the new APT solver, which resulted in this research.
Vikings razed the forests. Can Iceland regrow them?
Boats and timber-framed houses, probably.
Could they sell or use the volcanic ash for Roman concrete or geopolymer concrete?
Topsoil takes forever to create. What is the relation between glaciation and topsoil?
FWIU planting a monoculture of trees in the Gobi desert to reverse desertification has not been as successful as planting complete grassland and then forest ecosystems and trees.
https://news.ycombinator.com/item?id=37590409#37623093
Water bunds look like they work to prevent water from flashing off, but probably initially disturb the mycorrhizae in the topsoil: https://en.wikipedia.org/wiki/Semicircular_bund
"Inoculating soil with mycorrhizal fungi can increase plant yield: study" (2024) https://news.ycombinator.com/item?id=38526579#38527264
https://westurner.github.io/hnlog/#story-36158663 Ctrl-F "soil" [depletion], myco
America is in the midst of an extraordinary startup boom
Not to get off on the wrong foot (and I'm sure it's obvious on HN) but starting your own business !== startup. In the broader culture, startup has become a trendy word (along with entrepreneur). Most people using these words don't actually understand what they mean.
It's as if they're holding their pet in their arms. They say, "Would you like to pet my dog" and they're actually holding a cat.
I'm certainly not knocking SMB biz owners. I'm one. But that doesn't make me a startup. It doesn't make me an entrepreneur. Entrepreneurial? Or entrepreneur-minded? Sure. But I - along with many others - don't meet the definition of entrepreneur.
Out of curiosity, how do you define “entrepreneur” so as to exclude people who create small businesses?
Back of the envelope (and of course there are exceptions and subtleties to all of these)...
- scale, as in they think about it. Franchising your brand === entrepreneur. Buying a franchise - despite Entrepreneur mag's narrative - does not.
- exit, as in they don't lose sight of it
- hit by a bus (similar to exit). An entrepreneur's brand continues without them.
- innovation and/or uniqueness in product. Opening another cookie-cutter lemonade stand doesn't cut it
- risk, their lens is not about risk, it's about opportunity. That is, where others see risk, the true entrepreneur sees opportunity.
I might have one or two others but that's the basic filter.
<edit>
One to add.
- upsetting - I don't like the word disruption. It's overused and it's the wrong goal. The goal is to identify a market and find fit. Disruption in and of itself is silly. That said, if you're not pissing someone(s) off then you're likely not an entrepreneur. Again, another lemonade stand isn't grounds enough to be considered an entrepreneur.
Maybe that's more of a "savvy, likeable entrepreneur".
Google / Oxford Languages has: https://www.google.com/search?q=entrepreneur :
> en·tre·pre·neur: a person who organizes and operates a business or businesses, taking on greater than normal financial risks in order to do so
I'd categorize that as the simplistic 20th Century definition. It needs to be updated at this point. Financial risk is not enough of a metric.
Why? If a word has a perfectly workable definition that everybody is already familiar with, why do we “need” to hijack it with a new, different definition?
Electron Vortices in Graphene Detected
"Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 :
> Electron–electron interactions in high-mobility conductors can give rise to transport signatures resembling those described by classical hydrodynamics. Using a nanoscale scanning magnetometer, we imaged a distinctive hydrodynamic transport pattern—stationary current vortices—in a monolayer graphene device at room temperature. By measuring devices with increasing characteristic size, we observed the disappearance of the current vortex and thus verified a prediction of the hydrodynamic model. We further observed that vortex flow is present for both hole- and electron-dominated transport regimes but disappears in the ambipolar regime. We attribute this effect to a reduction of the vorticity diffusion length near charge neutrality. Our work showcases the power of local imaging techniques for unveiling exotic mesoscopic transport phenomena.
Could this imaging capability help observe the known free-electrons-from-graphene approach?
From "Physicists Build Circuit That Generates Clean, Limitless Power from Graphene (2020)" https://news.ycombinator.com/item?id=30825430#30839636 :
> "Fluctuation-induced current from freestanding graphene" (2020) https://scholar.google.com/scholar?cites=3748350803951231734...
[Fractional] Quantum Hall Effect: https://news.ycombinator.com/item?id=39384522 :
> In 1988, it was proposed that there was quantum Hall effect without Landau levels. [3] This quantum Hall effect is referred to as the quantum anomalous Hall (QAH) effect. There is also a new concept of the quantum spin Hall effect which is an analogue of the quantum Hall effect, where spin currents flow instead of charge currents. [4]
How are spin currents and vorticity in electron vortices related?
Google is building Gemini Nano AI model into Chrome using WebGPU and WASM
WebNN: spec: https://www.w3.org/TR/webnn/ ; src: https://github.com/webmachinelearning/webnn ; https://news.ycombinator.com/item?id=36159049
Notes re: https://news.ycombinator.com/item?id=40114567#40115099 :
> lancedb/lance is faster than pandas with dtype_backend="arrow" and has a vector index [and is written in Rust, which compiles to WASM]
Model Explorer: intuitive and hierarchical visualization of model graphs
google-ai-edge/model-explorer//example_colabs/quick_start.ipynb: https://github.com/google-ai-edge/model-explorer/blob/main/e...
XAI: Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
16 years of CVE-2008-0166 – Debian OpenSSL Bug
So to check your keys, you are asked to grab a pip package.
That is OK, but I heard there were lots of security issues with pip packages too. From one issue to maybe another?
If you are worried, I would just recreate your keys.
PyPI has only implemented the "server signs whatever's uploaded with that account" part of TUF; there are no Publisher Signatures for packages uploaded to pypi (because GPG ASC support was removed from PyPI, the ad-hoc signature support was removed from wheel, and only server signatures are yet implemented).
https://SLSA.dev/ recommends TUF and Sigstore.dev and trusted containers for build-signing.
Someday, Twine should prompt a PyPI package uploader to sign the package before uploading it, and download it to (prime the CDN cache and) check the Publisher and Package repo signature(s) at least once.
PyPI doesn’t currently implement any part of TUF in a public facing manner, so I’m not sure where you got that from :-). There’s some limited deployment experiments with rstuf, but these are not part of any current server-side package signing scheme.
PEP 740 does the rest of what you’ve described, however. And twine already has initial support for it, although it isn’t part of a release yet.
"Roadmap for PEP 458" https://github.com/pypi/warehouse/issues/10672
Scientists have figured out way to make algae-based plastic that decomposes
This is specifically a biodegradable polyurethane. It's just one kind of plastic and not suitable for everything.
We have others including old stuff like cellophane, pPLA, PGA, PGL, PHB even biodegradable polyesters... They're cheap. They can be fiber reinforced as needed. However...
The tricky bit is that they almost always need to be properly composted to decompose. This is called procedure 7 and usually cannot be applied to plastics that have a pigment added or are painted - the additives would poison the compost heap or leach. You need to be particularly choosy about paints and plasticizers if you want it to decompose safely.
Any thoughts on CO2 + Lignin for the same applications?
"CO2 and Lignin-Based Sustainable Polymers with Closed-Loop Chemical Recycling" (2024) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202403035
- https://news.ycombinator.com/item?id=40079540
(Edit)
"Making hydrogen from waste plastic could pay for itself" (2024) https://news.ycombinator.com/item?id=37886955 ; flash heating plastic yields Graphene and Hydrogen
"Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763
Ancient Egyptian Stone-Drilling (1983)
This is one of the most intriguing ancient mysteries, and is sufficiently well grounded in engineering practice that it can be experimentally validated. Things like the hardness of the cutting edge, the RPM required to mimic the drill markings, the shape and progression of the drill bit etc.
So, it's reasonably clear that the bronze-age Egyptians must have had some drilling rig for boring hard stone - one which operated at 1,000 rpm or similar and could make an impression into something like quartz. All the ancient-aliens nonsense aside, this rig must have been quite impressive in itself - probably a composite of pulleys and ropes with a tubular emery-embedded cutting bit. Would have loved to have seen it working!
If the claims of levitation, ultrasonic resonance, and/or lingam electrical plasma stone masonry methods are valid; perhaps the necessary RPMs for core drilling are lower than 1000 RPM.
Given drilled cores of e.g. (limestone, granite,), which are heavy cylinders that roll, when was the wheel invented in that time and place?
Wheel: https://en.wikipedia.org/wiki/Wheel
"Specific cutting energy reduction of granite using plasma treatment: A feasibility study for future geothermal drilling" https://www.sciencedirect.com/science/article/pii/S235197892... :
> The plasma treatment showed a maximum of 65% and a minimum of 15% reduction in specific cutting energy and was regarded as being dependent on mainly the hardness and size of the samples [and the electrical conductivity of the stone]
E.g. this video identifies electrodes and protrusions in various megalithic projects worldwide: https://youtu.be/n8hRsg8tWXg
Perhaps the redundant doors of the great pyramid were water locks rigged with ropes. There do appear to be inset places to place granite cores for rigging.
Ancient and modern stonemasonry skills; how many times have they been lost and why?
Supreme Court: There's No 'Time Limit' on Copyright Infringement Claims
Why is Copyright, of all things, exempt from the federal statute of limitations?
Are federal CSA cases exempted from the federal statute of limitations then?
For CSA cases, e.g. California has: two (2) years from when the victim remembers IIUC?
Are Conflict of Interest cases against the court exempted from the Statute of Limitations?
Can commuted and pardoned cases be reheard once the elected executive is out of office, if there is no limit to the statute of limitations?
(Ludicrous that the court would consider total immunity for an executive of one branch, by the way.)
They didn’t even have an ethics code until last year. The new one isn’t great, either. https://www.brennancenter.org/our-work/analysis-opinion/new-...
> The rules themselves are weak. Consider recusal, when justices step aside from considering a case. The justices took the rule that applies to lower court judges but then inserted a handful of new loopholes, including one that could be so big that it swallows the rule — basically allowing a justice to disregard a required recusal if they think their vote is needed in the case. And the financial disclosure rules haven’t tightened at all — a significant shortcoming, since the justices have proven themselves troublingly adept at sidestepping the current rules, whether for RVs, tuition, fishing trips, or real estate deals.
Can the executive terminate the employment contract of a nominated DOJ director, if there are to be three separate branches of government?
Who said anything about the executive?
You simply can’t file a “conflict of interest” case against SCOTUS. At all. The statute of limitations is a femtosecond.
That said, the DOJ is executive, not judicial. The courts are not part of it.
The DOJ is literally part of the executive branch
There is a nomination procedure which requires (?) Congressional confirmation, but Congress has no recourse for obstructive termination of a nominated director by the executive?
Isn't that the wolf guarding the hen house; i.e. what the founders expressly intended to prevent?
The recourse is impeachment.
With the war powers authorized in the 2000s, there's more than impeachment.
Are they American? Where is their birthy certy? (Is there Habeas Corpus for all persons. "You're not even American!")
If the executive is totally immune, they wouldn't need to assist a foreign country illegally in order to get Congress to grant them extra power.
New mirror that can be flexibly shaped improves X-ray microscopes
For the curious people, there are no practical refractive materials for x-rays, and mirror shapes have to be 4 times more accurate than equivalent refractive systems for the same diffraction limit.
Can x-rays be waveguided with x-rays and/or other bands?
- "Shattering the 'copper or optics' paradigm: humble plastic waveguides outperform" https://news.ycombinator.com/item?id=39493372
- dual beam waveguiding with optics, NIRS + ultrasound: https://news.ycombinator.com/item?id=35617859
Femtosecond Lasers Solve Solar Panels' Recycling Issue
> Manufacturers globally are deploying enough solar panels to produce an additional 240 gigawatts each year. That rate is projected to increase to 3 terawatts by 2030, Ovaitt says. By 2050, anywhere from 54 to 160 million tonnes of PV panels will have reached the end of their life-spans, she says.
"Towards Polymer-Free, Femto-Second Laser-Welded Glass/Glass Solar Modules" (2024) https://ieeexplore.ieee.org/document/10443029 :
> [...] Key to this finding is that the module must be framed and braced, and the glass must be ribbed to allow pockets for the cells and welds inside the border of the module. The result is a module design that is completely polymer free, hermetically sealed, has improved thermal properties, and is easily recycled
Recycling isn't nearly the issue people think it is, because the "lifetime" of a panel - 20, 30 years - is not when the panel becomes useless, but when its output has dropped 20%.
Some percentage of solar panels will become dysfunctional because of extreme weather events.
There is no item, person, or place on Earth for which this is not true. Few things could make something less special than saying "sometimes, this thing is destroyed by extreme weather".
The point, which I should have stated explicitly, is that the solar panels that break should/can be recycled. And so recycling is also needed for this percentage.
Is the largest root of a random real polynomial more likely real than complex?
Wow that is crazy that it is between chance and 1/phi.
In the linked MSE post, it has now been proved that the probability that the largest root is real is at least 6.2%
And! at least 1/10th of 1/phi. Hahaha! :)
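The claim is easy to poke at empirically. A Monte Carlo sketch (my own construction, not from the MSE post), using the Durand–Kerner iteration to find all roots of random Gaussian-coefficient polynomials and checking whether the largest-modulus root is real:

```python
import math
import random

def poly_roots(coeffs, iters=80):
    """All complex roots via the Durand–Kerner fixed-point iteration.
    coeffs: real coefficients, highest degree first, coeffs[0] != 0."""
    cs = [c / coeffs[0] for c in coeffs]           # make the polynomial monic
    n = len(cs) - 1
    def p(z):
        acc = 0j
        for c in cs:                               # Horner evaluation
            acc = acc * z + c
        return acc
    zs = [(0.4 + 0.9j) ** k for k in range(n)]     # distinct starting guesses
    for _ in range(iters):
        zs = [z - p(z) / math.prod(((z - w) for j, w in enumerate(zs) if j != i),
                                   start=1 + 0j)
              for i, z in enumerate(zs)]
    return zs

def largest_root_is_real(coeffs, tol=1e-6):
    biggest = max(poly_roots(coeffs), key=abs)
    return abs(biggest.imag) <= tol * (1 + abs(biggest))

def estimate_p_real(degree=8, trials=300, seed=1):
    """Fraction of random (iid Gaussian-coefficient) polynomials whose
    largest-modulus root is real."""
    rng = random.Random(seed)
    hits = done = 0
    while done < trials:
        coeffs = [rng.gauss(0.0, 1.0) for _ in range(degree + 1)]
        try:
            hits += largest_root_is_real(coeffs)
        except (ZeroDivisionError, OverflowError):
            continue                               # rare non-convergence; redraw
        done += 1
    return hits / trials
```

Treat the output as a sanity check, not a proof: finite-degree estimates need not match the asymptotic probabilities discussed in the post, and numpy.roots would be the sturdier root-finder if available.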
I think the link of phi with number theory is natural in the sense that the primes are not random (as is commonly mistaken) but arise recursively from prior primes: they are exactly the 'gaps' not hit by multiples of prior primes, so there's a kind of 'natural growth' pattern that we would expect to reflect some manner of e and phi
It's a very fundamental pattern of magnitude. Truth beauty et all
Beautiful! Phi always seems to show up when there is crowding.
Thanks! Let's collab. We should start a "primes investigation" github repo!!!! :)
I love number theory in the context of waves. Here’s an example:
Thanks. When you mention waves I'm wondering how we can leverage the FFT to make a function of the next prime? But it has to include all primes, I guess? Or is this Riemann, already??
What do you get with discrete convolutions of the prime series, and what is the negative space called?
Interesting question. If you Mean it like intending to provoke more thinking, I answer: I don’t know. Would be interesting!
But if you mean it like you’re asking me then: discrete/or-not convolution of “sine waves” with wavelength p for each prime (and optionally prime powers), to give … well… I don’t know! Haha :) it would be interesting to try to create something.
Presumably, it leads to some function that can give information as to the next prime. I mean, you don’t need the FFT analogy or processing for that, but the hope is that somehow, through that process in the natural analog with convolving sine waves, you get, a sort of advantage that let you predict the next prime with less information, somehow or somehow advantageous. Haha! :)
The nodes are multiples, and the spots where none of the waves go to zero are primes (because they haven't been landed on by any of the previous waves); at any spot, the prime waves that land at zero are its factors. You only get unique factors, not the full factorization with powers, unless you also make waves/multiples of the prime powers.
But I don’t think I know what you mean by negative space - it sounds interesting. What do you mean by that?
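FWIW, the wave picture described above is the Sieve of Eratosthenes in disguise. A small sketch (the encoding is mine): each prime's 'wave' marks its nodes (multiples); spots no wave lands on are primes; and the marks at a spot are its unique prime factors.

```python
def prime_waves(limit):
    """nodes[n] = the set of prime 'waves' with a node (zero) at n."""
    nodes = {n: set() for n in range(2, limit + 1)}
    for p in range(2, limit + 1):
        if nodes[p]:                               # hit by an earlier wave
            continue                               # -> composite, no new wave
        for multiple in range(2 * p, limit + 1, p):
            nodes[multiple].add(p)                 # the wave's nodes
    primes = [n for n, hits in nodes.items() if not hits]
    return primes, nodes
```

prime_waves(30) yields the primes up to 30, and e.g. nodes[12] == {2, 3} — unique factors only, matching the point that you don't recover multiplicities unless you also launch waves for the prime powers.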
New invention transforms any smartphone or TV into a holographic projector
"Computer-generated holography with ordinary display" (2024) https://opg.optica.org/viewmedia.cfm?r=1&rwjcode=ol&uri=ol-4... :
> We propose a method of computer-generated holography (CGH) using incoherent light emitted from a mobile phone screen. In this method, we suppose a cascade of holograms in which the first hologram is a color image displayed on the mobile phone screen. The hologram cascade is synthesized by solving an inverse problem with respect to the propagation of incoherent light. We demonstrate a three-dimensional color image reproduction using a two-layered hologram cascade composed of an iPhone and a spatial light modulator.
Visual Studio Code for Education
https://news.ycombinator.com/item?id=40115099 :
> JupyterLite, datasette-lite, and the Pyodide plugin for vscode.dev ship with Pyodide's WASM compilation of CPython and SciPy Stack / PyData tools.
Does this work in vscode.dev, in a Chrome or Edge browser?
But can students test and/or grade their own work using their own CPU? OtterGrader -> Canvas, nbgrader: https://news.ycombinator.com/context?id=37981190
Coronal mass ejection impact imminent, two more earth-directed CMEs
- "Most of Europe is glowing pink under the aurora" https://news.ycombinator.com/item?id=40324179
Ointers: A library for representing pointers where bits have been stolen (2021)
Stealing bits from pointers makes sense sometimes, but in most cases an easier and less error-prone way to increase memory efficiency is to just use indices or relative pointers. Instead of shaving individual bits of a pointer take advantage of the fact that most pointers in a program are almost exactly alike.
Instead of storing a 64 bit prev and next pointer for an item in a linked list just store e.g. two 16 bit offsets. Then the next_ptr = (this_ptr & base_mask) + offset * alignment.
Most of the time wasting memory by storing full 64bit pointers is no big deal. But when it is, you can go a lot further than just shaving off a few bits.
If you want to be extra clever you can encode values in the base address of your pointers by taking advantage of how virtual memory is allocated. Every process has its own virtual memory pages and you can request memory pages at specific address ranges. For instance if you align all your memory at a 32gb boundary then you can use 32bit pointers + a base pointer without even having to mask. You can also use virtual memory allocation tricks to convey information about alignment, typing, whether a pointer is allowed to be shared between threads and so on. And again, without having to butcher the pointer itself.
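A toy model of the base-plus-offset idea on plain integers standing in for addresses (the arena size, alignment, and names are my own; a real implementation would do this in C on actual pointers):

```python
ALIGN = 16                    # every node is 16-byte aligned
REGION = (2 ** 16) * ALIGN    # 16-bit offsets x 16 B granularity = 1 MiB arena
BASE_MASK = ~(REGION - 1)     # clears the low bits, keeping the arena base

def pack_offset(this_addr, target_addr):
    """Encode a neighbor as a 16-bit offset instead of a 64-bit pointer."""
    delta = target_addr - (this_addr & BASE_MASK)
    assert 0 <= delta < REGION and delta % ALIGN == 0
    return delta // ALIGN

def unpack_offset(this_addr, offset16):
    """next_ptr = (this_ptr & base_mask) + offset * alignment"""
    return (this_addr & BASE_MASK) + offset16 * ALIGN
```

Two 16-bit offsets per node replace 16 bytes of prev/next pointers with 4, as long as the whole list lives inside one aligned arena.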
That's really not the application for this; it's tags, which are endlessly useful for characterizing the pointer. For instance you can use the MSBs to establish the type of what's pointed to, or for reference counting. I use the LSB to indicate whether the value is a pointer or an unsigned integer, whereby you can store either an (integer) key to a record in a database or a pointer to the record data in memory. You could use another bit in the pointer to indicate that the record is dirty.
Using unused bits in pointers can make your code more efficient and simpler.
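The LSB pointer-or-integer tag described above can be sketched with plain Python ints (illustrative only; real implementations do this on machine words, relying on 8-byte allocation alignment leaving the low 3 bits free):

```python
TAG_INT = 1  # LSB set => the word holds a small integer, not a pointer

def box_int(n):
    """Store a small integer inline, shifted left to make room for the tag."""
    return (n << 1) | TAG_INT

def box_ptr(addr):
    """An 8-byte-aligned address already has a zero LSB, so no tag needed."""
    assert addr % 8 == 0, "pointer must be aligned for tagging to work"
    return addr

def is_int(word):
    return word & TAG_INT

def unbox(word):
    return word >> 1 if is_int(word) else word

print(unbox(box_int(42)))      # 42
print(unbox(box_ptr(0x1000)))  # 4096
```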
Can unused bits be used for signed pointers or CRCs?
I would argue that we shouldn’t bother to build CRC checking of memory into programs and that ECC RAM should just become the norm eventually.
99.9999% of memory operations aren't CRC'd, so trying to fight against flaky hardware is a bit of an uphill battle.
Pointer authentication: https://llvm.org/docs/PointerAuth.html :
> Pointer Authentication is a mechanism by which certain pointers are signed. When a pointer gets signed, a cryptographic hash of its value and other values (pepper and salt) is stored in unused bits of that pointer.
> Before the pointer is used, it needs to be authenticated, i.e., have its signature checked. This prevents pointer values of unknown origin from being used to replace the signed pointer value.
ECC can't solve for that because it would only have one key for all processes.
Given a presumption of side channel writes, pointer auth is another layer of defense that may or may not be worth the loss in performance.
Quantum gravity gets a new test
"Quantum Gravity Gets a New Test" (2024) https://physics.aps.org/articles/v17/65 :
> As the pendulums oscillate, they change the size and therefore the resonant wavelength of each cavity. This change can be detected by shining laser light into the cavities and then measuring the intensity of the resulting interference pattern.
FWIU there is more information in light intensity; from https://news.ycombinator.com/item?id=37226160 :
> > The work, led by Xiaofeng Qian, assistant professor of physics at Stevens and reported in the August 17 online issue of Physical Review Research, also proves for the first time that a light wave's degree of non-quantum entanglement exists in a direct and complementary relationship with its degree of polarization. As one rises, the other falls, enabling the level of entanglement to be inferred directly from the level of polarization, and vice versa. This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity
> [...] Qian's team interpreted the intensity of a light as the equivalent of a physical object's mass, then mapped those measurements onto a coordinate system that could be interpreted using Huygens' mechanical theorem. "Essentially, we found a way to translate an optical system so we could visualize it as a mechanical system, then describe it using well-established physical equations," explained Qian.
> Once the team visualized a light wave as part of a mechanical system, new connections between the wave's properties immediately became apparent—including the fact that entanglement and polarization stood in a clear relationship with one another.
Hypothetical unipolar magnets in free-fall.
/? hnlog Ctrl-f "quantum gravity":
- https://news.ycombinator.com/item?id=39954180 superfluid quantum gravity / SQS, demonstrated indefinite causal order in quantum systems
- n-ary entanglement; "Designing three-way entangled and nonlocal two-way entangled single particle states via alternate quantum walks" (2024) https://arxiv.org/abs/2402.05080 :
> Herein, we generate three-way entanglement from an initially separable state involving three degrees of freedom of a quantum particle, which evolves via a 2D alternate quantum walk employing a resource-saving single-qubit coin. We achieve maximum possible values for the three-way entanglement quantified by the π-tangle between the three degrees of freedom. We also generate optimal two-way nonlocal entanglement, quantified by the negativity between the nonlocal position degrees of freedom of the particle. This prepared architecture using quantum walks can be experimentally realized with a photon.
- This is a better - fluidic - sim render of sound waves between parabolic reflectors; where are the linked states in its combinatorial outcomes with itself? "A wave traveling between two parabolic antennas" https://youtube.com/watch?v=v0cZjOIfwos
We like to model regular wavefronts, but waves do travel through fluids of various pressures and diffraction. Are gravitational waves ever so restricted, and why do they arrive after photonic waves from the same phenomena?
https://news.ycombinator.com/item?id=39956484 :
> "Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) .. "Distorted crystals use 'pseudogravity' to bend light like black holes do" https://news.ycombinator.com/item?id=38008449
(Edit)
"Measuring gravity with milligram levitated masses" (2024) Fuchs, superconducting magnetic trap with Tantalum: https://www.science.org/doi/10.1126/sciadv.adk2949 https://news.ycombinator.com/item?id=39495570 :
> This regime opens the possibility of table-top testing of quantum superposition and entanglement in gravitating systems. Here, we show gravitational coupling between a levitated submillimeter-scale magnetic particle inside a type I superconducting trap and kilogram source masses, placed approximately half a meter away. Our results extend gravity measurements to low gravitational forces of attonewton and underline the importance of levitated mechanical sensors.
Also, potentially affecting any study of GW gravitational waves with photons,
> If photons come from gravity, can photons end as gravity; and then do they have an infinite lifetime? https://news.ycombinator.com/item?id=40090332
From the paper:
> 7. B. N. Simon, S. Simon, F. Gori, M. Santarsiero, R. Borghi, N. Mukunda, and R. Simon, Nonquantum Entanglement Resolves a Basic Issue in Polarization Optics, Phys. Rev. Lett. 104, 023901 (2010). http://link.aps.org/doi/10.1103/PhysRevLett.104.023901
https://arxiv.org/abs/0906.2467
Also from the paper:
> V. GENERALIZATION TO ARBITRARY DIMENSIONS
> The above polarization-entanglement complementary relation (8) along with its connection to the mechanical concepts of center of mass and moment of inertia can be further generalized to arbitrary dimensional tensor structures. In this case, the concept of polarization is not restricted to describe the three wave oscillation directions of the 3D space any longer. It is extended to represent all vectors of a generic vector space of arbitrary dimension [24,47].
A 100x speedup with unsafe Python
Isn't all Python, by design, unsafe?
ctypes, c extensions, and CFFI are unsafe.
Python doesn't have an unsafe keyword like Rustlang.
In Python, you can dereference a null pointer and cause a Segmentation fault given a ctypes import.
lancedb/lance and/or pandas' dtype_backend="pyarrow" might work with pygame
You don't even need a ctypes import for it, you'll get a segfault from this:
eval((lambda:0).__code__.replace(co_consts=()))
CPython is not memory safe. Static analysis of Python code should include review of "unsafe" things like exec(), eval(), ctypes, C strings, and memcpy().
Containers are considered nearly sufficient to sandbox Python, which cannot be effectively sandboxed using Python itself. Isn't that actually true for all languages though?
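Such a static review can at least be started with the stdlib `ast` module. A minimal sketch (illustrative, not a complete audit tool - it only catches direct calls and imports by name):

```python
import ast

UNSAFE_CALLS = {"eval", "exec", "compile"}
UNSAFE_IMPORTS = {"ctypes", "cffi"}

def find_unsafe(source):
    """Return (lineno, description) pairs for unsafe constructs in Python source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # direct calls like eval(...), exec(...)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in UNSAFE_CALLS:
                findings.append((node.lineno, f"call to {node.func.id}()"))
        # `import ctypes` / `from ctypes import CDLL`
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name for alias in node.names]
            module = getattr(node, "module", None)
            for name in names + ([module] if module else []):
                if name and name.split(".")[0] in UNSAFE_IMPORTS:
                    findings.append((node.lineno, f"import of {name}"))
    return findings

print(find_unsafe("import ctypes\nx = eval('1+1')\n"))
```

It won't catch aliased access like `getattr(__builtins__, 'ev' + 'al')`, which is why sandboxing Python from within Python is considered infeasible.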
There's a RustPython, but it does support CFFI and __builtins__.eval, so it isn't a complete sandbox either.
The example given by parent does not need eval to trigger though. Just create a function, replace its code object, then call it; it will easily segfault.
Flying planes in Microsoft Flight Simulator with a JavaScript autopilot (2023)
A (lengthy!) tutorial on how to write your own JS-based autopilot code to fly planes in Microsoft Flight Simulator with a web page to both visualize the flight and control the AP.
Note that this is a complete do-over of a previous page that used a combination of Python and JavaScript, and only covered basic autopilot functionality. This rewrite threw all of that out, and goes much, much further:
You just want to have the plane take off without having to do anything? Check, this tutorial covers auto-takeoff.
Flying with waypoint-based navigation, not by "programming airports and radio beacons one letter at a time with a jog-dial" but by just placing markers on a map and dragging those around as you please, mid-flight, Google Maps style? Check. After all, why would we not, putting a map on a webpage is super easy (barely an inconvenience).
In fact, let's throw in flight plan saving and loading, so others can fly the same route!
And maybe you want the plane to fly a fixed distance above the ground, instead of at a fixed altitude... well: check. You'll learn about terrain-follow using free Digital Elevation Model datasets for planet Earth.
And of course the big one, for the ultimate "click button, JS flies the plane start to finish" experience: ever wished the in-game autopilot could just land the plane for you? Well, it's a video game, and we know programming, so: why not? We're also implementing auto-landing and it's going to be amazing.
If you play MSFS, and you're a developer/programmer, and you've always thought how cool it would be if you could just do some scripting for it... you can, there's a lot of cool stuff you can do!
And of course if you don't want to follow along and just want to have everything set up and ready to fly your plane, just clone https://github.com/Pomax/are-we-flying and have fun.
Wow, that is a lengthy tutorial.
I only skimmed over it now, so maybe I missed it: How do all these developed methods compare to existing real autopilot systems? Are there even existing autopilot systems which work similarly? Did the author come up with these methods just by themselves, or were they adapted from existing systems? Would those developed methods make sense in a real autopilot system?
Also, it seems that this involved a lot of trial-and-error, and the solutions come with a number of heuristics and hyper parameters to tune. I wonder if you can do better and avoid that as much as possible.
First of all, you would need to have an actual metric which covers all the things you would care about. Obviously no fatal crash, but then also maybe somewhat smooth flight (you can measure that) and how close you want to get to each waypoint, while maybe taking not too much fuel. And given those, let it find the best possible path. I guess this is optimal control theory then. When you have an actual metric which you can measure, then you can quantitatively compare all the methods, also automatically optimize any parameters.
I haven't read the tutorial (yet), but I know a bit about autopilots.
Autopilots depend a lot on which kind of plane it is in, but for the most part, autopilots aren't really "auto pilots". They consist of various fairly basic primitives rather than more complex decision making.
As a rule of thumb, the complexity of the autopilot is proportional to the size of the plane.
Basic autopilots can maintain headings, maintain (barometric) altitudes (i.e. above sea level) and similar very basic operations. For these autopilots it is the pilot's job to ensure the plane is operating within the envelope and does not depart from controlled flight and the throttle is controlled by the pilot.
As an example, the autopilot in an F-16 fighter jet has four things it can do: maintain attitude, maintain altitude, fly toward nav point and fly a compass heading. That's it.
Autopilots in airliner jets are more complex and can handle a set of nav points, ascent/descent, arrival and, depending on the airfield, approach and landing. They also feature autothrottles. Their implementation is still fairly simple, since it's a matter of matching throttle, attitude and heading with the position and altitude of each nav point. The autopilot doesn't have to do any pathfinding or more complex decision making, in order to keep it simple, predictable and bug free.
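Those primitives are typically simple closed-loop controllers. A minimal sketch of a heading-hold loop (generic PID with made-up gains, not taken from any real autopilot or from the article's JS code):

```python
class PID:
    """Textbook proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def heading_error(target_deg, current_deg):
    """Smallest signed angle from current to target, in (-180, 180] degrees."""
    return (target_deg - current_deg + 180) % 360 - 180

# Each tick: turn the heading error into a bank-angle command.
pid = PID(kp=0.05, ki=0.001, kd=0.2)  # assumed illustrative gains
bank_cmd = pid.step(heading_error(90, 85), dt=0.1)
```

The wrap-around in `heading_error` matters: flying heading 350 with a target of 010 should command a 20-degree right turn, not a 340-degree left one.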
Are there autopilot systems that do any sort of drone, bird, or other aerial object avoidance?
Then, aren't regional cultural ethics necessary to solve a trolley problem when there are multiple simultaneous predictable accidents and you can only prevent one?
Does "autopilot" solve the trolley problem for the liable pilot?
There's collision avoidance systems, but those are really for "not flying into another plane". You're going so fast that by the time you can pick up that you're about to hit a drone or bird, you've destroyed it -and possibly yourself- already.
Military drones are a different matter: you will never find yourself in the same air space, because you will never be told they're there. They are ghosts, much like airplanes that run without a transponder "because it's legal". You don't see another aircraft unless you know exactly where to look, or you're right on top of them.
Flying things could squawk with the inverse of the Doppler transform when subsonic.
https://news.ycombinator.com/item?id=37886514 :
> FAA UAS RemoteID: https://www.faa.gov/uas/getting_started/remote_id
Collision avoidance in transportation: https://en.wikipedia.org/wiki/Collision_avoidance_in_transpo...
Fedora Cleared to Build Python Package with "-O3" Optimizations
From "A collection of compiler optimizations with brief descriptions and examples" https://news.ycombinator.com/item?id=40275850 :
> Also, only SO has what specific optimizations are applied for clang's -O1/-O2/-O3 args? https://stackoverflow.com/questions/15548023/clang-optimizat...
A collection of compiler optimizations with brief descriptions and examples
Optimizing compiler > Types of optimizations, Specific techniques: https://en.wikipedia.org/wiki/Optimizing_compiler
Category:Compiler optimizations: https://en.wikipedia.org/wiki/Category:Compiler_optimization...
"LLVM’s Analysis and Transform Passes" -print-passes : https://llvm.org/docs/Passes.html
"A whirlwind tour of the LLVM optimizer" mentions Chrome and ART and LLVM: https://news.ycombinator.com/item?id=35893006
FreeBSD has docs in a man page on clang's optimization levels -O1/-O2/-O3: https://stackoverflow.com/questions/21447377/meaning-of-the-...
Also, only SO has what specific optimizations are applied for clang's -O1/-O2/-O3 args? https://stackoverflow.com/questions/15548023/clang-optimizat...
What about quantum compilers and quantum embedding of classical data; what are the algorithmic advantages listed at Quantum Algorithm Zoo for $5m? Qubit operations are described as rotations on a bloch sphere; the unit circle with a complex coordinate z to make a sphere, but are also representable as matrix (tensor) transformations. https://news.ycombinator.com/item?id=39637175
Compiler optimization improvements are also incentivized by costed opcodes in e.g. smart contracts.
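On the Bloch-sphere point above: single-qubit operations are rotations, and those rotations are just 2x2 unitary matrices. A minimal sketch with no quantum library (generic textbook Rx gate):

```python
import math

def rx(theta):
    """Single-qubit X rotation Rx(theta) as a 2x2 complex matrix (list of rows)."""
    c = math.cos(theta / 2)
    s = -1j * math.sin(theta / 2)
    return [[c, s], [s, c]]

def apply(u, state):
    """Apply a 2x2 matrix to a 2-amplitude state vector."""
    a, b = state
    return [u[0][0] * a + u[0][1] * b, u[1][0] * a + u[1][1] * b]

# Rx(pi) takes |0> to |1> (up to a global phase of -i).
print(apply(rx(math.pi), [1, 0]))
```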
PiFex: JTAG Hacking with a Raspberry Pi
A device that is a bit similar, but more advanced is the Glasgow Interface Explorer: https://www.crowdsupply.com/1bitsquared/glasgow
It costs $145 instead of $50, and you can interface with it via Python 3 over USB. It is quite flexible due to a reconfigurable FPGA and has some nice features such as automatically detecting UART baud rates and JTAG pinouts, ESD, under-voltage, and over-voltage protection on the I/O pins, and more.
A $5 Pi Pico has two UARTs, but is not an FPGA; "Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico" https://news.ycombinator.com/item?id=35117847#35120403
According to https://www.reddit.com/r/raspberrypipico/comments/1aut3l2/co... , pico-uart-bridge turns a pico into 6 TTL UARTs; https://github.com/Noltari/pico-uart-bridge
A company is building a giant compressed-air battery in the Australian outback
I must have missed something. Why not just use the water without the compressed air, i.e., pumped hydro? There must be some advantage, but they didn't seem to say. I'd guess maybe if your lower reservoir were underground, the water-only would require the generator systems to be down there, too, which would mean access for people as well, and being down a mine with a small lake's worth of water overhead seems pretty hazardous. Whereas by forcing air down to push water up, that whole below ground aspect can be almost entirely passive. Maybe?
I think you can store energy here not just in the gravitational potential energy of the water, but also in the compression of the air. I _think_ this means that you can get away with a smaller cavern than you could for just pumped hydro.
You might think that you'd need a large amount of water to make the energy release work, but I think it works like this. The force of the water on the air/water interface is dependent not on the reservoir volume, but on the weight of the water in the column (which depends only on the height of the shaft).
By digging a very deep shaft, you can have a very large force of water on the interface, and moving that interface an equal distance hence releases more energy than it would with a shorter shaft.
This way, you can store an arbitrary (up to your ability to compress air to a sufficient pressure, dig a deep shaft, and keep everything from blowing up) amount of energy.
I think if you have a 1 square meter cross section of shaft, and the shaft is 1 kilometer deep, then at the bottom the force of the water above is the weight of 1 square meter * 1 kilometer, or 1000 cubic meters of water, or 1 million kg.
The force then is 9.8×10^6 newtons.
Pressure is force/area, or 9.8×10^6 newtons/square meter here (since we have a unit area).
There's a formula for the energy in compressed air, it's... involved. I downloaded an excel file from here: https://ehs.berkeley.edu/publications/calculating-stored-ene...
It says that at this pressure, a 1000 cubic meter tank of air stores ~5000 kWh.
The one planned in California is supposed to store 4000 MWh, so I guess they have a tank that is ~a million cubic meters.
A water tank of the same size would store ~2700 MWh of energy (i.e., pumping that much water up a 1 km shaft requires that much energy), so the compressed air does seem to be more efficient.
It may also be that hot air turbines are cheaper and easier to maintain than hydropower turbines, but I'm not certain.
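The back-of-envelope numbers above check out under the standard adiabatic stored-energy formula for a pressure vessel, E = P1·V/(γ-1)·[1 - (P0/P1)^((γ-1)/γ)] - assuming (the thread doesn't say) that this is what the Berkeley spreadsheet computes:

```python
def air_energy_kwh(p_abs, volume_m3, p0=101_325.0, gamma=1.4):
    """Adiabatic stored energy of compressed air (common vessel-energy formula)."""
    joules = (p_abs * volume_m3 / (gamma - 1)) * (
        1 - (p0 / p_abs) ** ((gamma - 1) / gamma)
    )
    return joules / 3.6e6  # J -> kWh

def water_gpe_kwh(volume_m3, height_m, rho=1000.0, g=9.8):
    """Gravitational potential energy of lifting water up a shaft."""
    return rho * volume_m3 * g * height_m / 3.6e6

# Absolute pressure at the bottom of a 1 km water column: ~9.9 MPa.
p_bottom = 101_325.0 + 1000 * 9.8 * 1000
print(air_energy_kwh(p_bottom, 1000))  # ~5000 kWh per 1000 m^3 of air
print(water_gpe_kwh(1000, 1000))       # ~2722 kWh per 1000 m^3 of water
```

So per cavern volume, the compressed air stores roughly 1.8x what pumped hydro alone would over the same 1 km head, matching the ~5000 kWh vs ~2700 MWh-per-million-m^3 comparison above.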
Is there any pump efficiency advantage to multiple pressure vessels instead of one large?
You could probably move the compressed air in a GPE (gravitational potential energy) storage system, or haul it up on a winch and set it on a shelf; but would the lateral vectors due to thrust from predictable leakage change the safety liabilities?
Air with extra CO2 is less of an accelerant, but at what concentration of CO2 does the facility need air tanks for hazard procedures?
FWIU, you can also get energy from a CO2 gradient: "Proof-of-concept nanogenerator turns CO₂ into sustainable power" (2024) https://news.ycombinator.com/item?id=40079784
And, CO2 + Lignin => Better than plastic; "CO2 and Lignin-Based Sustainable Polymers with Closed-Loop Chemical Recycling" (2024) https://news.ycombinator.com/item?id=40079540
From "Oxxcu, converting CO₂ into fuels, chemicals and plastics" https://news.ycombinator.com/item?id=39111825 :
> "Solar energy can now be stored for up to 18 years [with the Szilard-Chalmers MOST process], say scientists"
> [...] Though, you could do CAES with captured CO2 and it would be less of an accelerant than standard compressed air. How many CO2 fire extinguishers can be filled and shipped off-site per day?
> Can CO2 be made into QA'd [graphene] air filters for [onsite] [flue] capture?
"Geothermal may beat batteries for energy storage" (2022) https://news.ycombinator.com/item?id=33288586 :
> FWIU China has the first 100MW CAES plant; and it uses some external energy - not a trompe or geothermal (?) - to help compress air on a FWIU currently ~one-floor facility.
Hubble Network makes Bluetooth connection with a satellite
> soil-moisture sensing
"Satellite images of plants' fluorescence can predict crop yields" (2024) ... "Sensor-Free Soil Moisture Sensing Using LoRa Signals (2022)" https://news.ycombinator.com/context?id=40234912
"Multimodal" Aperture synthesis: https://en.wikipedia.org/wiki/Aperture_synthesis
> The company joined Y Combinator’s Winter 2022 cohort and closed a $20 million Series A last March. Hubble’s first innovation was to develop software enabling off-the-shelf Bluetooth chips to communicate over very long ranges [600km] with low power.
> [...] Once it’s up and running, a customer would simply need to integrate their devices’ chipsets with a piece of firmware to enable connection to Hubble’s network.
Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU
I spent the last few days building out a nicer ChatGPT-like interface to use Mistral 7B and Llama 3 fully within a browser (no deps and installs).
I’ve used the WebLLM project by MLC AI for a while to interact with LLMs in the browser when handling sensitive data but I found their UI quite lacking for serious use so I built a much better interface around WebLLM.
I’ve been using it as a therapist and coach. And it’s wonderful knowing that my personal information never leaves my local computer.
Should work on Desktop with Chrome or Edge. Other browsers are adding WebGPU support as well - see the Github for details on how you can get it to work on other browsers.
Note: after you send the first message, the model will be downloaded to your browser cache. That can take a while depending on the model and your internet connection. But on subsequent page loads, the model should be loaded from the IndexedDB cache so it should be much faster.
The project is open source (Apache 2.0) on Github. If you like it, I’d love contributions, particularly around making the first load faster.
Github: https://github.com/abi/secret-llama Demo: https://secretllama.com
It's truly amazing how quickly my browser loads 0.6GB of data. I remember when downloading a 1MB file involved phoning up a sysop in advance and leaving the modem on all night. We've come so far.
97MB for the Worms 3 demo felt like an eternity.
So what games are in this LLM? Can it do solitaire yet?
It generates things that you get to look up citations for. It doesn't care if its output converges; it does what it wants differently every time.
> It generates things that you get to look up citations for.
Why would you use it for that? Use a search engine.
LLMs are a substitute for talking to people. Use them for things you would ask someone else about and then not follow up with searching for references.
Blessed Thistle Enhances Nerve Regeneration
> Cnicin, extracted from Blessed Thistle, has been shown to speed up axon growth, essential for nerve repair.
Cnicin: https://en.wikipedia.org/wiki/Cnicin
Cnicus -> Centaurea (Asteraceae) https://en.wikipedia.org/wiki/Centaurea
Centaurea benedicta: https://en.wikipedia.org/wiki/Centaurea_benedicta :
> Centaurea benedicta, known by the common names St. Benedict's thistle, blessed thistle, holly thistle and spotted thistle, is a thistle-like plant in the family Asteraceae, native to the Mediterranean region, [...]. It is known in other parts of the world, including parts of North America, as an introduced species and often a noxious weed.
Coverage Guided Fuzzing – Extending Instrumentation to Hunt Down Bugs Faster
From https://news.ycombinator.com/item?id=39416628 :
> GH topic: coverage-guided-fuzzing: https://github.com/topics/coverage-guided-fuzzing
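For a sense of what "coverage guided" means, here is a toy mutation-fuzzing loop. This is a generic illustration (using `sys.settrace` for line coverage and a contrived target; real fuzzers like libFuzzer use compile-time edge instrumentation instead):

```python
import random
import sys

def coverage_of(func, data):
    """Run func(data) and record the set of (function, lineno) lines executed."""
    seen = set()
    def tracer(frame, event, arg):
        if event == "line":
            seen.add((frame.f_code.co_name, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        func(data)
    except Exception:
        pass  # crashes are interesting too; a real fuzzer would save them
    finally:
        sys.settrace(None)
    return seen

def target(data):
    """Contrived target: a 'bug' guarded behind two byte comparisons."""
    if len(data) > 2 and data[0] == ord("F"):
        if data[1] == ord("U"):
            raise ValueError("bug found")

def fuzz(target, seed=b"AAA", rounds=2000):
    random.seed(0)
    corpus = [seed]
    global_cov = coverage_of(target, seed)
    for _ in range(rounds):
        mutant = bytearray(random.choice(corpus))
        mutant[random.randrange(len(mutant))] = random.randrange(256)
        cov = coverage_of(target, bytes(mutant))
        if cov - global_cov:          # new lines reached -> keep this input
            corpus.append(bytes(mutant))
            global_cov |= cov
    return corpus
```

The guidance is the `cov - global_cov` check: inputs that reach new code are kept and mutated further, which is what lets the fuzzer walk past the `data[0] == 'F'` guard instead of having to guess both bytes at once.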
Satellite images of plants' fluorescence can predict crop yields
> This strategy takes advantage of the growing availability of satellite data and is cheaper to use and faster to access than other yield-prediction methods, he said.
- "Sensor-Free Soil Moisture Sensing Using LoRa Signals (2022)" https://news.ycombinator.com/item?id=38768947 https://dl.acm.org/doi/abs/10.1145/3534608 :
> [...] However, existing commercial soil moisture sensors are either expensive or inaccurate, limiting their real-world deployment. In this paper, we utilize wide-area LoRa signals to sense soil moisture without a need of dedicated soil moisture sensors. Different from traditional usage of LoRa in smart agriculture which is only for sensor data transmission, we leverage LoRa signal itself as a powerful sensing tool. The key insight is that the dielectric permittivity of soil which is closely related to soil moisture can be obtained from phase readings of LoRa signals. Therefore, antennas of a LoRa node can be placed in the soil to capture signal phase readings for soil moisture measurements. Though promising, it is non-trivial to extract accurate phase information due to unsynchronization of LoRa transmitter and receiver. In this work, we propose to include a low-cost switch to equip the LoRa node with two antennas to address the issue. We develop a delicate chirp ratio approach to cancel out the phase offset caused by transceiver unsynchronization to extract accurate phase information
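The two steps of the paper's insight - soil permittivity shows up as extra signal phase, and permittivity maps to moisture - can be illustrated with a one-way phase-delay model and the standard Topp et al. (1980) empirical relation (generic textbook formulas, not the paper's calibration; 915 MHz US LoRa band assumed):

```python
import math

def phase_delay(depth_m, permittivity, freq_hz=915e6):
    """Extra phase (radians) accumulated crossing depth_m of soil with the given
    relative dielectric permittivity, compared to the same path in vacuum."""
    c = 299_792_458.0
    wavelength = c / freq_hz
    return 2 * math.pi * depth_m * (math.sqrt(permittivity) - 1) / wavelength

def topp_moisture(permittivity):
    """Topp et al. (1980) empirical volumetric water content from permittivity."""
    k = permittivity
    return -5.3e-2 + 2.92e-2 * k - 5.5e-4 * k**2 + 4.3e-6 * k**3

# Dry soil (K ~ 3) vs wet soil (K ~ 25): more water -> higher permittivity
# -> larger phase delay, which is what the LoRa receiver can measure.
print(topp_moisture(3))             # a few percent water content
print(topp_moisture(25))            # ~40% water content
print(phase_delay(0.1, 25))         # more phase than phase_delay(0.1, 3)
```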
Hawaiian scientist is working with distilleries to save native sugarcane
"Sugar cane waste converted into concrete-beating Sugarcrete" https://newatlas.com/environment/sugar-cane-waste-sugarcrete... https://news.ycombinator.com/item?id=39290854
Perhaps Sugarcrete operations could help to preserve and support native and classic varieties of sugarcane.
FWIU, with apple trees you must have at least two different varieties for cross-pollination to succeed.
From https://news.ycombinator.com/item?id=38935713 :
> Add'l notes on CEB, Algae, Sargassum, Hempcrete in the 2024 International and US Residential Building Code, and
> The Liberator
Autonomous Quantum Error Correction of Gottesman-Kitaev-Preskill States
> [...] Finally, the advantages of the fully autonomous approach presented here and the measure-and-feedback approach of Ref. [17] could be combined. Indeed, by using a measurement followed by an unconditional reset of the auxiliary, one would have access to error syndromes that can be used in a second layer of quantum error correction, without requiring fast feedback and feed-forward operations within the first layer.
- "Noise Fuels Quantum Leap, Boosting [spin electron] Qubit Performance by 700%" (2024) https://news.ycombinator.com/item?id=39762802
> In conclusion, we have demonstrated the preparation of GKP logical states in the mode of a superconducting cavity with state fidelities mainly limited by bit flips of the auxiliary and intrinsic dephasing of the storage mode. A quantum error correcting protocol based on reservoir engineering, in which the reset of the auxiliary serves as the dissipative process, is demonstrated. We have made the QEC protocol completely autonomous through the implementation of an unconditional auxiliary reset based on a dissipative swap to a lossy resonator. Despite the auxiliary having a relaxation time 10 times shorter than the storage mode, we demonstrate autonomous QEC of GKP states with a gain on the logical lifetime of about 24%. To guide the requirements at the second layer of error correction, further work is needed to upper bound the gain from QEC of the GKP code accessible with realistic hardware and software improvements.
- "A physical qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
UV Superradiance from Mega-Networks of Tryptophan in Biological Architectures
"Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures" (2024) https://pubs.acs.org/doi/10.1021/acs.jpcb.3c07936
- "Quantum fiber optics in the brain enhance processing, may protect against degenerative diseases" (2024) https://phys.org/news/2024-04-quantum-fiber-optics-brain-deg...
Artificial intelligence by mimicking natural intelligence
> Provided a connectome for a whole brain, it’s possible to simulate the model forward, using single-neuron models of varying biological plausibility, from linear-integrate-and-fire to detailed multi-compartment models with Hodgkin-Huxley dynamics. Simulate the body and environment, in addition, and you have yourself a whole-brain-body-environment simulation. We’re not there yet, but one could imagine this could happen in the not-too-distant future, especially for smaller organisms
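The simplest end of the single-neuron-model spectrum mentioned in the quote is integrate-and-fire. A minimal leaky-integrate-and-fire sketch (generic textbook model with assumed parameters, not from the article):

```python
def lif_spikes(current_a, dt=1e-4, tau=0.02, v_rest=-0.07,
               v_thresh=-0.05, v_reset=-0.07, r=1e7):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R*I.
    Simulates 1 second of constant input current and returns spike times (s)."""
    v, t, spikes = v_rest, 0.0, []
    for _ in range(int(1.0 / dt)):
        v += dt * (-(v - v_rest) + r * current_a) / tau  # Euler step
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset                                   # fire and reset
        t += dt
    return spikes

# Strong input (R*I = 50 mV) drives regular spiking; weak input (10 mV)
# never reaches the 20 mV threshold above rest, so the neuron stays silent.
print(len(lif_spikes(5e-9)))
print(len(lif_spikes(1e-9)))
```

Hodgkin-Huxley models sit at the other end of that spectrum, replacing the single leak term with voltage-gated ion-channel dynamics per compartment.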
Carl Rogers' Nineteen Propositions speak of an "objective phenomenological field theory", wherein a self is only interpretable in the context of the fields; which we now understand are typically nonlinear, and literally n-way nonlocally entangleable.
Is brain state completely describable without modeling representational drift: the field around the brain created by spreading activation that FWIU presumably nonlinearly affects connectome dynamics?
(Plenoptic function, Wave field recording, stochastic data carving from the aether, reflections in raindrops, mirrors in spacetime, NIR light stimulates neuronal growth, GA with perturbation and tissue ethics, brain2computer, brain2brain, BRIAN)
Hypothesis: There's not enough latent information in training data to infer or clone human cognitive processes without stream of consciousness training data.
From https://news.ycombinator.com/item?id=32819221 :
>> Is it necessary to simulate the quantum chemistry of a biological neural network in order to functionally approximate a BNN with an ANN?
>> "Neurons Are Fickle. Electric Fields Are More Reliable for Information" (2022)
>>> A new study suggests that electric fields may represent information held in working memory, allowing the brain to overcome "representational drift," or the inconsistent participation of individual neurons
From https://news.ycombinator.com/item?id=35877402#35886145 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
How does a connectomic description of the brain differ from a wave field recording of a brain; and given "Representational drift" can a connectomic model be sufficient or stable in functional localization?
Show HN: Fun tool to turn your headshot into a MiniFigure
This is neat. It worked really well for my image. I assume it is just generating an image in the style of a mini figure?
It would be really incredible if it could generate a lego mini figure using real components (i.e. this combination of hair, head, torso, etc). I can see how this would be much more difficult. But it would also make it a really interesting product imo. People already pay a lot for mini figures - would probably pay a LOT for one that looks like them, or for a set that looks like their friends/family.
Thank you for trying and the feedback.
That's correct - it creates an image based on the style of a minifigure using DALLE-3. The challenge with real components is that the APIs / prompts don't allow for facial recognition prompts that can reveal the identity of the person. So the final minifigures won't be accurate, especially the smaller parts.
From "PartCAD the first package manager for CAD models" https://news.ycombinator.com/item?id=38789333 :
> Ideas for open source (parametric) parts libraries:
> - ldraw LEGO unofficial parts catalog: "LDraw.org Parts Update 2023-06" https://library.ldraw.org/updates?latest :
Build123d, bd-warehouse
Are any of the major LLMs sufficiently trained on ldraw - or e.g. build123d, a parametric CAD library - for this task?
TinkerCAD can save a model as a composition of known bricks.
From https://news.ycombinator.com/item?id=37576296 :
> Is there a blenderGPT-like tool trained on build123d Python models?
ai-game-development-tools lists a few CAD LLM apps like blenderGPT and blender-GPT: https://github.com/Yuan-ManX/ai-game-development-tools#3d-mo...
Thank you for the links. Two days after launch, most people are asking me how to print these.
I am not so familiar with these tools but I will look into these and learn more.
OCCT supports STL, STEP, and AMF files for [extrusion] 3d printing with [sustainable: hemp, wood, bioplastic, etc.] filament.
Laser engraving on 3d-printed wood is a thing, too.
"GhostSCAD: Marrying OpenSCAD and Golang" (2022) https://news.ycombinator.com/item?id=30940563 re: OpenSCAD or OCCT (FreeCAD, cadquery, build123d), which supports NURBS
Unveiling a new quantum frontier: Frequency-domain entanglement
"NOON-state interference in the frequency domain" (2024) https://www.nature.com/articles/s41377-024-01439-9 :
> [...] In this paper, we demonstrate the photon number path entanglement in the frequency domain by implementing a frequency beam splitter that converts the single-photon frequency to another with 50% probability using Bragg scattering four-wave mixing. The two-photon NOON state in a single-mode fiber is generated in the frequency domain, manifesting the two-photon interference with two-fold enhanced resolution compared to that of single-photon interference, showing the outstanding stability of the interferometer. This successful translation of quantum states in the frequency domain will pave the way toward the discovery of fascinating quantum phenomena and scalable quantum information processing.
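For context on the "two-fold enhanced resolution": in standard quantum-metrology notation (a textbook sketch, not taken from the paper itself), a two-mode N00N state accumulates a phase N times faster than a single photon:

```latex
% N00N state: N photons superposed across two modes (here, two frequencies)
\lvert \mathrm{N00N} \rangle
  = \tfrac{1}{\sqrt{2}}\bigl(\lvert N,0\rangle + \lvert 0,N\rangle\bigr)
% A phase shift \varphi on one mode is picked up N times:
\;\xrightarrow{\ \varphi\ }\;
  \tfrac{1}{\sqrt{2}}\bigl(\lvert N,0\rangle + e^{iN\varphi}\lvert 0,N\rangle\bigr)
```

So interference fringes have period 2π/N; for the two-photon (N = 2) state demonstrated here, that is the two-fold resolution enhancement over single-photon interference.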
PyPy v7.3.16
I never really did much with PyPy; do people mostly use it in a deployed application setting? I ask because I was looking over at the PyPy Speed page...
Looks like Django is insanely faster under PyPy. Feels like a potential waste not to use PyPy on a deployed web app in most cases. I wonder how FastAPI scales with PyPy and other Python interpreters.
I've put a bit of effort into trying to make it work with the Python apps we write at my company. We have a lot of electrical engineers who only write Python, and we're butting up against the limit of what the language can do with the data we need to process.
It's worked well in some cases, but the main situation where we needed it to work was with a GUI application where there was no separation between computation and the GUI elements. I tried really hard, and got frustratingly close, but could not build wxPython so that it would work using PyPy.
But it's super cool imo, as someone who has to write/maintain python code it's awesome to see some innovative work to make the language faster.
I haven't used it in a long time, but did you try Qt as well? PySide worked insanely nicely for me, although this wasn't with PyPy, to be fair.
Well the thing is that it's code that's already been written, and honestly is kind of a mess. It would take more effort than it's worth to try and rewrite using a different GUI framework.
I much prefer Qt but this was written before I started here so it is what it is :/
The longer the test runs, the more data we end up losing, because they're looping over a dataframe that keeps growing and growing, and the program just can't finish that loop in the 2 seconds it has to run. But it's not a big enough deal for us to fix.
FWICS pyqtgraph has a TableWidget: https://pyqtgraph.readthedocs.io/en/latest/api_reference/wid... https://github.com/pyqtgraph/pyqtgraph/blob/master/pyqtgraph... :
> Extends QTableWidget with some useful functions for automatic data handling and copy / export context menu. Can automatically format and display a variety of data types (see setData() for more information).
DataTreeWidget handles nested structs: https://pyqtgraph.readthedocs.io/en/latest/api_reference/wid...
SO says it's better to extend Qt QTableView and do pagination for larger datasets: https://stackoverflow.com/questions/61517220/pyqt-pandas-fas... https://www.pythonguis.com/tutorials/qtableview-modelviews-n...
pyqtgraph mentions CUDA and NumPy support, but not pandas. Hard to believe there's not yet specific support for drawing spreadsheets of pandas dataframes in Qt or pyqtgraph.
dask.DataFrame and dask-CuDF also support CUDA. sympy's lambdify function compiles readable symbolic algebra to fast code for various libraries and GPUs.
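A minimal sympy.lambdify sketch, assuming sympy is installed (the "math" backend targets the stdlib math module; "numpy" is the usual choice for array input and GPU-adjacent libraries have their own printers):

```python
import sympy as sp

x = sp.symbols("x")
expr = x**2 + 1

# lambdify compiles the symbolic expression into a fast numeric function.
f = sp.lambdify(x, expr, "math")

print(f(3))  # 10
```

The same expression can be re-lambdified against different backends without changing the symbolic model, which is the main draw for numeric pipelines.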
The Dataframe Protocol (__dataframe__) interface spec's Purpose and Scope doc mentions the NumPy __array_interface__ protocol, which may do sufficient auto-casting to a NumPy array before trying to draw a data table with formatting derived from then-removed columnar metadata: https://data-apis.org/dataframe-protocol/latest/purpose_and_...
Practically, converting to Arrow-backed dtypes with df.convert_dtypes(dtype_backend="pyarrow") may be a quick performance boost.
Jupyter kernels with Papermill or cron and templated parameters at the top of a notebook with a dated filename in a (repo2docker compatible) git repo also solve for interactive reports. To provision a temporary container for a user editing report notebooks, there's Voila on binderhub / jupyterhub, https://github.com/binder-examples/voila
or repo2jupyterlite in WASM with MathTex and Pyodide's NumPy/pandas/sympy: https://github.com/jupyterlite/repo2jupyterlite
But that's not a GUI, that's notebooks. For Jupyter integration, TIL pyqtgraph has jupyter_rfb, Remote Frame Buffer: https://github.com/vispy/jupyter_rfb
Is there an overview of the user share of PyPy vs CPython? I have the feeling that PyPy usage has declined in recent years.
How well does PyPy work together with frameworks like PyTorch, JAX, TensorFlow, etc? I know there has been some work to support NumPy, but I guess all these other frameworks are much more relevant nowadays.
"Writing extension modules for pypy" https://doc.pypy.org/en/latest/extending.html ; CFFI, ~ ctypes (libffi), cppyy, or RPython
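As a stdlib-only sketch of the ctypes (libffi) route mentioned above — calling a function from the C math library without writing an extension module; the library name lookup is platform-dependent:

```python
import ctypes
import ctypes.util

# Locate and load the C math library (name varies by platform;
# on Linux this is typically libm.so.6).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # ~1.4142135623730951
```

CFFI and cppyy offer the same idea with nicer declaration syntax; ctypes has the advantage of shipping with both CPython and PyPy.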
Show HN: I built a self-hosted status page and monitoring tool for my projects
Hey HN! I'm excited to share Statusnook - a status page and monitoring tool.
I built Statusnook for my own projects, but I soon began incorporating bits of feedback from friends & colleagues.
My goal was to create a tool with a solid essential feature set, and to make it easy to self-host.
I welcome any feedback or suggestions.
awesome-status-page > Open Source: https://github.com/ivbeg/awesome-status-pages?tab=readme-ov-...
Logging cert hashes and checking them against a configurable list and/or CT (Certificate Transparency) logs would be helpful
Thanks for the suggestion, I hadn't considered this.
US came up with a plan to upgrade 100k miles of transmission lines in 5 years
> To ensure 100,000-miles-in-five-years goal is achieved and to mobilize private and public sector leaders, the Biden administration has made funding available through the Grid Resilience and Innovation Partnership (GRIP) program for upgrades to existing transmission lines. Plus, the US Department of Energy today simplified the environmental review process for transmission line upgrades.
- "Replacing Wires Could Double How Much Electricity the US Grid Can Handle" https://news.ycombinator.com/item?id=39997189
Py2wasm – A Python to WASM Compiler
Nice. Maybe pie in the sky, but considering how prevalent Python is in machine learning, how likely is it that at some point most ML frameworks written in Python become executable in a web browser through WASM?
Depending on the underlying machine learning activity (GPU/no-GPU), it should already be possible today. Any low level machine learning loops are raw native code or GPU already, and Python’s execution speed is irrelevant here.
The question is do the web browsers and WebAssembly have enough RAM to do any meaningful machine learning work.
WebGL, WebGPU, WebNN
Which has better process isolation: containers, or Chrome's app sandbox (e.g. at Pwn2Own)? How do I know what is running in a WASM browser tab?
JupyterLite, datasette-lite, and the Pyodide plugin for vscode.dev ship with Pyodide's WASM compilation of CPython and SciPy Stack / PyData tools.
`%pip install -q mendeley` in a notebook opened with the Pyodide Jupyter kernel works by calling `await micropip.install(["mendeley"])` IIRC.
picomamba installs conda packages from emscripten-forge, which is like conda-forge: https://news.ycombinator.com/item?id=33892076#33916270 :
> FWIU Service Workers and Task Workers and Web Locks are the browser APIs available for concurrency in browsers
sqlite-wasm, sqlite-wasm-http, duckdb-wasm, (edit) postgresql WASM / pglite ; WhatTheDuck, pretzelai ; lancedb/lance is faster than pandas with dtype_backend="arrow" and has a vector index
"SQLite Wasm in the browser backed by the Origin Private File System" (2023) https://news.ycombinator.com/item?id=34352935#34366429
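Tangentially: Pyodide ships CPython's standard library, so the stdlib sqlite3 module also works in-browser without sqlite-wasm glue; a minimal sketch that runs identically under regular CPython (persistence via OPFS is a separate concern, not shown):

```python
import sqlite3

# In-memory database; under Pyodide the same code could target a
# file path backed by the browser's virtual filesystem instead.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (name TEXT, qty INTEGER)")
con.executemany("INSERT INTO items VALUES (?, ?)",
                [("bolt", 10), ("nut", 20)])

total = con.execute("SELECT SUM(qty) FROM items").fetchone()[0]
print(total)  # 30
```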
"WebGPU is now available on Android" (2024) https://news.ycombinator.com/item?id=39046787
"WebNN: Web Neural Network API" https://www.w3.org/TR/webnn/ : https://news.ycombinator.com/item?id=36159049
"The best WebAssembly runtime may be no runtime" (2024) https://news.ycombinator.com/item?id=38609105
(Edit) emscripten-forge packages are built for the wasm32-unknown-emscripten build target, but wasm32-wasi is what is now supported by CPython (and PyPI?): https://www.google.com/search?q=wasm32-wasi+conda-forge
Show HN: A storybook designed to teach kids about how computers work
I’ve been working on a unique storybook designed to teach kids about how computers work, and I would love to get your feedback.
Set 500 years in the future, the story follows two kids – one a robot, the other a human – as they explore the workings of what to them is ancient technology: our present-day computers. I’ve aimed to keep each story short and engaging, sprinkling in humor and illustrations to captivate young readers.
As an open-source project, you’re also welcome to check out the source here: https://github.com/yong/lostlanguageofthemachines
How is explaining something with higher mathematics reasonable for children? Exponentials aren't something you can use. Also, shorter sentences are usually easier to understand.
Multiplication is repeated Addition. Exponentiation is repeated Multiplication.
    assert 2+3 == 5
    assert 2*3 == 6 == 2 + 2 + 2
    assert 2**3 == 8 == ((2+2) + (2+2))

And then fractional countability:

    assert 2.5*2 == 5 == (2*2) + (0.5*2)
    assert 2.5*10 == 25
    assert 2.5*100 == 250
    assert 2.5*104 == 260 == (2.5*100) + (2.5*4)
Functions are quickly demo'able with Desmos or GeoGebra, or SymPy and matplotlib:

    # f(x) = x+1
    def f(x):
        return x+1

    assert f(0) == 1
    assert f(1) == 2
    assert f(999) == 1000
Calculus derivatives: the Nth derivatives of displacement are velocity, acceleration, jerk, jounce.

    # (time t in seconds, displacement in meters)
    xyvals = [(0, 0), (1, 10), (2, 120)]

    # velocity:
    (10 - 0) / (1 - 0) = 10 m/s
    (120 - 10) / (2 - 1) = 110 m/s
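The finite differences above can also be run as Python; a stdlib-only sketch (the helper name and the midpoint timestamps are illustrative choices, not from the thread):

```python
# Finite differences approximate derivatives of sampled displacement.
xyvals = [(0, 0), (1, 10), (2, 120)]  # (t seconds, x meters)

def finite_difference(points):
    """First finite difference of (t, x) pairs: (dx/dt, midpoint t)."""
    return [((x1 - x0) / (t1 - t0), (t0 + t1) / 2)
            for (t0, x0), (t1, x1) in zip(points, points[1:])]

velocity = finite_difference(xyvals)  # m/s
# Differencing again gives acceleration; jerk and jounce follow the same way.
acceleration = finite_difference([(t, v) for v, t in velocity])  # m/s^2

print(velocity)      # [(10.0, 0.5), (110.0, 1.5)]
print(acceleration)  # [(100.0, 1.0)]
```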
Environment shapes emotional cognitive abilities more than genes
> Decades of extensive research utilizing the classical twin paradigm have consistently demonstrated the heritability of nearly all cognitive abilities so far investigated.
> “Our findings emphasize that these shared family environmental factors, such as parental nurturing and the transmission of cultural values, likely play a significant role in shaping the mental state representations in metacognition and mentalizing.”
"Distinct Genetic and Environmental Origins of Hierarchical Cognitive Abilities in Adult Humans" (2024) https://www.cell.com/cell-reports/fulltext/S2211-1247(24)003... :
> Human cognitive abilities ranging from basic perceptions to complex social behaviors exhibit substantial variation in individual differences. These cognitive functions can be categorized into a two-order hierarchy based on the levels of cognitive processes. Second-order cognition including metacognition and mentalizing monitors and regulates first-order cognitive processes. These two-order hierarchical cognitive functions exhibit distinct abilities. However, it remains unclear whether individual differences in these cognitive abilities have distinct origins. We employ the classical twin paradigm to compare the genetic and environmental contributions to the two-order cognitive abilities in the same tasks from the same population. The results reveal that individual differences in first-order cognitive abilities were primarily influenced by genetic factors. Conversely, the second-order cognitive abilities have a stronger influence from shared environmental factors. These findings suggest that the abilities of metacognition and mentalizing in adults are profoundly shaped by their environmental experiences and less determined by their biological nature.
>These cognitive functions can be categorized into a two-order hierarchy based on the levels of cognitive processes. Second-order cognition including metacognition and mentalizing monitors and regulates first-order cognitive processes.
This way of modeling Cognitive functions in two distinct hierarchical ways, is it the same theory underlying the idea of "System 1 & System 2" presented in "Think fast and slow" by Daniel Kahneman?
How accepted/supported by empirical evidence is this model?
There is observable hierarchy in visual and auditory cortical topology.
Is hierarchical clustering appropriate for cognitive processes?
Control for clusterable personality factors, attachment styles, educational instruction styles, parenting styles, etc.:
Cognitive psychology > Cognitive psychology vs. cognitive science: https://en.wikipedia.org/wiki/Cognitive_psychology#Cognitive...
Cognitive psychology > Cognitive processes: https://en.wikipedia.org/wiki/Cognitive_psychology#Cognitive...
Defence mechanism > Relation with coping; Mature defense mechanisms or not, a Boolean: https://en.wikipedia.org/wiki/Defence_mechanism
Social cognition > Social cognitive neuroscience: https://en.wikipedia.org/wiki/Social_cognition#Social_cognit...
Systems neuroscience: https://en.wikipedia.org/wiki/Systems_neuroscience
Hypotheses: Theory of mind is advantageous in assessments of emotional cognitive abilities, and even an NT or non-NT Boolean factor also predicts variance in the given assessments
Theory of mind > Brain mechanisms: https://en.wikipedia.org/wiki/Theory_of_mind#Brain_mechanism...
Three-Stratum Theory: https://en.wikipedia.org/wiki/Three-stratum_theory :
> The three-stratum theory is a theory of cognitive ability proposed by the American psychologist John Carroll in 1993.[1][2] It is based on a factor-analytic study of the correlation of individual-difference variables from data such as psychological tests, school marks and competence ratings from more than 460 datasets. These analyses suggested a three-layered model where each layer accounts for the variations in the correlations within the previous layer.
> The three layers (strata) are defined as representing narrow, broad, and general cognitive ability. The factors describe stable and observable differences among individuals in the performance of tasks. Carroll argues further that they are not mere artifacts of a mathematical process, but likely reflect physiological factors explaining differences in ability (e.g., nerve firing rates). This does not alter the effectiveness of factor scores in accounting for behavioral differences.
There are multiple methods of studying neural topology and emergent cognitive processes, and their possibly hierarchically clusterable topology in feature space. What are some of the current developments in neural topology and feature clustering?
Three-Stratum Theory > See also lists: CHC, g-factor, Fluid and crystallized intelligence, and g-VPR (2005).
CHC: Cattell–Horn–Carroll theory: https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93C...
g-VPR (2005): https://en.wikipedia.org/wiki/G-VPR_model
/? hierarchy of cognitive processes: https://www.google.com/search?q=hierarchy+of+cognitive+proce... https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=hie...
Neurodevelopmental framework for learning > Other learning frameworks references e.g. CHC and further developments in education: https://en.wikipedia.org/wiki/Neurodevelopmental_framework_f...
OTOH, other factors to control for: postal code, and school funding and teaching practices in those years:
Compensatory education: https://en.wikipedia.org/wiki/Compensatory_education
Remedial education > Research on outcomes: https://en.wikipedia.org/wiki/Remedial_education
Learning styles: https://en.m.wikipedia.org/wiki/Learning_styles :
> [...] specific study strategies, unrelated to learning style, were positively correlated with final course grade.[46]
Differentiated instruction: https://en.wikipedia.org/wiki/Differentiated_instruction :
> Teachers can differentiate in four ways: 1) through content, 2) process, 3) product, and 4) learning environment based on the individual learner. [7]
But FWIU none of these models of cognitive hierarchy or instruction are informed by newer developments in topological study of neural connectivity; from https://news.ycombinator.com/item?id=18218504 :
> According to "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function" (2017) https://www.frontiersin.org/articles/10.3389/fncom.2017.0004... , the human brain appears to be [at most] 11-dimensional (11D); in terms of algebraic topology https://en.wikipedia.org/wiki/Algebraic_topology
There is math for classical causal inference with observational data; that FWIU also hasn't been applied to cognitive outcomes studies.
From "The World Needs Computational Social Science" https://news.ycombinator.com/item?id=37746921 :
> "Answering causal questions using observational data" (2021) [PDF] https://www.nobelprize.org/uploads/2021/10/advanced-economic... https://www.nobelprize.org/prizes/economic-sciences/2021/pop...
¡Sí se puede! ("Yes, it can be done!")
Emotional self regulation is a normed behavior that is modelable.
Head start programs would thus be justified in having SEL Social and Emotional Learning components; but how much more effective is parental behavioral modeling than non-parental caregiving?
Can't be done.
In Africa, they say "It takes a village".
One of perhaps 20 questions to split the sets with: "Can people change?"
fulltext: https://www.cell.com/cell-reports/fulltext/S2211-1247(24)003...
> It takes a village
Compare the first 36 characters of the 3 Character Classic: https://ctext.org/three-character-classic
> Can people change?
With training, dogs and horses improve their cross-species mentalizing; I'd hope training humans on same-species would be even easier.
There appears to be hippocampal neurogenesis even in old age.
From "Multiple GWAS finds 187 intelligence genes and role for neurogenesis/myelination" (2018) https://news.ycombinator.com/item?id=16337941 :
> re: nurture, hippocampal plasticity and hippocampal neurogenesis also appear to be affected by dancing and omega-3,6 (which are transformed into endocannabinoids by the body): https://news.ycombinator.com/item?id=15109698
Neuroplasticity: https://en.wikipedia.org/wiki/Neuroplasticity
Adult neurogenesis: https://en.wikipedia.org/wiki/Adult_neurogenesis
When we forget, we must re-learn. By studying forgetting, we learn about learning and creativity.
Forgetting curve: https://en.wikipedia.org/wiki/Forgetting_curve
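The forgetting curve is commonly modeled as exponential decay, R = exp(−t/S), where R is retrievability, t is elapsed time, and S is memory stability; a stdlib-only sketch (the stability value is illustrative, not from any cited study):

```python
import math

def retention(t_days, stability_days):
    """Ebbinghaus-style forgetting curve: R = exp(-t/S)."""
    return math.exp(-t_days / stability_days)

# With S = 2 days, retention falls to 1/e after 2 days.
print(round(retention(0, 2), 3))  # 1.0
print(round(retention(2, 2), 3))  # 0.368
```

Spaced repetition schedulers effectively try to grow S by reviewing just before R drops too low.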
Mentally stimulating work plays key role in staving off dementia, study finds
I've always wondered what the health trade-off is for a less mentally stimulating role if it dramatically reduces your stress levels and offers better work/life balance.
Same here. I worked repetitive manual labor jobs for a long time before I switched to tech. I was bored out of my mind at work, but after work loved playing games and working on programming/tech/electronics projects.
Now that I'm in tech I have 0 desire to do anything screen based or deep thinking/problem solving after work. There are days I look out the office window and kind of wish I was the guy mowing the lawn and trimming trees. But I know I'd be bored.
The grass is always greener I guess.
Occupational burnout > Treatment and prevention: https://en.wikipedia.org/wiki/Occupational_burnout
Work-life balance: https://en.wikipedia.org/wiki/Work%E2%80%93life_balance
Maybe you could start a 20% hobby project at work on their tab. "Why companies lose their best innovators (2019)" https://news.ycombinator.com/item?id=23887903
Hypothesis: it's probably normal and healthy to pursue different biomechanical activities when not working (possibly regardless of tech career or not)
Does light have an infinite lifetime?
New Mexico: Psychologists to dress up as wizards when providing expert testimony
>> psychologist or psychiatrist
Does therapy help some people? It sure does.
Recall, the "reproducibility crisis" came to everyone's attention because of how bad the papers from these two camps were. Nothing really changed, no one got fired, and the situation seems to have not only not improved, but the quality has gone DOWN.
Maybe the law should be they have to wear the hats till they unfuck the last 30 years of publishing...
[flagged]
OMFG, people! Therapy isn't pseudoscience. There is a shitload of evidence that it works. You don't get to wave around the reproducibility crisis like it's a magic wand to make things that scare you go away. Now show me on the doll where therapy hurt you.
>There is a shitload of evidence that it works.
As "verified" not by hard measurements (like in physics), proofs (as in math), or specific clinical results (e.g. blood panels in medicine), but with questionnaires and "assessments" by the very people whose careers depend on the continued existence and esteem of the field?
"All kinds of evidence that it works" was there for Freudian psychoanalysis too.
>All kinds of evidence that it works was there for Freudian psychoanalysis too.
Not really, which is why it is all but gone from university psychology research.
It's gone because it went out of fashion.
The psychoanalysis institutes at the time had all kinds of papers and results of their huge success...
It gets funnier than that! [0]
> ... in 1952 in which H. J. Eysenck, an English psychiatrist, divided thousands of World War II veterans hospitalized for mental illness into three groups. One group was treated by psychoanalysis, another was given other kinds of therapy, and the third received no treatment at all. The men were then measured on an "improvement scale" with these stunning results: 44 per cent improved with psychoanalysis, 64 per cent improved with other therapies, and 72 per cent got better with no treatment at all.
> More recently, Werner Mendel, professor of psychiatry at the University of Southern California, directed a similar study. Trained psychoanalysts treated the patients in the first group, less highly trained psychologists and psychotherapists treated the second group, and the third group was under the care of employees with no formal training. The improvement ratings showed the following remarkable results: the third group, treated by staff members without formal training in therapy, showed the most improvement, whereas the first group, treated by staff members with the most extensive training in psychotherapy, showed the least improvement. Understandably, Dr. Mendel was startled by the results and repeated the experiment, with the same outcome.
Now, I'm not going down this rabbit-hole (I did once and wasted a lot of time!) but there is a lot of evidence that just a bit of attention from another person may be helpful, and affective attention may work wonders.
[0] https://www.religion-online.org/article/sin-guilt-and-mental...
Which behaviors are reinforced more than a general level of consideration and positive regard?
I hadn't heard of this study. I had heard that the case count for psychoanalytic psychotherapy was insufficient to fairly cluster by symptoms as (e.g. hierarchical) features, even. And then Milgram's selection bias.
Is all "Talk Therapy" bad then, and what's a good way to actually help?
Clean language > Clean language in detail: https://en.wikipedia.org/wiki/Clean_language#Clean_language_... :
> Clean language offers a template for questions that are as free as possible of the facilitator's suggestions, presuppositions, mind-reading, second guessing, references and metaphors. Clean questions incorporate all or some of the client's specific phrasing and might also include other auditory components of the client's communication such as speed, pitch, tonality. Besides the words of the client, oral sounds (sighs, oo's and ah's) and other nonverbals (e.g. a fist being raised or a line-of-sight) can be replicated or referenced in a question when the facilitator considers they might be of symbolic significance to the client.
> [...] "What would you like to have happen?"
Clean Language Questions : https://cleanlearning.co.uk/blog/discuss/clean-language-ques... :
> The questions containing the verb ‘to be’ help to keep time still and are mostly used for developing individual perceptions:
> Questions containing the verb ‘to happen’ generally encourage the client to move time forwards or backwards:
> Questions which utilise other verbs are:
A prompt: Leadership and Public Speaking: which can utilize Clean Language, and which are necessarily suggestive of direction, course, or sequelae of events?
Scientists solve long-standing mystery surrounding the moon's 'lopsided' geology
Speaking of anomalous geology: I went down a bit of a rabbit hole this evening. I was reading about survey-grade compasses, which are balanced to account for magnetic dip (the closer you get to the poles, the more dip you have), and the manufacturer helpfully provided a map to help you pick the best dip correction. Looking at the credited source (the World Magnetic Model) for a bit greater resolution... and, what the heck is that in the South Atlantic Ocean?
https://www.ncei.noaa.gov/sites/default/files/2022-02/Miller...
I think it is this: there is a large (very large) density structure in the mantle in that general area, and it appears to drive weak Van Allen fields in the South Atlantic.
Existing gravimeter tech:
"City-size seamount triple the height of world's tallest building discovered via gravitational anomalies" (2024) https://www.livescience.com/planet-earth/rivers-oceans/city-...
"Kerr-enhanced optical spring for next-generation gravitational wave detectors" (2024) , and/or Tantalum; https://news.ycombinator.com/item?id=39957123
How does this finding compare with those from this simulated video of the moon forming?
"Collision May Have Formed the Moon in Mere Hours, Simulations Reveal" https://www.nasa.gov/solar-system/collision-may-have-formed-... https://youtu.be/kRlhlCWplqk
And this, from the moon:
"Scientists discover huge, heat-emitting blob on the far side of the moon" (2023) https://www.livescience.com/space/the-moon/scientists-discov...
"Remote detection of a lunar granitic batholith at Compton–Belkovich" [from a dormant volcano] (2023) https://www.nature.com/articles/s41586-023-06183-5
Goldene: New 2D form of gold makes graphene look boring
> The researchers say that goldene gets its new properties because in its 2D form, the atoms get two “free bonds.” This means it could eventually find use as a catalyst for converting carbon dioxide, producing hydrogen or valuable chemicals, or purifying water. And of course, electronics could benefit, even if it just means less gold is needed to make them.
"Synthesis of goldene comprising single-atom layer gold" (2024) https://www.nature.com/articles/s44160-024-00518-4 :
> [...] Our developed synthetic route is by a facile, scalable and hydrofluoric acid-free method. The two-dimensional layers are termed goldene. Goldene layers with roughly 9% lattice contraction compared to bulk gold are observed by electron microscopy. While ab initio molecular dynamics simulations show that two-dimensional goldene is inherently stable, experiments show some curling and agglomeration, which can be mitigated by surfactants. X-ray photoelectron spectroscopy reveals an Au 4f binding energy increase of 0.88 eV. Prospects for preparing goldene from other non-van der Waals Au-intercalated phases, including developing etching schemes, are presented.
- https://news.ycombinator.com/item?id=40079841 8mins ago lol
Proof-of-concept nanogenerator turns CO₂ into sustainable power
> "We've worked out how to make the positive ions much larger than the negative ions and because the different sizes move at different speeds, they generate a diffusion current which can be amplified into electricity to power light bulbs or any electronic device.
> "In nature and in the human body, ion transportation is the most efficient energy conversion—more efficient than electron transportation which is used in the power network."
> The two components were embedded in a hydrogel which is 90% water, cut into 4-centimeter disks and small rectangles and then tested in a sealed box pumped full of CO2.
> "When we saw electrical signals coming out, I was very excited but worried I'd made a mistake," Dr. Wang said.
"Electricity generation from carbon dioxide adsorption by spatially nanoconfined ion separation" (2024) https://www.nature.com/articles/s41467-024-47040-x :
> Abstract: Selective ion transport underpins fundamental biological processes for efficient energy conversion and signal propagation. Mimicking these ‘ionics’ in synthetic nanofluidic channels has been increasingly promising for realizing self-sustained systems by harvesting clean energy from diverse environments, such as light, moisture, salinity gradient, etc. Here, we report a spatially nanoconfined ion separation strategy that enables harvesting electricity from CO2 adsorption. This breakthrough relies on the development of Nanosheet-Agarose Hydrogel (NAH) composite-based generators, wherein the oppositely charged ions are released in water-filled hydrogel channels upon adsorbing CO2. By tuning the ion size and ion-channel interactions, the released cations at the hundred-nanometer scale are spatially confined within the hydrogel network, while ångström-scale anions pass through unhindered. This leads to near-perfect anion/cation separation across the generator with a selectivity (D-/D+) of up to 1.8 × 106, allowing conversion into external electricity. With amplification by connecting multiple as-designed generators, the ion separation-induced electricity reaching 5 V is used to power electronic devices. This study introduces an effective spatial nanoconfinement strategy for widely demanded high-precision ion separation, encouraging a carbon-negative technique with simultaneous CO2 adsorption and energy generation.
LLMs approach expert-level clinical knowledge and reasoning in ophthalmology
"Large language models approach expert-level clinical knowledge and reasoning in ophthalmology: A head-to-head cross-sectional study" (2024) https://journals.plos.org/digitalhealth/article?id=10.1371/j...
> [...] We trialled GPT-3.5 and GPT-4 on 347 ophthalmology questions before GPT-3.5, GPT-4, PaLM 2, LLaMA, expert ophthalmologists, and doctors in training were trialled on a mock examination of 87 questions. Performance was analysed with respect to question subject and type (first order recall and higher order reasoning). Masked ophthalmologists graded the accuracy, relevance, and overall preference of GPT-3.5 and GPT-4 responses to the same questions. The performance of GPT-4 (69%) was superior to GPT-3.5 (48%), LLaMA (32%), and PaLM 2 (56%). GPT-4 compared favourably with expert ophthalmologists (median 76%, range 64–90%), ophthalmology trainees (median 59%, range 57–63%), and unspecialised junior doctors (median 43%, range 41–44%). Low agreement between LLMs and doctors reflected idiosyncratic differences in knowledge and reasoning with overall consistency across subjects and types (p>0.05). All ophthalmologists preferred GPT-4 responses over GPT-3.5 and rated the accuracy and relevance of GPT-4 as higher (p<0.05). LLMs are approaching expert-level knowledge and reasoning skills in ophthalmology. In view of the comparable or superior performance to trainee-grade ophthalmologists and unspecialised junior doctors, state-of-the-art LLMs such as GPT-4 may provide useful medical advice and assistance where access to expert ophthalmologists is limited. Clinical benchmarks provide useful assays of LLM capabilities in healthcare before clinical trials can be designed and conducted.
Proposals for New US Power Plants Jump 90% on Surging Demand
"DOE Transmission Interconnection Roadmap: Transforming Bulk Transmission Interconnection by 2035" (2024-04) https://www.energy.gov/eere/i2x/doe-transmission-interconnec... [PDF] https://www.energy.gov/sites/default/files/2024-04/i2X%20Tra...
Can You Grok It – Hacking together my own dev tunnel service
awesome-tunneling lists a number of ngrok alternatives: https://github.com/anderspitman/awesome-tunneling
- https://news.ycombinator.com/item?id=39754786
- FWIU headscale works with the tailscale client and supports MagicDNS
I link to awesome-tunneling in my post :) I didn't know about that particular list until after I spent my night doing this.
I didn't know about headscale, that does seem pretty cool but I think MagicDNS also specifically would introduce a behavior that I didn't particularly want -- TLS certs being issued for my individual hosts, and thus showing up in cert transparency logs and getting scanned. Ultimately this is really only a problem in the first minutes or hours of setting up a cert, though.
Honestly I would probably recommend every other solution before I recommend my own. It was just fun to figure out and it works surprisingly well for what I wanted -- short lived development tunnels on my own infra with my own domain, without leaking the address of the tunnel automatically.
Computer-generated holography with ordinary display
CGH: Computer-generated holography: https://en.wikipedia.org/wiki/Computer-generated_holography
Ask HN: Methods for removing parts per trillions of PFAS from water?
What are the most cost efficient methods for removing PFAS from water?
There is now $1.5b/yr in federal funding for PFAS removal from water supplies [2].
There is a $10-12b 3M PFAS settlement approved 2024-04-01 with hundreds of municipal water supply plaintiffs. There is a $1.2b DuPont/Dow PFAS settlement also with hundreds of drinking water provider plaintiffs. [1]
PFAS, microplastics, nitrogen, and oxygen are in rainwater, which plants prefer. Crops don't need tap-water levels of e.g. chlorine or fluoride, which damage the soil microbiome.
Municipal, commercial, residential, and farm to table customers need solutions for SDG6: Clean Water;
What are the low-cost and gold-standard sensors for water quality? And,
What are the most cost efficient methods for removing PFAS and microplastics from water supplies, ground water, well water, tap water, sea water, and hobbyist rainwater collection systems?
[1] https://apnews.com/article/pfas-drinking-water-settlement-3m-fa41cadfe0d65b9723377a681df43af1
[2] "Biden-Harris Administration Finalizes First-Ever National Drinking Water Standard to Protect 100M People from PFAS Pollution" https://www.epa.gov/newsreleases/biden-harris-administration-finalizes-first-ever-national-drinking-water-standard
Look at the biological "Algal Turf Scrubber" approach; some companies are trying to claim the phrase, but there's much experimentation globally, and a lot of field test installations.
This is a good overview; Sustainable, Decentralized Sanitation and Reuse with Hybrid Nature-Based Systems https://www.mdpi.com/2073-4441/13/11/1583?utm_campaign=relea...
"review" and "systematic review" are also good keywords for research. /? 'PFAS remov' might be a better search term
"Single-Step Method Swiftly Removes Micropollutants from Water" (2024) https://www.chemicalprocessing.com/industrynews/news/3301747...
"Multifunctional zwitterionic hydrogels for the rapid elimination of organic and inorganic micropollutants from water" (2024) https://www.nature.com/articles/s44221-023-00180-8 https://scholar.google.com/scholar?hl=en&as_sdt=0%2C11&q=Mul...
https://news.mit.edu/2024/zwitterionic-hydrogels-swiftly-eli... :
> Perhaps most importantly, the particles in the Doyle group’s system can be regenerated and used over and over again. By simply soaking the particles in an ethanol bath, they can be washed of micropollutants for indefinite use without loss of efficacy. When activated carbon is used for water treatment, the activated carbon itself becomes contaminated with micropollutants and must be treated as toxic chemical waste and disposed of in special landfills. Over time, micropollutants in landfills will reenter the ecosystem, perpetuating the problem.
/?hn pfas method: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
/? pfas method: https://www.google.com/search?q=pfas+method
"PFAS Treatment in Drinking Water and Wastewater – State of the Science" https://www.epa.gov/research-states/pfas-treatment-drinking-... :
> It is currently known that three treatment processes can be effective for PFAS removal: granular activated carbon, ion exchange resins, and high-pressure membrane systems
/?gscholar PFAS removal method + review: https://scholar.google.com/scholar?q=pfas+removal+method+rev...
Click "Cited by 131" to find Citations of e.g "Comparison of currently available PFAS remediation technologies in water: A review" (2021) https://scholar.google.com/scholar?cites=1170583541201234310...
/?hnlog https://westurner.github.io/hnlog/ Ctrl-F: water, CleanWater, SDG6
https://news.ycombinator.com/item?id=39731290 :
> "Desalination system could produce freshwater that is cheaper than tap water" (2023) : "Extreme salt-resisting multistage solar distilation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39507692
But does that also filter PFAS or microplastics?
"/?hn PFAS methods" found Magnetic and TODO removal approaches that aren't yet acknowledged by EPA for the purpose, but could be more sustainable and cost-effective.
> Municipal, commercial, residential, and farm to table customers need solutions for SDG6: Clean Water; [...]
> [2] "Biden-Harris Administration Finalizes First-Ever National Drinking Water Standard to Protect 100M People from PFAS Pollution" https://www.epa.gov/newsreleases/biden-harris-administration...
SDG6: Clean Water: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_6
OTOH, e.g. Spout has an atmospheric water generator product for preorder with a 98/99 water quality score; but does the water quality score account for parts per trillion of PFAS?
Do any of the solar+water products that sterilize and/or filter the water handle PFAS already?
Some types of under-sink water treatment systems already remove most PFAS.
If the user doesn't replace the filters what should it do?
FWIU reverse osmosis (RO) is a high-pressure membrane method that wastes water, but doesn't require replaceable filters?
Jim Keller suggests Nvidia should have used Ethernet to link Blackwell GPUs
I don't have a horse in this race, but I saw today that there was a healthy thread <https://news.ycombinator.com/item?id=40011762> about "just pull fiber" when folks were discussing pulling cat6e, and a lot of that talk was about the relative cost versus massive bandwidth gains. My mental model of GPU interconnect is that one wishes to have the most bandwidth possible.
https://www.windowscentral.com/hardware/computers-desktops/n... :
> "The reason why we took [NVLink] out was because we needed the we needed the I/Os for something else, and so, so we use the I/O area to cram in as much as much AI processing as we could," Huang confirmed and explained of NVLink's absence.
> For gamers, creators, and workstation users who want a multi-GPU setup with the latest RTX 4000 series, Nvidia is switching from NVLink as a bridge to the PCIe Gen 5 standard.
PCIe 5.0 x32 supports 128GB/s.
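A quick sanity check on that figure, assuming PCIe 5.0's 32 GT/s per-lane signaling rate and 128b/130b encoding (raw per-direction rate; real throughput is somewhat lower after protocol overhead):

```python
def pcie5_gb_per_s(lanes: int) -> float:
    """Approximate raw GB/s per direction for a PCIe 5.0 link."""
    # 32 GT/s per lane, 128b/130b line encoding, 8 bits per byte
    bytes_per_lane = 32e9 * (128 / 130) / 8
    return lanes * bytes_per_lane / 1e9

print(round(pcie5_gb_per_s(32)))  # 126, close to the quoted ~128 GB/s
```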
"PCIe 7.0 Draft 0.5 Spec: 512 GB/s over PCIe x16 On Track For 2025" https://news.ycombinator.com/item?id=39933959
> The HBM3E [RAM] Wikipedia article says 1.2TB/s (2024) https://news.ycombinator.com/item?id=39938652
"Fiber-optic data transfer speeds hit a rapid 301 Tbps" (2024-03) https://news.ycombinator.com/item?id=39864107
HPE Cray Slingshot is a copper Ethernet HPC interconnect: https://www.nextplatform.com/2022/01/31/crays-slingshot-inte... ... "HPE Slingshot Makes The GPUs Do Control Plane Compute" https://www.nextplatform.com/2022/08/15/hpe-slingshot-makes-... :
> Here’s the gist. With normal GPU aware MPI software, inside the node the GPU vendors – the only two that matter today are Nvidia and AMD – have their own mechanisms for doing peer-to-peer communications between the GPUs – in this case NVLink and Infinity Fabric. For MPI data exchanges between GPUs located in different nodes, which is often required for running large simulations and models as well as for large AI training runs, MPI data movement is done through Remote Direct Memory Access methods, pioneered with InfiniBand adapters decades ago, which allow for the transfer of data between a GPU and a network interface card without the interaction of the host CPU’s network stack. [...]
> In a [recent paper], HPE showed off the new MPI offload method, making the distinction between the typical MPI communication that is GPU aware versus the stream triggered method that is GPU stream aware.
"Exploring GPU Stream-Aware Message Passing using Triggered Operations" (2022) https://arxiv.org/abs/2208.04817 https://scholar.google.com/scholar?cites=1354761056738606744...
"New RISC-V microprocessor can run CPU, GPU, and NPU workloads simultaneously" (2024) https://news.ycombinator.com/item?id=39938538
Sat-based entanglement distribution and teleportation with continuous variables
"Satellite-based entanglement distribution and quantum teleportation with continuous variables" (2024) https://www.nature.com/articles/s42005-024-01612-x :
> [...] The results show the feasibility of free-space entanglement distribution and quantum teleportation in downlink paths up to the LEO region, but also in uplink paths with the help of the intermediate station. Finally, we complete the study with microwave-optical comparison in bad weather situations, and with the study of horizontal paths in ground-to-ground and inter-satellite quantum communication.
How does this approach compare to micrometers between a quantum dot and a THz optical resonator? https://news.ycombinator.com/item?id=39365579
Google's new technique gives LLMs infinite context
"Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" (2024) https://arxiv.org/abs/2404.07143
Quantum Algorithms for Lattice Problems
How does this affect these statements on Wikipedia [1]
> some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. Furthermore, many lattice-based constructions are considered to be secure under the assumption that certain well-studied computational lattice problems cannot be solved efficiently.
and [2] ?
> One class of quantum resistant cryptographic algorithms is based on a concept called "learning with errors" introduced by Oded Regev in 2005.
[1] https://en.wikipedia.org/wiki/Lattice-based_cryptography
[2] https://en.wikipedia.org/wiki/Ring_learning_with_errors_key_...
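For intuition, a toy sketch of a "learning with errors" instance per Regev's construction (illustrative only; parameters this small offer no security whatsoever):

```python
import random

def lwe_instance(n=8, m=4, q=97, seed=0):
    """Return (A, b, s, e) with b = A.s + e (mod q) and small error e.

    Recovering the secret s from (A, b) is believed to be hard for
    large n, even for quantum computers; that hardness assumption
    underpins lattice-based schemes like Kyber.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]             # secret vector
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [rng.choice([-1, 0, 1]) for _ in range(m)]       # small noise
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b, s, e

A, b, s, e = lwe_instance()
```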
CRYSTALS-Kyber, NTRU, SABER, CRYSTALS-Dilithium, and FALCON are lattice-based method finalists in NIST PQC Round 3.
[1] NIST Post-Quantum Cryptography Standardization: https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
The NTRU article mentions PQ resistance to Shor's algorithm only, other evaluations, and that IEEE Std 1363.1 (2008) and the X9 financial industry spec already specify NTRU, which is a Round 3 finalist lattice-based method.
In [1] Under "Selected Algorithms 2022", the article lists "Lattice: CRYSTALS-Kyber, CRYSTALS-Dilithium, FALCON; Hash-based: SPHINCS+".
Round 4 includes Code-based and Supersingular elliptic curve isogeny algos.
FWIU there's not yet a TLS 1.4/2.0 that specifies which [lattice-based] PQ algos webservers would need to implement to support a new PQ TLS spec.
Can Gemini 1.5 read all the Harry Potter books at once?
> All the books have ~1M words (1.6M tokens). Gemini fits about 5.7 books out of 7. I used it to generate a graph of the characters and it CRUSHED it.
An LLM could read all of the books with Infini-attention (2024-04): https://news.ycombinator.com/item?id=40001626#40020560
Leave No Context Behind: Efficient Infinite Context Transformers
"Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" (2024) https://arxiv.org/abs/2404.07143 :
> This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the effectiveness of our approach on long-context language modeling benchmarks, 1M sequence length passkey context block retrieval and 500K length book summarization tasks with 1B and 8B LLMs. Our approach introduces minimal bounded memory parameters and enables fast streaming inference for LLMs.
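A minimal NumPy sketch of the compressive-memory rule as I read the abstract (simplified: single head, the linear-attention update M += sigma(K)^T V with normalizer z, no delta rule), showing why memory stays bounded as segments stream in:

```python
import numpy as np

def elu_plus_one(x):
    """The sigma kernel used in linear attention: ELU(x) + 1 (always > 0)."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def memory_update(M, z, K, V):
    """M <- M + sigma(K)^T V ; z <- z + sum(sigma(K)).
    Per-segment cost is O(d^2), independent of total sequence length."""
    sK = elu_plus_one(K)
    return M + sK.T @ V, z + sK.sum(axis=0)

def memory_retrieve(M, z, Q):
    """Read long-term context for queries Q: sigma(Q) M / sigma(Q) z."""
    sQ = elu_plus_one(Q)
    return (sQ @ M) / (sQ @ z)[:, None]

d = 4
M, z = np.zeros((d, d)), np.zeros(d)
for _ in range(3):                      # stream segments; memory size is fixed
    K, V = np.random.randn(8, d), np.random.randn(8, d)
    M, z = memory_update(M, z, K, V)
out = memory_retrieve(M, z, np.random.randn(2, d))
print(out.shape)  # (2, 4)
```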
Replacing Wires Could Double How Much Electricity the US Grid Can Handle
Terawatt-miles. A unit of measurement I'm glad to have been exposed to today.
If a watt is 1 kg⋅m²⋅s⁻³, and we multiply by distance we get 1 kg⋅m³⋅s⁻³. I wonder what that means
This is where simply multiplying the units through can be misleading as it hides semantics. The distance/area of m in the watt is not necessarily the same as the distance of the m in the length of conductor.
But an interpretation would be volume, which seems intuitive: double the length or cross-sectional area of a cylinder and you double its volume.
So there are three parameters: watts, conductor_path_length, and watt_meters?
Transmission line > Matrix parameters > Variable definitions: https://en.wikipedia.org/wiki/Transmission_line#Variable_def...
But, FWIU, how this applies to line length and transmission time hadn't been tested until Veritasium's experimental video with Caltech, using about a mile of cable: https://youtu.be/oI_X2cMHNe0
So, it doesn't need multiple length parameters.
Here's that with Pint for quantities and units, in Jupyter notebook percent format with %pip install -q (which works in Google Colab, vscode.dev + Pyodide, JupyterLite, etc.). The %%ipytest cell magic upgrades test assertions (and @pytest.fixture s) and runs everything starting with test_ with the pytest test runner:
# %%
%pip install -q pint ipytest pytest-cov
import ipytest
ipytest.autoconfig()

# %%
%%ipytest -v  # --help
def test_pint_units_for_watt_meters():
    import pint
    ureg = pint.UnitRegistry()
    unit1 = ureg.kilogram * (ureg.meter**2) * (ureg.second**-3)
    unit2 = unit1 * ureg.meter
    assert 1*unit1 == 1*ureg.watt
    assert 1*unit2 == 1*ureg.watt*ureg.meter
Nimble: A new columnar file format by Meta [video]
There's already been some interesting column format optimization work at Meta, as their Velox execution engine team worked with Apache Arrow to align their columnar formats. This talk is actually happening at VeloxCon, so there's got to be some awareness! https://engineering.fb.com/2024/02/20/developer-tools/velox-... https://news.ycombinator.com/item?id=39454763
I wonder how much if any overlap there is here, and whether it was intentional or accidentally similar. Ah, "return efficient Velox vectors" is on the list, but still seems likely to be some overlap in encoding strategies etc.
The four main points seem to be: a) encoding metadata as part of the stream rather than fixed metadata, b) nulls are just another encoding, c) no stripe footer/only stream locations are in the footer, d) FlatBuffers! Shout out to FlatBuffers, wasn't expecting to see them making a comeback!
I do wish there were a lot more diagrams/slides. There's four bullet points, and Yoav Helfman talks to them, but there's not a ton of showing what he's talking about.
How do Parquet, Lance, and Nimble compare?
lancedb/lance: https://github.com/lancedb/lance :
> Modern columnar data format for ML. Convert from Parquet in 2-lines of code for 100x faster random access, a vector index, data versioning, and more. Compatible with pandas, DuckDB, Polars, and pyarrow
Can the optimizations in lance be ported to other formats without significant redesign?
Transformer as a general purpose computer
Interesting, I thought it was proven that Transformers cannot do recursion[0] and aren't actually equivalent to a Turing machine due to lack of expandable memory, and are rather finite state automata.
[0]: https://youtu.be/rie-9AEhYdY?si=5p0QCxAFovGUHFRb&t=1744
Isn't recursion solved by just feeding the output of the LLM (with some additional instructions) back into the input? That's how a lot of complex problems are already tackled with LLMs.
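A minimal sketch of that feedback loop, where step is a hypothetical stand-in for an LLM call plus the added instructions (recursion replaced by bounded iteration over the model's own output):

```python
def run_to_fixed_point(step, state, max_iters=20):
    """Feed the model's output back in as the next input until it
    signals completion; step(state) -> (new_state, done)."""
    for _ in range(max_iters):
        state, done = step(state)
        if done:
            return state
    raise RuntimeError("no fixed point within max_iters")

# Toy stand-in for an LLM step: "summarize" by halving until short enough.
halve = lambda s: (s[: max(4, len(s) // 2)], len(s) <= 8)
print(run_to_fixed_point(halve, "a" * 64))  # prints "aaaa"
```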
Preview of Explore Logs, a new way to browse your logs without writing LogQL
I'm not really a cloud expert so maybe I'm fundamentally missing something about how I'm "supposed to work", but honestly all I have ever wanted to do, when looking at logs, is see the log from one process, from beginning to end, as a text file. You can of course do this using kubectl but only for the most recent two instances of a given pod which isn't helpful when investigating an incident that happened a while ago.
It seems nobody else cares about this use case and wants you to use LogQL and the incredibly clunky Grafana web UI instead, because it makes it possible to aggregate across many different processes, slice and dice by various labels, etc., which as I said, I have never (or almost never) actually wanted to do.
Hopefully this new UI is a step in the right direction as people won't need to futz around with LogQL anymore, but it seems like it still doesn't quite do what I want.
Could LogQL do something like
select * from stdout, stderr
where session_id = 123456
?
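(For comparison, a minimal sketch of what that SQL might translate to as a LogQL query, assuming, hypothetically, that the app's streams carry a job label and emit JSON lines containing a session_id field:)

```python
def logql_for_session(job: str, session_id: int) -> str:
    """Select one job's streams, parse JSON, filter to one session.
    The job label and session_id field are assumptions about how the
    app is configured to log, not a given."""
    return f'{{job="{job}"}} | json | session_id="{session_id}"'

print(logql_for_session("myapp", 123456))
# {job="myapp"} | json | session_id="123456"
```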
If not, why? strace and gdb can trace, and close and reopen, process file handles 0, 1, 2.
ldpreloadhook has an example of hooking write() with LD_PRELOAD=, which e.g. golang programs built without libc don't support.
When systemd is /sbin/init, it owns all subprocess' file handles already, so there's no need to close(0), time, open(0) with gdb.
Without having to logship (copy buffers that are flushed and/or have newline characters in the stream) to a network, or to local Arrow database files and/or SQLite vtables,
journalctl (journald) supports pattern matching with: -t syslogidentifier, -u unit; and -g grepexpr of the MESSAGE= field:
journalctl -u <TAB>
journalctl -u init.scope --reverse
journalctl -u unit.scope -g "Reached target" # and then "/sleep" to search and highlight with less
journalctl -u auditd.service
# this is slow because it's a full table scan, because
# journald does not index the logfiles;
# and -g/--grep is case insensitive if the query is all lowercase:
journalctl -g avc --reverse
journalctl -g AVC --reverse
# this is faster:
journalctl -t audit -g AVC -r
# this is still faster,
# because it only searches the current boot:
journalctl -b 0 -t audit -g AVC
# these are equivalent:
journalctl -b 0 --dmesg -t kernel
journalctl -k
#
journalctl -b 0 --user | grep -i -C "xyz123"
There is a GNOME Logs viewer that has 'All' and a few mutually exclusive filter/reports in a side pane, and a search expression field to narrow a filter/report like All or Important.

There is a Grafana Loki Docker Driver that logships from all containers visible on that DOCKER_HOST docker socket to Grafana for querying with Loki: https://grafana.com/docs/loki/latest/send-data/docker-driver...
Podman with Systemd doesn't need the Grafana Docker Driver (or other logshippers like logstash, loggly, or fluentd) because systemd spawns containers and optionally pipes their stdout/stderr logs to journald.
Influx has Telegraf, InfluxDB, Chronograf, and Kapacitor. Chronograf is their WebUI which provides a query interface for configurable chart dashboards and InfluxQL.
Grafana supports SQL, PromQL, InfluxQL, and LogQL.
Graylog2 also indexes logfiles.
But you can't query stdout and stderr you or /sbin/init haven't logged to a file.
Show HN: FizzBee – Formal methods in Python
GitHub: https://www.github.com/fizzbee-io/FizzBee
Traditionally, formal methods are used only for highly mission-critical systems, to validate that the software will work as expected before it's built. Recently, every major cloud vendor (AWS, Azure, MongoDB, Confluent, Elastic, and so on) has used formal methods to validate that designs, like replication algorithms or various protocols, don't have design bugs. I used TLA+ for billing and usage-based metering applications.
However, the current formal methods solutions like TLA+, Alloy, or P are so complex to learn and use that even in these companies only a few engineers actually use them.
Now, instead of using an unfamiliar, complicated language, I built a formal methods model checker that just uses Python. That way, any software engineer can quickly get started and use it.
I've also created an online playground so you can try it without having to install on your machine.
In addition to model checking like TLA+/PlusCal, Alloy, etc., FizzBee also has performance and probabilistic model checking, which few other formal methods tools do. (PRISM does, for example, but its language is even more complicated to use.)
Please let me know your feedback. Url: https://FizzBee.io Git: https://www.github.com/fizzbee-io/FizzBee
How does FizzBee compare to other formal methods tools for Python like Nagini, Deal-solver, Dafny? https://news.ycombinator.com/item?id=39139198
https://news.ycombinator.com/item?id=36504073 :
>>> Can there still be side channel attacks in formally verified systems? Can e.g. TLA+ [or FizzBee] help with that at all?
FizzBee is an alternative to TLA+, designed to be the easiest for every software engineer to get started with. To make it easiest to use, the specification language is Python instead of a mathematical presentation language like TLA+. So it's formal methods in Python, not for Python.
FizzBee is a design specification language, whereas Nagini and Deal-solver are implementation verification languages. FizzBee is meant to be used before you start coding, or actually before sending out the design document for review, to iron out design bugs. The latter two are meant to be used to verify the implementation.
Dafny is actually an implementation language, designed as a verification-aware programming language. It does code generation into multiple languages and inserts various preconditions and other assertions into the code. So it is suitable for verifying the implementation rather than the design.
Is it advantageous to maintain separation between formal design, formal implementation, and formal verification formal methods?
Is there still a place for implementation language agnostic tools for formal methods?
icontract and PyContracts do runtime type checking and value constraints with DbC (Design-by-Contract) patterns that more clearly separate the preconditions and postconditions from the command, per Hoare logic. Both can read Python type annotations and optionally apply them at runtime, and both support runtime checking of invariants: values that shouldn't have changed by the time postconditions are evaluated after the function returns, but could have due to e.g. parallelism, concurrency, (network I/O) latency, and pass-by-reference in distributed systems.
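A minimal hand-rolled sketch of the DbC shape those libraries implement (not icontract's or PyContracts' actual API; just the Hoare-triple pattern {P} command {Q} as a decorator):

```python
import functools

def contract(pre=None, post=None):
    """Wrap fn so the precondition is checked on entry and the
    postcondition on the returned result, per Hoare logic."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), "postcondition violated"
            return result
        return wrapper
    return deco

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt(x: int) -> int:
    return int(x ** 0.5)

print(isqrt(9))  # 3; isqrt(-1) raises "precondition violated"
```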
The single biggest advantage of keeping the formal design separate from the formal implementation/verification is that you check your design before the implementation starts.
Anything that operates by checking the implementation can find implementation bugs. Even if it finds design bugs, it's too late.
As an analogy: you can document the code with comments, comments explaining why you do what you do. These, along with the code, can be reviewed and be part of the revision control.
That said, do you still recommend writing separate design documents? If so, why? This will also answer your question on why we need a design spec that's separate from the implementation.
Having also written lots of code with nothing but tests as the functional spec, I'm not sufficiently familiar with formal methods, though I should make myself learn. TIL that there are different tools for design/implementation/verification instead of just code and decorators and, IDK, annotations as decorators and autocorrect.
(TODO: link to post about university-level FM programs)
I've always wondered whether there's some advantage to translating to another language's AST for static and/or dynamic analysis. AFAIU fuzzing is still necessary even given a formal implementation?
Python is Turing complete, but does [TLA,] need to be? Is there an in-Python syntax that can be expanded in place by tooling for pretty diffs? How much overlap should there be between existing runtime-check DbC decorators and these modeling primitives and feature extraction transforms? (In order to: minimize cognitive overload for human review; sufficiently describe the domains, ranges, complexity costs, inconstant timings, and the necessary and also the possible outcomes given concurrency.)
"FaCT: A DSL for Timing-Sensitive Computation" and side channels https://news.ycombinator.com/item?id=38527663
Kerr-enhanced optical spring for next-generation gravitational wave detectors
- "Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale" with tantalum https://news.ycombinator.com/item?id=39495482 :
> "Measuring gravity with milligram levitated masses" (2024) https://www.science.org/doi/10.1126/sciadv.adk2949 :
> Here, we show gravitational coupling between a levitated submillimeter-scale magnetic particle inside a type I superconducting trap and kilogram source masses, placed approximately half a meter away. Our results extend gravity measurements to low gravitational forces of attonewton and underline the importance of levitated mechanical sensors.
Can LLMs read the genome? This one decoded mRNA to make better vaccines
"A 5′ UTR language model for decoding untranslated regions of mRNA and function predictions" https://www.nature.com/articles/s42256-024-00823-9
"Language models can explain neurons in language models"; GPT-4 can help explain what GPT-2 neurons do: https://openai.com/research/language-models-can-explain-neur... https://news.ycombinator.com/item?id=35877402
IDK if this is a practical limit to such research? From https://news.ycombinator.com/item?id=38753725 :
>> "Challenges with unsupervised LLM knowledge discovery" (2023) https://arxiv.org/abs/2312.10029 :
>> Abstract: We show that existing unsupervised methods on large language model (LLM) activations do not discover knowledge -- instead they seem to discover whatever feature of the activations is most prominent
WinBtrfs – an open-source btrfs driver for Windows
> See also Quibble, an experimental bootloader allowing Windows to boot from Btrfs, and Ntfs2btrfs, a tool which allows in-place conversion of NTFS filesystems.
The chocolatey package for WinBtrfs: https://community.chocolatey.org/packages/winbtrfs
NASA spacecraft films crazy vortex while flying through sun's atmosphere
Very cool. Any point to false-colorizing this footage?
A slightly embarrassed normie asking.
(Weirdly too, I want sound — maybe those long, low-frequency whistles you hear radio astronomers pick up from the sun.)
Sound does not travel through space, so that is just sonification of data. It is, in general, bullshit
Actually,
https://twitter.com/NASAExoplanets/status/156144251407831449... :
> The misconception that there is no sound in space originates because most space is a ~vacuum, providing no way for sound waves to travel. A galaxy cluster has so much gas that we've picked up actual sound. Here it's amplified, and mixed with other data, to hear a black hole!
"Physicists demonstrate how sound can be transmitted through vacuum" https://www.sciencedaily.com/releases/2023/08/230809130709.h... :
> In a recent publication they show that in some cases a sound wave can jump or "tunnel" fully across a vacuum gap between two solids if the materials in question are piezoelectric.
"Complete tunneling of acoustic waves between piezoelectric crystals" (2023) https://www.nature.com/articles/s42005-023-01293-y
Helmholtz resonance https://en.wikipedia.org/wiki/Helmholtz_resonance :
> Helmholtz resonance is one of the principles behind the way piezoelectric buzzers work: a piezoelectric disc acts as the excitation source, but it relies on the acoustic cavity resonance to produce an audible sound.
Nonvolatile quantum memory: Discovery points path to flash-like qubit storage
> In an open-access study published recently in Nature Communications, Rice physicist Ming Yi and more than three dozen co-authors from a dozen institutions similarly showed they could use heat to toggle a crystal of iron, germanium and tellurium between two electronic phases. In each of these, the restricted movement of electrons produces topologically protected quantum states. Ultimately, storing qubits in topologically protected states could potentially reduce decoherence-related errors that have plagued quantum computing. [...]
> "That's the key finding," she said of the material's switchable vacancy order. "The idea of using vacancy order to control topology is the important thing. That just hasn't really been explored. People have generally only been looking at materials from a fully stoichiometric perspective, meaning everything's occupied with a fixed set of symmetries that lead to one kind of electronic topology. Changes in vacancy order change the lattice symmetry. This work shows how that can change the electronic topology. And it seems likely that vacancy order could be used to induce topological changes in other materials as well."
"Reversible non-volatile electronic switching in a near-room-temperature van der Waals ferromagnet" (2024) https://www.nature.com/articles/s41467-024-46862-z :
> Abstract: [...] Here, we report the observation of reversible and non-volatile switching between two stable and closely related crystal structures, with remarkably distinct electronic structures, in the near-room-temperature van der Waals ferromagnet Fe_{5−δ}GeTe_2. We show that the switching is enabled by the ordering and disordering of Fe site vacancies that results in distinct crystalline symmetries of the two phases, which can be controlled by a thermal annealing and quenching method. The two phases are distinguished by the presence of topological nodal lines due to the preserved global inversion symmetry in the site-disordered phase, flat bands resulting from quantum destructive interference on a bipartite lattice, and broken inversion symmetry in the site-ordered phase.
Additional questions of Electronic topology, and optical sensing:
- "GaN semiconductor defects could boost quantum technology" (202 ) https://news.ycombinator.com/item?id=39365632
- "Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) .. "Distorted crystals use 'pseudogravity' to bend light like black holes do" https://news.ycombinator.com/item?id=38008449
- "Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 :
> By illuminating the system with THz radiation [...]
- "Ask HN: Can qubits be written to crystals as diffraction patterns?" https://news.ycombinator.com/item?id=38501668 :
> Presumably holography would have already solved for quantum data storage if diffraction is a sufficient analog of a wave function?
Home insurers are dropping customers based on aerial images
I used to work for Verisk, an insurance tech holding company, who owned a company that took pictures of people's roofs using airplanes and special cameras. They got sued over a patent violation with EagleView[1], who claimed the tech idea as their own, and settled.
> The strategic alliance allows customers seamless and integrated access to EagleView technology within Verisk’s Xactware platform
Xactware is a product that customers (read: insurance companies) use to figure out how much money to pay on a claim.
The whole idea is to speed up the claims process. Insurance agents don't need to go out to people's houses to examine roof damage. But we also had a department doing some pretty sophisticated stuff around preventing claims fraud, so I'm not surprised.
1: https://www.verisk.com/company/newsroom/verisk-and-eagleview...
What happens to the value of real estate when it can no longer be insured?
Its loss in value is an externalized cost of the operations that are causing the extreme weather resulting from climate change.
There is re-valuation due to re-estimation of risk and cost of remediation.
There are climate refugees, with lower equity gains; and property investments in high-risk markets now might not appreciate as quickly.
Loki: An open-source tool for fact verification
Isn't this similar to the Deepmind paper on long form factuality posted a few days ago?
https://arxiv.org/abs/2403.18802
https://github.com/google-deepmind/long-form-factuality/tree...
Yes, they are similar. Actually, our initial paper was presented around five months ago (https://arxiv.org/abs/2311.09000). Unfortunately, our paper isn't cited by the DeepMind paper, which you may see this discussion as an example: https://x.com/gregd_nlp/status/1773453723655696431
Compared with our initial version, we have mainly focused on its efficiency, with a 10X faster checking process without decreasing accuracy.
> We further construct an open-domain document-level factuality benchmark in three-level granularity: claim, sentence and document
A 2020 Meta paper [1] mentions FEVER [2], which was published in 2018.
[1] "Language models as fact checkers?" (2020) https://scholar.google.com/scholar?cites=3466959631133385664
[2] https://paperswithcode.com/dataset/fever
I've collected various ideas for publishing premises as linked data; "#StructuredPremises" "#nbmeta" https://www.google.com/search?q=%22structuredpremises%22
From "GenAI and erroneous medical references" https://news.ycombinator.com/item?id=39497333 :
>> Additional layers of these 'LLMs' could read the responses and determine whether their premises are valid and their logic is sound as necessary to support the presented conclusion(s), and then just suggest a different citation URL for the preceding text
> [...] "Find tests for this code"
> "Find citations for this bias"
From https://news.ycombinator.com/item?id=38353285 :
> "LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285
> "Misalignment and [...]"
Google's quantum computer suggests that wormholes are real (2022)
I'm betting my money on causality. These kinds of articles ignore how the EPR experiment may demonstrate nonlocality, but does not allow information exchange faster than the speed of light. I don't think we'll ever break that barrier.
Indefinite causal order (indefinite causal structure) is already demonstrated at the quantum level at least.
Do time crystals demonstrate macrostate retrocausality? There are time crystals in children's chemistry sets, and they've been there for years.
In this video [1], Physics Girl creates two vortices in a pool with plates and dyes the fluid flow between them with food coloring.
How is this matter information flow between low pressure centers of local fluid vortices different from wormholes and ER=EPR particle state linkage? Is this effect of linkage between vortices the same thing as ER=EPR wormholes?
Similarities: low pressure in the center of a vortex; fluid vortical dynamics; state linkage across spacetime; refractory occlusion; a glowing edge at the boundary of the system
FWIU, there are various types of quantum entropy; entanglement and non-entanglement entropy. Quantum discord [2]:
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
With wormholes, doesn't the degree of complexity of the universe increase significantly; when any particle state in a system with fluidic dynamics can be linked to any other particle state in spacetime, what is the degree of the Hilbert space?
Can there be ER=EPR wormholes between here and the other side of the apparently expanding universe, and is c the speed of light the limit?
Is the universe expanding faster than c; is |v1|+|v2| > c anywhere? From https://news.ycombinator.com/item?id=39718864 regarding Hubble's expansion constant and `(v_other_direction + c) > c` :
> If virtual particle states are entangled with particle states in black holes or through ER=EPR bridges, is there effectively FTL?
> There is FTL within dielectric antennae.
Are there paths between points in spacetime which are faster than the path that a photon would take given gravity and solar wind?
From "Light and gravitational waves don't arrive simultaneously" (2023; GW170817) https://news.ycombinator.com/item?id=38056295 :
>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
>> In SQS (Superfluid Quantum Space), Quantum gravity has fluid vortices with Gross-Pitaevskii, Bernoulli's, and IIUC so also Navier-Stokes; so Quantum CFD (Computational Fluid Dynamics)
[1] https://www.youtube.com/watch?v=pnbJEg9r1o8
[2] https://en.wikipedia.org/wiki/Quantum_discord
Also, this video depicts how much dye exchanges through the "sidewall" of a fluid vortex. Is the dye that exchanges with the system outside the vortex similar to the Hawking entropy that escapes a gravitational fluid phenomenon like a black hole, or a different phenomenon?
[3] "Filming Inside a Slow Motion Vortex - The Slow Mo Guys" https://www.youtube.com/watch?v=0_avPbgxw28
And so, time travel and ER=EPR: https://en.wikipedia.org/wiki/ER_%3D_EPR
PCIe 7.0 Draft 0.5 Spec: 512 GB/s over PCIe x16 On Track For 2025
PCI Express > PCIe 7.0: https://en.wikipedia.org/wiki/PCI_Express#PCI_Express_7.0
From your link:
> PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e
So I'm not sure what you're trying to say?
I linked the section of the PCIe Wikipedia article that could be updated with this latest 512 GB/s spec announcement, which is newer than the 242 GB/s max spec listed in the infobox.
No idea what the problem with that was.
The 242 GB/s in the Wikipedia article is the per-direction bandwidth available after removing the encoding overhead. The 512 GB/s in the headline is the raw bi-directional bandwidth. Both are actually the same number.
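The reconciliation can be sketched with the numbers above. A minimal calculation, assuming the 242/256 flit-efficiency factor implicit in the Wikipedia figure as an approximation of the PCIe 6.0+ FLIT encoding overhead:

```python
# PCIe 7.0 x16: reconciling the 512 GB/s headline with Wikipedia's 242 GB/s.
lanes = 16
transfer_rate = 128                              # GT/s per lane (PAM4)
raw_per_direction = transfer_rate * lanes / 8    # 256 GB/s raw, one direction
raw_bidirectional = raw_per_direction * 2        # 512 GB/s: the headline number
# FLIT encoding carries roughly 242 usable bytes per 256-byte flit
usable_per_direction = raw_per_direction * 242 / 256   # 242 GB/s: the infobox number
print(raw_bidirectional, usable_per_direction)
```

Same link, two accounting conventions.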
New RISC-V microprocessor can run CPU, GPU, and NPU workloads simultaneously
https://news.ycombinator.com/item?id=39518293 :
> [PQ] SSL/TLS Accelerators and ML Accelerators / AI Accelerators
An opportunity for a new RISC-V arch, perhaps.
> With some newer architectures, the GPU(s) are directly connected to [HBM] RAM; which somewhat eliminates the CPU and Bus performance bottlenecks that ASICs and FPGAs are used to accelerate beyond.
The HBM3E Wikipedia article says 1.2TB/s.
Latest PCIe 7 x16 says 512 GB/s: https://news.ycombinator.com/item?id=39933959
Anyone has any opinions about eye-tracking market?
Hey all, I wanted to check: is it only me, or is the eye-tracking market a bit broken?
There are basically two types of tech here: hardware eye tracking (wearable or stationary) and software tracking (less popular, I would say).
Both are incredibly expensive and mostly focused on research: either you have to buy expensive hardware, or you have to buy an expensive license for your software. That's fair enough in a small research market, I guess, but devices like the Vision Pro kinda show that eye-tracking, gaze-aware electronics can be super comfortable and natural to use.
Windows, for example, natively supports eye tracking for dealing with disabilities (disabilities any of us could acquire, simply by skiing downhill and breaking both arms, or doing some other extreme sport). But to use it you need to buy a compatible hardware tracker.
I think it is a bit broken that disabled people (and technically all of us are potentially disabled) are kinda locked out of eye-tracking technology for casual use. You might say: we have keyboards and touch screens, so there is no need for eye tracking. But then every flu season spreads partly because people all touch the same touch screens, e.g. on restaurant kiosks (like the ones you can find in McDonald's etc.); that interaction could be done with gaze tracking and Vision Pro-like gestures.
These are the thoughts that pushed me down the path of democratizing webcam-driven eye-tracking tech a bit. Usually the biggest obstacle to building new apps is missing or locked-up technology that engineers cannot access and would have to develop themselves, which requires know-how and time. So I decided to publish an open-source algorithm for people to tinker with: https://github.com/NativeSensors/EyeGestures and https://polar.sh/NativeSensors
It basically turns your laptop (or any webcam) into an eye tracker (maybe not as precise as the hardware ones, but still)! I think this could bring a bit of freshness to user interfaces.
What are your thoughts? Is this even a viable idea, or something that will never get traction, and am I extremely wrong?
There are invasive and non-invasive BCI capabilities, and their ranges vary. E.g. Neural Lace is a surgically invasive BCI capability. A dry electrode EEG cap is a noninvasive BCI sensor.
If a noninvasive BCI capability can write to a brain, is it a Directed Energy Weapon? And so I suspect the applications may be limited. Certain states don't even allow video biometrics in banks, casinos, or prisons FWIU.
IDK much about eye tracking.
I know that there exist patients with ocular movement disorders like amblyopia/strabismus/exotropia and nystagmus. The equitability of eye-tracking-only interfaces is limited if they do not work with eye-movement disorders, which are more common than not having at least one hand.
If the system must detect which eye-movement disorder a patient has in order to work, is it 1) doing automated diagnosis, which is unlikely to have 100% accuracy; 2) handling sensitive PHI (Personal Health Information) without consent; and 3) capturing biometric information without consent?
I understand that (when people have consented), eye tracking experiments yield helpful UX design research data like UI heatmaps that can help developers make apps more accessible.
Practically, how do you turn off the eye tracking component and use the app without the fancy gestural HID; and if one is suddenly distracted by something else in the FOV, is there unintentional input?
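One common answer to the unintentional-input problem is a dwell-time filter: a fixation only counts as input if the gaze holds still long enough. A hypothetical sketch (the function name, thresholds, and sample format are all illustrative assumptions, not part of EyeGestures):

```python
import math

def dwell_click(samples, radius_px=30.0, dwell_ms=800.0):
    """Treat a gaze fixation as a deliberate 'click' only if the gaze stays
    within radius_px of its starting point for at least dwell_ms.
    samples: iterable of (t_ms, x, y) gaze points in screen pixels."""
    samples = list(samples)
    if not samples:
        return False
    t0, x0, y0 = samples[0]
    for t, x, y in samples:
        if math.hypot(x - x0, y - y0) > radius_px:
            return False        # gaze wandered: not an intentional input
        if t - t0 >= dwell_ms:
            return True         # held long enough: accept the click
    return False
```

With this kind of filter, a saccade toward a sudden distraction in the FOV breaks the dwell and produces no input.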
/? awesome eye tracking: https://www.google.com/search?q=awesome+eye+tracking
awesome-gaze-estimation: https://github.com/cvlab-uob/Awesome-Gaze-Estimation
Community response to a benign opt-in art project that merges eye tracking data from viewers might be ethically studied in estimating viability of a competing BCI HID approach.
That is extremely helpful actually!
How does your programming language handle "minus zero" (-0.0)?
> the ubiquitous IEEE floating-point standard defines two numbers to represent zero, the positive and the negative zeros.
Seriously, how did anyone think including two fantasy numbers was a good idea?
Not to mention excluding the real zero.
I don't think you can have a ring or a field without additive inverses.
Additive inverse: https://en.wikipedia.org/wiki/Additive_inverse :
> For a number (and more generally in any ring), the additive inverse can be calculated using multiplication by −1; that is, −n = −1 × n. Examples of rings of numbers are integers, rational numbers, real numbers, and complex numbers
Must there be an additive inverse of zero?
There was no concept of zero in early mathematics or in Roman numerals.
0 (number) https://en.wikipedia.org/wiki/0
Division by signed or unsigned zero is undefined with Integers, Rationals, and Reals.
1/x is not equivalent to 2/x at any point other than x=0, where their signed tendency is toward a limit with positive and/or negative infinity.
IEEE-754 also specifies only two infinities: positive and negative. (An unsigned "projective" infinity appeared in early drafts and the 8087, but not in the final standard.)
Conway's surreal infinities are multiple.
Fairly, aren't there infinity infinities? Infinity_n. Perhaps one for each different symbolic value, but which cancel out as multiplicative inverses?
If 1/x != 2/x at any real point, it defies intuition that their limits are equally -/+ infinity.
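For what it's worth, the signed zero isn't a "fantasy number" so much as a record of which side a limit was approached from. A small Python demonstration:

```python
import math

# +0.0 and -0.0 compare equal, but the sign bit survives...
assert 0.0 == -0.0
assert math.copysign(1.0, -0.0) == -1.0

# ...and dividing through a signed infinity produces the matching signed zero:
assert str(1.0 / float('inf')) == '0.0'
assert str(1.0 / float('-inf')) == '-0.0'

# atan2 distinguishes the two zeros at the branch cut on the negative axis:
assert math.atan2(0.0, -1.0) == math.pi
assert math.atan2(-0.0, -1.0) == -math.pi
```

So `-0.0` preserves the direction of approach through an underflow, which is exactly the "signed tendency toward a limit" mentioned above.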
I remember a system I was working on where I was setting up an empty financial account. Error - no currency specified. Well a) I don't have a currency, but more importantly b) how does nothing need a currency?
It's as if a farmer had a field with no animals in it and was asked what sort of animals it hasn't got in it. I think the Romans were right.
Engineers develop 90-95% efficient electricity storage with piles of gravel
> "The installed cost for our thermal storage system is less than $5-10 per kWh thermal, as compared to other energy-storage technologies, which are in the range of $150-$200 per kWh electric," [for 20 hours, after heating the piles of gravel to 932F ] at least initially? With greenhouses as the initial application
Very competitively efficient, but you don't need 932F for greenhouses?
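A back-of-envelope estimate of what 932 °F gravel actually stores, per tonne. The ~0.8 kJ/(kg·K) specific heat is a typical value for rock, assumed here (the article doesn't give one):

```python
# Thermal energy stored per tonne of gravel heated from ambient to 932 F.
mass_kg = 1000.0
c_kj_per_kg_k = 0.8                     # assumed specific heat of rock
t_hot_c = (932 - 32) * 5 / 9            # 932 F -> 500 C
t_ambient_c = 20.0
energy_kj = mass_kg * c_kj_per_kg_k * (t_hot_c - t_ambient_c)
energy_kwh_thermal = energy_kj / 3600   # ~107 kWh_th per tonne
```

At the quoted $5-10 per kWh_th, one tonne of hot gravel holds on the order of a hundred thermal kilowatt-hours, which is why the temperature is pushed so high even if a greenhouse only needs a fraction of it.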
/? oranges in Nebraska; https://youtube.com/results?sp=mAEA&search_query=Oranges+in+...
- "Nebraska retiree uses earth's heat to grow oranges in snow" in Alliance, NE, USA at -40F sometimes https://youtu.be/ZD_3_gsgsnk :
Passive, Net Zero, Carbon Sink solar geothermal half-buried greenhouse that produces citrus in the winter at a colder northern latitude
Geothermal air with 6" tubes, 8 ft down, out 230 ft (753) into the yard, blower, reflective sheeting on the south-sloped back wall; est. $25k build cost. 30 °F interval with no fans on in the winter.
Net zero greenhouses probably win UN SDG Indicator 13.2.2, Total greenhouse gas emissions per year; UN SDG Goal 13: Take urgent action to combat climate change and its impacts
An Aymara (Bolivian) word for an [excavated] [semi-] underground greenhouse is "Walipini".
FWIU from YouTube, there's a new style of Northern Chinese greenhouse now in Canada: they build a big wall of clay/soil on one long side and lower a blanket over the entire greenhouse with a TODO HP motor in the evening to keep heat in the greenhouse at night. The wing shape makes it easy to capture rainwater in a trough on just the one side.
Geodesic domes are very resilient, easy to assemble, and can also be semi-underground with vents at the top to actuate.
An aquarium heater can heat a tank of water to heat an arctic greenhouse.
Geodesic domes get so hot in the summer that it's necessary to hang cloth to shade and cool.
Also learned from YT, South Korea: (JADAM JWA (Castile soap) and) long stone seat-and-table fireboxes for greenhouse heat; but wood-burning has combustion byproducts, so e.g. solar + wind electric (if geothermal is not feasible).
New EPA rules specify emissions limits for full-time wood-burning stoves; and so currently basically only Reburning and Catalytic Combustor stoves are legal in the US.
IDK what the effective $ per kWh is for solar and pumped air geothermal greenhouses?
Looks like there is at least one sand battery + hot water heater product to compete with heat pump water heaters that could also be sand (or gravel) batteries.
Multi-source heat pumps typically have an electric heat strip for very cold weather, and post-halving mining rigs also produce waste heat.
Apparently one can heat a pool (or a greenhouse water tank) with immersion mining rigs, but is there a sustainable fluid for industrial immersion cooling?
Agricultural Bunds (and buried unglazed water vessels) would probably work in NM, too; but which agricultural resources can process bunds instead of rows?
Applied Category Theory in the Wolfram Language Using Categorica I
Can this be applied to, IDK, Refactoring, Intermediate Representation design, or Compilation?
"Codemod"
/? codemod: https://hn.algolia.com/?q=codemod
Quantum interference enhances the performance of single-molecule transistors
"Transistor Takes Advantage of Quantum Interference" (2024) https://spectrum.ieee.org/amp/quantum-interference-transisto... :
"Quantum interference enhances the performance of single-molecule transistors" (2024) https://www.nature.com/articles/s41565-024-01633-1 :
> Abstract: Quantum effects in nanoscale electronic devices promise to lead to new types of functionality not achievable using classical electronic components. However, quantum behaviour also presents an unresolved challenge facing electronics at the few-nanometre scale: resistive channels start leaking owing to quantum tunnelling. This affects the performance of nanoscale transistors, with direct source–drain tunnelling degrading switching ratios and subthreshold swings, and ultimately limiting operating frequency due to increased static power dissipation. The usual strategy to mitigate quantum effects has been to increase device complexity, but theory shows that if quantum effects can be exploited in molecular-scale electronics, this could provide a route to lower energy consumption and boost device performance. Here we demonstrate these effects experimentally, showing how the performance of molecular transistors is improved when the resistive channel contains two destructively interfering waves. We use a zinc-porphyrin coupled to graphene electrodes in a three-terminal transistor to demonstrate a >10^4 conductance-switching ratio, a subthreshold swing at the thermionic limit, a >7 kHz operating frequency and stability over >10^5 cycles. We fully map the anti-resonance interference features in conductance, reproduce the behaviour by density functional theory calculations and trace back the high performance to the coupling between molecular orbitals and graphene edge states. These results demonstrate how the quantum nature of electron transmission at the nanoscale can enhance, rather than degrade, device performance, and highlight directions for future development of miniaturized electronics.
Having remembered an article about superabsorption increasing battery charge rate, I wondered whether there is superradiance with electrons, and there is! :
"Superradiance and Subradiance due to Quantum Interference of Entangled Free Electrons" (2021) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.12...
> [...] Our findings suggest that light emission can be sensitive to the explicit quantum state of the emitting matter wave and possibly serve as a nondestructive measurement scheme for measuring the quantum state of many-body systems.
> an article about superabsorption increasing battery charge rate
"Physicists steer chemical reactions by magnetic fields and quantum interference" https://news.ycombinator.com/item?id=30647086 :
"Superabsorption in an organic microcavity: Toward a quantum battery" (2022) https://www.science.org/doi/10.1126/sciadv.abk3160 :
> [...] Our work opens future opportunities for harnessing collective effects in light-matter coupling for nanoscale energy capture, storage, and transport technologies.
Qubits for Quantum Computing Might Just Be Atoms
"Quadruple quantum: engineers perform four quantum control methods in just one atom" (2024) https://www.unsw.edu.au/newsroom/news/2024/02/four-different... :
> [Antimony] was chosen because its nucleus possesses eight distinct quantum states, plus an electron with two quantum states, resulting in a total of 8 x 2 = 16 quantum states, all within just one atom. Reaching the same number of states using simple quantum bits – or qubits, the basic unit of quantum information – would require manufacturing and coupling four of them.
"Navigating the 16-dimensional Hilbert space of a high-spin donor qudit with electric and magnetic fields" (2024) https://link.springer.com/article/10.1038/s41467-024-45368-y
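The arithmetic from the UNSW article, spelled out with the spin-multiplicity formula 2S + 1 (the spin-7/2 antimony isotope is the one used in the experiment):

```python
import math

# Spin multiplicity 2S + 1: a spin-7/2 antimony nucleus has
# 2 * (7/2) + 1 = 8 levels; the bound electron (spin 1/2) has 2.
nuclear_levels = int(2 * (7 / 2) + 1)           # 8
electron_levels = 2
hilbert_dim = nuclear_levels * electron_levels  # 8 * 2 = 16 states in one atom
equivalent_qubits = math.log2(hilbert_dim)      # 4.0: four coupled qubits' worth
```

Hence the "16-dimensional Hilbert space" in the paper title: one donor atom replaces four manufactured-and-coupled qubits.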
EpiPen For Heart Attacks? Idorsia Launches Phase III Study Of Selatogrel (2021)
Almost OT, but FWIU: a vape pen with (which?) cannabinoids may be a helpful immediate intervention for ischemic stroke?
Smoking increases risk of stroke and heart attack (MI: Myocardial Infarction).
Cannabis [smoking?] is associated with heart disease and MI in some studies but that could be confounding e.g. preexisting hypertension and other lifestyle factors.
(This is relevant to cardiothoraic and geriatric medicine.)
Why don't I just sell AEDs here. AEDs are also a heart attack intervention. It looks like AEDs are about $800-$2000 USD.
AED: Automated External Defibrillator: https://en.wikipedia.org/wiki/Automated_external_defibrillat...
You absolutely can buy AEDs for home use, and if you're high-risk it might even be a good idea.
The only reason it's not recommended more widely is cost (they also need regular maintenance) and likelihood of actually needing it making it a poor medical value for the general population.
(This is also predicated on having people around who are trained to use the AED. If you live alone or your family/roommates don't know how to use it, it's useless.)
I thought the whole point was that you don't need training? Most of them literally talk you through it, and all the administration is computer controlled.
You don't strictly need training to use it, but training is still strongly recommended if you want to know how to use one effectively and have the best chance of survival.
At a minimum, you need to know how to perform CPR in between shocks (or if you don't have a shockable rhythm). Ideally, you should know how to perform good CPR. The higher end ones will coach you on performing CPR, but that's definitely not universal.
Not to mention you need to figure out pad placement, possibly shave someone's chest (if they're excessively hairy), and delegate calling 911 to someone.
When seconds count you don't want to be spending minutes figuring all this out.
BLS: Basic Life Support > Method: https://en.wikipedia.org/wiki/Basic_life_support#Method :
> DRSABCD: Danger, Response, Send for help, Airway, Breathing, CPR, Defibrillation
"Drs. ABCD"
CPR: Cardiopulmonary Resuscitation: https://en.wikipedia.org/wiki/Cardiopulmonary_resuscitation
CPR > Use of Devices > Defibrillators, Devices for timing CPR, Devices for assisting in manual CPR, Devices for providing automatic CPR (*), Mobile apps for providing CPR instructions: https://en.wikipedia.org/wiki/Cardiopulmonary_resuscitation#...
/? CPR training: https://www.google.com/search?q=cpr+training&tbm=vid
/? AED CPR site:sba.gov https://www.google.com/search?q=site%3Asba.gov+AED+CPR
SBA.gov blog > Review Your Workplace Safety Policies:
> Also, consider offering training for CPR to employees. Be sure to have an automatic external defibrillator (AED) on site and have employees trained on how to use it. The American Red Cross and various other organizations offer free or low-cost training.
I suspect we will be. They were virtually unheard of a decade ago (and I think the cost was closer to $10k then) but are now more or less standard kit in new commercial construction.
/? Is there a recommended placement for an AED wall cabinet within a facility? https://www.google.com/search?q=Is+there+a+recommended+place...
> How many AEDs should a facility have?
> When responding to someone who has suffered Sudden Cardiac Arrest (SCA), immediate action is critical for saving lives. The sooner that bystanders treat the SCA victim with a defibrillation shock from an Automated External Defibrillator (AED), the more likely that they will survive.
> According to [US OSHA], of the 350,000 people who die from SCA outside the hospital in the United States each year, 10,000 lives are lost in the workplace. By having defibrillators throughout offices and facilities, businesses are able to protect the lives of both their workforce and visitors.
estimated_response_times[0] = time_to_walk_from_a_central_point * 2
ADA guidelines for AED placement: https://www.google.com/search?q=ada+guidelines+AED
https://www.aedbrands.com/resources/implement/where-to-place... , "AED Placement Guidelines" [PDF] https://www.rescuetraininginstitute.com/wp-content/uploads/2... :
> [min height: 15" (38 cm); max height: 48" (121 cm); max protrusion: 4" (10 cm); max side reach: 54" (137 cm)]
And also there are AED backpacks, which are probably easier to carry through hallways for 6 minutes (given a recommended maximum of 3 minutes each way)
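The walk-time rule of thumb above can be turned into a crude sizing estimate. A hypothetical sketch; the function, the 1.2 m/s walking speed, and the disc-coverage model are all assumptions for illustration:

```python
import math

def aeds_for_floor(floor_area_m2, walk_speed_m_s=1.2, one_way_limit_s=180):
    """Each AED covers a disc whose radius is the distance a responder can
    walk in the one-way time limit (3 minutes each way, per the rule of
    thumb).  Returns the number of AEDs needed for an open floor."""
    radius_m = walk_speed_m_s * one_way_limit_s
    coverage_m2 = math.pi * radius_m ** 2
    return max(1, math.ceil(floor_area_m2 / coverage_m2))
```

On an open floor this yields very few units; real hallways, doors, and stairs shrink the effective radius considerably, which is why facilities place one per floor or wing.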
Hacking the genome of fungi for smart foods of the future
This seems like a much more promising avenue than lab-grown meat. Fungi are already rich in protein, are a bit closer on the tree of life to animals than plants are, are sedentary by default, and are absolute chemical powerhouses.
If we can engineer a fungus to produce just the right amino profile, or whatever nutrition you're looking for, then you have a self perpetuating stock that should be much lower tech and simpler to manage than trying to grow muscle tissue in a "petri dish".
Heck, we could just be eating more mushrooms, they already are a pretty good source of protein.
How far is that from pre-existing products, like Quorn[1], which here in the UK is probably the main producer of vegan/vegetarian meat simulacra? The problem is that people don't like eating straight fungus, and so it is serially transformed through elaborate processing, to create meat-like products. Which it turns out are extremely unhealthy.
I think you are on more promising ground in your final remark. There are plenty of whole food plant-based sources of protein that are tested, economical and low-carbon, while possessing none of the health drawbacks of 'high tech' alternatives.
100g at 2.3g carbs, 1.7g fat, and 13g protein? What's extremely unhealthy about that?
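A quick sanity check of those label numbers using the standard Atwater factors (4/9/4 kcal per gram of carbs/fat/protein):

```python
# Energy content implied by the macronutrients quoted above, per 100 g.
carbs_g, fat_g, protein_g = 2.3, 1.7, 13.0
kcal = 4 * carbs_g + 9 * fat_g + 4 * protein_g   # ~76.5 kcal per 100 g
```

About three-quarters of those calories come from protein, which is an unusually lean profile for a meat substitute.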
Are there other macronutrients in that much mass of food?
30g of Hemp Hearts (shelled hemp seeds) has 10g protein, 12g polyunsaturated fats (Omega 3 and 6), 1g sugar, and a bunch of other vitamins and minerals; but some brands list 2.3mg Manganese (which is 100% of the RDI), and Manganese toxicity occurs beyond 11mg (the UL), while other brands don't list those nutrients on the Nutrition Facts label for a similar product: https://images.app.goo.gl/a2v9imfaWWyG5hSn8
Flax has a better Omega 6:3 PUFA polyunsaturated fats ratio, but only 2g protein out of 11g.
Manganese and MSG both have something to do with Glutamate, which apparently affects motivation; https://news.ycombinator.com/item?id=35885204 :
>> High glutamine-to-glutamate ratio predicts the ability to sustain motivation: The researchers found that individuals with a higher glutamine-to-glutamate ratio had a higher success rate and a lower perception of effort.
That is interesting about the manganese and glutamate-to-glutamine angle, although it's worth noting that manganese in foods has very poor bioavailability:
> Humans absorb only about 1% to 5% of dietary manganese. Infants and children tend to absorb greater amounts of manganese than adults. In addition, manganese absorption efficiency increases with low manganese intakes and decreases with higher intakes
https://ods.od.nih.gov/factsheets/Manganese-HealthProfession...
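Putting numbers on that range, using the ~2.3 mg adult daily manganese intake figure discussed upthread:

```python
# Applying the NIH 1-5 % absorption range to a label-level daily intake.
intake_mg = 2.3
absorbed_low_mg = intake_mg * 0.01    # ~0.023 mg actually absorbed
absorbed_high_mg = intake_mg * 0.05   # ~0.115 mg actually absorbed
```

So even a label showing 100% RDI delivers only a tenth of a milligram or so systemically, well under the 11 mg UL.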
Another interesting tidbit from the links you provided: 90% of glutamine synthesis happens in muscle cells (from glutamate and ammonia). So in theory, building more muscle mass (while getting adequate dietary glutamine and glutamate) could be beneficial for regulating glutamine-to-glutamate ratios, although it doesn't look like this has been studied specifically as of yet.
On the other hand, over-training and extreme endurance training without proper protein intake leads to depletion of circulating glutamine levels. It is suggested that depleted glutamine may be the cause of increased risks of infection and chronic fatigue during periods of over-training (even in elite athletes).
TIL there are vegan sources of MSG, and that peas, tomatoes, and processed meats have high amounts of glutamate: https://www.webmd.com/diet/high-glutamate-foods
Muscles burn Lactic Acid before they burn glucose.
FWIU LAB Lactic Acid Bacteria use Glutamate to control pH by GABA production.
And GABA, well https://en.m.wikipedia.org/wiki/%CE%93-Aminobutyric_acid :
> [GABA] is the chief inhibitory neurotransmitter in the developmentally mature mammalian central nervous system. Its principal role is reducing neuronal excitability throughout the nervous system.
So, IIUC, when you feed LAB glutamate, you get GABA?
> Are there other macronutrients in that much mass of food?
By convention, those three are the macronutrients. Dietary fibre is sometimes included, 100g of quorn mince has 7.5 g.
The Wiki article has a detailed breakdown of the protein fraction by amino acid, but only breaks fats out into 'saturated' and 'not', which is reasonably common.
https://news.ycombinator.com/item?id=34674644 :
> Omega-3s and Omega-6s would be listed under "Polyunsaturated Fats" if it were allowed to list them on the standard Nutrition Facts label instead of elsewhere on the packaging.
Maybe someone else should ask FDA to allow food labelers to optionally include a breakdown of PUFAs Polyunsaturated Fats into at least Omega 6 and Omega 3 next time there is a public comment register on Nutrition Facts labels. There may be more justifying research on Omegas' relevancy to health now?
Show HN: WhatTheDuck – open-source, in-browser SQL on CSV files
WhatTheDuck is an in-browser SQL tool for analyzing CSV files; it uses duckdb-wasm under the hood.
Please provide feedback/issues/pull requests.
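For the curious, the core "SQL over a CSV upload" idea can be sketched with nothing but the Python standard library (WhatTheDuck itself runs DuckDB compiled to WASM, which queries CSV directly; this toy sqlite3 version is just an illustration, with made-up data):

```python
import csv
import io
import sqlite3

# Toy CSV "upload"
raw = io.StringIO("city,temp\nOslo,3\nCairo,28\nLima,19\n")
rows = list(csv.reader(raw))
header, body = rows[0], rows[1:]

con = sqlite3.connect(":memory:")
con.execute(f"CREATE TABLE t ({', '.join(header)})")
# convert temp to a number so comparisons behave numerically
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(city, float(temp)) for city, temp in body])
hot = con.execute("SELECT city FROM t WHERE temp > 15 ORDER BY city").fetchall()
print(hot)  # [('Cairo',), ('Lima',)]
```

DuckDB removes the load-then-query step (it can `SELECT * FROM 'file.csv'` directly), and the WASM build moves the whole engine into the browser so the file never leaves the user's machine.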
Cool stuff, we are also building in the space, currently adding Python support with Pyodide. I would recommend you add your live website at the top of the README so people can find it easily (we had a lot of views that way for https://github.com/pretzelai/pretzelai). Also, we added a simple text2sql "Ask AI" that users loved, maybe your users will like it too.
JupyterLite is also built on Pyodide.
Is there support for DuckDB in JupyterLite in WASM in WhatTheDuck or pretzelai?
datasette-lite can load [remote] sqlite and Parquet but not yet DuckDB (?) with Pyodide in WASM, and there's also JupyterLite as a datasette plug-in: https://github.com/simonw/datasette-lite
Two-faced solar panels can generate more power at up to 70% less cost
Oh neat!
I briefly pondered this same idea when first hearing about how cats have such good night vision (they have basically got a mirror _behind_ their light receptors which increases the chance of a stray photon getting picked up)
Not quite the same thing I'm sure...
With agrivoltaics do the plants function like a mirror?
I just saw an article about this study: "Methodology for the estimation of cultivable space in photovoltaic installations with dual-axis trackers for their reconversion to agrivoltaic plants" (2024) https://www.sciencedirect.com/science/article/pii/S030626192... https://www.pv-magazine-india.com/2024/03/19/how-to-convert-...
Albedo: https://en.wikipedia.org/wiki/Albedo :
> Surface albedo is defined as the ratio of radiosity J_e to the irradiance E_e (flux per unit area) received by a surface. [2]
IDK what a reasonable adjustment factor for dual-sided single-junction cells given crop albedo would be
Technically, even with one-sided panels, it's probably necessary to model light that bounces off a shiny crop back into the air above the panel, diffuses and reflects against other radiation, and scatters back onto the panel; so albedo (or reflectivity) matters for single-junction, single-sided cells too.
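A first-order estimate of the rear-face contribution can be written down directly from the albedo definition quoted above. All parameter values here are illustrative assumptions, not figures from the article:

```python
# First-order bifacial gain: extra energy from the rear face as a fraction
# of front-face output.  Every default below is an assumption.
def bifacial_gain(ground_albedo, bifaciality=0.7, rear_view_factor=0.5):
    """ground_albedo: fraction of incident light the crop/ground reflects;
    bifaciality: rear-face efficiency relative to the front face;
    rear_view_factor: fraction of reflected light reaching the rear face."""
    return ground_albedo * bifaciality * rear_view_factor

# e.g. a leafy crop canopy (albedo ~0.25) -> ~9 % extra yield
gain = bifacial_gain(0.25)
```

The interesting knob for agrivoltaics is `ground_albedo`: a bright, dense canopy reflects more onto the rear face than bare dark soil, so crop choice feeds back into panel yield.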
Noise Fuels Quantum Leap, Boosting Qubit Performance by 700%
> “We suggest a completely different approach. Instead of getting rid of noise, we use continuous real-time noise surveillance and adapt the system as changes in the environment happen,” says Ph.D. Researcher at NBI Fabrizio Berritta, lead author on the study. [...]
> The protocol behind the new findings integrates a singlet-triplet spin qubit implemented in a gallium arsenide double quantum dot with FPGA-powered qubit controllers. The qubit involves two electrons, with the states of both electrons entangled.
"Real-time two-axis control of a spin qubit" (2024) https://www.nature.com/articles/s41467-024-45857-0
- > - "Coherent Interaction of a Few-Electron Quantum Dot with a THz Optical Resonator" (2024) https://news.ycombinator.com/item?id=39365579
- "Navigating the 16-dimensional Hilbert space of a high-spin donor qudit with electric and magnetic fields" (2024) https://link.springer.com/article/10.1038/s41467-024-45368-y
- "Shattering the 'copper or optics' paradigm: humble plastic waveguides outperform" (2024) https://news.ycombinator.com/item?id=39493372
Plastic!
But then now, today, "Researchers demonstrate breakthrough recyclability of carbon nanotube sheets" which are functional substitutes for copper IIUC https://news.ycombinator.com/item?id=39762164
Researchers demonstrate breakthrough recyclability of carbon nanotube sheets
> "It demonstrates that high-performance materials made from carbon nanotubes can be reused as structural reinforcement or electrical conductors. This is due to the fact that neither their continuity, alignment and mechanical properties, nor their conductivity is affected by this recycling process."
> "These will be able to displace widespread CO2-intensive materials, such as conventional carbon fibers and some metals like copper, decreasing our future CO2 emissions footprint."
Seeding steel frames brings destroyed coral reefs back to life
- "Playing sounds of healthy coral on reefs makes fish return" (2019) https://news.ycombinator.com/item?id=36450824#36465577 and other Coral Restoration links
> Because this loose rubble is in constant motion, tumbling and rolling around, coral larvae don’t have enough time to grow before they get squashed. So the first step to bringing damaged reefs back to life was stabilizing the rubble. The people running the MARS program did this using Reef Stars, hexagonal steel structures coated with sand. “These structures are connected into networks and pinned to the seabed to reduce the movement of the rubble,” Lamont said.
surely there's no easily seen problems being ignored in the rush to "do good" here.
https://en.wikipedia.org/wiki/Osborne_Reef
steel and saltwater are kinda known to have problems
From "Tire dust makes up the majority of ocean microplastics" https://news.ycombinator.com/item?id=37726539&p=3#37728049 :
>> Years of creating artificial reefs from old tires are catching up.
> People probably don't even realize that most tires are synthetic rubber and thus are also bad for the ocean.
> Is there a program to map the locations of old synthetic tire reefs, and what is a suitable replacement for reef reconstruction and regrowth?
I really hate this kind of brainless driveby pessimism. Look up the steel frames they're using; they're tiny. The amount of steel being added to the ocean by this project is absolutely inconsequential compared to the steel added by innumerable other human activities. Even shipping containers falling off ships dwarfs this project, let alone ships themselves sinking. Furthermore, we have decades of experience that tells us that corals and fish quite like steel shipwrecks, debris, etc. What's more, the frames they're using demonstrably do work as scaffolding for coral growth; it's not an untested idea.
the problems i allude to involve the steel no longer being steel within a very short time after hitting the water. As with the wiki reference, stuff you're expecting to pin together with steel will likely not stay together.
It's not pessimism to say "this could be done better; we've seen the methods being used fail repeatedly before"
How could it be done better?
Could they be made from steel that's under spec for other applications?
There's not yet an awesome-coral restoration markdown README.md; or any mentions of both "MARRS" and "Reef Cubes".
Do coral prefer steel to other materials like concrete, sargassumcrete, hempcrete, sugarcrete, formed CO2, or IDK cellulose; and can you just add iron to the mix or what do coral prefer?
Can RUVs and robots deploy coral scaffolding safely at scale underwater?
What other shapes would solve?
"Researchers create green steel from toxic red mud in 10 minutes" (2024) https://newatlas.com/materials/toxic-baulxite-residue-alumin... :
> Researchers have turned the red mud waste from aluminum production into green steel
Perhaps green steel for steel coral reef scaffolding
Insecurity and Python Pickles
There should be a data-only pickle serialization protocol (that won't serialize or deserialize code).
How much work would it be to create a pickle protocol that does not exec or eval code?
python/cpython//Lib/pickle.py: https://github.com/python/cpython/blob/main/Lib/pickle.py
python/cpython//Lib/pickletools.py: https://github.com/python/cpython/blob/main/Lib/pickletools....
A data-only pickle serialization protocol implementation would need to skip calls to self.save_global() in pickle._Pickler.save() if condition(pickle_protocol) here https://github.com/python/cpython/blob/main/Lib/pickle.py#L5... and #L585 , and also in the save_type() dispatch table at #L1123 .
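Until such a dump-side protocol exists, the load side can already be locked down with the pickle docs' "restricting globals" pattern: override `Unpickler.find_class` so pickles that reference arbitrary callables fail to load. A sketch (the safe-list here is a made-up minimal example):

```python
import builtins
import io
import pickle

# Safe-list of harmless builtins; everything else is refused at load time.
SAFE_BUILTINS = {'range', 'complex', 'set', 'frozenset', 'slice'}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # pickle resolves globals (classes/callables) through this hook;
        # raising here blocks GLOBAL/REDUCE-based code execution on load.
        if module == 'builtins' and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f'global {module}.{name} is forbidden')

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data round-trips; pickles that reference arbitrary callables do not.
assert restricted_loads(pickle.dumps({'a': [1, 2, 3]})) == {'a': [1, 2, 3]}
```

Note this only guards deserialization; a true data-only protocol would also need the dump side to refuse to emit GLOBAL/REDUCE opcodes, as suggested above for `_Pickler.save()`.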
How a Solar Revolution in Farming Is Depleting Groundwater
I’m very naive in this space. What percent of water extracted makes its way back into the water table and how long does it take? I assume you lose some to the plant that is harvested and some to evaporation. But how much makes it back?
What are some solutions for water table depletion and how comparably sustainable are they?
How much water is extracted from air by PV panels that output H2O, and what are the impacts?
Do drilling and fracking consume lots of water and create holes through aquifers? Do holes in aquifers cause aquifers to drain out to lower chambers of the earth?
Historically, people have moved to where the water is.
Is there a subsurface ocean on Earth or Mars?
Is there enough of a thermal gradient between very deep wells and surface water to generate energy?
How many humans operating humanoid robots operating construction equipment does it take to build irrigation canals?
Perhaps helpful new solar distillation tech for #Goal6 #CleanWater:
- "Desalination system could produce freshwater that is cheaper than tap water" (2023): "Extreme salt-resisting multistage solar distillation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39507692
Perhaps there are already sustainable salt-tolerant plants without GM; algae, kelp, sargassum
There is plenty of salt water on Earth. Does life on earth depend upon and/or derive from ocean thermohaline cycles?
I see videos of successful bund dig operations; digging half-craters to catch water apparently keeps the water from flashing off.
Tilling turns topsoil to dirt due to solar radiation.
No-till farming methods like residue mulching prevent soil depletion and retain water in topsoil; in order to create healthy soil as an output.
Hugelkultur is basically residue mulching and rainwater catchment.
Removing all of the trees reduces the level of water in the air and soil.
What sorts of agricultural machines can service bunds, raised beds, and hugelkultur mounds instead of rows?
Boinc lets you help cutting-edge science research using your computer
Ultra-efficient toroidal propeller gets contra-rotating double upgrade
> Beyond a general promise of better efficiency, agility and maneuverability, Sharrow doesn't dig too deep into how its toroidal blades might improve upon the benefits already inherent in contra-rotating propulsion.
Contra-rotating: https://en.wikipedia.org/wiki/Contra-rotating :
> Contra-rotating, also referred to as coaxial contra-rotating, is a technique whereby parts of a mechanism rotate in opposite directions about a common axis, usually to minimise the effect of torque. [...] Planetary gear
Epicyclic gearing (planetary gearing) https://en.wikipedia.org/wiki/Epicyclic_gearing :
> By choosing to hold one component or another—the planetary carrier, the ring gear, or the sun gear—stationary, three different gear ratios can be realized. [3]
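Those three ratios follow from the kinematic constraint S·ω_sun + R·ω_ring = (S + R)·ω_carrier, with S and R the sun and ring tooth counts. A small sketch (the tooth counts are made-up examples, not from the article):

```python
# The three classic ratios of a simple planetary set, derived from the
# constraint S*w_sun + R*w_ring = (S + R)*w_carrier.

def planetary_ratios(S: int, R: int) -> dict:
    return {
        'ring_fixed (sun -> carrier)': 1 + R / S,
        'sun_fixed (ring -> carrier)': 1 + S / R,
        'carrier_fixed (sun -> ring)': -R / S,  # negative: direction reverses
    }

# e.g. a 30-tooth sun inside a 90-tooth ring:
ratios = planetary_ratios(30, 90)
# -> 4.0 (ring fixed), ~1.33 (sun fixed), -3.0 (carrier fixed)
```

The carrier-fixed case is the contra-rotating one: input and output spin in opposite directions about the common axis.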
Other things boats and motors:
[Radial] Axial Flux motor: https://en.wikipedia.org/wiki/Axial_flux_motor
Koenigsegg's "Raxial Flux" motor, latest specs via YouTube: 800 hp, 922 lb-ft of torque, under 40 kg; and 750 A × 800 V = 600,000 W = 600 kW.
https://www.thedrive.com/news/heres-how-koenigseggs-dark-mat...
> Photos of the drive motor next to a roughly 11-ounce (330 mL) beverage can show just how tiny the Quark motor really is for its power output*
By comparison, a Panther Jeep Wrangler watercar has a 4-speed manual "(3,664 cc) 24 valve SOHC V6 VTEC engine which produces 305 HP", with an engine weight of 160 kg out of a curb weight of 1,340 kg; and it looks like she drafts about ___ parked and ___ at 38 knots or pulling a waterskier.
Honda J engine: https://en.wikipedia.org/wiki/Honda_J_engine
Panther Jeep Wrangler watercar: https://en.wikipedia.org/wiki/Panther_(amphibious_vehicle)
But Axial Flux motors and geared toroidal propellers do not make MHD drives, which have no moving parts by comparison.
/? mhd drive: https://youtube.com/results?sp=mAEA&search_query=MHD+drive
MHD: Magnetohydrodynamic drive > Typology > Marine propulsion: https://en.wikipedia.org/wiki/Magnetohydrodynamic_drive#Mari...
BlenderBIM – add-on for beautiful, detailed, and data-rich OpenBIM with Blender
There is (among many others) one thing that I truly like about Blender: with its gigantic add-on ecosystem it is on the path to becoming the standard platform for everything 3D.
Or rather: the one place where everything you can do with 3D can come together under the same roof: CAD, CAM, Architecture, Interior Design, 3D reconstruction, Scientific Visualization, Animation, 2.5D drawing, Artistic Modeling, Prototyping, Industrial Design, Simulation, Rendering, Special Effects (of course), etc ... I'm certainly forgetting 80% of the list.
Take any of the other 3D packages out there in all the domains I listed: none of them has anywhere near the breadth and versatility of Blender, and pretty much none of them are capable of importing 3D work from other packages the way Blender can.
I hope various industries involved in 3D will finally recognize this fact and the economic value it brings about. The SFX industry has already recognized it. Fingers crossed, the others will follow.
How does CAD modelling work in Blender? I thought solid geometry manipulation like in CAD software was fundamentally different to what Blender does (surfaces only)?
In the BlenderBIM Add-on, CAD modeling works by providing a dedicated modeling interface that interacts with the IFC data model. IFC is an ISO standard that describes geometry, data, objects, processes, and relationships, for the built environment (i.e. BIM). IFC's geometry is based on STEP, so modeling in IFC has similarities to modeling in STEP. You can have swept solid extrusions, revolutions, true arcs and circles, but also some things that don't exist in STEP but are specific to the built environment, like the parameters for an I-shaped beam.
So like other Blender add-ons that do this already (e.g. CAD Sketcher), the BlenderBIM Add-on bypasses Blender for most geometric operations. You define geometry using the dedicated modeling interface, the IFC data model is updated, and then the triangles are visualised by Blender, but the under-the-hood CAD definition is there.
The geometry processing layer is done by IfcOpenShell, which is a layer on top of OpenCASCADE (but in the future, may make more use of CGAL or its own custom geometry processing code).
That said, modeling buildings is typically simpler than manufacturing CAD modeling. Buildings have forgiving construction tolerances, and are often assemblages of off-the-shelf products where it is not necessary to redefine the exact product shapes. (i.e. place a packer here, place a sprinkler there; whether the sprinkler looks like a sprinkler or is geometrically just a cube in my model makes little difference to construction or maintenance operations).
Is there a good way to work with build123d (or CadQuery) parametric CAD with Python and Blender as a CAD GUI? All three are built with OCC (OpenCASCADE).
IFC: Industry Foundation Classes: https://en.wikipedia.org/wiki/Industry_Foundation_Classes
BIM: Building Information Modeling: https://en.wikipedia.org/wiki/Building_information_modeling
I'm not currently aware of any integration, but since it is based on OpenCASCADE, there may be certain shapes that "just work" with IFC. This is similar to how FreeCAD uses IfcOpenShell to directly serialise some FreeCAD shapes into IFC geometry.
Webb and Hubble confirm Universe's expansion rate
Hubble's law: https://en.wikipedia.org/wiki/Hubble%27s_law
Expansion of the universe: https://en.wikipedia.org/wiki/Expansion_of_the_universe :
> While objects cannot move faster than light, this limitation only applies with respect to local reference frames and does not limit the recession rates of cosmologically distant objects
Given that v is velocity in the opposite direction, and c is the constant reference frame speed of light; do we account for velocity in determining whether light traveling at c towards earth will ever reach us?
c - v < 0 if v > c
v + c > c if v>0
Are tachyons FTL? Is there entanglement FTL?
How far away in light years does a mirror in space need to be in order to see dinosaurs that existed, say, 100 million years ago?
Tachyons aren't a thing. Tachyons are sci-fi nonsense.
Nothing in the universe can travel faster than the speed of light. This does not hold for the universe itself. It can and does expand faster than the speed of light, using specific reference frames (i.e., big enough).
So, space can increase FTL. Particles do not travel faster than light tho, that is nonsense.
If the universe is expanding faster than the speed of light, how are particles not traveling faster than the speed of light indeed with zero acceleration relative to the expansion?
If the Copenhagen interpretation is correct, particle states are correlated after a photonic beam splitter; if you measure entangled photons after a beam splitter, their states are still linked.
If virtual particle states are entangled with particle states in black holes or through ER=EPR bridges, is there effectively FTL?
There is FTL within dielectric antennae.
Take a star in a region of the universe that recedes from us at 3c. In what sense is the star not traveling faster than the speed of light relative to us?
Well, we have (virtual) particles that can travel backwards in time, without breaking causality. There's no proof that Tachyons exist, they are purely hypothetical, but they are not outright nonsense.
From https://news.ycombinator.com/item?id=38045112#38047149 :
> from https://news.ycombinator.com/item?id=35877402#35886041 : "EM Wave Polarization Transductions" Lt. Col. T.E Bearden (1999) :
>> Physical observation (via the transverse photon interaction) is the process given by applying the operator ∂/∂t to (L^3)t, yielding an L3 output
[and "time-polarized photons"]
> do we account for velocity in determining whether light traveling at c towards earth will ever reach us?
As far as I know this is not necessary because the speed of light is constant regardless of the velocity of both the source and the observer (this is Einstein's special relativity: https://en.m.wikipedia.org/wiki/Special_relativity)
> Expansion of the universe: https://en.wikipedia.org/wiki/Expansion_of_the_universe :
>> While objects cannot move faster than light, this limitation only applies with respect to local reference frames and does not limit the recession rates of cosmologically distant objects
Then the length traveled changes for two photons emitted when they cross the starting line at different velocities:
const d = distance = 1
# d_start = 0
# d_finish = d
c: Velocity # of a photon in a vacuum
v1: Velocity # of photon emission source 1
v2: Velocity # of photon emission source 2
v1 + c != v2 + c
distance / (v1 + c) ?= distance / (v2 + c)
((v1 + c)*t) - ((v2+c)*t) ?!= 0
Should (v + c) be prematurely reduced to just (c), given a confirmed universal expansion rate?
> How far away in light years does a mirror in space need to be in order to see dinosaurs that existed say 100 million years ago?
A mirror in a vacuum 1ly away will return the photonic signal from t=0 at t=2 light years, with diffraction (due to matter in [solid, liquid, gas, plasma, and superfluid/superconductor] phases describable with superfluid quantum gravity (e.g. Fedi's with Bernoulli pressure))
# m = meters
# s = f(decay_rate_of_cesium_atom)
# c = const: meters/second
import pytest

# rows: t_tx (emission time), t_rx (reception time), d (mirror distance);
# times in years, distances in light-years; d == d_ab == d_ba
test_data = [
    [0, 0, 0.0],
    [0, 2, 1],
    [0, 1, 0.5],
    [-1, 0, None],
    [-100e6, 0, None],  # dinosaurs: solves to d = 50e6 ly
]

@pytest.mark.parametrize('t_tx, t_rx, d', test_data)
def test_round_trip(t_tx, t_rx, d):
    if d is None:
        d = (t_rx - t_tx) / 2  # solve for the mirror distance
    assert t_rx == t_tx + (2 * d)

# m = scattering_matrix  # FluidDiffractionTensorMatrix (pseudocode)
# assert is_YangBaxterMatrix(m)
A water droplet [in space] reflects enough [photonic] information to recover a modulated signal, and also a curvilinear transformation of Escher looking into a crystal ball, with c the speed of light a consideration only at cosmological distances.
How large a water droplet (a regular or irregular spheroid, or a reflective and/or lensing matter configuration) is necessary to reflect a sufficient amount of photonic information to recover information from the cosmological information medium, in terms of constructor theory?
Inverse scattering transform: https://en.wikipedia.org/wiki/Inverse_scattering_transform
R-matrix > R-matrix method in quantum mechanics: https://en.m.wikipedia.org/wiki/R-matrix :
> [now generalized for photonic diffraction]
Quantum inverse scattering method > Procedure: https://en.wikipedia.org/wiki/Quantum_inverse_scattering_met... :
> 1. Take an R-matrix which solves the Yang–Baxter equation.
> 2. Take a representation of an algebra T_R satisfying the RTT relations. [[clarification needed]]
> 3. Find the spectrum of the generating function t(u) of the centre of T_R
> 4. Find correlators
Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
Edsger Dijkstra had a precise english style; even though his mother tongue was Dutch, I find he made better use of English than many native speakers.
In one of the EWD's, he reminisced that, as children, they were taught to never begin to speak a sentence unless they already knew how they were going to finish it.
I'd bet these two observations have a causal connection.
I love reading his EWDs. I had a professor who worked with him who mentioned he made his students use pens while taking his tests. To make it less likely for the students to make mistakes??
Perhaps to make it easier to determine how to correct instruction.
- "Guidelines for keeping a laboratory notebook" (2019) https://news.ycombinator.com/item?id=19123430#19126809
Monte-Carlo graph search from first principles
> Also, as far as the name "Monte-Carlo Tree Search" itself, readers might note that there is nothing "Monte-Carlo" in the above algorithm - that it's completely deterministic!
MCTS as commonly implemented is deterministic? How strange! I assumed there was randomness in the sampling.
Meta Loves Python
I know I keep posting this, but my favorite (half-joking) programming advice was from a Google dev infra engineer who said "Any Python script over 100 lines should be rewritten in Bash, because at least that way you're not fooling yourself into thinking it's production quality"
Fedora Workstation 41 to No Longer Install Gnome X.org Session by Default
I started using Fedora as my desktop just as they defaulted to wayland and honestly the transition has barely been noticed by me.
The biggest issue a few years ago was screen sharing not working in video meeting apps like Teams. But I can't even remember when that was anymore.
So the team has really done well from my perspective.
On Fedora, I had to compile Emacs to use GTK rendering.
VSCode needed some custom launch parameters with the insiders version:
code-insiders --enable-features=UseOzonePlatform --enable-features=WaylandWindowDecorations --ozone-platform=wayland --disable-features=WaylandFractionalScaleV1
Factorio runs really well with: SDL_VIDEODRIVER=wayland __GL_SYNC_TO_VBLANK=yes ~/Games/factorio/bin/x64/factorio
So some slight annoyances, but nothing really unusual for anyone who's used Linux desktops for a long time.
VSCodium flatpak fonts are already fine; IME VSCodium font scaling works out of the box. There's a vscode flatpak issue: "Feature: add optional Wayland support" https://github.com/flathub/com.visualstudio.code/issues/471#...
Fractional scales, fonts and hinting
And still no proper gamma correction, which makes the whole thing useless, especially on low-DPI screens.
Nobody does it properly on Linux, despite FreeType's recommendations. A shame.
https://freetype.org/freetype2/docs/hinting/text-rendering-g...
It's even worse for light text on dark backgrounds; the text becomes hard to read.
GTK is not alone
Chromium/Electron tries, but gets it wrong (1.2 instead of 1.8), and doesn't do gamma correction on grayscale text:
https://chromium-review.googlesource.com/c/chromium/src/+/53...
Firefox, just like Chromium, uses Skia, so it has proper default values, but it ignores them for grayscale text too:
https://bugzilla.mozilla.org/show_bug.cgi?id=1882758
A trick that i use to make things a little bit better:
In your .profile:
export FREETYPE_PROPERTIES="cff:no-stem-darkening=0 autofitter:no-stem-darkening=0 type1:no-stem-darkening=0 t1cid:no-stem-darkening=0"
> This seems to fix the blurry fonts with Wayland instead of X:
flatpak --socket=wayland run com.visualstudio.code --enable-features=UseOzonePlatform --ozone-platform=wayland
https://github.com/flathub/com.visualstudio.code/issues/398 :
> Various font-related flags I found in solving for blurry fonts on wayland
Is there an environment variable to select Wayland instead of XWayland for electron apps like Slack and VScode where fractional scaling with wayland doesn't work out of the box?
New hydrogen producing method is simpler and safer
"Decoupled supercapacitive electrolyzer for membrane-free water splitting" (2024) https://www.science.org/doi/10.1126/sciadv.adi3180 :
> Abstract: Green hydrogen production via water splitting is vital for decarbonization of hard-to-abate industries. Its integration with renewable energy sources remains to be a challenge, due to the susceptibility to hazardous gas mixture during electrolysis. Here, we report a hybrid membrane-free cell based on earth-abundant materials for decoupled hydrogen production in either acidic or alkaline medium. The design combines the electrocatalytic reactions of an electrolyzer with a capacitive storage mechanism, leading to spatial/temporal separation of hydrogen and oxygen gases. An energy efficiency of 69% lower heating value (48 kWh/kg) at 10 mA/cm2 (5 cm–by–5 cm cell) was achieved using cobalt-iron phosphide bifunctional catalyst with 99% faradaic efficiency at 100 mA/cm2. Stable operation over 20 hours in alkaline medium shows no apparent electrode degradation. Moreover, the cell voltage breakdown reveals that substantial improvements can be achieved by tunning the activity of the bifunctional catalyst and improving the electrodes conductivity. The cell design offers increased flexibility and robustness for hydrogen production.
Aloe vera:
> This type of supercapacitor formed within living plants acts as a form of electronic plant (e-plant) by using its tissue fluid electrolyte, which surprisingly presents a satisfying electrical capacitance of 182.5 mF cm−2,
Hemp bast fiber:
>> "Hemp Carbon Makes Supercapacitors Superfast" (2013) https://www.asme.org/engineering-topics/articles/energy/hemp... :
>> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg
TMK there's not yet an aerogel electrode; but aerogels are porous.
/? Hemp aerogel https://www.google.com/search?q=hemp+aerogel :
"Superelastic and Ultralight Aerogel Assembled from Hemp Microfibers" (2023) https://onlinelibrary.wiley.com/doi/full/10.1002/adfm.202300...
Hemp bast fiber is cheap: https://www.google.com/search?q=hemp+bast+fiber&tbm=isch
"Self-cleaning superhydrophobic aerogels from waste hemp noil for ultrafast oil absorption and highly efficient PM removal" (2023) https://www.sciencedirect.com/science/article/abs/pii/S13835... : superhydrophobic and porous
Serving my blog posts as Linux manual pages
It would be cool to provide a deb repo as a way of subscribing to your blog.
So `apt update` will pull in all blog posts, `man your-blog` will show the latest post with links to the index of all your other posts.
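For the man-page half of that idea, a post can be shipped as ordinary roff; a minimal hand-written sketch (the filename and section-7 placement are made up for illustration):

```shell
# Write a minimal man page using the standard man macros; a deb package
# would ship it gzipped under /usr/share/man/man7/.
cat > my-blog-post.7 <<'EOF'
.TH MY-BLOG-POST 7 "2024-03-20" "myblog" "My Blog"
.SH NAME
my-blog-post \- serving blog posts as manual pages
.SH DESCRIPTION
Body of the post goes here; pandoc -s -t man can generate this from markdown.
EOF
# View it without installing anything: man ./my-blog-post.7
```

From there the deb's postinst only needs to run mandb so `man your-blog` picks up the new page.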
On the one hand this idea is brilliant - on the other hand if it caught on, there are some obvious opportunities for malware intrinsic to the approach :'(
I think I'd be too chicken to subscribe.
How does HTTP(S) Same Origin policy work with local file:/// URLs?
TIL there's no `ls -al /usr/share/man/** | man --html`; though it would be easy to build one with Python's http.server, or bottlepy, or bash.
Prompt: An http server ( with bash and /dev/tcp/ ) that serves HTML versions of local manpages with linkification of URLs and references to other manpages
> How does HTTP(S) Same Origin policy work with local file:/// URLs?
It doesn't, since file:/// is just a URI scheme and not part of the HTTP protocol.
If I open an HTML copy of a manpage that contains JS from a file:/// URL, can it read other local files and send them to another server with img URLs?
If so, isn't it thus probably better to run an HTTP server over a permissioned socket than to serve static HTML [manpages] from file URLs [in a [DEB] package]?
file:/// is always going to be local to the machine reading the URI. So if you're serving a page that has a file:/// URI in it and someone on another machine clicks on it, they won't be seeing the file (unless they happen to have the same path on their machine). If the goal is to get them that file, then yes, it'll need to be served.
CVE-2019-11730: https://nvd.nist.gov/vuln/detail/CVE-2019-11730
Google Will Pay You $5M to Figure Out What the Hell Quantum Computers Do
Quantum Algorithm Zoo lists algorithmic speedups: https://quantumalgorithmzoo.org/
What are the known applications for the known quantum algorithms?
What are existing NP problems that will be faster with enough coherent qubits and interconnect and storage?
"Ask HN: What has quantum computing achieved so far?" (2023) https://news.ycombinator.com/item?id=38822569#38830067
Washers and dryers are about to get a whole lot more efficient
This article implies that manufacturers are only making machines as efficient as is required by law. Which is cynical - but extremely believable.
Are there no companies out there making ultra-efficient household appliances even when they aren't being forced to?
Miele heat pump dryers are quite efficient.
FWIU, there are also now All-in-One Washer + Dryer combos?
There have been for some time. There are some issues. A minor one: the max wash load is larger than the max dryable load.
A major one: the act of unloading and reloading gives you a chance to shake out tangles in shirt sleeves and whatnot. If you don't untangle your clothes you'll find they don't dry as well and can fatigue a lot faster!
Dryers are great if you live in the south where things won't dry on their own, or if you need to run laundry fast. If you can afford it, a drying rack works great, is cheap, and is far, far easier on your clothes.
I have one of these and I love it. Takes half as much space in the closet as having a normal washer and dryer and still finishes most loads in a little over two hours as long as I don’t put a crazy amount of stuff in it. I’ve also noticed the water usage is significantly lower than my previous HE washer. Mine is the “GE Ultrafast” one.
Oh yeah. Have been for a while. I run into them a lot when I’m in Italy and still haven’t used one that worked well. Unless you want a load of laundry to take half a day. Then they work fine.
The new ones utilize heat pumps (GE Ultrafast, LG WashCombo), very different than existing ones in European countries.
They are popular in Europe and Asia but are somewhat less effective than separate units.
There are but I have heard that sometime the dryer takes forever to dry... never had one though
Black–Scholes Model
SHMT (2024) has a probably faster blackscholes_2d implementation: https://github.com/jk78346/SHMT/blob/main/src/kernels/functi... https://news.ycombinator.com/item?id=39515038
"Python for Finance", 2nd Edition (py4fi2nd) has a chapter on BSM with notebooks and a module: https://github.com/yhilpisch/py4fi2nd/tree/master/code/ch12 https://home.tpq.io/books/py4fi/
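For reference, the closed-form BSM European call price that such implementations compute, as a stdlib-only sketch:

```python
from math import erf, exp, log, sqrt

# Textbook Black-Scholes-Merton European call price (no dividends).

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate,
    sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money example: S=K=100, T=1y, r=5%, sigma=20% -> ~10.45
price = bsm_call(100, 100, 1.0, 0.05, 0.2)
```

The SHMT and py4fi2nd versions linked above vectorize/accelerate essentially this same formula.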
GH topic: black-scholes: https://github.com/topics/black-scholes
Neurosurgeon pioneers Alzheimer's, addiction treatments using ultrasound [video]
could look great in the short run but go very, very sideways (ultrasound could fragment plaques, possibly below the size detection limit, but then nucleate the formation of even more plaques if you're not fixing the underlying problem):
https://www.sciencedirect.com/science/article/abs/pii/S00222...
OGC GeoSPARQL v1.1: A Geographic Query Language for RDF Data
The OGC will forever hold a place in my heart as the source of numerous business-paid trips to interesting places...however, this is one of the most absurd documents I've seen in a while.
You've just inherited a legacy C++ codebase, now what?
I'd swap 2 and 3. Getting CI, linting, auto-formatting, etc. going is a higher priority than tearing things out. Why? Because you don't know what to tear out yet or even the consequence of tearing them out. Linting (and other static analysis tools) also give you a lot of insight into where the program needs work.
Things that get flagged by a static analysis tool (today) will often be areas where you can tear out entire functions and maybe even classes and files because they'll be a re-creation of STL concepts. Like homegrown iterator libraries (with subtle problems) that can be replaced with the STL algorithms library, or homegrown smart pointers that can just be replaced with actual smart pointers, or replacing the C string functions with C++'s own string class (and related classes) and functions/methods.
But you won't see that as easily until you start scanning the code. And you won't be able to evaluate the consequences until you push towards more rapid test builds (at least) if not deployment.
On the flip side, auto-formatting will trash your version history and impede analysis of "when and why was this line added".
You can instruct git to ignore specific commits for blame and diff commands.
See "git blame ignore revs file".
Intended use is exactly to ignore bulk changes like auto formatting.
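A minimal sketch of that workflow (the repo is a throwaway and the SHA is a placeholder; in a real project you'd append the full 40-character SHA of each bulk-reformat commit):

```shell
# Demonstrate in a throwaway repo.
cd "$(mktemp -d)" && git init -q demo && cd demo

# Record each pure-reformat commit's hash in the conventional file...
echo "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" >> .git-blame-ignore-revs

# ...and point git at it; blame (and blame-based tooling) will skip them.
git config blame.ignoreRevsFile .git-blame-ignore-revs
```

GitHub's blame view also honors a `.git-blame-ignore-revs` file at the repo root, so committing the file benefits the web UI too.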
Free, Open-Source Alternative to LabVIEW and TestStand
Awesome. Tools and applications for:
"Ask HN: Why don't datacenters have passive rooflines like Net Zero homes?" https://news.ycombinator.com/item?id=37607590 : awesome-fluid-dynamics
"Ask HN: Does mounting servers parallel with the temperature gradient trap heat?" https://news.ycombinator.com/item?id=23033210 :
> Would mounting servers sideways (vertically) allow heat to transfer out of the rack?
"Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations" https://news.ycombinator.com/item?id=31049970 ; jax-cfd
"Zero energy ready homes are coming" https://news.ycombinator.com/item?id=35064493 ; GH topic finite-element-analysis, structural-analysis
SymPy: Symbolic Mathematics in Python
SymPy: https://en.wikipedia.org/wiki/SymPy
- "How should logarithms be taught?" [with python and SymPy] https://news.ycombinator.com/item?id=28518565#28519356
From "SymPy - a Python library for symbolic mathematics" (2020) https://news.ycombinator.com/item?id=23767513 :
> NumPy for Matlab users: https://numpy.org/doc/stable/user/numpy-for-matlab-users.htm...
> SymPy vs Matlab: https://github.com/sympy/sympy/wiki/SymPy-vs.-Matlab
Is there any GUI for SymPy one could easily use to solve symbolic math like Maple/Maxima/Macsyma?
JupyterLite's default Python-compiled-to-WASM build has NumPy, SciPy, matplotlib, and SymPy installed; so you can do computer algebra with SymPy in a browser tab.
https://github.com/jupyterlite/jupyterlite/tree/main/py/jupy... :
> Initial support for interactive visualization libraries such as: altair, bqplot, ipywidgets, matplotlib, and plotly
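The kind of Maple/Maxima-style session the parent is asking about is a few lines of SymPy, runnable as-is in a JupyterLite tab (equation and functions chosen for illustration):

```python
from sympy import symbols, solve, diff, integrate, sin, cos

x = symbols('x')

# Solve a quadratic symbolically:
roots = solve(x**2 - 2*x - 8, x)     # roots are -2 and 4

# Symbolic differentiation and integration:
d = diff(sin(x) * x, x)              # x*cos(x) + sin(x)
F = integrate(2*x, x)                # x**2
```

In a notebook, results render as typeset math, which gets most of the way to a CAS GUI without leaving the browser.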
Photon Detectors Rewrite the Rules of Quantum Computing
"Low-noise balanced homodyne detection with superconducting nanowire single-photon detectors" (2024) https://opg.optica.org/opticaq/fulltext.cfm?uri=opticaq-2-1-... :
> [...] A pertinent example of continuous variables comprise the quadrature representation of the optical quantum state, as opposed to the discrete-variable photon-number representation. The wave-like nature of the field results from the coherence between different photon number components of a field with uncertain photon number. Therefore, to study the wave-like nature of the field, one must be able to measure superpositions of photon number states.
> To do so, the field of a weak optical quantum state (signal) comprising a few photons interferes with the field of a bright coherent state (i.e., a local oscillator, LO) on a balanced beam splitter. The two output modes of the beam splitter are then measured with two photodetectors. By calculating the difference between the detector count rates, which is proportional to the field quadrature at a reference phase provided by the local oscillator, the optical quantum state can be characterized [3–5]. The ability to characterize optical quantum states in their phase-space representation [1] makes homodyne detection an essential tool for quantum information processing with continuous variables [6].
> Typically, conventional semiconductor photodiodes are used as the detector in balanced homodyne detection. The high optical flux arising from the LO lifts the optical signal above the electronic noise floor, which is typically in the pW√Hz range at telecommunication wavelengths. As a result, the generated carriers in the photodiode can be integrated over a characteristic response time, resulting in a photocurrent which is proportional to the incident optical flux [7]. For BHD, it is essential that the photodetector output is directly proportional to the intensity (photon flux) of the input light. In this case, we refer to the detector output as linear.
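The quoted scheme reduces to a few lines of arithmetic: for classical field amplitudes, the difference of the two beam-splitter output intensities is proportional to the signal quadrature at the LO phase. A toy numerical sketch (not from the paper; amplitudes are illustrative):

```python
import cmath, math

def homodyne_difference(signal, lo_amp, lo_phase):
    """Difference photocurrent of a 50:50 balanced homodyne detector."""
    lo = lo_amp * cmath.exp(1j * lo_phase)
    out1 = (signal + lo) / math.sqrt(2)   # the two beam splitter outputs
    out2 = (signal - lo) / math.sqrt(2)
    return abs(out1) ** 2 - abs(out2) ** 2

signal = 0.3 + 0.4j    # weak signal field amplitude
lo_amp = 1000.0        # bright local oscillator

# Difference / (2*|LO|) recovers Re(signal * e^{-i*phase}), i.e. the quadrature:
x_quad = homodyne_difference(signal, lo_amp, 0.0) / (2 * lo_amp)          # ~0.3
p_quad = homodyne_difference(signal, lo_amp, math.pi / 2) / (2 * lo_amp)  # ~0.4
```

Sweeping the LO phase selects which quadrature is measured, which is why the LO provides the phase reference in the quote above.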
Single-photon_source#History: https://en.wikipedia.org/wiki/Single-photon_source#History :
> As well as NV centres and molecules, quantum dots (QDs),[14] quantum dots trapped in optical antenna,[15] functionalized carbon nanotubes,[16][17] and two-dimensional materials[18][19][20][21][22][23][24] can also emit single photons and can be constructed from the same semiconductor materials as the light-confining structures. It is noted that the single photon sources at telecom wavelength of 1,550 nm are very important in fiber-optic communication and they are mostly indium arsenide QDs.[25] [26] However, by creating downconversion quantum interface from visible single photon sources, one still can create single photon at 1,550 nm with preserved antibunching. [27]
"A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929 :
> "Logical states for fault-tolerant quantum computation with propagating light" (2024)
> it is essential that the photodetector output is directly proportional to the intensity (photon flux) of the input light
From "Physicists use a 350-year-old theorem to reveal new properties of light waves" (2023) https://news.ycombinator.com/item?id=37226121#37226160 :
>> "This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity."
Neutron capture reaction cross-section of 79Se through inverse kinematics
"Neutron capture reaction cross-section of 79Se through the 79Se(d,p) reaction in inverse kinematics" (2024) https://www.sciencedirect.com/science/article/pii/S037026932... :
> One way of handling such long-lived fission products is to neutralize waste through nuclear reactions [1], [2], [3], [4]. Decommissioning nuclear waste by employing high flux accelerators [1], [4] and fast reactors [2], [3] has been proposed. However, to design such facilities, reaction cross-sections in a wide energy range are indispensable.
Neutron capture for transmutation.
From "The Federal Helium reserve is for sale" (2023) https://news.ycombinator.com/item?id=37391584 :
> Again, Helium-3 is a viable nonradioactive input to nuclear fusion reactions
"The War over Burying Nuclear Waste in America's Busiest Oil Field" (2024) https://news.ycombinator.com/item?id=39422590 :
> $100 ball of Thorium = 100 years of energy. [For one person]
P5.js: Online Canvas Programming
There's a p5.js kernel in JupyterLite: https://jupyterlite.readthedocs.io/en/stable/_static/lab/ind...
Khan Academy Computer Programming "Unit 1: Intro to JS: Drawing & Animation" has ProcessingJS challenges. P5.js is the new ProcessingJS, and it's pretty much compatible. https://www.khanacademy.org/computing/computer-programming/p...
Insecure Features in PDFs (2021)
Show HN: R2R – Open-source framework for production-grade RAG
Hello HN, I'm Owen from SciPhi (https://www.sciphi.ai/), a startup working on simplifying Retrieval-Augmented Generation (RAG). Today we’re excited to share R2R (https://github.com/SciPhi-AI/R2R), an open-source framework that makes it simpler to develop and deploy production-grade RAG systems.
Just a quick reminder: RAG helps Large Language Models (LLMs) use current information and specific knowledge. For example, it allows a programming assistant to use your latest documents to answer questions. The idea is to gather all the relevant information ("retrieval") and present it to the LLM with a question ("augmentation"). This way, the LLM can provide answers (“generation”) as though it was trained directly on your data.
The R2R framework is a powerful tool for addressing key challenges in deploying RAG systems, avoiding the complex abstractions common in other projects. Through conversations with numerous developers, we discovered that many were independently developing similar solutions. R2R distinguishes itself by adopting a straightforward approach to streamline the setup, monitoring, and upgrading of RAG systems. Specifically, it focuses on reducing unnecessary complexity and enhancing the visibility and tracking of system performance.
The key parts of R2R include: an Ingestion Pipeline that transforms different data types (like json, txt, pdf, html) into 'Documents' ready for embedding. Next, the Embedding Pipeline takes text and turns it into vector embeddings through various processes (such as extracting text, transforming it, chunking, and embedding). Finally, the RAG Pipeline follows the steps of the embedding pipeline but adds an LLM provider to create text completions.
R2R is currently in use at several companies building applications from B2B lead generation to educational tools for consumers.
Our GitHub repo (https://github.com/SciPhi-AI/R2R) includes basic examples for application deployment and standalone use, demonstrating the framework's adaptability in a simple way.
We’d love for you to give R2R a try, and welcome your feedback and comments as we refine and develop it further!
[deleted]
Tangential to the framework itself, I've been thinking about the following in the past few days:
How will the concept of RAG fare in the era of ultra large context windows and sub-quadratic alternatives to attention in transformers?
Another 12 months and we might have million+ token context windows at GPT-3.5 pricing.
For most use cases, does it even make sense to invest in RAG anymore?
From "GenAI and erroneous medical references" https://news.ycombinator.com/item?id=39497333 literally 2 days ago:
> From [1], pdfGPT, knowledge_gpt, and paperai are open source. I don't think any are updated for a 10M token context limit (like Gemini) yet either.
Meta seeks ASIC designers for ML accelerators and datacenter SoCs
What do [PQ] SSL/TLS Accelerators and ML Accelerators / AI Accelerators cost these days, post Proof-of-Work mining (with and without ASIC resistance)?
ASIC: Application-Specific Integrated Circuit: https://en.wikipedia.org/wiki/Application-specific_integrate...
FPGA: Field-Programmable Gate Array: https://en.wikipedia.org/wiki/Field-programmable_gate_array
Graphcore IPU: https://en.wikipedia.org/wiki/Graphcore
"Intel’s Exascale Dataflow Engine Drops x86 and von Neumann" (2018) https://www.nextplatform.com/2018/08/30/intels-exascale-data... :
> The new architecture that they have dreamed up is called a Configurable Spatial Accelerator, but the word accelerator is a misnomer because what they have come up with is a way to build either processors or coprocessors that are really dataflow engines, not serial processors or vector coprocessors, that can work directly on the graphs that programs create before they are compiled down to CPUs in a traditional sense. The CSA approach is a bit like having a switch ASIC cross-pollenate with a math coprocessor, perhaps with an optional X86 coprocessor perhaps in the mix if it was needed for legacy support.
> The concept of a dataflow engine is not new. Modern switch chips work in this manner, which is what makes them programmable to a certain extent rather than just static devices that move packets around at high speed. Many ideas that Intel has brought together in the CSA are embodied in Graphcore’s Intelligence Processing Unit, or IPU. The important thing here is that Intel is the one killing off the X86 architecture for all but the basic control of data flow and moving away from a strict von Neumann architecture for a big portion of the compute. So perhaps Configurable Spatial Architecture might have been a better name for this new computing approach, and perhaps a Xeon chip will be thought of as its coprocessor and not the other way around.
FWIU the Intel 486DX was the first x86 CPU with an integrated math coprocessor (the 487 was the add-on FPU for the 486SX).
Coprocessor: https://en.wikipedia.org/wiki/Coprocessor
With some newer architectures, the GPU(s) are directly connected to HBMe RAM; which somewhat eliminates the CPU and Bus performance bottlenecks that ASICs and FPGAs are used to accelerate beyond.
High Bandwidth Memory > Technology > HBM3E: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#Technolo...
AI Accelerator: https://en.wikipedia.org/wiki/AI_accelerator
Dataflow architecture: https://en.wikipedia.org/wiki/Dataflow_architecture
Massively parallel processor array: https://en.wikipedia.org/wiki/Massively_parallel_processor_a... :
> MPPAs are used in high-performance embedded systems and hardware acceleration of [various workloads], which otherwise would use FPGA, DSP and/or ASIC chips.
> These processors pass work to one another through a reconfigurable interconnect of channels.
"A PCIe Coral TPU Finally Works on Raspberry Pi 5" https://news.ycombinator.com/item?id=38310063 :
> An HBM3E HAT would or would not yet make TPUs more useful with a Raspberry Pi 5?
PVM Virtualization Framework Proposed for Linux – Built Atop the KVM Hypervisor
> Pagetable Virtual Machine
>> for safety, nested virtualization is disabled in the L0 hypervisor, so we cannot use KVM directly. Additionally, the current nested architecture involves complex and expensive transitions between the L0 hypervisor and L1 hypervisor.
>> "So the over-arching goals of PVM are to completely decouple secure container hosting from the host hypervisor and hardware virtualization support to: [1. Enable nested virtualization, and 2.] avoid costly exits to the host hypervisor and devise efficient world switching mechanisms."
Intel VT and AMD-V were not built for pagetable virtualization. What CPU changes would make Pagetable Virtualization more efficient?
Nothing changed to the CPU that favors PVM.
PVM is more efficient than Intel VT and AMD-V for nested virtualization, since PVM doesn't cause a VMexit back to the L0 hypervisor in most cases, while Intel VT and AMD-V often involve L0/L1/L2 dancing.
The name "Pagetable Virtual Machine" mainly comes from its privilege separation. x86_64 has no ring 1 and doesn't enforce segment limits, so it cannot use the ring1+segment-limit approach of 32-bit Xen PV. Separate pagetables are the only solution for privilege separation.
Except for virtualizing addresses, pagetables are also crucial for mapping the switcher, emulating SMEP/SMAP, and so on.
Show HN: Sqlbind a Python library to compose raw SQL
I think the point about not knowing what SQL an ORM-generated query will produce is the one that resonates with me, as does the dig at the Django ORM.
I love Django, including aspects of the ORM (its very easy to write schema, migrations are pretty easy to use, even queries are very concise and easy to write) but I have no idea what the SQL will look like except in the simplest cases.
I often do not particularly care, and it is easy enough to see them when I do, but I feel a bit icky about the extra layer of stuff I do not really understand.
If you use postgres, psycopg provides `cursor.mogrify(query, params)`, which returns the actual query that will be executed. For example:

    cursor.mogrify(*queryset.query.sql_with_params())

Alternatively, you can set the log level of the `django.db.backends` logger to DEBUG to see all executed queries.

https://docs.djangoproject.com/en/5.0/topics/db/sql/ :
> Django gives you two ways of performing raw SQL queries: you can use `Manager.raw()` to perform raw queries and return model instances, or you can avoid the model layer entirely and execute custom SQL directly.
https://stackoverflow.com/questions/1074212/how-can-i-see-th... has:

    MyModel.objects.all().query.sql_with_params()
    str(MyModel.objects.all().query)

And:

    from django.db import connection
    from psycopg2.extras import DictCursor

    from myapp.models import SomeModel

    queryset = SomeModel.objects.filter(foo='bar')
    sql_query, params = queryset.query.as_sql(None, connection)
    with connection.connection.cursor(cursor_factory=DictCursor) as cursor:
        cursor.execute(sql_query, params)
        data = cursor.fetchall()
But that's still not backend-specific SQL? There should be an interface method for this. Why does psycopg call it mogrify?
https://django-debug-toolbar.readthedocs.io/en/latest/panels... :
> debug_toolbar.panels.sql.SQLPanel: SQL queries including time to execute and links to EXPLAIN each query
But debug toolbars mostly don't work with APIs.
https://github.com/django-query-profiler/django-query-profil... :
> Django query profiler - one profiler to rule them all. Shows queries, detects N+1 and gives recommendations on how to resolve them
https://github.com/jazzband/django-silk :
> Silk is a live profiling and inspection tool for the Django framework. Silk intercepts and stores HTTP requests and database queries before presenting them in a user interface for further inspection
`str(queryset.query)` does not give you executable SQL. Query parameters are not escaped or encoded in that representation of the query. That escaping/interpolation of query parameters is performed by mogrify. I agree the name is odd, and I don't know why they use it other than "transmogrify" being an obscure word for "magically transform".
debug_toolbar and silk are both tools that show you what queries were executed by your running application. They're both good tools, but neither quite solves the problem of giving you the executable SQL for a specific queryset (e.g., in a repl).
"Is ORM still an anti-pattern?" (2023)
> In truth the best way to do the data layer is to use stored procs with a generated binding at the application layer. This is absolutely safe from injection, and is wicked fast as well.
[Who must keep stored procedures in sync with migrations,]
https://news.ycombinator.com/item?id=36497613#36503998
I used the stored procedures approach sometimes. The usual criticisms are
1. Almost nobody in the team knows SQL, stored procedures more so.
2. It's not possible to deploy and test stored procedures with the usual tooling. Actually I don't know what's the usual way to do it in projects that go with that approach.
Perhaps I can help, or maybe learn something.
I don't really understand the confusion here, you write a stored procedure, you document it, test it, peer review it, then stuff it into the database. You then write code that uses it. No sarcasm, but what's the problem?
> you ... test it
I think they're referring to ongoing automated tests, and not one-off tests.
Yes, what is the normal way to unit test stored procedures? Dev teams used to Node, Python, Ruby or whatever language have tools that work with ORMs to set up a test database, run tests and zero it. I did write some stored procedures in a Rails migration many years ago, but that's not the place where one would normally write code. Furthermore there will be many migrations like that, almost one for every change to the stored procedure. The problem is in the frameworks, which are not designed for that. So, what's the recommended way to handle them?
Right, I'm starting to get the picture. I've never worked with an ORM as my SQL is pretty solid. I have to put my hands up here and say I don't know, sorry.
I think I can tell you the answer: you have a very slow system test that spins up a whole real database, sets up all the tables and stored procedures, and fires a load of requests through the system and inspects the responses and the final state of the system.
That works; it's just maybe thousands of times slower than unit tests, and much more work to create and maintain.
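The pattern can still be expressed compactly per test: build a throwaway database, install the database-side logic, load fixtures, assert on results. A hedged sketch using sqlite3 from the stdlib (SQLite has no stored procedures, so a SQL view stands in for one; with Postgres you'd `CREATE PROCEDURE` against a test container instead, and the table/view names here are made up):

```python
import sqlite3

def setup_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    # The "database-side logic" under test:
    db.execute("CREATE VIEW big_orders AS SELECT * FROM orders WHERE total > 100")
    return db

def test_big_orders():
    db = setup_db()  # fresh database per test, so tests stay independent
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(1, 50.0), (2, 150.0), (3, 250.0)])
    rows = db.execute("SELECT id FROM big_orders ORDER BY id").fetchall()
    assert rows == [(2,), (3,)]
    db.close()

test_big_orders()
```

The slowness the parent describes comes from doing the `setup_db` step against a real server process rather than an in-memory stand-in.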
Method identified to double computer processing speeds
They implemented a kernel driver that processes can submit work items to (multiply this vector, sum-reduce, etc.). It balances the computation across available hardware.
Seems really neat, if not a bit click-baity.
The work would have to be large enough to be worth the cost of kernel context switch IIUC.
src/Python/kernels/ground_truth_functions.py: https://github.com/jk78346/SHMT/blob/main/src/Python/kernels... :
> [ blackscholes_2d, dct8x8_2d, dwt, hotsplot_2d, srad_2d, sobel_2d, npu_sobel_2d, minimum_2d, mean_2d, laplacian_2d, fft_2d, histogram_2d ]
src/kernels: https://github.com/jk78346/SHMT/tree/main/src/kernels :
> [ convolutionFFT2D, ]
What is the (platform) kernel context switching overhead tradeoff for what size workloads of the already implemented functions?
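A back-of-envelope model for that tradeoff: offload pays off once the per-element savings amortize the fixed submit/context-switch cost. The numbers below are illustrative assumptions, not measurements of SHMT:

```python
def breakeven_elements(switch_cost_us, cpu_us_per_elem, accel_us_per_elem):
    """Workload size at which offloading beats staying on the CPU."""
    saving = cpu_us_per_elem - accel_us_per_elem
    if saving <= 0:
        return None  # accelerator is not faster per element; offload never wins
    return switch_cost_us / saving

# e.g. 5 us round-trip to the kernel driver, 10 ns/elem on CPU, 2 ns/elem offloaded:
n = breakeven_elements(5.0, 0.010, 0.002)  # -> 625 elements
```

Measuring the real `switch_cost_us` for this driver on a given platform would answer the question per kernel function.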
A History of the TTY
https://en.wikipedia.org/wiki/TTY disambiguates to Teleprinter, TDD, and Computer_terminal; but not Terminal_emulator, Virtual_terminal (VT), Pseudoterminal (PTY), or this article
It does disambiguate to Virtual console, which arguably encompasses all three of those.
Extreme salt-resisting multistage solar distillation with thermohaline convection
"Extreme salt-resisting multistage solar distillation with thermohaline convection" (2023) https://www.cell.com/joule/abstract/S2542-4351(23)00360-4 :
> Recent advances in multistage solar distillation are promising for the sustainable supply of freshwater. However, significant performance degradation due to salt accumulation has posed a challenge for both long-term reliability of solar desalination and efficient treatment of hypersaline discharge. Here, inspired by a natural phenomenon, thermohaline convection, we demonstrate a solar-powered multistage membrane distillation with extreme salt-resisting performance. Using a confined saline layer as an evaporator, we initiate strong thermohaline convection to mitigate salt accumulation and enhance heat transfer. With a ten-stage device, we achieve record-high solar-to-water efficiencies of 322%–121% in the salinity range of 0–20 wt % under one-sun illumination. More importantly, we demonstrate an extreme resistance to salt accumulation with 180-h continuous desalination of 20 wt % concentrated seawater. With high freshwater production and extreme salt endurance, our device significantly reduces the water production cost, paving a pathway toward the practical adoption of passive solar desalination for sustainable water economy.
- "Desalination system could produce freshwater that is cheaper than tap water" (2023) https://news.ycombinator.com/item?id=37681004 https://news.mit.edu/2023/desalination-system-could-produce-...
Distilled water: https://en.wikipedia.org/wiki/Distilled_water
Distillation > Desalination by distillation: https://en.wikipedia.org/wiki/Distillation#Desalination_by_d...
Desalination: https://en.wikipedia.org/wiki/Desalination
Salinity > See also: https://en.wikipedia.org/wiki/Salinity#See_also
TDS: Total Dissolved Solids sensor: https://en.wikipedia.org/wiki/Total_dissolved_solids#Measure...
Thermohaline circulation: https://en.wikipedia.org/wiki/Thermohaline_circulation
https://electrek.co/2023/10/02/mit-solar-power-drinking-wate... :
> They assert that if the system is scaled up to the size of a small suitcase, it could produce about 4 to 6 liters (1 to 1.5 gallons) of drinking water per hour and last several years before requiring replacement parts. At that scale and performance, the system could produce drinking water at a rate and price that’s cheaper than tap water.
> A scaled-up device could passively produce enough drinking water to meet the daily requirements of a small family. It could also supply off-grid coastal communities near seawater.
"140-year-old ocean heat tech could supply islands with limitless energy" if there's a 20° C thermal gradient; because the ocean is a big thermal battery that stores infrared from the sun especially at the surface of the water: https://news.ycombinator.com/item?id=38222695
https://en.wikipedia.org/wiki/Sustainable_Development_Goal_6 :
> Sustainable Development Goal 6 (SDG 6 or Global Goal 6) declares the importance of achieving "clean water and sanitation for all". It is one of the 17 Sustainable Development Goals established by the United Nations General Assembly to succeed the former Millennium Development Goals (MDGs).
Every model learned by gradient descent is approximately a kernel machine (2020)
Thanks for the submission. I keep an eye out for connections to potentially cheaper and simpler methods. The new comparison reminds me of this old paper which compared them to Gaussian networks:
https://arxiv.org/abs/1711.00165
For causality, I'm also keeping an eye out for ways to tie DNN research back into decision trees, esp probabilistic and nested. Or fuzzy logic. Here's an example I saw moving probabilistic methods into decision trees:
http://www.gatsby.ucl.ac.uk/~balaji/balaji-phd-thesis.pdf
Many of these sub-fields develop separately in their own ways. Gotta wonder what recent innovations in one could be ported to another.
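The headline claim can be checked by hand in the simplest setting: a linear model trained by gradient descent from zero init is exactly a kernel machine f(x) = Σᵢ aᵢ·K(xᵢ, x) with the linear kernel, where the aᵢ are accumulated per-example updates. A toy demonstration (data chosen for illustration):

```python
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # y = 2x, so GD should learn w ~= 2

w = 0.0
alphas = [0.0] * len(xs)   # per-example coefficients bookkept during GD
lr = 0.01

for _ in range(2000):
    for i, (x, y) in enumerate(zip(xs, ys)):
        grad = w * x - y          # dLoss/d(wx) for 0.5*(w*x - y)**2
        w -= lr * grad * x        # ordinary GD update on the weight
        alphas[i] -= lr * grad    # the same update, attributed to example i

def f_kernel(x):
    # Kernel-machine form: sum_i a_i * K(x_i, x) with K(u, v) = u*v
    return sum(a * xi * x for a, xi in zip(alphas, xs))

# The direct weight and the kernel-machine form give the same predictions:
assert abs(f_kernel(5.0) - w * 5.0) < 1e-9
```

For deep nets the paper's result is only approximate (the "path kernel" changes during training), but this linear case shows the bookkeeping that makes the correspondence work.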
Does a given model converge after Gaussian blurring? What does it do in the presence of noise, given the curse of dimensionality?
OpenCog integrates PLN and MOSES (~2005).
"Interpretable Model-Based Hierarchical RL Using Inductive Logic Programming" (2021) https://news.ycombinator.com/item?id=37463686 :
> https://en.wikipedia.org/wiki/Probabilistic_logic_network :
>> The basic goal of PLN is to provide reasonably accurate probabilistic inference in a way that is compatible with both term logic and predicate logic, and scales up to operate in real time on large dynamic knowledge bases
Asmoses updates an in-RAM (*) online hypergraph with the graph relations it learns.
CuPy wraps CuDNN.
Re: quantum logic and quantum causal inference: https://news.ycombinator.com/item?id=38721246
From https://news.ycombinator.com/item?id=39255303 :
> Is Quantum Logic the correct propositional logic? Is Quantum Logic a sufficient logic for all things?
A quantum ML task: Find all local and nonlocal state linkages within the presented observations
And then also do universal function approximation
But also biological neurological systems;
Coping strategies: https://en.wikipedia.org/wiki/Coping
Defense Mechanisms > Vaillant's categorization > Level 4: mature: https://en.wikipedia.org/wiki/Defence_mechanism#Level_4:_mat...
"From Comfort Zone to Performance Management" suggests that the Carnall coping cycle coincides with the TPR curve (Transforming, Performing, Reforming, adjourning); that coping with change in systems is linked with performance.
And Consensus; social with nonlinear feedback and technological.
What are the systems thinking advantages in such fields of study?
Systems theory > See also > Glossary,: https://en.wikipedia.org/wiki/Systems_theory#See_also
Wrong thread, my mistake; this comment was for this thread: https://news.ycombinator.com/context?id=39504104
Institutions try to preserve the problem to which they are the solution
Systemantics by John Gall has some insightful and surprising gems that feel related to The Shirky Principle, I guess because they're both related to complex wetware systems.
> Complex Systems Tend To Oppose Their Own Proper Function. My favorite axiom. Your city has a problem with trash building up on the streets so it sets up a waste management company. The company starts out by collecting trash daily, but then they shift to a Tuesday-only schedule. Next, you get a notice saying that the company will no longer service your building unless you buy their standardized garbage cans (to ensure that the robotic arms on their trucks can pick them up). Eventually, the waste management union goes on strike and the trash starts piling up on the street again anyways.
> People In Systems Do Not Do What The System Says They Are Doing. The stated purpose of a king is to rule a country, but in reality they spend a lot of time fighting off usurpers.
This one feels related to the Shirky Principle:
> A System Continues To Do Its Thing, Regardless Of Need. The Selective Service System continues to require all 18-year-old male US citizens to register for the draft, even though the US hasn’t had a draft in 51 years.
https://www.biodigitaljazz.net/blog/systemantics.html (one of my many blogs)
Systemantics (1977, 1986, 2002) > Contents: https://en.wikipedia.org/wiki/Systemantics#Contents
New printing might be coming up! His wife (and heir) suggested to a friend that he might be invited to write the foreword to the next one. He has cases and cases of books in his basement, and has given them to all his students for years :)
Linked Data! IDK YAML-LD for TinaCMS in Git, but then also Jupyter-Book because it supports notebooks and Index generation with Sphinx.
Aarne-Thompson-Uther (ATU) character/plot story codes as Linked Data would be neat, too. https://en.wikipedia.org/wiki/Aarne%E2%80%93Thompson%E2%80%9...
IIRC I sent an email to a robotics team about cataloguing metadata for supported procedures as (JSON-LD) Linked Data; there also so that it's easy to add attributes and also to revise the schema of Classes and Properties.
Compared to Ctrl-F'ing a PDF copy of an ebook,
Client-side JS to fuzzy search (and auto complete) over just the names of the patterns/headings in the book would be cool; and then also search metadata attributes of each.
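A stdlib sketch of that fuzzy lookup over heading names (a client-side JS build would use something like Fuse.js instead; the headings here are borrowed from the Systemantics thread above as sample data):

```python
from difflib import get_close_matches

headings = [
    "Complex Systems Tend To Oppose Their Own Proper Function",
    "People In Systems Do Not Do What The System Says They Are Doing",
    "A System Continues To Do Its Thing, Regardless Of Need",
]

def fuzzy(query, n=3, cutoff=0.3):
    """Rank headings by similarity to the query; drop weak matches."""
    return get_close_matches(query, headings, n=n, cutoff=cutoff)

matches = fuzzy("systems oppose their function")
```

Searching metadata attributes per heading would just mean running the same matcher over a second list of attribute strings and merging the results.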
The facts in Mediawiki (Wikipedia,) infoboxes are regularly scraped by dbpedia. Wikidata is also a Wikipedia project, but with schema.
Dbpedia:
dbpedia.org/page/Distributed_algorithm: https://dbpedia.org/page/Distributed_algorithm
dbpedia.org/resource/Category:Distributed_algorithms: http://dbpedia.org/resource/Category:Distributed_algorithms
dbpedia.org/page/Category:Anti-patterns: https://dbpedia.org/page/Category:Anti-patterns
IIRC there used to be a longer list of {software, and project management} antipatterns on wikipedia? It may have been unfortunately and sort of tragically removed due to being original research without citations.
Anti-pattern: https://en.wikipedia.org/wiki/Anti-pattern
Also there's an Antipatterns catalog wiki: https://wiki.c2.com/?AntiPatternsCatalog
There are probably more useful systems patterns to be mined from: the Fowler patterns books like "Patterns of Distributed Systems (2022)" [1] and "Patterns of Enterprise Architecture", Lamport's "Concurrency: The Works of Leslie Lamport", Leslie Valient's Distributed Systems work,
[1] https://news.ycombinator.com/item?id=38234304
Perhaps there's also general systems theory insight to be gained from limits and failures in [classical and quantum] Universal Function Approximation; general AI/ML limits and Systemantics.
Universal approximation theorem; simulacra: https://en.wikipedia.org/wiki/Universal_approximation_theore...
Whoops, I mixed up the comment forms:
More notes: https://news.ycombinator.com/item?id=39497800
GenAI and erroneous medical references
If anyone is interested - my startup which did exactly this was acquired 8 months ago. As many mentioned - the sauce is in the RAG implementation and the curation of the base documents. As far as I can tell, the software is now covering ~11m lives or so and going strong - the company already got the acquisition price back and some more. I was even asked to come support an initiative to move from RAG to long context + multi-agents.
I know it works very well. There is lots of literature from the medical community where they don't consult any actual AI engineers, and lots of literature from the tech community where no clinicians are to be seen. Take both with a massive grain of salt.
If you don’t mind sharing, what lesser known RAG tricks did you use to ensure the correct information was going through?
Simple similarity search is not enough.
Main thing is to curate a good set of documents to start with. Garbage in (like the Bing/Google results this study used) --> garbage out.
From the technical side, the largest mistake people make is abstracting the process with LangChain and the like, and not hyper-optimizing every step with trial and error.
From [1], pdfGPT, knowledge_gpt, and paperai are open source. I don't think any are updated for a 10M token context limit (like Gemini) yet either.
[1] https://news.ycombinator.com/item?id=39363115
From https://news.ycombinator.com/item?id=39492995 :
> On why code LLMs should be trained on the edges between tests and the code that they test, that could be visualized as [...]
"Find tests for this code"
"Find citations for this bias"
Perhaps this would be best:
"Automated Unit Test Improvement Using Large Language Models at Meta" (2024) https://news.ycombinator.com/item?id=39416628
From https://news.ycombinator.com/item?id=37463686 :
> When the next token is a URL, and the URL does not match the preceding anchor text.
> Additional layers of these 'LLMs' could read the responses and determine whether their premises are valid and their logic is sound as necessary to support the presented conclusion(s), and then just suggest a different citation URL for the preceding text
Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale
https://www.sciencealert.com/physicists-have-figured-out-a-w...
> To circumvent this dilemma, Fuchs and his team used something called a superconducting magnetic trap. A small trap made of tantalum is cooled to a critical temperature of 4.48 Kelvin (-268.67 Celsius, or -451.6 Fahrenheit).
> In the chamber, the particle is levitated. This consists of three 0.25-millimeter neodymium magnet spheres and one 0.25-millimeter glass sphere stuck together to create one particle around 0.43 grams in mass.
> The apparatus is suspended from springs in a mass spring system to shield the experiment from external vibrations, and the cryostat is placed on pneumatic dampers to limit vibrations from the building.
"Measuring gravity with milligram levitated masses" (2024) https://www.science.org/doi/10.1126/sciadv.adk2949 :
> Abstract: Gravity differs from all other known fundamental forces because it is best described as a curvature of space-time. For that reason, it remains resistant to unifications with quantum theory. Gravitational interaction is fundamentally weak and becomes prominent only at macroscopic scales. This means, we do not know what happens to gravity in the microscopic regime where quantum effects dominate and whether quantum coherent effects of gravity become apparent. Levitated mechanical systems of mesoscopic size offer a probe of gravity, while still allowing quantum control over their motional state. This regime opens the possibility of table-top testing of quantum superposition and entanglement in gravitating systems. Here, we show gravitational coupling between a levitated submillimeter-scale magnetic particle inside a type I superconducting trap and kilogram source masses, placed approximately half a meter away. Our results extend gravity measurements to low gravitational forces of attonewton and underline the importance of levitated mechanical sensors.
From "Show HN: Open-source digital stylus with six degrees of freedom" (2023) https://news.ycombinator.com/item?id=38246722 re: RSSI hacks, IMU Inertial Measurement Unit, Quantum Navigation; and Rydberg antenna :
> Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform/or and other methods?
Ask HN: Anyone use a code to mindmap/flowchart tool?
Is there any really good code to mindmap tool? So that it can take a large codebase and split it into a mindmap?
markmap: markdown + mindmap: https://markmap.js.org/
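One low-tech route from code to a markmap mindmap: walk the AST and emit nested markdown headings. A minimal sketch (modules, classes, and functions only; `to_markmap` is a hypothetical helper, not an existing tool):

```python
import ast
import textwrap

def to_markmap(source: str, title: str = "module") -> str:
    """Emit markmap-compatible markdown headings for a module's
    classes and functions (a rough code-to-mindmap sketch)."""
    tree = ast.parse(source)
    lines = [f"# {title}"]
    for node in tree.body:
        if isinstance(node, ast.ClassDef):
            lines.append(f"## class {node.name}")
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    lines.append(f"### {item.name}()")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            lines.append(f"## {node.name}()")
    return "\n".join(lines)

src = textwrap.dedent("""
    class Greeter:
        def hello(self): ...
    def main(): ...
""")
print(to_markmap(src, "greetings"))
```

Feeding the output to markmap renders each heading level as a mindmap branch; a real tool would also need imports (the DAG edges) and cross-module references.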
On why code LLMs should be trained on the edges between tests and the code that they test, that could be visualized as a mindmap DAG with cycles
django_extensions/utils/dia2django.py: https://github.com/django-extensions/django-extensions/blob/...
django_extensions/management/modelviz.py: https://github.com/django-extensions/django-extensions/blob/...
viewflow supports BPMN: https://github.com/viewflow/viewflow https://github.com/viewflow/cookbook/blob/main/guardian_perm...
Wikipedia mentions security concerns with low-code and no-code apps; and it's rare to impossible for a tool to support round-trip from code -> diagram UI -> code, and then test for isomorphism or functional equivalence after normalization.
https://news.ycombinator.com/item?id=39139198 :
> All program evolution algorithms tend to produce bloated, convoluted, redundant programs ("spaghetti code"). To avoid this, MOSES performs reduction at each stage, to bring the program into normal form. The specific normalization used is based on Holman's "elegant normal form", which mixes alternate layers of linear and non-linear operators. The resulting form is far more compact than, say, for example, boolean disjunctive normal form. Normalization eliminates redundant terms, and tends to make the resulting code both more human-readable, and faster to execute.
"Elements of an expert system for determining the satisfiability of general Boolean expressions" (1990) https://dl.acm.org/doi/10.5555/100095 :
> Neither the constraint calculus nor the satisfiability-determination (SD) algorithm require that an expression be in either Conjunctive or Disjunctive Normal Form.
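The size blow-up that compact normal forms trade against is easy to see with a truth-table construction; a stdlib-only sketch (this builds plain DNF, not Holman's elegant normal form):

```python
from itertools import product

def truth_table_dnf(f, names):
    """Build a disjunctive normal form for a boolean function by
    enumerating its truth table: one conjunctive term per satisfying
    row, so the result can grow exponentially in len(names)."""
    terms = []
    for values in product([False, True], repeat=len(names)):
        if f(*values):
            lits = [n if v else f"~{n}" for n, v in zip(names, values)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) or "False"

xor = lambda a, b: a != b
print(truth_table_dnf(xor, ["a", "b"]))  # (~a & b) | (a & ~b)
```

Normalization schemes like ENF exist precisely because this exhaustive form is correct but far from compact or readable.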
IR: Intermediate Representation: https://en.wikipedia.org/wiki/Intermediate_representation
How many ways can a compiler generate a code graph from a code graph?
Shattering the 'copper or optics' paradigm: humble plastic waveguides outperform
> Point2’s E-Tube provides a low-cost, low-loss broadband dielectric waveguide solution that could serve as an advanced alternative to existing electrical and optical interconnects in high-speed, short-reach communication links. It’s 80% lighter and 50% less bulky than copper cables, and could reduce power consumption and the cost of optical cables by 50%, with picosecond latencies that are three orders of magnitude better.
Terahertz quantum resonators currently have a maximum transmission distance of a hundred micrometers:
- "Coherent Interaction of a Few-Electron Quantum Dot with a THz Optical Resonator" (2024) https://news.ycombinator.com/item?id=39365579
Show HN: Consol3 – A 3D engine for the terminal that executes on the CPU
Hi all
This has been my hobby project for quite a few years now
It started as a small engine to serve as a sandbox to try out new 3d graphics ideas
After adding many features throughout the years and re-writing the entire engine a few times, this is the latest state
It currently supports loading models with animations, textures, lights, shadow maps, normal maps, and some other goodies
I've also recently added voxel raymarching as an alternative renderer, along with a fun physics simulation :)
Textual is not 3d, but is also great for TUIs.
Textualize/Frogmouth has a TUI tree control: https://github.com/Textualize/frogmouth
FWICS browsh supports WebGL over SSH/MoSH https://www.brow.sh/docs/introduction/ :
> The terminal client updates and renders in realtime so that, for instance, you can watch videos. It uses the UTF-8 half-block trick (▄) to get 2 colours from every character cell, thus simulating basic graphics.
https://github.com/fathyb/carbonyl :
> Carbonyl originally started as html2svg and is now the runtime behind it.
Always wondered how brew.sh added the brew sprite there; that's real nice.
TIL that e.g. Kitty term can basically framebuffer modified Chrome?
https://github.com/chase/awrit :
> Yep, actual Chromium being rendered in your favorite terminal that supports the Kitty terminal graphics protocol.
FWIW Cloudflare has clientless Remote Browser Isolation that also splits the browser at the rendering engine.
A TUI Manim renderer would be neat. Re: Teaching math with Manim and interactive 3d: https://github.com/bernhard-42/jupyter-cadquery/issues/99
What would you add to make it easier to teach with this entirely CPU + software rendering codebase?
What prompts for learning would you suggest?
- Pixar in a Box, Wikipedia history of CG industry: https://westurner.github.io/hnlog/#comment-36265807
- "Rotate a wireframe cube or the camera perspective with just 2d pixels to paint to; And then rotate the cube about a point other than the origin, and then move the camera while the cube is rotating"
- OTOH, ManimML, Yellowbrick, and the ThreeJS Wave/Particle simulator might be neat with a slow terminal framebuffer too
The War over Burying Nuclear Waste in America's Busiest Oil Field
Copenhagen Atomics' reactor fits in a 40ft shipping container and burns nuclear waste and Thorium, for example. https://www.copenhagenatomics.com/technology/ :
> Burning spent nuclear fuel by kickstarting our reactors on long-lived "waste" left in spent fuel we're able to convert it to fission products, which only needs to be stored for 300 years
There was a presentation that suggested that a cue ball of Thorium would supply enough energy for the average bear's lifetime energy use?
Nuclear transmutation of nuclear waste would presumably be handled like any other processing capability. If there is laser nuclear transmutation of nuclear waste, it's presumably a near-range capability; and that's basically a more efficient reactor design FWIU.
There are microbes that eat waste and self replicate.
Nuclear Fusion can produce He3 and ship that around as a critical input instead of enriched Uranium or nastier FWIU.
Almost everything on this planet survives directly or indirectly off solar, with thermophiles that transmute old wells into crystals being an exception.
$100 ball of Thorium = 100 years of energy.
A newer video:
"THORIUM: World's CHEAPEST Energy!" https://youtube.com/watch?v=U434Sy9BGf8
Inside the proton, the ‘most complicated thing you could possibly imagine’
I just had a really stupid thought, after finishing reading the article.
So, the electron is an elementary particle, right? Compared to the proton, the electron is "simple", yes?
Despite this difference in complexity, an electron has a charge of -e and a proton has a charge of +e. They are exactly complementary regarding charge (if I am understanding right, I am not a smart person).
my question is... why? why must protons and electrons be perfectly complementary regarding charge? if the proton is this insanely complex thing, by what rule does it end up equaling exactly the opposite charge of an electron? why not a charge of +1.8e, or +3e, or 0.1666e, etc? Certainly it is convenient that a proton and electron complement each other, but what makes that the case? Does this question even make sense?
so, there's a concept of a "positron", which I can understand - of course it has charge +e, it is the "opposite" of an electron. it is an anti-electron. at least that makes some kind of sense. but a proton is made up of this complex soup of other elementary particles following all these crazy rules, and yet it also ends up being exactly +e.
Are there intermediate [electron,] charge states between + and - in superfluids and/or superconductors?
Is there superposition with electron charge states?
The typical model of superconductivity says that electrons in the material pair up to form a quasiparticle -- the "cooper pair" -- with new properties, namely not experiencing resistance. The original quantized charge of the electrons still adds up to the same amount.
Unlike protons and neutrons, electrons are considered elementary particles that can't be broken down any further, so their charge can not be "divided" into something less than 1.
Quantum Hall effect: https://en.wikipedia.org/wiki/Quantum_Hall_effect :
> The fractional quantum Hall effect is more complicated and still considered an open research problem. [2] Its existence relies fundamentally on electron–electron interactions. In 1988, it was proposed that there was quantum Hall effect without Landau levels. [3] This quantum Hall effect is referred to as the quantum anomalous Hall (QAH) effect. There is also a new concept of the quantum spin Hall effect which is an analogue of the quantum Hall effect, where spin currents flow instead of charge currents. [4]
Fractional quantum Hall effect: https://en.wikipedia.org/wiki/Fractional_quantum_Hall_effect :
> The fractional quantum Hall effect (FQHE) is a physical phenomenon in which the Hall conductance of 2-dimensional (2D) electrons shows precisely quantized plateaus at fractional values of e^{2}/h, where e is the electron charge and h is the Planck constant. It is a property of a collective state in which electrons bind magnetic flux lines to make new quasiparticles, and excitations have a fractional elementary charge and possibly also fractional statistics
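The plateau values are simple arithmetic once e and h are fixed; a quick sketch of sigma_xy = nu * e^2/h using the exact (2019 SI redefinition) constants:

```python
# Fractional quantum Hall plateaus: sigma_xy = nu * e^2 / h.
# Both constants are exact by definition in SI since 2019.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

conductance_quantum = e**2 / h  # ~3.874e-5 siemens
for nu in (1, 1/3, 2/5):        # integer and two fractional filling factors
    print(nu, nu * conductance_quantum)
```

The fractional filling factors (1/3, 2/5, ...) are the experimentally observed plateaus; the fractional *charge* of the quasiparticle excitations is a separate (related) result.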
westurner.github.io/hnlog/#story-38139569 ctrl-f "quantum Hall", "hall effect" :
- "Electrical switching of the edge current chirality in quantum Hall insulators" (2023) https://www.nature.com/articles/s41563-023-01694-y ( https://news.ycombinator.com/item?id=38139569 )
But that's not elementary charge.
"Inside the proton, the ‘most complicated thing you could possibly imagine’" (2024) https://www.quantamagazine.org/inside-the-proton-the-most-co... https://news.ycombinator.com/item?id=39374020 :
> Despite this difference in complexity, an electron has a charge of -e and a proton has a charge of +e. They are exactly complementary regarding charge (if I am understanding right, I am not a smart person).
> my question is... why? why must protons and electrons be perfectly complementary regarding charge? if the proton is this insanely complex thing, by what rule does it end up equaling exactly the opposite charge of an electron? why not a charge of +1.8e, or +3e, or 0.1666e, etc? Certainly it is convenient that a proton and electron complement each other, but what makes that the case?
"Electrons become fractions of themselves in graphene, study finds" https://phys.org/news/2024-02-electrons-fractions-graphene.a... :
"Fractional quantum anomalous Hall effect in multilayer graphene" (2024) https://www.nature.com/articles/s41586-023-07010-7
Bad property debt exceeds reserves at largest US banks
I want to see this play out without government intervention. If banks need to take losses, so be it. This needs to be unfettered and not be an excuse to lower interest rates.
The TARP bailouts showed all of us that bailing out banks in bad situations does little to help the average citizen anyway. Let them take the losses. Why on earth did we train banks that they can have all the upside but have the public eat the downside if a financial crisis ever arises out of their own behavior?
Because not helping those folks had systemic risk, they were deemed "Too Big to Fail" and given interest free loans after they cut taxes to pay for their surprise necessary multi-trillion dollar war.
And Geithner saved us all as he was asked to do.
Is it the same people tasked with getting inflation down and weathering the induced bubbles this time? They have gotten inflation down, but they can't hide the unpaid expenses of the "scorched earth" and "starve the beast" experts at this.
> Geithner saved us all as he was asked to do
Geithner had nothing to do with TARP; the SEC was barely involved. This article has practically nothing to do with the SEC outside reporting requirements.
We’ve also gone through a round of bank failures, largely due to risk managers who couldn’t contemplate America raising rates, without moral hazard.
[dead]
Automated Unit Test Improvement Using Large Language Models at Meta
"Automated Unit Test Improvement using Large Language Models at Meta" (2024) https://arxiv.org/abs/2402.09171 :
> This paper describes Meta's TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination. [...] We believe this is the first report on industrial scale deployment of LLM-generated code backed by such assurances of code improvement.
Coverage-guided unit test improvement might [with LLMs] be efficient too.
https://github.com/topics/coverage-guided-fuzzing :
- e.g. Google/syzkaller is a coverage-guided syscall fuzzer: https://github.com/google/syzkaller
- Gitlab CI supports coverage-guided fuzzing: https://docs.gitlab.com/ee/user/application_security/coverag...
- oss-fuzz, osv
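The coverage-guided loop itself is small; a toy sketch, where the target's return value stands in for a real branch-coverage bitmap (nothing like syzkaller's actual kcov feedback):

```python
import random

def magic_check(data: bytes) -> str:
    """Toy target: 'crashes' only on inputs starting with b'MZ'."""
    if data[:1] == b"M":
        if data[1:2] == b"Z":
            raise RuntimeError("boom")
        return "reached M"
    return "start"

def fuzz(target, seeds, iterations=50000, seed=0):
    """Minimal coverage-guided loop: mutate corpus entries and keep
    mutants that produce a not-yet-seen coverage signal."""
    rng = random.Random(seed)
    corpus = [bytearray(s) for s in seeds]
    seen = set()
    for _ in range(iterations):
        mutant = bytearray(rng.choice(corpus))
        pos = rng.randrange(len(mutant) + 1)
        if pos == len(mutant):
            mutant.append(rng.randrange(256))  # grow the input
        else:
            mutant[pos] = rng.randrange(256)   # flip one byte
        try:
            signal = target(bytes(mutant))
        except RuntimeError:
            return bytes(mutant)  # crashing input found
        if signal not in seen:
            seen.add(signal)      # new coverage: keep this mutant
            corpus.append(mutant)
    return None
```

The coverage feedback is what makes the b"MZ" prefix reachable in thousands of iterations rather than the ~2^16 expected from blind random search.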
Additional ways to improve tests:
Hypothesis and pynguin generate tests from type annotations.
There are various tools to generate type annotations for Python code;
> pytype (Google) [1], PyAnnotate (Dropbox) [2], and MonkeyType (Instagram) [3] all do dynamic / runtime PEP-484 type annotation type inference [4] to generate type annotations. https://news.ycombinator.com/item?id=39139198
icontract-hypothesis generates tests from icontract DbC Design by Contract type, value, and invariance constraints specified as precondition and postcondition @decorators: https://github.com/mristin/icontract-hypothesis
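A stdlib-only sketch of the idea behind annotation-driven test generation (toy strategies and a hypothetical `check_property` helper, not Hypothesis's or pynguin's actual APIs):

```python
import random
import typing

# Toy strategies keyed by annotation, in the spirit of Hypothesis/pynguin.
STRATEGIES = {
    int: lambda rng: rng.randint(-10**6, 10**6),
    str: lambda rng: "".join(rng.choice("abc xyz") for _ in range(rng.randrange(8))),
}

def check_property(fn, prop, trials=200, seed=0):
    """Draw random arguments from fn's type annotations and assert a
    property over (args, result): tiny annotation-driven property testing."""
    rng = random.Random(seed)
    hints = typing.get_type_hints(fn)
    hints.pop("return", None)
    for _ in range(trials):
        args = {name: STRATEGIES[tp](rng) for name, tp in hints.items()}
        assert prop(args, fn(**args)), args

def count_spaces(s: str) -> int:
    return s.count(" ")

# property: the count is bounded by the input length
check_property(count_spaces, lambda args, out: 0 <= out <= len(args["s"]))
```

The real libraries add shrinking of failing examples and strategies for nested/generic types, which is most of the engineering.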
Nagini and deal-solver attempt to Formally Verify Python code with or without unit tests: https://news.ycombinator.com/item?id=39139198
Additional research:
"Fuzz target generation using LLMs" (2023) https://google.github.io/oss-fuzz/research/llms/target_gener... https://security.googleblog.com/2023/08/ai-powered-fuzzing-b... https://hn.algolia.com/?q=AI-Powered+Fuzzing%3A+Breaking+the...
OSSF//fuzz-introspector//doc/Features.md: https://github.com/ossf/fuzz-introspector/blob/main/doc/Feat...
https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=Fuz... :
- "Large Language Models Based Fuzzing Techniques: A Survey" (2024) https://arxiv.org/abs/2402.00350 :
> This survey provides a systematic overview of the approaches that fuse LLMs and fuzzing tests for software testing. In this paper, a statistical analysis and discussion of the literature in three areas, namely LLMs, fuzzing test, and fuzzing test generated based on LLMs, are conducted by summarising the state-of-the-art methods up until 2024
Thanks for sharing this. By far the best tool I've seen in the market centered around Code Integrity is CodiumAI (https://www.codium.ai/). They generate unit test based on entire code repos. Also integrates into SDLC through a PR Agent on GitHub or GitLab. My whole team uses them.
RoR Debugbar
If anyone is interested in this sort of thing I started a proposal with a list of ideas and features around a Rails debug toolbar at: https://discuss.rubyonrails.org/t/proposal-a-fully-featured-...
NOTE: I'm not the author of this tool but they did end up replying in that thread. I didn't even know their tool existed until I happened to stumble upon it on Twitter well after my post.
It would be neat if something like this were built out and included in Rails at some point. Especially with Rails 8 focusing on tool integrations and overall developer happiness.
Django Debug Toolbar > Panels > Default built-in panels, Third-party panels, API for third-party panels: https://django-debug-toolbar.readthedocs.io/en/latest/panels...
Firelogger.py//middleware.py: https://github.com/binaryage/firelogger.py/blob/master/firep...
Distributed tracing w/ OpenTelemetry, Jaeger; Metrics with ~JMX
W3C Trace Context v1: https://www.w3.org/TR/trace-context-1/#overview
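The traceparent header that Trace Context v1 defines is easy to validate; a sketch following the spec's grammar (2-hex version, 32-hex trace-id, 16-hex parent-id, 2-hex flags; all-zero IDs are invalid):

```python
import re

# W3C Trace Context v1: version "-" trace-id "-" parent-id "-" trace-flags
TRACEPARENT = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header):
    """Return the traceparent fields as a dict, or None if invalid."""
    m = TRACEPARENT.match(header)
    if not m:
        return None
    fields = m.groupdict()
    # all-zero trace-id / parent-id are invalid per the spec
    if set(fields["trace_id"]) == {"0"} or set(fields["parent_id"]) == {"0"}:
        return None
    return fields

hdr = "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
print(parse_traceparent(hdr)["trace_id"])
```

This is the field a debug toolbar panel would read to link a request to its distributed trace in Jaeger or another OpenTelemetry backend.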
Reasons to heat your home using infrared fabric radiant heat
From https://theconversation.com/five-reasons-to-heat-your-home-u... :
> 2. Simple to install
> Infrared fabric looks like a roll of slightly stiff wallpaper. It’s essentially a graphene sandwich, a thin film of carbon between two sheets of paper that conducts low voltage electricity and emits infrared heat, like the sun, but without the light or harmful ultraviolet.
> A room’s ceiling area emits the right amount of heat for a room, making installation very simple in any property, irrespective of its construction, shape or size. It’s little more than a wallpapering job with a click together wiring connection. [...]
> 3. Affordable heat
> Infrared fabric is affordable to install and maintain due to its simplicity with a total cost of around £100 per sq metre for a full system. And it’s quite indestructible – it can have holes cut out of it and can get wet in floods without any danger to occupants or damage to the material. It’s also affordable to run.
Magika: AI powered fast and efficient file type identification
This looks cool. I ran this on some web crawl data I have locally, so: all files you'd find on regular websites; HTML, CSS, JavaScript, fonts etc.
It identified some simple HTML files (html, head, title, body, p tags and not much else) as "MS Visual Basic source (VBA)", "ASP source (code)", and "Generic text document" where the `file` utility correctly identified all such examples as "HTML document text".
Some woff and woff2 files it identified as "TrueType Font Data", others are "Unknown binary data (unknown)" with low confidence guesses ranging from FLAC audio to ISO 9660. Again, the `file` utility correctly identifies these files as "Web Open Font Format".
I like the idea, but the current implementation can't be relied on IMO; especially not for automation.
A minor pet peeve also: it doesn't seem to detect when its output is a pipe and strip the shell colour escapes resulting in `^[[1;37` and `^[[0;39m` wrapping every line if you pipe the output into a vim buffer or similar.
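The woff/woff2 misses above are the kind of thing a plain magic-byte check catches; a minimal sketch of a `file`-style first pass (signature list abbreviated; this is not Magika's actual pipeline, which is a learned classifier on top of content bytes):

```python
# A few well-known magic numbers at offset 0, including the WOFF
# signatures from the W3C font specs.
SIGNATURES = [
    (b"wOF2", "Web Open Font Format 2.0"),
    (b"wOFF", "Web Open Font Format"),
    (b"\x89PNG\r\n\x1a\n", "PNG image data"),
    (b"%PDF-", "PDF document"),
    (b"PK\x03\x04", "Zip archive data"),
]

def sniff(data: bytes) -> str:
    for magic, name in SIGNATURES:
        if data.startswith(magic):
            return name
    return "data"  # unknown binary, like file(1)'s fallback

print(sniff(b"wOF2" + b"\x00" * 16))  # Web Open Font Format 2.0
```

A fair hybrid would run a signature pass first and only fall back to the model for signature-less formats (plain text, source code, CSVs).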
Thanks for the feedback -- we will look into it. If you can share the list of URLs with us, that would be very helpful so we can reproduce -- send us an email at magika-dev@google.com if that is possible.
For crawling we have planned a head-only model to avoid fetching the whole file, but it is not ready yet -- we weren't sure what use-cases would emerge, so it is good to know that such a model might be useful.
We mostly use Magika internally to route files for AV scanning as we wrote in the blog post, so it is possible that, despite our best effort to test Magika extensively on various file types, it is not as good on font formats as it should be. We will look into it.
Thanks again for sharing your experience with Magika this is very useful.
What is the MIME type of a .tar file; and what are the MIME types of the constituent concatenated files within an archive format like e.g. tar?
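A stdlib sketch of one answer: the archive itself is conventionally application/x-tar, while each member keeps its own type, which here is only guessed per-member by extension (a real tool would sniff member headers too):

```python
import io
import mimetypes
import tarfile

# Build a small in-memory tar with two members of different types.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("a.html", b"<p>hi</p>"), ("b.png", b"\x89PNG")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

print(mimetypes.guess_type("archive.tar"))  # ('application/x-tar', None)

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        print(member.name, mimetypes.guess_type(member.name)[0])
```

So "the MIME type of a .tar" is one label for the container; the constituent files need a second pass, which is exactly what tools like hachoir-subfile do by signature instead of extension.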
hachoir/subfile/main.py: https://github.com/vstinner/hachoir/blob/main/hachoir/subfil...
File signature: https://en.wikipedia.org/wiki/File_signature
PhotoRec: https://en.wikipedia.org/wiki/PhotoRec
"File Format Gallery for Kaitai Struct"; 185+ binary file format specifications: https://formats.kaitai.io/
Cross-reference table: https://formats.kaitai.io/xref.html
AntiVirus software > Identification methods > Signature-based detection, Heuristics, and ML/AI data mining: https://en.wikipedia.org/wiki/Antivirus_software#Identificat...
Executable compression; packer/loader: https://en.wikipedia.org/wiki/Executable_compression
Shellcode database > MSF: https://en.wikipedia.org/wiki/Shellcode_database
sigtool.c: https://github.com/Cisco-Talos/clamav/blob/main/sigtool/sigt...
clamav sigtool: https://www.google.com/search?q=clamav+sigtool
https://blog.didierstevens.com/2017/07/14/clamav-sigtool-dec... :
sigtool --find-sigs "$name" | sigtool --decode-sigs
List of file signatures: https://en.wikipedia.org/wiki/List_of_file_signatures

And then also clusterfuzz/oss-fuzz scans .txt source files with (sandboxed) Static and Dynamic Analysis tools; `debsums`/`rpm -Va` verify that files on disk have the same (GPG signed) checksums as the package they are supposed to have been installed from; a file-based HIDS builds a database of file hashes and compares what's on disk in a later scan with what was presumed good; ~gdesktop LLM tools scan every file; and there are extended filesystem attributes for label-based MAC systems like SELinux; oh, and NTFS ADS.
A sufficient cryptographic hash function yields random bits with uniform probability. DRBG Deterministic Random Bit Generators need high entropy random bits in order to continuously re-seed the RNG random number generator. Is it safe to assume that hashing (1) every file on disk, or (2) any given file on disk at random, will yield random bits with uniform probability; and (3) why Argon2 instead of e.g. only two rounds of SHA256?
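On (1) and (2): a good hash's *output* looks uniform regardless of input, but the unpredictability comes entirely from the input, so hashing files an attacker can enumerate adds no real entropy to a DRBG seed. A small sketch comparing byte-level Shannon entropy of a redundant input and its digest:

```python
import hashlib
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; 8.0 is the maximum, for uniformly random bytes."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Low-entropy "file" contents vs. their SHA-256 digest: the digest
# *looks* uniform even though the input is trivially guessable, which
# is why hashing files on disk is not a substitute for a seeded DRBG.
redundant = b"A" * 4096
digest = hashlib.sha256(redundant).digest()
print(round(shannon_entropy(redundant), 3))  # 0.0
print(round(shannon_entropy(digest), 3))
```

On (3): Argon2 is deliberately slow and memory-hard, which matters when the input (a password) is low-entropy and brute-forceable; two rounds of SHA-256 are cheap enough to try billions of guesses per second on GPUs.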
https://github.com/google/osv.dev/blob/master/README.md#usin... :
> We provide a Go based tool that will scan your dependencies, and check them against the OSV database for known vulnerabilities via the OSV API. [...]
That works with package metadata, though, not a (file hash, package) database that could be generated from OSV and the actual package files instead of their manifests of already-calculated checksums.
Might as well be heating a pool on the roof with all of this waste heat from hashing binaries built from code of unknown static and dynamic quality.
Add'l useful formats:
> Currently it is able to scan various lockfiles, debian docker containers, SPDX and CycloneDB SBOMs, and git repositories
Things like bittorrent magnet URIs, Named Data Networking, and IPFS are (file-hash based) "Content addressable storage": https://en.wikipedia.org/wiki/Content-addressable_storage
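A minimal sketch of the content-addressable idea those systems share -- the key *is* the hash of the value -- without any particular CID or magnet URI format:

```python
import hashlib

class ContentStore:
    """Content-addressable storage sketch: identical content
    deduplicates, and an address can never silently point at
    different bytes (roughly as in IPFS CIDs or magnet URIs)."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data
        return address

    def get(self, address: str) -> bytes:
        data = self._blobs[address]
        # integrity checking comes for free: re-hash on read
        assert hashlib.sha256(data).hexdigest() == address
        return data

store = ContentStore()
addr = store.put(b"hello world")
assert store.get(addr) == b"hello world"
assert store.put(b"hello world") == addr  # same content, same address
```

Real systems add chunking (so large files hash as a tree of blocks) and a multihash prefix so the hash function can be upgraded.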
I’m not sure what this comment is trying to say
File-based hashing is done in so many places; there's so much heat.
Sub-file-based hashing with feature engineering is necessary for AV, which must take packing, obfuscation, loading, and dynamic analysis into account in addition to zip archives and file magic numbers.
AV/AntiVirus applications with LLMs: what do you train them on? What are some of the existing signature databases?
https://SigStore.dev/ (The Linux Foundation) also has a hash-file inverted index for released artifacts.
Also, OTOH, with a time limit:
1. What file is this? Dirname, basename, hash(es)
2. Is it supposed to be installed at such a path?
3. Per its header, is the file an archive or an image or a document?
4. What file(s) and records and fields are packed into a file, and what transforms were the data transformed with?
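The first three checks can be sketched as a time-boxed triage pass; `manifest` here is a hypothetical path -> sha256 map in the spirit of debsums, and the archive check covers only zip and gzip magic:

```python
import hashlib
from pathlib import Path

def triage(path: str, data: bytes, manifest: dict) -> dict:
    """Cheap first pass over a file: identity (names + hash),
    expected location, and whether the header says 'container'."""
    sha256 = hashlib.sha256(data).hexdigest()
    p = Path(path)
    return {
        "dirname": str(p.parent),
        "basename": p.name,
        "sha256": sha256,
        # does the hash match what the package manifest says should be here?
        "expected_here": manifest.get(path) == sha256,
        # zip (PK\x03\x04) or gzip (\x1f\x8b) magic at offset 0
        "is_archive": data[:4] == b"PK\x03\x04" or data[:2] == b"\x1f\x8b",
    }
```

Question 4 (what's packed inside, and with which transforms) is the expensive recursive step that the time limit usually cuts off.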
Ledger: Stripe's system for tracking and validating money movement
For those curious, YC funded an open source financial ledger company in S21: https://github.com/formancehq/stack
Did they implement Interledger Protocol (ILP) for their traditional or digital asset ledger? https://interledger.org/developers/rfcs/interledger-protocol...
Mentioned company founder here, we did not yet! I'm glad to see the ILP project is continuing to progress well though, we should definitely consider it again.
Chemical cocktail could restore sight by regenerating optic nerves
"Vision rescue via chemically engineered endogenous retinal ganglion cells" (2024) https://www.biorxiv.org/content/10.1101/2023.12.27.572921v1 :
> Loss of retinal ganglion cells (RGCs) is the final common end point for many optic neuropathies, ultimately leading to irreversible vision loss. Endogenous RGC regeneration from Müller cells presents a promising approach to treat these diseases, but mammalian retinas lack regenerative capacity. Here, we report a small molecule cocktail that causes endogenous Müller cell proliferation, migration, and specification to newly generated chemically induced RGCs (CiRGCs) in NMDA injured mice retina. Notably, regenerated CiRGCs extend axons towards optic nerve, and rescue vision post-NMDA treatment. Moreover, we successfully reprogrammed human primary Müller glia and fibroblasts into CiRGCs using this chemical-only approach, as evidenced by RGC-specific gene expression and chromatin signature. Additionally, we show that interaction between SOX4 and NF-kB determine CiRGC fate from Müller cells. We anticipate endogenous CiRGCs would have therapeutic potential in rescuing vision for optic nerve diseases.
"Chemical cocktail could restore sight by regenerating optic nerves" (2024) https://www.newscientist.com/article/2416821-chemical-cockta...
F-Zero courses from a dead Nintendo satellite service restored using VHS and AI
F-Zero: https://en.wikipedia.org/wiki/F-Zero :
> F-Zero begins in the year 2560 where the humanses' countless encounters with alien life forms throughout the universe greatly expanded Earth's social framework resulting in [...]
Odd excerpt… ai summary bot?
Bacteria Found 1,250 Meters Under Earth Can Turn Carbon Dioxide into Calcite
"Microbial Mineralization of Carbon Dioxide in Depleted Oilfields" (2024) https://agu.confex.com/agu/fm23/meetingapp.cgi/Paper/1274396 :
> Abstract: Carbon dioxide can be sequestered by conversion into carbonate minerals. Microbes accelerate this process using an enzyme called carbonic anhydrase. Carbonic acid solutions in Icelandic basalts showed inorganic carbonation in as little as two years. Adding microbes may reduce carbonation times to months or weeks. Depleted oil and gas fields, which often contain usable, existing wells, could also sequester carbon dioxide. [...]
> Preliminary results indicate that 500 bars at 80℃ are optimum parameters for the microbes to produce calcite crystals on diopside. Visible carbonates were present within ten days, much faster than inorganic processes.
Re: capping or P&A'ing [methane] wells [with radioactive tools]: "NASA finds super-emitters of methane" (2022) https://news.ycombinator.com/item?id=33431427
> A. Privately and/or Publicly grant to P&A wells:
($25k+ * n_wells) + ($7-8m+ * m_wells)
Is there anything more absorbent, for oil well cleanup?
"Self-cleaning superhydrophobic aerogels from waste hemp noil for ultrafast oil absorption and highly efficient PM removal" (2023) https://www.sciencedirect.com/science/article/abs/pii/S13835...
AMD funded a drop-in CUDA implementation built on ROCm: It's now open-source
I'm really rooting for AMD to break the CUDA monopoly. To this end, I genuinely don't know whether a translation layer is a good thing or not. On the upside it makes the hardware much more viable instantly and will boost adoption, on the downside you run the risk that devs will never support ROCm, because you can just use the translation layer.
I think this is essentially the same situation as Proton+DXVK for Linux gaming. I think that that is a net positive for Linux, but I'm less sure about this. Getting good performance out of GPU compute requires much more tuning to the concrete architecture, which I'm afraid devs just won't do for AMD GPUs through this layer, always leaving them behind their Nvidia counterparts.
However, AMD desperately needs to do something. Story time:
On the weekend I wanted to play around with Stable Diffusion. Why pay for cloud compute, when I have a powerful GPU at home, I thought. Said GPU is a 7900 XTX, i.e. the most powerful consumer card from AMD at this time. Only very few AMD GPUs are supported by ROCm at this time, but mine is, thankfully.
So, how hard could it possibly be to get Stable Diffusion running on my GPU? Hard. I don't think my problems were actually caused by AMD: I had ROCm installed and my card recognized by rocminfo in a matter of minutes. But the whole ML world is so focused on Nvidia that it took me ages to get a working installation of pytorch and friends. The InvokeAI installer, for example, asks if you want to use CUDA or ROCm, but then always installs the CUDA variant whatever you answer. Ultimately, I did get a model to load, but the software crashed my graphical session before generating a single image.
The whole experience left me frustrated and wanting to buy an Nvidia GPU again...
Hey there -
I'm a maintainer (and CEO) of Invoke.
It's something we're monitoring as well.
ROCm has been challenging to work with - we're actively talking to AMD to keep apprised of ways we can mitigate some of the more troublesome experiences that users have with getting Invoke running on AMD (and hoping to expand official support to Windows AMD)
The problem is that a lot of the solutions proposed involve significant/unsustainable dev effort (i.e., supporting an entirely different inference paradigm), rather than "drop in" for the existing Torch/diffusers pipelines.
While I don't know enough about your setup to offer immediate solutions, if you join the discord, I'm sure folks would be happy to try walking through some manual troubleshooting/experimentation to get you up and running - discord.gg/invoke-ai
Hi! I really appreciate you taking the time to reply.
I have since gotten Invoke to run and was already able to get some results I'm really quite happy with, so thank you for your time and commitment working on Invoke!
I understand that ROCm is still challenging, but it seems my problems were less related to ROCm or Invoke itself and more to Python dependency management. It really boiled down to getting the correct (ROCm) versions of packages installed. Installing Invoke from PyPi always removed my Torch and installed CUDA-enabled Torch (as well as cuBLAS, cuDNN, ...). Once I had the correct versions of packages, everything just worked.
To me, your pyproject.toml looks perfectly sane, so I wasn't sure how to go about fixing the problem.
What ended up working for me was to use one of AMD's ROCm OCI base images, manually installing all dependencies, foregoing a virtual environment, cloning your repo (, building the frontend), and then installing from there.
The majority of my struggle would have been solved by a recent working Docker image containing a working setup. (The one on Docker Hub is 9 months old.) Trying to build the Dockerfile from your repo, I also ended up with a CUDA-enabled Torch. It did install the correct one first, but in a later step removed the ROCm-enabled Torch to switch it for the CUDA-enabled one.
I hope you'll consider investing some resources into publishing newer, working builds of your Docker image.
> AMD's ROCm OCI base images,
ROCm docs > "Install ROCm Docker containers" > Base Image: https://rocm.docs.amd.com/projects/install-on-linux/en/lates... links to ROCm/ROCm-docker: https://github.com/ROCm/ROCm-docker which is the source of docker.io/rocm/rocm-terminal: https://hub.docker.com/r/rocm/rocm-terminal :
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/rocm-terminal
ROCm docs > "Docker image support matrix":
https://rocm.docs.amd.com/projects/install-on-linux/en/lates...

ROCm/ROCm-docker//dev/Dockerfile-centos-7-complete: https://github.com/ROCm/ROCm-docker/blob/master/dev/Dockerfi...
Bazzite is a ublue (Universal Blue) fork of the Fedora Kinoite (KDE) or Fedora Silverblue (Gnome) rpm-ostree Linux distributions; ublue-os/bazzite//Containerfile : https://github.com/ublue-os/bazzite/blob/main/Containerfile#... has, in addition to fan and power controls, automatic updates on desktop, supergfxctl, system76-scheduler, and an fsync kernel:
rpm-ostree install rocm-hip \
rocm-opencl \
rocm-clinfo
But it's not `rpm-ostree install --apply-live`, because it's a Containerfile. To install a ublue-os distro, you install any of the Fedora ostree distros {Silverblue, Kinoite, Sway Atomic, or Budgie Atomic} from e.g. a USB stick and then `rpm-ostree rebase <OCI_host_image_url>`:
rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bazzite:stable
rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bazzite-nvidia:stable
rpm-ostree rebase ostree-image-signed:
ublue-os/config//build/ublue-os-just/40-nvidia.just defines the `ujust configure-nvidia` and `ujust toggle-nvk` commands:
https://github.com/ublue-os/config/blob/main/build/ublue-os-...

There's a default `distrobox` with pytorch in ublue-os/config//build/ublue-os-just/etc-distrobox/apps.ini: https://github.com/ublue-os/config/blob/main/build/ublue-os-...
[mlbox]
image=nvcr.io/nvidia/pytorch:23.08-py3
additional_packages="nano git htop"
init_hooks="pip3 install huggingface_hub tokenizers transformers accelerate datasets wandb peft bitsandbytes fastcore fastprogress watermark torchmetrics deepspeed"
pre-init-hooks="/init_script.sh"
nvidia=true
pull=true
root=false
replace=false
docker.io/rocm/pytorch:
https://hub.docker.com/r/rocm/pytorch

pytorch/builder//manywheel/Dockerfile: https://github.com/pytorch/builder/blob/main/manywheel/Docke...
ROCm/pytorch//Dockerfile: https://github.com/ROCm/pytorch/blob/main/Dockerfile
The ublue-os (and so also Bazzite) OCI host image Containerfile has Sunshine installed, which is a 4K HDR 120fps remote desktop solution for gaming.
There's a `ujust remove-sunshine` command in system_files/desktop/shared/usr/share/ublue-os/just/80-bazzite.just : https://github.com/ublue-os/bazzite/blob/main/system_files/d... and also kernel args for AMD:
pstate-force-enable:
rpm-ostree kargs --append-if-missing=amd_pstate=active
ublue-os/config//Containerfile:
https://github.com/ublue-os/config/blob/main/Containerfile

LizardByte/Sunshine: https://github.com/LizardByte/Sunshine
moonlight-stream https://github.com/moonlight-stream
Anyways, hopefully this PR fixes the immediate issue: https://github.com/invoke-ai/InvokeAI/pull/5714/files
conda-forge/pytorch-cpu-feedstock > "Add ROCm variant?": https://github.com/conda-forge/pytorch-cpu-feedstock/issues/...
And Fedora supports OCI containers as host images and also podman container images with just systemd to respawn one or a pod of containers.
I actually used the rocm/pytorch image you also linked.
I'm not sure what you're pointing to with your reference to the Fedora-based images. I'm quite happy with my NixOS install and really don't want to switch to anything else. And as long as I have the correct kernel module, my host OS really shouldn't matter to run any of the images.
And I'm sure it can be made to work with many base images, my point was just that the dependency management around pytorch was in a bad state, where it is extremely easy to break.
> Anyways, hopefully this PR fixes the immediate issue: https://github.com/invoke-ai/InvokeAI/pull/5714/files
It does! At least for me. It is my PR after all ;)
Unfortunately NixOS (and Debian and Ubuntu) lack SELinux policies or other LSM implementations out of the box, and container-selinux contains more than e.g. docker.
Is there a way to `restorecon --like / /nix/os/root72`, to apply SELinux extended filesystem attribute labels just to NixOS prefixes?
Some research is done with RPM-based distros, which have become quite advanced with rpm-ostree support.
FWICS Bazzite has NixOS support, too; in addition to distrobox containers.
Bazzite has a lot of other stuff installed that's not necessary when attempting to isolate sources of variance in the interest of reproducible research; but being for gaming it has various optimizations.
InvokeAI might be faster to install and to run with conda-forge builds.
> Proton+DXVK for Linux gaming
"Building the DirectX shader compiler better than Microsoft?" (2024) https://news.ycombinator.com/item?id=39324800
E.g. llama.cpp already supports hipBLAS; is there an advantage to this ROCm CUDA-compatibility layer - ZLUDA on Radeon (and not yet Intel OneAPI) - instead or in addition? https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#hi... https://news.ycombinator.com/item?id=38588573
What can't WebGPU abstract away from CUDA unportability? https://news.ycombinator.com/item?id=38527552
BLAS will only get you so far. About the highest level operation it has is matmul, which you can use to build convolution (im2col, matmul, col2im), but that won't be as performant as a hand optimized cuDNN convolution kernel. Same goes for any other high level neural net building blocks - trying to build them on top of BLAS will not get you remotely close to performance of a custom kernel.
What's nice about BLAS is that there are optimized implementations for CPUs (Intel MKL) as well as NVIDIA (cuBLAS) and AMD (hipBLAS), so while it's very much limited in what it can do, you can at least write portable code around it.
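The im2col trick mentioned above can be sketched in a few lines of NumPy: unfold the input into patch rows so the convolution becomes a single matmul. (A toy 2-D, valid-padding sketch; real cuDNN kernels avoid materializing the patch matrix, which is exactly why a hand-optimized kernel beats the BLAS route.)

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold every kh x kw patch of a 2-D array into a row (valid padding)."""
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    cols = np.empty((oh * ow, kh * kw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_via_matmul(x, k):
    """Cross-correlation expressed as one matmul over the unfolded patches."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return (im2col(x, kh, kw) @ k.ravel()).reshape(oh, ow)
```

Note the memory cost: `im2col` duplicates each input element up to `kh*kw` times, which is the price of reusing a generic BLAS matmul.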
"CUDNN API supported by HIP" has a coverage table: https://rocm.docs.amd.com/projects/HIPIFY/en/amd-staging/tab...
ROCm/hipDNN wraps CuDNN on Nvidia and MiOpen on AMD; but hasn't been updated in awhile: https://github.com/ROCm/hipDNN
https://news.ycombinator.com/item?id=37808036 : conda-forge has various BLAS implementations, including MKL-optimized BLAS, and compatible NumPy and SciPy builds.
BLAS: Basic Linear Algebra Sub programs: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...
"Using CuPy on AMD GPU (experimental)" https://docs.cupy.dev/en/v13.0.0/install.html#using-cupy-on-... :
$ sudo apt install hipblas hipsparse rocsparse rocrand rocthrust rocsolver rocfft hipcub rocprim rccl
I guess I misunderstood you.
You were asking if this CUDA compatibility layer might hold any advantage over HIP (e.g. for use by llama.cpp)?
I think the answer is no, since HIP includes pretty full-featured support for many of the higher level CUDA-based APIs (cuDNN, cuBLAS, etc), while per the Phoronix article ZLUDA only (currently) has minimal support for them.
I wouldn't expect ZLUDA to provide any performance benefit over HIP either, since on AMD hardware HIP is just a pass-thru to MIOpen (AMD's equivalent to cuDNN), rocBLAS, etc.
GAN semiconductor defects could boost quantum technology
"Room Temperature Optically Detected Magnetic Resonance of Single Spins in GaN" (2024) https://www.nature.com/articles/s41563-024-01803-5 :
> Abstract: High-contrast optically detected magnetic resonance is a valuable property for reading out the spin of isolated defect colour centres at room temperature. Spin-active single defect centres have been studied in wide bandgap materials including diamond, SiC and hexagonal boron nitride, each with associated advantages for applications. We report the discovery of optically detected magnetic resonance in two distinct species of bright, isolated defect centres hosted in GaN. In one group, we find negative optically detected magnetic resonance of a few percent associated with a metastable electronic state, whereas in the other, we find positive optically detected magnetic resonance of up to 30% associated with the ground and optically excited electronic states. We examine the spin symmetry axis of each defect species and establish coherent control over a single defect’s ground-state spin. Given the maturity of the semiconductor host, these results are promising for scalable and integrated quantum sensing applications.
Coherent Interaction of a Few-Electron Quantum Dot with a THz Optical Resonator
"Researchers solve a foundational problem in transmitting quantum information" (2024) https://phys.org/news/2024-02-foundational-problem-transmitt... :
> They developed a new technology for transmitting quantum information over perhaps tens to a hundred micrometers [...,] "In our work, we couple a few electrons in the quantum dot to an electrical circuit known as a terahertz split-ring resonator," explains Kazuyuki Kuroyama, lead author of the study. "The design is simple and suitable for large-scale integration."
"Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 :
> Abstract: We have investigated light-matter hybrid excitations in a quantum dot (QD)-terahertz (THz) optical resonator coupled system. We fabricate a gate-defined QD in the vicinity of a THz split-ring resonator (SRR) by using a AlGaAs/GaAs two-dimensional electron system (2DES). By illuminating the system with THz radiation, the QD shows a current change whose spectrum exhibits coherent coupling between the electrons in the QD and the SRR as well as coupling between the 2DES and the SRR. The latter coupling enters the ultrastrong coupling regime and the coupling between the QD and the SRR is also very close to the ultrastrong coupling regime, despite the fact that only a few electrons reside in the QD.
Bioluminescent petunias now available for U.S. market
"An improved pathway for autonomous bioluminescence imaging in eukaryotes" (2024) https://www.nature.com/articles/s41592-023-02152-y :
> Abstract: The discovery of the bioluminescence pathway in the fungus Neonothopanus nambi enabled engineering of eukaryotes with self-sustained luminescence. However, the brightness of luminescence in heterologous hosts was limited by performance of the native fungal enzymes. Here we report optimized versions of the pathway that enhance bioluminescence by one to two orders of magnitude in plant, fungal and mammalian hosts, and enable longitudinal video-rate imaging.
Bioluminescence: https://en.wikipedia.org/wiki/Bioluminescence
How to center a div in CSS
It's taken decades for CSS to come up with anything as workable as putting content in a centered table. Yet during that same period, use of tables was shamed for layout.
HTML Tables don't wrap on mobile displays. If you build your site with tables for layout, it will probably have horizontal scrollbar(s) and you'll need to rewrite it for mobile; so a CSS framework is usually a safer choice for less layout rework, unless it's a data table.
HTML Tables need at least `<th scope="row|column">` to be accessible: https://developer.mozilla.org/en-US/docs/Learn/HTML/Tables/A...
"CSS grid layout": https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_grid_la... lists a few interactive resources for learning flexgrid:
- Firefox DevTools > INTRODUCTION TO CSS GRID LAYOUT: https://mozilladevelopers.github.io/playground/css-grid/
- CSS Grid Garden: https://cssgridgarden.com/
MDN > "Relationship of grid layout to other layout methods": https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_grid_la...
MDN: "Box alignment in grid layout": https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_grid_la...
> HTML Tables don't wrap on mobile displays
Sure they do. HN is, famously, rendered as tables.
word-wrap: break-word in one table row cell is not the same as 20% of a 240px landscape or a 1024px IE6 display.
shot-scraper supports --scale-factor, --width, --height, and --selector/-s '#elementid' with microsoft/playwright for the browser automation: https://shot-scraper.datasette.io/en/stable/screenshots.html...
Nvidia's Chat with RTX is an AI chatbot that runs locally on your PC
I’m struggling to understand the point of this. It appears to be a more simplified way of getting a local LLM running on your machine, but I expect less technically inclined users would default to using the AI built into Windows while the more technical users will leverage llama.cpp to run whatever models they are interested in.
Who is the target audience for this solution?
>It appears to be a more simplified way of getting a local LLM running on your machine
No, it answers questions from the documents you provide. Off the shelf local LLMs don't do this by default. You need a RAG stack on top of it or fine tune with your own content.
From "Artificial intelligence is ineffective and potentially harmful for fact checking" (2023) https://news.ycombinator.com/item?id=37226233 : pdfgpt, knowledge_gpt, elasticsearch :
> Are LLM tools better or worse than e.g. meilisearch or elasticsearch for searching with snippets over a set of document resources?
> How does search compare to generating things with citations?
pdfGPT: https://github.com/bhaskatripathi/pdfGPT :
> PDF GPT allows you to chat with the contents of your PDF file by using GPT capabilities.
GH "pdfgpt" topic: https://github.com/topics/pdfgpt
knowledge_gpt: https://github.com/mmz-001/knowledge_gpt
From https://news.ycombinator.com/item?id=39112014 : paperai
neuml/paperai: https://github.com/neuml/paperai :
> Semantic search and workflows for medical/scientific papers
RAG: https://news.ycombinator.com/item?id=38370452
Google Desktop (2004-2011): https://en.wikipedia.org/wiki/Google_Desktop :
> Google Desktop was a computer program with desktop search capabilities, created by Google for Linux, Apple Mac OS X, and Microsoft Windows systems. It allowed text searches of a user's email messages, computer files, music, photos, chats, Web pages viewed, and the ability to display "Google Gadgets" on the user's desktop in a Sidebar
GNOME/tracker-miners: https://gitlab.gnome.org/GNOME/tracker-miners
src/miners/fs: https://gitlab.gnome.org/GNOME/tracker-miners/-/tree/master/...
SPARQL + SQLite: https://gitlab.gnome.org/GNOME/tracker-miners/-/blob/master/...
https://news.ycombinator.com/item?id=38355385 : LocalAI, braintrust-proxy; promptfoo, chainforge, mixtral
> Are LLM tools better or worse than e.g. meilisearch or elasticsearch for searching with snippets over a set of document resources?
Absolutely worse, LLM are not made for it at all.
vDPA: Support for block devices in Linux and QEMU
> One of the significant benefits of vDPA is its strong abstraction, enabling the implementation of virtio devices in both hardware and software—whether in the kernel or user space. This unification under a single framework, where devices appear identical for QEMU facilitates the seamless integration of hardware and software components.
I honestly don’t know what this means. Is it faster? Is it more secure? Why would I use this vDPA thing instead of the good’ol virtio blk driver? Looking at the examples it certainly looks more cumbersome to setup…
Ceph Block Device 3rd Party Integrations: https://docs.ceph.com/en/latest/rbd/rbd-integrations/ : Kernel Modules, QEMU, libvirt, Kubernetes, Nomad, OpenStack, CloudStack, LIO iSCSI Gateway, Windows, qemu: https://docs.ceph.com/en/latest/rbd/qemu-rbd/#qemu-and-block...
`cephadm bootstrap` requires docker or podman and ssh: https://docs.ceph.com/en/latest/cephadm/install/#bootstrap-a...
Ceph Object Gateway: radosgw: https://docs.ceph.com/en/latest/radosgw/ :
> The Ceph Object Gateway provides interfaces that are compatible with both Amazon S3 and OpenStack Swift, and it has its own user management. Ceph Object Gateway can use a single Ceph Storage cluster to store data from Ceph File System and from Ceph Block device clients. The S3 API and the Swift API share a common namespace, which means that it is possible to write data to a Ceph Storage Cluster with one API and then retrieve that data with the other API.
virtio-blk is probably faster, but then do HA redundancy with physically separate nodes and network io anyway; or LocalPersistentVolumes
S3 or radosgw are not very relevant to a discussion about block storage devices.
Ceph's RBD is "special" in the sense that the client understands the clustering, and talks to multiple servers. If you wanted that in the mix, you'd have to run a local Ceph client -- like the Ceph software stack exposing block devices from kernel does. The only way I can see vDPA being relevant to that is to avoid middlemen layers, VM -> host kernel block device abstraction -> Ceph kernelspace RBD client. But the block device abstraction is pretty thin.
The real use case for vDPA, when talking about storage, seems to be standardizing an interface the hardware can provide. And then we're back to "why not NVMe?".
(Disclaimer: ex-Ceph-employee)
Visualization of a dense grid search over neural network hyperparameters
AutoML and [partially] automated feature engineering have hyperparameters too. Some algorithms have no hyperparameters. And the OP did a complete grid search instead of a PSO or gradient descent, for which there are also adversarial cases.
Featuretools supports Dask EntitySets for larger-than-RAM feature matrices, or pandas on multiple cores: https://featuretools.alteryx.com/en/stable/guides/using_dask...
"Hyperparameter optimization with Dask": https://examples.dask.org/machine-learning/hyperparam-opt.ht... :
> HyperbandSearchCV is Dask-ML’s meta-estimator to find the best hyperparameters. It can be used as an alternative to RandomizedSearchCV to find similar hyper-parameters in less time by not wasting time on hyper-parameters that are not promising. Specifically, it is almost guaranteed that it will find high performing models with minimal training.
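For contrast with Hyperband-style early stopping, a dense grid search just evaluates every point in the Cartesian product of the parameter grids. A minimal sketch, with a hypothetical toy loss surface standing in for model training:

```python
import itertools

def grid_search(objective, grid):
    """Exhaustively evaluate every combination in the Cartesian product
    of the per-parameter value lists; return the best (score, params)."""
    best = None
    names = list(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = objective(**params)
        if best is None or score < best[0]:
            best = (score, params)
    return best

# Hypothetical 'loss surface', minimized at lr=0.1, depth=3.
loss = lambda lr, depth: (lr - 0.1) ** 2 + (depth - 3) ** 2
best_score, best_params = grid_search(
    loss, {"lr": [0.01, 0.1, 1.0], "depth": [1, 2, 3, 4]})
```

The cost is the product of the grid sizes, which is why Hyperband/random search win as soon as there are more than a couple of hyperparameters.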
Note that e.g. TabPFN is faster or converges more quickly than xgboost and other gradient boosting with hyperparameter methods: https://news.ycombinator.com/item?id=37269376#37274671
"Stochastic gradient descent written in SQL" (2023) https://news.ycombinator.com/item?id=35063522 :
> What are some adversarial cases for gradient descent, and/or what sort of e.g. DVC.org or W3C PROV provenance information should be tracked for a production ML workflow?
https://x.com/jaschasd/status/1756930247633825827 :
> So it shouldn't (post-hoc) be a surprise that hyperparameter landscapes are fractal. This is a general phenomenon: in these panes we see fractal hyperparameter landscapes for every neural network configuration I tried, including deep linear networks.
New AI tool discovers realistic metamaterials with unusual properties
Metamaterials: https://en.wikipedia.org/wiki/Metamaterial
- "Temporal chirp, temporal lensing and temporal routing via space-time interfaces" (2023) https://news.ycombinator.com/item?id=39162149 :
- > "APL special topic: Time modulated metamaterials" (2023) https://pubs.aip.org/aip/apl/article/123/16/160401/2916954
- "Incandescent Temporal Metamaterials" (2023) https://news.ycombinator.com/item?id=39162302
Unscrambling the hidden secrets of superpermutations
Combinatorics are fun.
People who like reading this article would absolutely like Knuth's 4A volume of The Art of Computer Programming.
Which discusses Permutations, Combinations, Partitions, boolean tricks and more.
I think every so often someone asks if TAOCP is worth reading. Well, yes. It's like this article but denser and more mathematical.
------
I don't think TAOCP covers this superpermutation problem. Maybe as an exercise though? I've only read the main section, not really dived into the exercise problems yet... which are substantial and cover more obscure subjects.
/? Knuth's 4A volume of The Art of Computer Programming site:github.com : https://www.google.com/search?q=Knuth%27s+4A+volume+of+The+A... :
> Generate {n-tuples, permutations, combinations, partitions, set partitions, trees,} with combinatorial patterns
Combinatorics: https://en.wikipedia.org/wiki/Combinatorics
- Exercise: Permutations [& Combinations]: https://rosettacode.org/wiki/Permutations#Python https://rosettacode.org/wiki/Combinations_and_permutations#P...
- Exercise: Rewrite itertools.product as a generator that yields in a different order than itertools.product; write a different graph traversal that also covers without repetition
- Exercise: The Birthday problem and actual probability of cryptographic hash collision given hash length n: https://en.wikipedia.org/wiki/Birthday_problem
- Exercise: The Gambler's Fallacy: Given a sequence of random values that satisfy many of the tests in e.g. Google/paranoid_crypto.lib.randomness_tests, where would Gambler's Fallacy have caused loss? https://en.wikipedia.org/wiki/Gambler%27s_fallacy https://github.com/google/paranoid_crypto/tree/main/paranoid...
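For the itertools.product exercise above, one possible answer: a generator that yields the same set of tuples, but with the *first* pool varying fastest (colexicographic order) instead of the last, so it covers the product without repetition in a different traversal order.

```python
import itertools

def product_colex(*pools):
    """Like itertools.product, but the first pool varies fastest
    (colexicographic order) instead of the last."""
    pools = [list(p) for p in pools]
    if not pools:
        yield ()
        return
    for rest in product_colex(*pools[1:]):
        for head in pools[0]:
            yield (head,) + rest
```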
Combinatorics and physics: https://en.wikipedia.org/wiki/Combinatorics_and_physics
/?hnlog Ctrl-f Hilbert :
"Matrices and Graph" https://news.ycombinator.com/item?id=36739579 :
> [ Multigraph, networkx.MultiDiGraph , RDF Linked Data ]
> Tensor product of graphs: https://en.wikipedia.org/wiki/Tensor_product_of_graphs
> Hilbert space: https://en.wikipedia.org/wiki/Hilbert_space :
>> In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space.
>> [...] The inner product between two state vectors is a complex number known as a probability amplitude.
Wave interference > Quantum interference: https://en.wikipedia.org/wiki/Wave_interference#Quantum_inte...
EM waves have amplitude in the interval [-1,1] or [-inf, +inf].
With EM waves we typically model constructive interference and destructive interference.
There is also a particle-like phononic quantum wave interpretation of EM waves.
Quantum waves are in the interval [0,1].
Quantum embedding is the process and study of encoding data as wave functions with e.g. phase.
https://news.ycombinator.com/item?id=38255569 :
> How many ways are there to roll a {2, 8, or 6}-sided die with qubits and quantum embedding?
Quantum superposition and combinatorics; why can't the quantum simulator run this code like an actual QC?
"Where does energy go during destructive interference?" (2018) https://news.ycombinator.com/item?id=32421509 :
> From Conservation_of_energy#Quantum_theory https://en.wikipedia.org/wiki/Conservation_of_energy#Quantum... :
>> In quantum mechanics, energy of a quantum system is described by a self-adjoint (or Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, emergence probability of the measurement result does not change in time over the evolution of the system. Thus the expectation value of energy is also time independent [If the Hamiltonian is a time-independent operator]
{Employee, Conference, Computational resource} Scheduling with priority: https://news.ycombinator.com/item?id=22589911 :
> https://en.wikipedia.org/wiki/Hilbert_curve_scheduling :
>> [...] the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space filling problem using Hilbert curves, assigning related tasks to locations with higher levels of proximity.[1] Other space filling curves may also be used in various computing applications for similar purposes. [2]
And then fluids.
Computational Fluid Dynamics is still one of the harder problems in classical high performance computing and quantum computing because it is a combinatorially hard problem to model every possible fluid outcome in order to predict the most likely outcome(s).
/?hnlog Navier [e Euler, Stokes,] :
- "Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations" https://news.ycombinator.com/item?id=31049608 :
> [awesome-fluid dynamics, jax-cfd,]
And then Evolutionary Algorithms,
- Infinite Monkey Theorem: https://en.wikipedia.org/wiki/Infinite_monkey_theorem
- https://news.ycombinator.com/item?id=39110110#39139198 :
> Holman's "elegant normal form",
Evolutionary Algorithms > Convergence: https://en.wikipedia.org/wiki/Evolutionary_algorithm#Converg... :
> For EAs in which, in addition to the offspring, at least the best individual of the parent generation is used to form the subsequent generation (so-called elitist EAs), there is a general proof of convergence under the condition that an optimum exists. Without loss of generality, a maximum search is assumed for the proof: [...]
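A minimal sketch of the elitist idea quoted above, as a (1+1) EA maximizing a hypothetical toy fitness function: the parent survives unless the offspring is at least as fit, so the best-so-far fitness never decreases, which is what the convergence proof relies on.

```python
import random

def one_plus_one_ea(fitness, x0, steps=2000, sigma=0.1, seed=0):
    """(1+1) elitist EA (maximum search): mutate the single parent,
    keep the offspring only if it is at least as fit."""
    rng = random.Random(seed)
    x, fx = x0, fitness(x0)
    for _ in range(steps):
        y = x + rng.gauss(0.0, sigma)   # Gaussian mutation
        fy = fitness(y)
        if fy >= fx:                    # elitism: never discard the best-so-far
            x, fx = y, fy
    return x, fx

# Hypothetical objective with a single maximum at x = 2.
best_x, best_f = one_plus_one_ea(lambda x: -(x - 2.0) ** 2, x0=0.0)
```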
The discrete convolution is combinatorial.
Convolution > Discrete convolution: https://en.wikipedia.org/wiki/Convolution#Discrete_convoluti...
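A minimal sketch of the discrete convolution sum, plus its combinatorial reading: repeatedly convolving [1, 1] with itself produces the binomial coefficients (rows of Pascal's triangle), since each output coefficient counts the ways to pick index pairs summing to n.

```python
def discrete_conv(f, g):
    """Full discrete convolution: (f*g)[n] = sum over m of f[m] * g[n-m]."""
    out = [0] * (len(f) + len(g) - 1)
    for m, fm in enumerate(f):
        for k, gk in enumerate(g):
            out[m + k] += fm * gk
    return out
```

This is also exactly polynomial multiplication of the coefficient lists.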
Category:Combinatorics: https://en.wikipedia.org/wiki/Category:Combinatorics
EIA to initiate data collection regarding electricity use by U.S. crypto miners
Do they ask all industries for energy usage information yet?
Maybe while you're at it, ask FDA to collect manufacturing and sales counts so that we could have a relative hazard metric:
(adverse event count) / (count sold)
(adverse event count) / (count manufactured)
Why doesn't [FDA] have production and sales data from all of the competitors already? TMK, datacenters in other industries, mines, and other industrial consumers of electricity with margin are not obligated to share internal information on inputs.
(It's a good idea to gauge the competition's energy efficiency, and they should have kWh/t operating figures; but other industries aren't yet also obligated to show their hands and share such private operating data.)
Other data that might be of interest but may not be acceptable to demand from competing government and industry firms: dry cleaning costs, armored bank bag services costs, fraud prosecution costs, cash and coinage replacement costs, transaction cost per 1 USD and per 1 kWh, transaction time, level of ensured backup redundancy, how much of the waste heat from their datacenters they're recovering in the interest of efficiency, etc.
But then are they soliciting bids or competitive intelligence from competitors for an eventual USD stablecoin?
First open-source global energy system model
Similar research:
From "Computer simulation provides 4k scenarios for a climate turnaround" (2023) https://news.ycombinator.com/item?id=36578412 :
"Deep decarbonisation pathways of the energy system in times of unprecedented uncertainty in the energy sector" (2023) https://www.sciencedirect.com/science/article/pii/S030142152... https://scholar.google.com/scholar?cites=5181468319964302356...
Additional data sources:
- https://news.ycombinator.com/item?id=24759414 ; ElectricityMaps parses the EIA data too; but it's not intraday. Would intraday pricing create incentives to invest in grid scale energy storage to solve the duck and alligator curves?
Energy solutions:
This says the Szilard-Chalmers MOST process can store energy for 18 years: https://news.ycombinator.com/item?id=39111825
Gravity and infinite empty vacuums can also store/transmit energy.
Sudo for Windows
Yep, it's really happening. Sudo is coming to Windows. It's obviously not just a fork of the linux sudo - there's enough that's different about the permissions structure between OS's that just a straight port wouldn't make sense. But the dream of being able to run commands as admin, in the same terminal window - that's the experience we're finally bringing to users.
I've been working on this for the last few months now and I'm pretty excited to talk about it or answer any questions!
Why can't you do this with the tool that already exists to do the exact same thing, called runas?
Also 'start -Verb runas' in Powershell.
Is there anything shorter than `powershell.exe -executionpolicy unrestricted -file`?
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -InstallPSWindowsUpdate -UpdateWindows -UpdateChocoPackages
setup_windows.ps1:
https://github.com/westurner/dotfiles/blob/develop/scripts/s...
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -InstallPSWindowsUpdate -UpdateWindows -UpdateChocoPackages
powershell -ex bypass -f ./setup_windows.ps1 -I -UpdateW -UpdateC
(or whatever are short enough to uniquely identify parameter names for your script)
What Algorithms Can Transformers Learn? A Study in Length Generalization
Doesn't this also generalize to LLMs (which are mostly all doing just one-next-word prediction):
https://news.ycombinator.com/item?id=38830186 :
>>> Making the prefix shorter tends to produce less coherent prose; making it longer tends to reproduce the input text verbatim. For English text, using two words to select a third is a good compromise; it seems to recreate the flavor of the input while adding its own whimsical touch.
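The quoted scheme (two words select a third) is the classic trigram/Markov-chain text generator; a minimal sketch, with hypothetical sample text:

```python
import random

def build_trigram_model(text):
    """Map each two-word prefix to the list of words observed after it."""
    words = text.split()
    model = {}
    for a, b, c in zip(words, words[1:], words[2:]):
        model.setdefault((a, b), []).append(c)
    return model

def generate(model, seed_pair, n_words, rng=None):
    """Walk the chain: repeatedly pick a third word from the two preceding ones."""
    rng = rng or random.Random(0)
    a, b = seed_pair
    out = [a, b]
    for _ in range(n_words):
        followers = model.get((a, b))
        if not followers:
            break  # dead end: prefix only ever occurred at the end of the text
        c = rng.choice(followers)
        out.append(c)
        a, b = b, c
    return " ".join(out)
```

A shorter prefix (one word) gives less coherent output; a longer one (three or more words) tends to reproduce the training text verbatim, which is exactly the trade-off the quote describes.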
But could classical LLMs approximate quantum relations?
https://news.ycombinator.com/item?id=39255848 :
> If there is some sort of e.g. geometric correspondence, it could be possible for a Church-Turing classical computer to compute quantum functions (that return wave functions) that a Church-Turing-Deutsch quantum computer can compute; but otherwise Lean [and all non-quantum LLMs] can't compute most quantum circuits either.
CUDA Quantum
NVIDIA/cuda-quantum: https://github.com/NVIDIA/cuda-quantum
Docs: https://nvidia.github.io/cuda-quantum/latest/
Which of the quantum circuit simulator backends supported by tequila are accelerateable with CUDA quantum?
src/tequila/simulators: https://github.com/tequilahub/tequila/tree/master/src/tequil...
VirtualBox KVM Public Release
For the past few months we have been working hard to provide a fast, reliable and secure KVM backend for VirtualBox. VirtualBox is a multi-platform Virtual Machine Monitor (VMM) with a great feature set, support for a wide variety of guest operating systems, and a consistent user interface across different host operating systems.
Cyberus Technology’s KVM backend allows VirtualBox to run virtual machines utilizing the Linux KVM hypervisor instead of the custom kernel module used by standard VirtualBox. Today we are announcing the open-source release of our KVM backend for Virtualbox.
Finally!
Every time I need to run a virtual machine, I choose libvirt because it's more performant and easy to deal with than Virtualbox (no kernel module, etc.), but the GUI choices are pretty terrible. The "best" libvirt GUI is virt-manager and it's very, very buggy and lacking features (i.e. doesn't play nice with HiDPI screens, no way of configuring IPv6, etc.)
Many times I have caved and chosen VirtualBox simply because at least it feels nice to use, even if not as performant as libvirt/kvm. Not anymore!
I thought virt manager was ok but honestly your complaints about it are specific and fair.
Virtual box has graphical configuration for a ton of different options. It also “just works” in many cases and is relatively easy to use.
I am surprised the open source community has not built better gui tools, and no project, closed or open has made configuring pcie passthrough easy.
I have always wanted to be able to run Windows in a virtualized session with my GPU for gaming, and use my onboard APU for the Linux host, but the configuration is daunting, and many of the games I play today don’t work on linux thanks to anticheat or DRM.
> no project, closed or open has made configuring pcie passthrough easy
"GPU passthrough with libvirt qemu kvm" https://wiki.gentoo.org/wiki/GPU_passthrough_with_libvirt_qe...
"PCI passthrough via OVMF" https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF :
> The Open Virtual Machine Firmware (OVMF) is a project to enable UEFI support for virtual machines. Starting with Linux 3.9 and recent versions of QEMU, it is now possible to passthrough a graphics card, offering the virtual machine native graphics performance which is useful for graphic-intensive tasks
KVM-GPU-Passthrough: https://github.com/BigAnteater/KVM-GPU-Passthrough
https://clayfreeman.github.io/gpu-passthrough/
I don't think that linking two different wikis (for different Linux distros) and two different GitHub pages is "easy" compared to VirtualBox's very "Fisher-Price" Next-Next-Next-Done GUI
Not saying I prefer one or the other, but it's worth bearing in mind where "the bar" is
FWICS from scanning those resources, there are a few shell commands to wrap with a config parser and an output parser for a GUI
E.g. virt-manager is built with glade XML and Python:
virt-manager/virt-manager//ui/createvm.ui: https://github.com/virt-manager/virt-manager/blob/main/ui/cr...
virt-manager/virt-manager//ui/gfxdetails.ui: https://github.com/virt-manager/virt-manager/blob/main/ui/gf...
virt-manager/virt-manager//ui/hoststorage.ui: https://github.com/virt-manager/virt-manager/blob/main/ui/ho...
virtManager/createvm.py: https://github.com/virt-manager/virt-manager/blob/main/virtM...
virtManager/device/addstorage.py: https://github.com/virt-manager/virt-manager/blob/main/virtM...
virtManager/device/gfxdetails.py: https://github.com/virt-manager/virt-manager/blob/main/virtM...
virtManager/addhardware.py:
DeviceController.TYPE_PCI
def populate_controller_model_combo(combo, controller_type):
https://github.com/virt-manager/virt-manager/blob/135cf17072...
https://github.com/virt-manager/virt-manager/blob/135cf17072...
"Locating the GPU": https://clayfreeman.github.io/gpu-passthrough/#locating-the-... :
for d in /sys/kernel/iommu_groups/*/devices/*; do
n=${d#*/iommu_groups/*}; n=${n%%/*}
printf 'IOMMU Group %s ' "$n"
lspci -nns "${d##*/}"
done
iommu.sh gist:
https://gist.github.com/Roliga/d81418b0a55ca7682227d57af2778...
iommu_groups.sh: https://github.com/drewmullen/pci-passthrough-ryzen/blob/mas... :
lspci -nns "${d##*/}"
"PCI passthrough via OVMF > 2. Setting up IOMMU > 2.2 Ensuring that the groups are valid": https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#En... :
> An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine. For instance, in the example above, both the GPU in 06:00.0 and its audio controller in 06:00.1 belong to IOMMU group 13 and can only be passed together. The frontal USB controller, however, has its own group (group 2) which is separate from both the USB expansion controller (group 10) and the rear USB controller (group 4), meaning that any of them could be passed to a virtual machine without affecting the others.
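FWICS the shell loop above could back a GUI directly; a hypothetical Python sketch walking the same sysfs layout (the function name and grouping are illustrative, not any project's actual API):

```python
from collections import defaultdict
from pathlib import Path

def iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> list of PCI addresses in that group."""
    groups = defaultdict(list)
    for dev in Path(root).glob("*/devices/*"):
        # Path looks like .../iommu_groups/<group>/devices/<pci-address>
        groups[dev.parts[-3]].append(dev.name)
    return dict(groups)
```

Since devices in the same group can only be passed through together, a GUI could use this to grey out partial selections within a group.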
"Exporting your ROM": https://github.com/BigAnteater/KVM-GPU-Passthrough?tab=readm... :
lspci -vnn
find /sys/devices -name rom
# PATH_TO_ROM=
echo 1 > $PATH_TO_ROM
mkdir -p /var/lib/libvirt/vbios/
cat $PATH_TO_ROM > /var/lib/libvirt/vbios/gpu.rom
echo 0 > $PATH_TO_ROM
"Attaching the GPU" [with `virsh`] https://clayfreeman.github.io/gpu-passthrough/#attaching-the... :
<hostdev mode='subsystem' type='pci' managed='yes'>
<rom file='/path/to/gpu-dump.rom'/>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
</source>
</hostdev>
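For GUI tooling, hostdev XML like the above could also be generated programmatically; a minimal sketch with Python's standard library (the helper name and argument set are hypothetical, mirroring only the attributes shown above, not the full libvirt schema):

```python
import xml.etree.ElementTree as ET

def hostdev_xml(bus, slot, function, rom_file=None):
    """Build a libvirt-style <hostdev> PCI passthrough element."""
    hd = ET.Element("hostdev", mode="subsystem", type="pci", managed="yes")
    if rom_file:
        ET.SubElement(hd, "rom", file=rom_file)
    src = ET.SubElement(hd, "source")
    ET.SubElement(src, "address", domain="0x0000",
                  bus=bus, slot=slot, function=function)
    return ET.tostring(hd, encoding="unicode")

print(hostdev_xml("0x01", "0x00", "0x0", rom_file="/var/lib/libvirt/vbios/gpu.rom"))
```

A GUI would only need to collect the PCI address (from the IOMMU group listing) and an optional ROM path, then emit this element into the domain XML.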
"Adding your GPU and USB devices to the VM" [with `virt-manager`]:
https://github.com/BigAnteater/KVM-GPU-Passthrough?tab=readm... :
> 1. Add every PCI device which has to do with your graphics card to the VM.
That's still MASSIVELY more complex than "Next -> Next -> Next -> Done"
I wish there was a port of UTM to linux
Gnome Boxes is an attempt at a similar interface, but yeah, it's not quite as polished.
virt-manager supports more complex libvirt XML configurations and can also manage VMs created by GNOME Boxes, but it doesn't yet have a GUI for IOMMU/PCIe passthrough with an OVMF UEFI device selector: https://virt-manager.org/
Simple Precision Time Protocol at Meta
To me that looks like they are reinventing NTP, but not addressing all the issues of PTP.
A big problem with the PTP unicast mode is an almost infinite traffic amplification (useful for DDoS attacks). The server is basically a programmable packet generator. Never expose unicast PTP to internet. In SPTP that seems to be no longer the case (the server is stateless), but there is still the follow up message causing a 2:1 amplification. I think something like the NTP interleaved mode would be better.
It seems they didn't replace the PTP offset calculation assuming a constant delay (broadcast model). That doesn't work well when the distribution of the delay is not symmetric, e.g. errors in hardware timestamping on the NIC are sensitive to network load. They would need to measure the actual error of the clock to see that (the graphs in the article seem to show only the offset measured by SPTP itself, a common issue when improvements in time synchronization are demonstrated).
I think a better solution taking advantage of existing PTP support in hardware is to encapsulate NTP messages in PTP packets. NICs and switches/routers see PTP packets, so they provide highly accurate timestamps and corrections, but the measurements and their processing can be full-featured NTP, keeping all its advantages like resiliency and security. There is an IETF draft specifying that:
https://datatracker.ietf.org/doc/draft-ietf-ntp-over-ptp/
Experimental support for NTP-over-PTP is included in the latest chrony release. In my tests with switches that work as one-step transparent clocks, the accuracy is the same as with PTP (linuxptp).
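The symmetric-delay assumption discussed above is visible in the standard two-way exchange arithmetic; a minimal sketch (variable naming follows the usual t1..t4 convention; the function is illustrative, not any implementation's actual API):

```python
def offset_and_delay(t1, t2, t3, t4):
    """Two-way time transfer (NTP/PTP style), times in seconds.
    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive.
    The offset is exact only if the two one-way delays are equal;
    any path asymmetry shows up directly as offset error."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)  # round trip minus server processing
    return offset, delay

# Client clock 5 s behind the server, 2 s one-way delay each direction:
print(offset_and_delay(0.0, 7.0, 8.0, 5.0))  # (5.0, 4.0)
```

If the NIC timestamps one direction late under load, that asymmetry lands entirely in the computed offset, which is why measuring the actual clock error independently matters.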
SPTP does look a lot like NTP over PTP. I’m guessing they deployed this last year when nothing like it existed - the IETF draft (dated Jan 24, 2024) came along later. They might even be involved with it. Anyways, it’s nice to see progress towards a simpler protocol that retains the precision of PTP.
How does SPTP compare to CERN's WhiteRabbit, which is built on PTP? https://en.wikipedia.org/wiki/White_Rabbit_Project
FWIW, from "50 years later, is two-phase locking the best we can do?" https://news.ycombinator.com/item?id=37712506 :
> TIL there's a regular heartbeat in the quantum foam; there's a regular monotonic heartbeat in the quantum Rydberg wave packet interference; and that should be useful for distributed applications with and without vector clocks and an initial time synchronization service
Kerbal Space Program 2 is not playable on Linux with Proton
ProtonDB suggests a number of command line parameters to try; like `mangohud gamemode`.
Why does the frame rate drop below 10 fps when playing the intro video with the Sagan-sounding voiceover (on an RTX 3050 Ti with 4 GB VRAM and 40 GB RAM, on Steam with Proton Experimental and Proton-GE)? Is it due to DXVK?
Why is the audio tearing and dropping out?
KSP1 had Linux support.
For kids to learn from this game, they should not need an expensive gaming computer; how do I drop graphics to low and skip the fancy high-polygon-count rendering and unnecessary DXVK features?
Disney to take $1.5B stake in Epic Games
Disney gets more IP in front of more 13-25 year olds. This is a very impressionable age group, and can create life long fans. This is a good value proposition for Disney. Epic probably gets an increase in valuation -- lifeblood for tech companies.
Collaboration skins are massive for revenue. However, I'm concerned this relationship will force uncool collaborations on Fortnite and reduce its appeal. Disney has had some flops recently. Long term, the trick for Fortnite is to become the stickiest online video game in history, given that most games bleed audience over time. Epic is more than just Fortnite, but I imagine this deal is entirely about Fortnite.
No, Disney uses Unreal for their rendered real-time sets. This is about more than just Fortnite.
Do they still? They used Unreal for the virtual sets in The Mandalorian S1, but for S2 they switched over to a different solution.
https://www.ilm.com/vfx/the-mandalorian/
> For season one of the series ILM StageCraft utilized Unreal Engine to perform the real-time render
https://www.ilm.com/vfx/the-mandalorian-season-2/
> The real-time render engine called Helios was specifically developed by ILM engineers
> Ctrl-F "Unreal" - no results
StageCraft IS Unreal Engine.
https://en.m.wikipedia.org/wiki/StageCraft
ILM used Unreal Engine to make StageCraft and kept iterating on it until it’s the awesome tech that it is. They have a vested interest in seeing the underlying engine continue to prosper.
StageCraft WAS Unreal Engine; it is not anymore.
StageCraft: https://en.wikipedia.org/wiki/StageCraft
What is Helios?
/? Helios StageCraft https://rebusfarm.net/news/ilm-stagecraft-a-virtual-producti... :
> StageCraft leverages Helios, ILM’s real-time cinema render engine. It is a set of LED screens that work as a 360° digital set extension, allowing filmmakers to explore new ideas, communicate concepts, and execute shots in a collaborative and flexible production environment.
Is there a way to vary the [UE] AutoLOD for longer shots? https://news.ycombinator.com/item?id=38160089
That's not even cinematography! Because there aren't lenses, there are presumably Camera matrices.
Cinematography: https://en.wikipedia.org/wiki/Cinematography
Computer graphics: https://en.wikipedia.org/wiki/Computer_graphics
"Ask HN: What's the state of the art for drawing math diagrams online?" (2023) https://news.ycombinator.com/item?id=38355444 ; generative-manim, manimGPT, BlenderGPT, ipyblender, o3de, how do we teach primary math intuition with the platforms that reach them, how do we Manim in 3 or even 4D?
Manim > "Render with [Blender and/or od3e]" https://github.com/ManimCommunity/manim/issues/3362
FWIU Disney Games are often built with Panda3d, which works with pygame-web/pygbag in WASM now
Research: "Fabric of the Cosmos", "Cosmos", "How the Universe Works",
> Is there a way to vary the [UE] AutoLOD for longer shots?
UE5 (and other 4d graphics and physics simulators) automatically reduce the LOD Level-of-Detail for objects in the distance.
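A distance-based LOD selector can be sketched in a few lines; note that UE5 actually drives LOD from projected screen size rather than raw distance, so the thresholds below are purely illustrative:

```python
def select_lod(distance_m, thresholds=(10.0, 50.0, 200.0)):
    """Pick a level-of-detail index from camera distance; 0 = full detail.
    Real engines typically use projected screen size; these numeric
    thresholds are illustrative only."""
    for lod, limit in enumerate(thresholds):
        if distance_m < limit:
            return lod
    return len(thresholds)  # coarsest mesh beyond the last threshold

print([select_lod(d) for d in (5, 30, 100, 500)])  # [0, 1, 2, 3]
```

Exposing the thresholds (or a global bias) as a shot parameter would be one way to "vary the AutoLOD for longer shots".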
Is that LOD parametric with StageCraft software?
(For example, reportedly Cities: Skylines 2 is slow because the developers included meshes for characters' teeth and expected AutoLOD to just make it work on the computers kids tend to have. It doesn't work on reasonable machines because the devs all have fast pro GPUs to develop on, so they don't know what the UX is like for the average family (who would be happy to turn down the polygon count themselves for what can be learned from the gameplay). Having game devs dogfood on real-world devices that families can afford would be good for these firms too.)
Hopefully they'll continue to sell games through non-Epic stores that people have already invested in.
And hopefully, they'll make sure their products work with Proton and thus popular Linux-based handheld gaming devices.
Secretive Ford 'Skunkworks' Team Developing Cheap EV to Beat Tesla and China
Soybean car: https://en.wikipedia.org/wiki/Soybean_car
US2269452A (1940. Ford) : https://patents.google.com/patent/US2269452A/en :
> The object of our invention is to provide an automobile chassis construction of simple, durable and inexpensive construction
Motive Kestrel (2010) https://www.google.com/search?q=motive+kestrel+hemp+car
HempEarth: "Producing World’s First 100% HEMP Aviation Composites" (2023) ; 10x stronger than steel: https://hempearth.ca/2023/02/01/hempearth-producing-worlds-f...
There are hemp 3D printing filaments.
Can sufficient biocomposites be high-pressure injection molded?
Are replaceable parts better than a corrosion-resistant unibody?
.
Sodium ion battery recycling
Lithium ion quick release binder
.
Ultracapacitor and supercapacitor anodes can be made from already-branching organic bast fiber
.
Taraxagum (Dandelion) Rubber
Demystifying GPU compute architectures
Robust continuous time crystal in an electron–nuclear spin system
"Robust continuous time crystal in an electron–nuclear spin system" (2024) https://www.nature.com/articles/s41567-023-02351-6 :
> The time crystal state enables fundamental studies of nonlinear interactions and has potential applications as a precise on-chip frequency standard.
Re: Retrocausality: https://news.ycombinator.com/item?id=38047149
Does retrocausality of time crystals affect quantum circuit measurement at t>0?
"Double-slit time diffraction at optical frequencies" (2023), "Incandescent Temporal Metamaterials" (2023) https://news.ycombinator.com/item?id=39162149
(Edit) /? Indefinite causal order: https://www.google.com/search?q=indefinite+causal+order
"Admissible causal structures and correlations" ( https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
Final Decision on Chromebook Case in Denmark
Notes about the "Turn on Linux" button that's still greyed out for students: https://westurner.github.io/hnlog/ ctrl-f "Turn on Linux"
Students cannot use Chromebooks to learn Linux, Python, or Git.
Google Colab requires internet access to run a notebook.
JupyterLite doesn't yet have "GitBash" in a terminal in WASM; though Chrome does now support local filesystem access for notebooks and data.
VScode + Containers or no deal.
Generating stable qubits at room temperature
> This breakthrough was made possible by embedding a chromophore, a dye molecule that absorbs light and emits color, in a metal-organic framework, or MOF, a nanoporous crystalline material composed of metal ions and organic ligands
"Room-temperature quantum coherence of entangled multiexcitons in a metal-organic framework" (2024) https://www.science.org/doi/10.1126/sciadv.adi3147
Single-photon quantum computers are also coherent at room temperature, though stability is a separate question:
"A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
What about phononic quantum computing though, are phononic wave functions stable/coherent?
MIT and IBM Find Clever AI Ways Around Brute-Force Math
Some comments on this publication: Their approach is to train a learned downsampler of the boundary and initial conditions, and pass it through a learned coarse-grained "simulator", but the output is only a specified target property, not the full PDE solution. The claim is a decrease in training data needed and faster execution.
The limitation is that you need to have a target property of interest which is not always the case when trying to simulate something.
The paper as it is written is very low quality. They basically ran no real benchmarks and ignored all the other work on this topic. They used a plain NN to act as a baseline and their justification for this was that if you used any of the other published surrogate models to predict a target property you would end up with a NN. This is not a good faith comparison at all. Besides that, they also only ran very tiny experiments which is not where the need for surrogate models lies. And even the layout of the paper is very chaotic.
Unrelated question: Is Nature considered a low quality journal for ML papers?
> Unrelated question: Is Nature considered a low quality journal for ML papers?
It isn’t published in Nature, it is published in Nature Machine Intelligence.
Nature has sprouted all these offshoots which are less prestigious than the original (although trying to benefit from its good name). Their prestige/etc varies depending on the specific offshoot in question. But often they aren’t one of the top journals in that specific field
Some or most of the talent doesn't have an affiliation or funding for journal publication fees.
Affiliation with a university or research institution was necessary for how many of the top journal articles in a given year?
Otherwise, people publish docs, whitepapers, open specs, and code with tests; and it's great regardless of journal.
JOSS publishes their cost structure. Figshare and Zenodo grant DOIs for git revisions. ORCID is optional and precedes W3C DIDs.
How I'm able to take notes in mathematics lectures using LaTeX and Vim (2019)
Previous discussion:
"How I'm able to take notes in mathematics lectures using LaTeX and Vim" (2019) https://news.ycombinator.com/item?id=19448678
https://news.ycombinator.com/item?id=32420925 :
> Computational Notebook speedrun competition ideas:
ProofWiki: Online compendium of mathematical proofs
From "Ask HN: Did studying proof based math topics make you a better programmer?" https://news.ycombinator.com/item?id=36463580 :
> Lean mathlib was originally a type checker proof assistant, but now leanprover-community is implementing like all math as proofs in Lean in the mathlib project
Lean Mathlib is composed of executable proofs written in Lean: https://leanprover-community.github.io/mathlib-overview.html
Be careful. While a proof in Lean is executable (it is a script, so to speak) it is conceptually a sequence of references to tactics. Writing a Lean proof does involve a highly specialised form of functional programming, but I wouldn't be at all sure that becoming an expert in Lean would improve your programming skills across the board.
Your comment also reminds me of the people who claim that the Curry-Howard isomorphism means "programming is math". It's not a claim that anyone should really be making in good faith. There's a lot more to programming than the lambda calculus.
Verbal skills are apparently more predictive of programming career success than math scores.
Is Quantum Logic the correct propositional logic? Is Quantum Logic a sufficient logic for all things?
I'd much rather work with machine-checkable proofs; though Lean is not what I've been taught math in either.
Coq-HoTT is written in Coq, not Lean.
A tool that finds the correspondence between proofs as presented and checkable proofs in a reasonable syntax would be helpful, I think.
If I start with "Why is 2+2=4?" [in this finite ring], I'm not sure how to find the relevant Lean code in Mathlib to prove my bias inductively, deductively, or abductively
> Verbal skills are apparently more predictive of programming career success than math scores.
Curious about the references!
I've heard this and always thought it was probably true even setting aside the soft skills part of the job, but that was before data engineering and data scientists were common titles. If it ever was true, the situation is probably more nuanced now than it used to be
Not all Lean proofs are actually computable; see the computable-function link in the URL you gave.
Not all code functions are mathematical functions. An expression can describe a relation that is not a function per the definition like "a function always returns one output for one input".
Function (mathematics) https://en.wikipedia.org/wiki/Function_(mathematics)
The Navier-Stokes equations are PDEs, not functions; but are they computable in Lean, or can algorithms for finding solutions be expressed in Lean?
Lean docs > Missing undergraduate mathematics in mathlib: https://leanprover-community.github.io/undergrad_todo.html#:...
Is ℯ^(2ί π x) a function? It's a complex function, but Geogebra draws it as a ~ (unit circle) + (y=0 if x > 0).
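A quick numerical check of the expression above: for real x, ℯ^(2ί π x) always lands on the complex unit circle, which is why a 2-D real-valued plotter has to project it somehow:

```python
import cmath

# e^(2*pi*i*x) for real x always lies on the complex unit circle:
for x in (0.0, 0.25, 0.5, 0.75):
    z = cmath.exp(2j * cmath.pi * x)
    print(f"x={x}: z={z.real:+.3f}{z.imag:+.3f}j, |z|={abs(z):.3f}")
```

So it is a function ℝ → ℂ (exactly one output per input); it only fails to be a real-valued function of a real variable.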
What I mean is that Lean can prove things like one of Cantor's theorems, where the proof requires deriving, from properties of a function f, that there exists a function g inverse to f, and then showing something about the results of calling g.
Such a function is considered non-computable by Lean because it cannot be evaluated: g has merely been shown to exist.
Many proofs in Lean are computable, however; just not all.
I don't think SymPy has abstract Function attributes to reason about, but it does have limited reasoning about number classes given e.g. `x = Symbol('x', real=True)`
SymPy docs > Writing Custom Functions > Easy Cases: Fully Symbolic or Fully Evaluated: https://docs.sympy.org/latest/guides/custom-functions.html#w...
If there is some sort of e.g. geometric correspondence, it could be possible for a Church-Turing classical computer to compute quantum functions (that return wave functions) that a Church-Turing-Deutsch quantum computer can compute; but otherwise Lean can't compute most quantum circuits either.
"Stem Formulas" (2023) https://news.ycombinator.com/item?id=36817851#36839925
Leaky Vessels flaws allow hackers to escape Docker, runc containers
Frankly, if a user can run the docker command, they are basically root anyway. That's why you do not add random users to the docker group.
Just this week there was a story[0] with a subthread about how Docker requires you to mount a volume, so it is better than launching shell scripts directly. I know all software has bugs, but Docker did not prioritize security and is unfit for isolation of untrusted code.
I will continue to stick with VMs as my security boundary.
Maybe I’m in the minority here, but I use containers for ease of packaging and deployment, security isn’t really my expectation for the container as I also rely on VMs for that aspect. I have a handful of long-running VMs with docker engine installed, and that serves as my application platform.
Definitely not the minority, anyone who needs secure isolation for a container should be wrapping it in a single tenant VM
Firecracker has often been recommended (here on HN and elsewhere) for a VM-for container-execution model : https://github.com/firecracker-microvm/firecracker
There's also Cloud Hypervisor, which is very similar, and based off some of the same crosvm code and ideas.
gvisor; `runsc`: https://gvisor.dev/docs/ :
> gVisor provides a virtualized environment in order to sandbox containers. The system interfaces normally implemented by the host kernel are moved into a distinct, per-sandbox application kernel in order to minimize the risk of a container escape exploit. gVisor does not introduce large fixed overheads however, and still retains a process-like model with respect to resource utilization.
https://news.ycombinator.com/item?id=38609105
kata containers: https://github.com/kata-containers :
> Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.
Compiling Rust is testing
No, compiling is proving, not testing. This is fundamentally different. Testing can only demonstrate the correctness for the data being tested with (and for the execution paths exercised). Compiling proves correctness properties for all executions and all inputs. Compiling with a static type system is like proving a mathematical theorem. Testing is like evaluating a mathematical formula for specific inputs.
On the flip side, compiling only proves correctness for the code being compiled, and for the properties covered by the type system. For example, you won’t prove with compiling that your database code works correctly (or at all) with the targeted database product.
> Compiling proves correctness properties for all executions and all inputs
Compiling does not prove correctness for any program inputs.
Folks could list (1) compilation errors and (2) test errors/failures in JUnit XML or W3C EARL. But typically if code fails to compile, the "test suite" isn't run.
Type checking is not property testing is not fuzzing.
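The distinction can be made concrete with a hand-rolled property test over a hypothetical saturating-add function; the type checker proves the signature holds for all inputs, while the random-input loop checks a behavioral property the types cannot express:

```python
import random

def add_sat(a, b, lo=-2**31, hi=2**31 - 1):
    """Hypothetical saturating 32-bit add; a type checker can prove the
    arguments are ints, but not that the result stays within [lo, hi]."""
    return max(lo, min(hi, a + b))

# Hand-rolled property test: random inputs, general properties asserted.
random.seed(0)
for _ in range(1000):
    a = random.randint(-2**32, 2**32)
    b = random.randint(-2**32, 2**32)
    r = add_sat(a, b)
    assert -2**31 <= r <= 2**31 - 1      # result always in range
    if -2**31 <= a + b <= 2**31 - 1:
        assert r == a + b                # agrees with + when no overflow
print("1000 random cases passed")
```

Frameworks like Hypothesis automate the input generation and shrink failing cases; fuzzing goes further by mutating inputs toward crashes rather than asserting stated properties.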
> Compiling does not prove correctness for any program inputs.
It can.
For example in Scala there are libraries that allow you to define types for inputs e.g. this Int must be non-negative or this String must be greater than 5 characters.
Are those (non-Enum) precondition value constraints checked at compile-time or at runtime?
FWIW, in Python, from https://news.ycombinator.com/item?id=38579378 :
> Hypothesis generates tests from type annotations; and icontract and pycontracts do runtime type checking.
Pynguin also generates tests; but it does not use icontract-style type and value constraints as decorator preconditions in generating tests.
icontract-hypothesis: https://github.com/mristin/icontract-hypothesis
Compile time.
But obviously only for values that are determined within the code.
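In languages without refinement types, such value constraints are usually enforced at runtime instead; a minimal decorator sketch in the spirit of icontract (the `require` helper below is hypothetical, not icontract's actual API):

```python
import functools

def require(predicate, message="precondition failed"):
    """Runtime precondition check: raise if the predicate rejects the args."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(f"{fn.__name__}: {message}")
            return fn(*args, **kwargs)
        return wrapper
    return deco

@require(lambda n: n >= 0, "n must be non-negative")
def isqrt_floor(n):
    # Floor of the float square root; illustrative only.
    return int(n ** 0.5)

print(isqrt_floor(9))  # 3
```

The Scala refinement-type approach moves this check to compile time for statically known values; here every call pays the check, but arbitrary runtime inputs are covered.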
Gentoo x86-64-v3 binary packages available
- "Fedora Optimized Binaries for the AMD64 Architecture" currently says Targeted Release: Fedora 40 but rejected: https://fedoraproject.org/wiki/Changes/Optimized_Binaries_fo... https://news.ycombinator.com/item?id=38825503
The change was withdrawn to improve it for F41 [1]. There were concerns about using $PATH to find executables for v2/v3/v4, especially for containers and absolute path specifications.
Low-Power Wi-Fi Extends Signals Up to 3 Kilometers
Could this [1] be possible with "Wi-Fi HaLow, based on the IEEE 802.11ah standard", too?:
[1] "Sensor-Free Soil Moisture Sensing Using LoRa Signals" (2022) https://news.ycombinator.com/item?id=38768950
- "43 km line of sight with USB WiFi stick (2005)" https://news.ycombinator.com/item?id=30541576
- Kreosan English's modded lunar rover WiFi antenna videos: "100s of km" https://youtu.be/Nk-nj_BwoBE?si=0iwpQBFs9ZqFP0p8 ... 10x: https://youtu.be/GWq6L94ImX8?si=V2R8hpa3vAosbhvi
When you have unobstructed line of sight, a 5W handheld can reach all the way up to ISS without problem. It's very different on the ground, though, especially on high frequencies, where much smaller objects can prevent propagation.
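A rough free-space path-loss comparison illustrates the point; the band and distance choices below are illustrative assumptions:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: ideal, unobstructed line of sight."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# ISS pass at ~400 km on the 2 m amateur band vs. 2.4 GHz Wi-Fi at 3 km:
print(round(fspl_db(400e3, 145e6), 1))  # ~127.7 dB
print(round(fspl_db(3e3, 2.4e9), 1))    # ~109.6 dB
```

Despite 100x the distance, the ISS link is only ~18 dB worse than a 3 km 2.4 GHz ground link in this idealized model, and on the ground the free-space assumption itself breaks down: obstructions, ground reflections, and foliage dominate at these frequencies.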
Are helically-polarized emissions more likely to have higher S/N at range and through e.g. plasma? https://news.ycombinator.com/item?id=39119248#39132365
Physics-enhanced deep surrogates for partial differential equations
"MIT and IBM Find Clever AI Ways Around Brute-Force Math" (2024) https://spectrum.ieee.org/amp/mathematical-model-ai-26671392... :
> The scientists tested what they called physics-enhanced deep surrogate (PEDS) models on three kinds of physical systems. These included diffusion, such as a dye spreading in a liquid over time; reaction-diffusion, such as diffusion that might take place following a chemical reaction; and electromagnetic scattering.
> The researchers found these new models can be up to three times as accurate as other neural networks at tackling partial differential equations. At the same time, these models needed only about 1,000 training points. This reduces the training data required by at least a factor of 100 to achieve a target error of 5 percent.
"Physics-enhanced deep surrogates for partial differential equations" (2024) https://www.nature.com/articles/s42256-023-00761-y :
> Abstract: Many physics and engineering applications demand partial differential equations (PDE) property evaluations that are traditionally computed with resource-intensive high-fidelity numerical solvers. Data-driven surrogate models provide an efficient alternative but come with a substantial cost of training. Emerging applications would benefit from surrogates with an improved accuracy–cost tradeoff when studied at scale. Here we present a ‘physics-enhanced deep-surrogate’ (PEDS) approach towards developing fast surrogate models for complex physical systems, which is described by PDEs. Specifically, a combination of a low-fidelity, explainable physics simulator and a neural network generator is proposed, which is trained end-to-end to globally match the output of an expensive high-fidelity numerical solver. Experiments on three exemplar test cases, diffusion, reaction–diffusion and electromagnetic scattering models, show that a PEDS surrogate can be up to three times more accurate than an ensemble of feedforward neural networks with limited data (approximately 103 training points), and reduces the training data need by at least a factor of 100 to achieve a target error of 5%. Experiments reveal that PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models with corresponding brute-force numerical solvers modelling complex systems, offering accuracy, speed and data efficiency, as well as physical insights into the process.
A National Initiative to Cut Academics
> [ Carnegie Foundation for the Advancement of Teaching , and the Educational Testing Service (ETS). ]
> Way back in 1906, the former organization gave us the Carnegie Unit, or credit hour, and the latter outfit has for many decades administered a suite of standardized tests including the SAT and GRE. Together, they provide much of the infrastructure on which our education system runs — an infrastructure that they now seek to dismantle.
> In a paper explaining their intentions, they criticize today’s schools for focusing on “a limited set of cognitive skills” — math, reading, historical and scientific knowledge. In a two-part plan, they hope to first dismantle the Carnegie Unit, which measures educational attainment through seat time, and then offer instead a bouquet of “affective skills” such as creativity and collaboration, relying on projects, portfolios, or even transcripts of class discussions to measure them.
Can population increase change Earth's gravitational pull?
Earth mass: https://en.wikipedia.org/wiki/Earth_mass :
> An Earth mass (denoted as M_E or M🜨, where 🜨 is the standard astronomical symbol for Earth) is a unit of mass equal to the mass of the planet Earth. The current best estimate for the mass of Earth is M🜨 = 5.9722×10^24 kg, with a relative uncertainty of 10^−4. It is equivalent to an average density of 5515 kg/m^3.
Average mass of a human: 50-75 kg, depending on whether infants are included
Population of Earth: ~8.1×10^9
Total mass of humans: ~400-600×10^9 kg
Proportion of Earth's mass composed of living humans: 500×10^9 kg / 5.97×10^24 kg ≈ 8.4×10^-14, i.e. about 0.0000000000084 %
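The arithmetic above in a few lines (the average body mass is a rough assumption):

```python
# Back-of-the-envelope: total human biomass vs. Earth's mass.
EARTH_MASS_KG = 5.9722e24   # Wikipedia figure quoted above
population = 8.1e9
avg_body_mass_kg = 62.0     # rough global average; an assumption

total_kg = population * avg_body_mass_kg
fraction = total_kg / EARTH_MASS_KG
print(f"total ~{total_kg:.1e} kg; fraction ~{fraction:.1e} ({fraction * 100:.1e} %)")
```

And of course human mass is drawn from Earth's existing matter (food, water, air), so population growth redistributes mass rather than adding any.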
According to the video series on "How the pyramids were built" at https://thepump.org/ by The Pharaoh's Pump Society, the later pyramids had a pool of water at the topmost layer of construction, so that watertight blocks of carved stone could be placed and set using a crane barge that everyone walked to one side of to lift.
Ask HN: What do you dislike about using Image Gen AI or AI assisted tools?
Like Midjourney or Stable Diffusion?
It doesn't say how much clean energy was utilized; "carbon.txt"
- [ ] ENH: indicate genai algo energy efficiency
re: carbon.txt (TOML not YAMLLD) and ld-signatures / Blockcerts (RDF) and a third-party certifier: https://github.com/thegreenwebfoundation/carbon.txt/issues/3...
Is the [system] prompt encoded into the image, or does that need to be copy/pasted too. https://news.ycombinator.com/item?id=36648302
AI Explainability; how/why this
"The VAE Used for Stable Diffusion Is Flawed" (2024) https://news.ycombinator.com/item?id=39215242
Thanks!
A physical qubit with built-in error correction
"Logical states for fault-tolerant quantum computation with propagating light" (2024) https://dx.doi.org/10.1126/science.adk7560
"Qubits without qubits" (2024) https://dx.doi.org/10.1126/science.adm9946
A photonic quantum computer!
"Miniaturized technique to generate precise wavelengths of visible laser light" https://news.ycombinator.com/item?id=38497506
"Progress on chip-based spontaneous four-wave mixing quantum light sources" (2024) https://news.ycombinator.com/item?id=39244182
"Attoscience unveils light-matter hybrid phase in graphite like superconductivity" (2023) https://news.ycombinator.com/item?id=38650723
Perhaps this is also useful for photonic quantum computing:
"Light unbound: Data limits could vanish with new optical antennas" (2021) https://engineering.berkeley.edu/news/2021/02/light-unbound-... :
"Photonic quantum Hall effect and multiplexed light sources of large orbital angular momenta" (2021) https://www.nature.com/articles/s41567-021-01165-8 :
> Our work gives direct access to the infinite number of orbital angular momenta basis elements and will thus enable multiplexed quantum light sources for communication and imaging applications.
First fault-tolerant quantum computer launching this year
- "A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
Aerugo – RTOS for aerospace uses written in Rust
(n.b. The project looks awesome, and it’s awesome that it’s written in Rust and working well. Great job folks!)
I’ve been the managing functional safety engineer for several safety-critical real-time embedded systems and safety-related components/SEooCs (including a RTOS), and something stood out to me after reading through the repository: This is not a safety-critical operating system, despite the project’s claim.
There was clearly requirements engineering and verification work done, and the authors are under absolutely no obligation to publish the requirements and methodologies. But there’s no safety manual / integrator’s guide provided, no evidence of any qualitative or quantitative safety analysis or modelling at any abstraction level having been performed, no evidence of a safety concept, and no evidence of a safety case. I recognize that this is likely intended as a PoC from the description of it being exploratory to produce a “Lessons Learned” report, but the website for the ESA initiative that’s driving this development claims that “The design of the system will be guided to support potential future qualification activities”. If that was done, there should be evidence of some or all of those work products, because that’s how you support future qualification. You cannot retroactively sprinkle safety on top of a system that hasn’t been designed with safety in mind.
I’d love to learn more about this — can you recommend any resources for building something from the ground up with the relevant safety-critical considerations? Appreciate any pointers
Here’s a decent textbook that covers the software side in aerospace (DO-178):
https://www.amazon.ca/Developing-Safety-Critical-Software-Pr...
You can read up on the system safety side in documents like ARP4754A and ARP4761, and the hardware side in DO-254.
awesome-safety-critical > Software safety standards lists DO-178C and also DO-278: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
Global cancer cases will jump 77% by 2050, WHO report estimates
Measuring Reynolds similitude in superfluids could help prove quantum viscosity
Is there a quantum analogue of this guy that could help measure quantum curl and divergence and viscosity?:
"Perfect Resonant Absorption of Guided Water Waves by Autler-Townes Splitting" (2023) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
https://news.ycombinator.com/item?id=38369731 https://westurner.github.io/hnlog/#story-38369731
"Vanishing Act for Water Waves" (2023) https://physics.aps.org/articles/v16/196 :
> Cavities at the sides of a water channel can cause waves to be completely absorbed
> "Superfluids have long been considered an obvious exception to the Reynolds similitude," Dr. Takeuchi said, explaining that the Reynolds law of similitude states that if two flows have the same Reynolds number, then they are physically identical. "The concept of quantum viscosity overturns the common sense of superfluid theory, which has a long history of more than half a century. Establishing similitude in superfluids is an essential step to unify classical and quantum hydrodynamics."
"Quantum viscosity and the Reynolds similitude of a pure superfluid" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.109.L...
Re: superfluid quantum gravity w/ Bernoulli's, viscosity: https://news.ycombinator.com/item?id=39139053
Nano-scale inks could lighten airliners by hundreds of kilograms
Is aircraft-grade Hemp Composite lighter than fiberglass?
Aircraft-grade hemp composite appears to have a lower density than fiberglass, and greater tensile strength.
How does the greater tensile strength of [hemp-based] biocomposite wings and fuselages change the flight characteristics?
I remember seeing a YouTube video about newer ways of laying down a round fiberglass fuselage that are much more efficient than what is basically additive lathing.
Speaking of aerospace-grade HEMP biocomposites instead of paint on fiberglass,
From https://news.ycombinator.com/item?id=39064549 :
> Would there be advantages to a tensile biocomposite with Starlite and/or Firepaste e.g. as a coating or blended?
> Can Starlite be used as heat shielding for [biocomposite] civilian rockets, and how would the reentry emissions differ?
I don't always use LaTeX, but when I do, I compile to HTML (2013)
Sphinx and reStructuredText are, IMHO, underrated power houses of document building. With extensions, you can hook them up to Zotero (or whatever)-managed bibtex files. You can render to beautiful HTML files, and you get latex PDFs and epubs for free. First class latex-math support, plenty of integrations with things like mermaid, graphviz, and the ability to build super-powerful custom directives to do basically anything. And way simpler/easier than pure LaTeX.
Heck you can even integrate a full-on requirements management system in them using sphinx-needs https://sphinx-needs.readthedocs.io/en/latest/
I guess latex is still unbeatable for writing complex math expressions. These days, when I don't need that, I'm happy with AsciiDoc.
Sphinx/reStructuredText supports math in LaTeX input format [1], so you can still go nuts with complex math expressions while still benefitting from the relative simplicity.
[1] https://www.sphinx-doc.org/en/master/usage/restructuredtext/...
Looks like AsciiDoc supports similar latex math blocks [2]. Are there reasons you can't stick with that when doing math?
[2] https://docs.asciidoctor.org/asciidoc/latest/stem/#block
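For comparison, minimal sketches of the two syntaxes (the equation is just a placeholder). The Sphinx/reStructuredText math directive:

```rst
.. math::

   \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
```

and the AsciiDoc equivalent, using a latexmath block:

```asciidoc
[latexmath]
++++
\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
++++
```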
Sphinx supports ReStructuredText and Markdown.
MyST-Markdown supports MathJaX and Sphinx roles and directives. https://myst-parser.readthedocs.io/en/latest/
jupyter-book supports ReStructuredText, Jupyter Notebooks, and MyST-Markdown documents:
You can build Sphinx and Jupyter-Book projects with the ReadTheDocs container, which already has LaTeX installed: https://github.com/executablebooks/jupyter-book/issues/991
myst-templates/plain_latex_book: https://github.com/myst-templates/plain_latex_book/blob/main...
GitHub supports AsciiDoc in repos and maybe also wikis?
Is there a way to execute code in code blocks in AsciiDoc, and include the output?
latex2sympy requires ANTLR.
For example: writing complicated expressions involving calculus/matrices. That's not something I need every day, though.
I have documented at least 10 x 10 matrices with rst math directives and found it to be pretty convenient. I don't understand what the benefit of pure latex is in this context.
pandas.DataFrame().to_latex() [1] and tabulate [2] support latex table output.
[1] https://pandas.pydata.org/docs/reference/api/pandas.DataFram...
[2] https://github.com/astanin/python-tabulate/blob/master/tabul...
House restores immediate R&D deduction in new tax bill
https://news.ycombinator.com/context?id=38988189 :
>> "Since amortization took effect [ in 2022 thanks to a time-triggered portion of the Trump-era Tax Cuts and Jobs Act ("TCJA" 2017) ], the growth rate of R&D spending has slowed dramatically from 6.6 percent on average over the previous five years to less than one-half of 1 percent over the last 12 months," Estes said. "The [R&D] sector is down by more than 14,000 jobs"
Hopefully R&D spending at an average of 6.6% will again translate to real growth.
Gitlab's ActivityPub architecture blueprint
This is fantastic. I stick my personal projects on GitLab and Codeberg - but if I want people to contribute, I use GitHub; that's where the people are.
GitHub has a number of weird repos from popular projects which are just copies of other repos. You can look, but you can't touch. (WordPress and Linux are the big ones).
It would be brilliant to look at a project in my preferred UI and raise issues / PRs without having to sign up to yet another service.
Very excited to see how this pans out.
It would be awesome to see Git becoming decentralized again but what's the likelihood of GitHub implementing this?
- "Graph of Keybase commits pre and post Zoom acquisition" (2021) https://news.ycombinator.com/item?id=28814802 :
- "Key server (cryptographic)" https://en.wikipedia.org/wiki/Key_server_(cryptographic)
- W3C DID Decentralized Identifiers (that you can optionally locally generate like pubkey hash account identifiers)
- "Linked Data Signatures for GPG" https://gpg.jsld.org/ ; GPG in (JSON-LD) RDF
- ld-signatures is now W3C vc-data-integrity: "Verifiable Credential Data Integrity 1.0 Securing the Integrity of Verifiable Credential Data" https://www.w3.org/TR/vc-data-integrity/
- An example of GPG signatures on linked data documents: https://gpg.jsld.org/contexts/#GpgSignature2020
- vc-data-integrity specifies how to normalize the document (roughly, by sorting keys in the JSON) before cryptographically signing the transformed, isomorphic graph
- SLSA.dev also specifies signed provenance metadata (optionally with sigstore.dev for centralized release artifact hashes), but not (yet?) with Linked Data
- Blockcerts: blockchain-certificates/cert-verifier-js , https://www.blockcerts.org/guide/ :
> Blockcerts is an open standard for building apps that issue and verify blockchain-based official records. These may include certificates for civic records, academic credentials, professional licenses, workforce development, and more.
> Blockcerts consists of open-source libraries, tools, and mobile apps enabling a decentralized, standards-based, recipient-centric ecosystem, enabling trustless verification through blockchain technologies.
> Blockcerts uses and encourages consolidation on open standards. Blockcerts is committed to self-sovereign identity of all participants, and enabling recipient control of their claims through easy-to-use tools such as the certificate wallet (mobile app). Blockcerts is also committed to availability of credentials, without single points of failure.
- [ ] SCH: link a git commit graph (with GPG signatures) with other linked data of an open source software project; for example (SLSA,) build logs and JSON-LD SBOMs.
- >> Is there an ACME-like thing to verify online identity control like Keybase still does?
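The canonicalize-then-sign idea above can be sketched with the stdlib. This is only an illustration: vc-data-integrity suites use RDF dataset canonicalization rather than plain JSON key sorting, and the HMAC here stands in for a real GPG / Ed25519 signature:

```python
# Canonicalize-then-sign sketch: serialize with sorted keys so the same
# logical document always hashes identically, then sign the digest.
import hashlib
import hmac
import json

def canonical_digest(doc: dict) -> bytes:
    # Sorted keys + fixed separators give a deterministic byte string.
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).digest()

def sign(doc: dict, key: bytes) -> str:
    # HMAC-SHA256 as a placeholder for an asymmetric signature.
    return hmac.new(key, canonical_digest(doc), hashlib.sha256).hexdigest()

a = {"b": 1, "a": 2}
b = {"a": 2, "b": 1}  # same document, different key order
assert canonical_digest(a) == canonical_digest(b)
assert sign(a, b"secret") == sign(b, b"secret")
```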
Scientists create virucidal silicon surface without any chemicals
"Piercing of the Human Parainfluenza Virus by Nanostructured Surfaces" (2024) https://pubs.acs.org/doi/10.1021/acsnano.3c07099 ;
> ABSTRACT: This paper presents a comprehensive experimental and theoretical investigation into the antiviral properties of nanostructured surfaces and explains the underlying virucidal mechanism. We used reactive ion etching to fabricate silicon (Si) surfaces featuring an array of sharp nanospikes with an approximate tip diameter of 2 nm and a height of 290 nm. The nanospike surfaces exhibited a 1.5 log reduction in infectivity of human parainfluenza virus type 3 (hPIV-3) after 6 h, a substantially enhanced efficiency, compared to that of smooth Si. Theoretical modeling of the virus–nanospike interactions determined the virucidal action of the nanostructured substrata to be associated with the ability of the sharp nanofeatures to effectively penetrate the viral envelope, resulting in the loss of viral infectivity. Our research highlights the significance of the potential application of nanostructured surfaces in combating the spread of viruses and bacteria. Notably, our study provides valuable insights into the design and optimization of antiviral surfaces with a particular emphasis on the crucial role played by sharp nanofeatures in maximizing their effectiveness.
> Using this technology in [space] environments in which there is potentially dangerous biological material would make laboratories easier to control and safer for the professionals who work there.
> Spike the viruses to kill them. This seemingly unsophisticated concept requires considerable technical expertise and has one great advantage: a high virucidal potential that does not require the use of chemicals. The process of making the virucidal surfaces starts with a smooth metal plate, which is bombarded with ions to strategically remove material.
> The result is a surface full of needles that are 2 nanometers thick—30,000 would fit in a hair—and 290 high.
> [...] The research has revealed how these processes work and that they are 96% effective
"Lab Testing Reveals EnviroTextile’s Hemp Fabric Stops the Spread of Staph [and Pneumonia] Bacteria" (2021) https://www.envirotextiles.com/lab-testing-reveals-envirotex... :
> Staph is spread by direct contact and by touching items that are contaminated such as towels, sheets, privacy curtains, and clothing. As noted by the San Francisco Chronicle, “It is estimated that each year 2 million Americans become infected during hospital stays, and at least 90,000 of them die. MRSA (an antibiotic resistant strain of staph) is a leading cause of hospital-borne infections.” One of the most important recent discoveries is hemp’s ability to kill surface bacteria, while cotton, polyester, and polyethylene allow it to remain on their surfaces for up to months at a time. [...]
> Hemp fabric was tested against two bacteria strains, Staphylococcus Aureus (staph) and Klebsiella Pneumoniae (pneumonia). The fabric tested was a hemp blend, 60% hemp and 40% rayon. The staph test sample was already 98.5% bacteria free during the first measurement of the testing, while the pneumonia fabric sample was 65.1% bacteria free. These results, even prior to the tests completion, clearly display the fabrics unique capability at killing bacteria and reducing their spread. This is especially imperative for healthcare facilities.
> For infrared testing, the same hemp blend was analyzed resulting in a test result of 0.893, or nearly 90% resistant. Different blended fabrics have the potential to increase the percentage of this initial test, especially fabrics with a higher percentage of hemp. Many of hemp’s applications will benefit our military, and EnviroTextile’s hemp fabrics have recently been approved by the USDA as Federally Preferred for Procurement under their BioPreferred Program.
Incandescent Temporal Metamaterials
> thermal emission effects in time-modulated media
"Incandescent Temporal Metamaterials" (2023) https://www.nature.com/articles/s41467-023-40281-2 :
> Abstract: Regarded as a promising alternative to spatially shaping matter, time-varying media can be seized to control and manipulate wave phenomena, including thermal radiation. Here, based upon the framework of macroscopic quantum electrodynamics, we elaborate a comprehensive quantum theoretical formulation that lies the basis for investigating thermal emission effects in time-modulated media. Our theory unveils unique physical features brought about by time-varying media: nontrivial correlations between fluctuating electromagnetic currents at different frequencies and positions, thermal radiation overcoming the black-body spectrum, and quantum vacuum amplification effects at finite temperature. We illustrate how these features lead to striking phenomena and innovative thermal emitters, specifically, showing that the time-modulation releases strong field fluctuations confined within epsilon-near-zero (ENZ) bodies, and that, in turn, it enables a narrowband (partially coherent) emission spanning the whole range of wavevectors, from near to far-field
Temporal chirp, temporal lensing and temporal routing via space-time interfaces
"Double-slit time diffraction at optical frequencies" (2023) https://www.nature.com/articles/s41567-023-01993-w https://news.ycombinator.com/item?id=35433982 https://scholar.google.com/scholar?cites=2693318711586698434... :
"Incandescent Temporal Metamaterials" https://news.ycombinator.com/item?id=39162302
"APL special topic: Time modulated metamaterials" (2023) https://pubs.aip.org/aip/apl/article/123/16/160401/2916954 :
> Recently, there has been a growing interest in time modulated metamaterials. [1] These materials exhibit dynamic changes in their properties over time, representing a novel approach to manipulating waves and processing information. This approach goes beyond the constraints imposed by energy conservation and time-reversal symmetry. Time-switched and time-varying metamaterials, as they are also called, enable us to access the momentum and frequency of the waves, leading to a wide array of unique properties, such as time-refraction, [2] time-reversal, [3] nonreciprocal behavior,[4] wave synthesis and frequency conversion, [5,6] and amplification. [7] Time modulated metamaterials have potential for application in communication, security, sensing, and computation just to mention a few, across various wave domains from electromagnetism to acoustics and mechanics.
New Room Temp Superconductor Throws Hat in the Ring – This Time, It's Graphite
Graphite soaked in water may be a room temperature superconductor (2012) https://news.ycombinator.com/item?id=4538692 :
"Can Doping Graphite Trigger Room Temperature Superconductivity? Evidence for Granular High-Temperature Superconductivity in Water-Treated Graphite Powder" (2012) https://onlinelibrary.wiley.com/doi/10.1002/adma.201202219 https://scholar.google.com/scholar?cites=1480566877921340444...
This is the paper referenced in the article: https://onlinelibrary.wiley.com/doi/10.1002/qute.202300230
"Global Room-Temperature Superconductivity in Graphite" (2023) https://onlinelibrary.wiley.com/doi/10.1002/qute.202300230 :
> ... [Pyrolytic graphene] ... 4.5 K ≤ T ≤ 300 , T > 300 K ... A theory of global superconductivity emerging in the array of linear structural defects is developed which well describes the experimental findings and demonstrate that global superconductivity arises as a global phase coherence of superconducting granules in linear defects promoted by the stabilizing effect of underlying Bernal graphite via tunneling coupling to the three dimensional (3D) material.
What is the difference between Bernal (1961) graphite and N-layer graphene?
Electronic properties of graphene: https://en.wikipedia.org/wiki/Electronic_properties_of_graph...
Graphene (2004) : https://en.wikipedia.org/wiki/Graphene
Bilayer graphene (2004) : https://en.wikipedia.org/wiki/Bilayer_graphene
Graphene nanoribbon > Applications: https://en.wikipedia.org/wiki/Graphene_nanoribbon #Applications
Sxmo: Linux tiling window manager for phones
Wait.... Can this run on a normal intel/amd server and then be used in an xrdp/nx session? I need social compatibility and a good camera. I already tried i3 but it is difficult to use without a physical keyboard.
Could there be motion gestures for Sway on mobile?
- pinephone-sway-poc: https://github.com/Dejvino/pinephone-sway-poc#components
(Edit)
rpm-ostree now supports using OCI container images as host images; so upgrading the OS is easy and more error-proof, because you can roll back to the previous (kernel + /etc overlay + root partition + packages) by selecting a different boot menu entry.
"Using OSTree Native Containers as Node Base Images" (2023) https://www.opensourcerers.org/2023/06/16/using-ostree-nativ...
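As a sketch of that workflow (the image reference is just an example; substitute your own):

```shell
# Rebase the host onto an OCI container image:
sudo rpm-ostree rebase ostree-unverified-registry:quay.io/fedora/fedora-coreos:stable

# If the new deployment misbehaves, boot the previous entry from the
# boot menu, or roll back explicitly:
sudo rpm-ostree rollback
```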
China Added More Solar Panels in 2023 Than US Did in Its Entire History
Have PV tariffs increased or decreased US solar investment and sales?
Have the Pennsylvanians' solar tariffs helped or hurt American industry and trade relations? Are they still able to compete at internationally competitive price points given the presumed tariff advantage?
https://en.wikipedia.org/wiki/Trump_tariffs#Solar_panels
Noting that, for example, US Steel is soon to no longer be domestically owned; trade protectionism didn't solve that either.
https://en.wikipedia.org/wiki/U.S._Steel
Would more tariffs have helped keep US Steel in the US?
Lightweight woven helical antenna could replace field-deployed dishes
Astrophysical jets produce helically and circularly-polarized emissions, too FWIU.
Presumably helical jets reach earth coherently over such distances because of the stability of helical signals.
1. Could a space agency harvest energy from a (helically and/or circularly-polarised) natural jet, for deep space and/or local system exploration? Can a spacecraft pull against a jet for relativistic motion?
2. Is helical the best way to beam power wirelessly; without heating columns of atmospheric water in the collapsing jet stream?
3. Is there a (hydrodynamic) theory of superfluid quantum gravity that better describes the apparent vorticity and curl of such signals and their effects?
> Presumably helical jets reach earth coherently over such distances because of the stability of helical signals.
I don't think this is correct.
1. Sure, but I doubt they're energetic enough to power a spacecraft. I don't think you can "pull" against radiation.
2. Not really. Atmospheric gasses are going to be aligned too randomly for polarization to matter much. Circular polarization can be trivially decomposed into linear polarization (with a 90° phase offset) so it can still interact as such.
3. Above my paygrade, but "superfluid quantum gravity" sounds like it's likely to be firmly in the theoretical realm of physics. Maybe superfluid vacuum theory is what you have in mind?
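The decomposition in point 2 can be written out: the circular unit vectors are equal-weight superpositions of the two linear ones, with the factor i supplying the 90° phase offset,

```latex
\hat{e}_{\pm} = \frac{1}{\sqrt{2}}\left(\hat{x} \pm i\,\hat{y}\right),
\qquad i = e^{i\pi/2}
```

so anything that interacts with linear polarizations interacts with each component of a circularly polarized wave as well.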
1.
/? energy harvesting (and PV and TPV)
/? inverse tractor beam https://www.google.com/search?q=inverse+tractor+beam
2. Perhaps helical is advantageous over distance (and through plasma). Given the efficiency of a power transmission channel, what is the estimated loss due to scattering / heating of intermediate mass, and over what volume of atmosphere?
3. Mass warps spacetime (in Lorentzian GR). Plasma have nonuniform mass. Plasma are apparently fluidic. Fluidic nonuniform plasma mass warps spacetime probably fluidically.
There are various conjectures about quantum n-body gravity, which GR does not solve for. Recently, many additional solutions to [perpetual] non-quantum n-body gravity were posted. https://news.ycombinator.com/item?id=37960035
To model a path of [a quasar jet] through such turbulence as empty space, curl and viscosity very probably matter.
Is empty space empty? The CMB (Cosmic Microwave Background) shows a nonuniform distribution of mass/energy in the observable universe. Following Dirac (before the "Dirac sea" and e.g. Gödel's Dust solutions), many have attempted to estimate where dark matter must be; though there are no confirmed detections of dark matter, which is hypothesized to explain discrepancies in predicted gravitational force distributions and in expansion constants like the Hubble constant.
A sufficient Superfluid Quantum Relativity must predict the behavior of particles in superfluids like Bose-Einstein condensates and superconductors, and SHOULD or MUST also predict n-body gravity.
Electrons appear to behave fluidically in superconductors.
Photons / Polaritons appear to behave fluidically in superfluids; "liquid light"
"Room-temperature superfluidity in a polariton condensate" (2017) https://www.nature.com/articles/nphys4147
https://news.ycombinator.com/item?id=38871054
https://news.ycombinator.com/item?id=38785433
https://news.ycombinator.com/item?id=38500760 :
>>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/ :
>>> [...] Vorticity is interpreted as spin (a particle's internal motion). Due to non-zero, positive viscosity of the SQS, and to Bernoulli pressure, these vortices attract the surrounding quanta, pressure decreases and the consequent incoming flow of quanta lets arise a gravitational potential. This is called superfluid quantum gravity
And in this superfluid quantum gravity, there is no dark matter. It's more like a "Dirac sea" of quantum foam with pressure, FWIU; and does this correspond to black hole topologies of particle effects?
/? collinearity in a helical beam: https://www.google.com/search?q=collinearity+in+a+helical+be...
"Collinear superposition of multiple helical beams generated by a single azimuthally modulated phase-only element" (2005) https://opg.optica.org/ol/abstract.cfm?uri=ol-30-24-3266 https://scholar.google.com/scholar?cites=4433448650775789641...
OAM: Orbital Angular Momentum of Light: https://en.wikipedia.org/wiki/Orbital_angular_momentum_of_li...
"Universal orbital angular momentum spectrum analyzer for beams" (2020) https://link.springer.com/article/10.1186/s43074-020-00019-5 https://scholar.google.com/scholar?cites=7156827987133110450... :
"Approaching the Fundamental Limit of Orbital Angular Momentum Multiplexing Through a Hologram Metasurface" (2022) https://arxiv.org/abs/2106.15120
> This limit considers only one type of polarization, and it should be doubled if dual polarizations are adopted. If more plane-wave modes beyond this limit are added, they will not be distinguishable or angularly resolved. Evanescent modes are required to support the expanded angular spectrum, which, however, are not suitable for far-field communication.
"Phase Singularities to Polarization Singularities" (2020) https://www.hindawi.com/journals/ijo/2020/2812803/ :
> The phase gradient in a phase singularity and azimuth gradient in a polarization singularity circulate around the respective singularities.
"Engineering phase and polarization singularity sheets" (2021) https://www.nature.com/articles/s41467-021-24493-y :
> Optical phase singularities are zeros of a scalar light field. The most systematically studied class of singular fields is vortices: beams with helical wavefronts and a linear (1D) singularity along the optical axis. Beyond these common and stable 1D topologies, we show that a broader family of zero-dimensional (point) and two-dimensional (sheet) singularities can be engineered. We realize sheet singularities by maximizing the field phase gradient at the desired positions. These sheets, owning to their precise alignment requirements, would otherwise only be observed in rare scenarios with high symmetry. Furthermore, by applying an analogous procedure to the full vectorial electric field, we can engineer paraxial transverse polarization singularity sheets. As validation, we experimentally realize phase and polarization singularity sheets with heart-shaped cross-sections using metasurfaces. Singularity engineering of the dark enables new degrees of freedom for light-matter interaction and can inspire similar field topologies beyond optics, from electron beams to acoustics.
Spreadsheet errors can have disastrous consequences – yet we keep making them
What are some software development methods for reducing errors:
1. AUTOMATED TESTS; test assertions
To write spreadsheet tests:
A. Write your own test assertion library for the spreadsheet's macro language; e.g. write assertEqual() in VBScript or Apps Script.
B. Use another language with a test library and a test runner; e.g. Python and the `assert` keyword, unittest.TestCase().assertEqual() or pytest.
C. Test the spreadsheet GUI with something like AutoHotKey.
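As a minimal sketch of option B (the sheet layout and the gross_margin() formula are hypothetical), spreadsheet logic can be re-implemented in Python and covered with plain `assert` statements:

```python
# Re-implement a spreadsheet formula in Python so it can be tested.
# gross_margin() is a made-up equivalent of a cell like =(B2-B3)/B2.

def gross_margin(revenue, cogs):
    if revenue == 0:
        raise ValueError("revenue must be nonzero")
    return (revenue - cogs) / revenue

# Plain assert statements act as test assertions; pytest collects
# functions named test_* automatically.
def test_gross_margin():
    assert gross_margin(100.0, 60.0) == 0.4
    assert abs(gross_margin(3.0, 1.0) - 2.0 / 3.0) < 1e-12

test_gross_margin()
```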
From https://news.ycombinator.com/item?id=35896192 :
> The Scientific Method is testing, so testing (tests, assertions, fixtures) should be core to any scientific workflow system.
> awesome-jupyter#testing: https://github.com/markusschanta/awesome-jupyter#testing
> ml-tooling/best-of-jupyter lists papermill/papermill under "Interactive Widgets/Visualization" https://github.com/ml-tooling/best-of-jupyter#interactive-wi...
Pandas docs > Comparison with spreadsheets: https://pandas.pydata.org/docs/getting_started/comparison/co...
Pandas docs > I/O > Excel files: https://pandas.pydata.org/docs/user_guide/io.html#excel-file...
nteract/papermill: https://github.com/nteract/papermill :
> papermill is a tool for parameterizing, executing, and analyzing Jupyter Notebooks. [...]
> This opens up new opportunities for how notebooks can be used. For example:
> - Perhaps you have a financial report that you wish to run with different values on the first or last day of a month or at the beginning or end of the year, using parameters makes this task easier.
"The World Excel Championship is being broadcast on ESPN" (2022) https://news.ycombinator.com/item?id=32420925 :
> Computational notebook speedrun ideas:
Show HN: Codemodder – A new codemod library for Java and Python
Hi HN, I’m here to show you a new codemod library. In case you’re not familiar with the term "codemod", here’s how it was originally defined AFAICT:
> Codemod is a tool/library to assist you with large-scale codebase refactors
Codemods are awesome, but I felt they were far from their potential, and so I’m very proud to show you all an early version of a codemod library we’ve built called Codemodder (https://codemodder.io) that we think moves the "field" forward. Codemodder supports both Python and Java (https://github.com/pixee/codemodder-python and https://github.com/pixee/codemodder-java). The license is AGPL, please don’t kill me.
Primarily, what makes Codemodder different is our design philosophy. Instead of trying to write a new library for both finding code and changing code, which is what traditional codemod libraries do, we aim to provide an easy-to-use orchestration library that helps connect idiomatic tools for querying source code and idiomatic tools for mutating source code.
So, if you love your current linter, Semgrep, Sonar, or PMD, CodeQL or whatever for querying source code – use them! If you love JavaParser or libCST for changing source code – use them! We’ll provide you with all the glue and make building, testing, packaging and orchestrating them easy.
Here are the problems with existing codemod libraries as they exist today, and how Codemodder solves them.
1. They’re not expressive enough. They tend to offer barebones APIs for querying code. There’s simply no way for these libraries to compete with purpose-built static analysis tools for querying code, so we should use them instead.
2. They produce changes without any context. Understanding why a code change is made is important. If the change was obvious to the developer receiving the code change, they probably wouldn’t have made the mistake in the first place! Storytelling is everything, and so we guide you towards making changes that are more likely to be merged.
3. They don’t handle injecting dependencies well. I have to say we’re not great at this yet either, but we have some of the basics and will invest more.
4. Most apps involve multiple languages, but all of today’s codemod libraries are for one language, so they are hard to orchestrate for a single project. We’ve put a lot of work into making sure these libraries are aligned with open source API contracts and formats (https://github.com/pixee/codemodder-specs) so they can be orchestrated similarly by downstream automation.
The idea is "don’t write another PR comment saying the same thing, write a codemod to just make the change automatically for you every time". We hope you like it, and are excited to get any feedback you might have!
How does libCST compare to e.g. pyCQA/redbaron? What about for EA Evolutionary Algorithms; does it preserve comments, or update docstrings and type annotations in mutating the code under test?
Is it necessary to run `black` (and `pre-commit run --all-files`) to format the code after mutating it?
Instagram/LibCST: https://github.com/Instagram/LibCST
PyCQA/redbaron: https://github.com/PyCQA/redbaron
E.g. PyCQA/bandit does static analysis for security issues in Python code: https://github.com/PyCQA/bandit
https://news.ycombinator.com/item?id=38677294
https://news.ycombinator.com/item?id=24511280 ... https://analysis-tools.dev/tools?languages=python
Hi! Great questions. I'm the lead maintainer of the Python version of the Codemodder framework so I'll do my best to answer.
> How does libCST compare to e.g. pyCQA/redbaron?
LibCST is similar to redbaron in the sense that it does preserve comments and whitespace. The "CST" in LibCST refers to "concrete syntax tree", which preserves comments and whitespace, as opposed to an "abstract syntax tree" or "AST", which does not. Our goal is to make the absolute minimal changes required to harden and improve code, and messing with whitespace would be counter to that goal. It's worth noting that redbaron no longer appears to be maintained and the most recent version of Python that it supported was 3.7 which is now itself EOL.
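For contrast, here is what a toy codemod looks like with the stdlib `ast` module (not Codemodder's API — just an illustration): `ast.unparse` discards comments and formatting, which is exactly why a CST like LibCST is preferred for making minimal diffs.

```python
# Toy codemod with the stdlib ast module: rewrite eval(...) calls to
# ast.literal_eval(...). Comments and whitespace are lost in the round trip.
import ast

class EvalToLiteralEval(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "eval":
            # Replace the callee with ast.literal_eval
            node.func = ast.Attribute(
                value=ast.Name(id="ast", ctx=ast.Load()),
                attr="literal_eval",
                ctx=ast.Load(),
            )
        return node

src = "x = eval(user_input)  # this comment will be lost\n"
tree = EvalToLiteralEval().visit(ast.parse(src))
new_src = ast.unparse(ast.fix_missing_locations(tree))
print(new_src)  # x = ast.literal_eval(user_input)
```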
> What about for EA Evolutionary Algorithms
Can you elaborate? I am familiar with the concept of evolutionary algorithms but I'm not sure I understand what you mean in this context.
> does it preserve comments, or update docstrings and type annotations in mutating the code under test?
Codemodder does preserve comments. Currently none of our codemods update docstrings; I'm not sure we currently have any cases where that would make sense. We do make an effort to update type annotations where appropriate.
> Is it necessary to run `black` (and `pre-commit run --all-files`) to format the code after mutating it?
Yes, if your project uses `black` (and `pre-commit`), you currently need to run them yourself after the codemods run. While `black` is incredibly popular, we can't assume it's being used on any given project, and running `black` ourselves would completely reformat each updated file, which would lead to very noisy and difficult-to-review changes. I would like to explore better solutions to this issue going forward.
I am familiar with `bandit`. It's a fairly simple security linter and is useful for finding some common issues. It's also pretty prone to false positives and noisy findings. Not every problem identified by `bandit` is something that can be automatically fixed; for example I can't replace a hard-coded password without making a lot of (breaking) assumptions about the structure of your application and the manner in which it is deployed.
I'd love to get your feedback on Python Codemods! Give us a star on GitHub and feel free to open an issue or PR: https://github.com/pixee/codemodder-python
Thanks for your reply!
I think they called it an FST, "Full Syntax Tree", which is probably very similar to a CST, "Concrete Syntax Tree". At the time moses was written, Python's internal AST didn't support enough code mutation for moses' designs.
MOSES: Meta-Optimizing Semantic Evolutionary Search :
https://wiki.opencog.org/w/Meta-Optimizing_Semantic_Evolutio... :
> All program evolution algorithms tend to produce bloated, convoluted, redundant programs ("spaghetti code"). To avoid this, MOSES performs reduction at each stage, to bring the program into normal form. The specific normalization used is based on Holman's "elegant normal form", which mixes alternate layers of linear and non-linear operators. The resulting form is far more compact than, say, for example, boolean disjunctive normal form. Normalization eliminates redundant terms, and tends to make the resulting code both more human-readable, and faster to execute.
> The above two techniques, optimization and normalization, allow MOSES to outperform standard genetic programming systems.
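The effect of reduction is easy to illustrate with a toy truth-table check (illustrative only; this is not MOSES's elegant normal form, just the underlying idea that a reduced form must be semantically identical to the bloated one):

```python
from itertools import product

def truth_table(fn, arity):
    """Tabulate a boolean function over every input assignment."""
    return [fn(*args) for args in product([False, True], repeat=arity)]

# A bloated disjunctive form that program evolution might produce...
bloated = lambda a, b: (a and b) or (a and not b)
# ...and its reduced equivalent:
reduced = lambda a, b: a

# Reduction is only valid if the semantics are unchanged:
assert truth_table(bloated, 2) == truth_table(reduced, 2)
print(truth_table(reduced, 2))  # [False, False, True, True]
```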
opencog/asmoses: https://github.com/opencog/asmoses
MOSES outputs Combo (a LISP), Python as an output transform IIUC, and now Atomese with asmoses, which links to a demo notebook: https://robert-haas.github.io/mevis-docs/code/examples/moses...
Evolutionary algorithm > Convergence: https://en.wikipedia.org/wiki/Evolutionary_algorithm#Converg...
/? mujoco learning to walk [with evolutionary selection / RL Reinforcement Learning] https://www.google.com/search?q=mujoco+learning+to+walk&tbm=...
...
Semgrep: https://en.wikipedia.org/wiki/Semgrep links to OWASP Source Code Analysis Tools: https://owasp.org/www-community/Source_Code_Analysis_Tools
But what's static or dynamic source code analysis without Formal Verification?
"Nagini: A Static Verifier for Python": https://pm.inf.ethz.ch/publications/EilersMueller18.pdf https://github.com/marcoeilers/nagini :
> However, there is currently virtually no tool support for reasoning about Python programs beyond type safety.
> We present Nagini, a sound verifier for statically-typed, concurrent Python programs. Nagini can prove memory safety, data race freedom, and user-supplied assertions. Nagini performs modular verification, which is important for verification to scale and to be able to verify libraries, and automates the verification process for programs annotated with specifications.
Deal > Formal verification > Background; Hoare logic, DbC Design by Contract, Dafny, Z3: https://deal.readthedocs.io/basic/verification.html#backgrou... :
> 2021. deal-solver. We released a tool that converts Python code (including deal contracts) into Z3 theorems that can be formally verified.
Lies, Damn Lies and Analog Inputs (Comparing ADCs on ESP32, Pico and Arduino)
If these were better, those might be quantum computers.
Except, you'd need a phononic representation of the signal pre/post ADC and/or better voltage quantization.
Is this level of component variance the reason we can't do quantum computers with electron charge?
Qubit > Physical implementations does list Electron spin and also Electron charge, which ADCs quantize: https://en.wikipedia.org/wiki/Qubit#Physical_implementations
Compared to a $3 ESP32 and a $6 Pico without an RTC Real Time Clock,
HiFiBerry DAC ADC Pro has a "Dedicated 192kHz/24bit high-quality Burr-Brown ADC" https://www.hifiberry.com/docs/data-sheets/datasheet-dac-adc...
ADC: Analog-to-digital converter: https://en.wikipedia.org/wiki/Analog-to-digital_converter :
> In electronics, an analog-to-digital converter (ADC, A/D, or A-to-D) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement such as an electronic device that converts an analog input voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities.
Integral linearity: https://en.wikipedia.org/wiki/Integral_linearity
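As a sketch of what quantization means here: an ideal N-bit ADC maps an input voltage onto one of 2^N codes along a straight line, and integral nonlinearity measures how far a real part deviates from that line (illustrative Python; the 3.3 V reference and 12-bit width match the ESP32's ADC, but this is not any vendor's actual transfer function):

```python
def adc_code(voltage, v_ref=3.3, bits=12):
    """Ideal ADC transfer function: map [0, v_ref) onto an
    unsigned `bits`-bit code. Real converters deviate from this
    straight line (integral nonlinearity), which is the
    component variance the article measures."""
    levels = 1 << bits                      # 4096 codes for 12 bits
    code = int(voltage / v_ref * levels)
    return max(0, min(levels - 1, code))    # clamp out-of-range inputs

print(adc_code(0.0))    # 0
print(adc_code(1.65))   # 2048 (mid-scale)
print(adc_code(3.3))    # 4095 (clamped full-scale)
```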
Carbon footprint of homegrown food five times greater than conventional
Not at all surprising, having participated in both.
There are massive economies of scale at play between a 1000 acre farm and a backyard garden plot.
As a general rule of thumb, if it costs more to grow it at home, it probably has a bigger footprint. The embodied energy and carbon footprint for a few garden tools is more than enough to counteract the entire savings.
Not to say it is always the case. Home crops that grow in native soil with minimal support are great. I recommend tree kale like a madman to just about everyone with a garden.
https://gaertnerbuch.luberaedibles.com/en/tree-kale-8211-the....
Yeah I really enjoy gardening but I don't actually grow anything calorie-dense in my garden. Potatoes are somewhat difficult and there's no way I could beat the store's efficiency on those.
More expensive and fragile fruits and vegetables seem efficient though. I think my efficiency on heirloom tomatoes and strawberries beats the grocery store, again going by unit cost.
FWICS on YouTube, potatoes can be grown in a jute bag or a 5-gallon bucket with a hole cut in the side for harvesting, with tomatoes on top.
Hopefully we discover additional methods of efficient home cultivation.
Soil depletion in industrial farming is probably more work to remedy because there's not enough residue mulch (~compost) for a field but there is enough for a garden.
How does mandatory composting of food waste change the relative efficiency (indeed probably contrived to affect real estate markets) of no-till farming and no-dig gardening?
revng translates (i386, x86-64, MIPS, ARM, AArch64, s390x) binaries to LLVM IR
Usually such things are called lifters. Wonder how this tool compares to other existing LLVM IR lifters, such as remill[0] and rellume[1].
Hey, rev.ng core dev here.
I guess the core difference is that we do not implement the instruction semantics ourselves, but reuse QEMU's.
QEMU, when in emulation mode (i.e., not KVM), goes from executable code to an intermediate representation (tiny code instructions) and then compiles it back to the host architecture.
We use the first part of QEMU but then translate tiny code instructions into LLVM IR.
This means that:
1. We don't have to implement instruction semantics on our own;
2. We can easily support the architectures supported by QEMU. Currently we support x86-64, i386, ARM, AArch64, MIPS and S390. https://wiki.qemu.org/Documentation/Platforms
3. We get, out of the box, the instruction-semantics accuracy of a mature emulator such as QEMU.
We used to be able to lift `gcc` to LLVM IR, recompile it and have it working.
However, the real difference is that, while binary-to-binary translation still works, we're no longer focused on it: we're now 100% focused on writing a full-blown decompiler, i.e., something that goes from binary code to (valid) C and has a nice interactive UI.
As you can imagine, the lifter is really just the first half of a decompiler.
For instance, one core feature of rev.ng is automatic detection of `struct`s, just by looking at how they're used: https://twitter.com/_revng/status/1575553313827069952
If you have been doing some reverse engineering, I'm sure you'd appreciate this feature.
Bonus, take a look at our UI: https://twitter.com/_revng/status/1720440515265474631
As a side note, towards the end of the month we're releasing the new website and starting to invite people to the closed beta. If you wanna join: https://rev.ng/register-for-nightly.html
I once had the idea to do malware-similarity analysis. The x86 should first be lifted into an IL so it gets "normalized" (e.g., register-independent). The problem with all lifters, though, is that even a trivial `add rax, 1` generates a lot of IL code (probably 50-100 lines in LLVM IL), as the lifter has to implement all side effects of the x86 instructions in a fake memory space (I used remill, if I remember correctly).
Does this lifter have a similar implementation, or will an `add rax, 1` be lifted to something like `register1 += 1`?
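For context on why even a trivial `add rax, 1` balloons: the instruction also updates the status flags, and a faithful lifter must make each one explicit. A rough Python sketch of those side effects (simplified, AF omitted; not remill's or revng's actual IL):

```python
def lift_add64(rax, imm):
    """Model x86-64 ADD: the result plus the flag side effects
    that make one 'trivial' instruction expand into many IL ops."""
    mask = (1 << 64) - 1
    result = (rax + imm) & mask
    flags = {
        "ZF": result == 0,                    # zero flag
        "SF": bool(result >> 63),             # sign flag
        "CF": rax + imm > mask,               # unsigned carry out
        # signed overflow: result sign differs from both operand signs
        "OF": bool(((rax ^ result) & (imm ^ result)) >> 63),
        "PF": bin(result & 0xFF).count("1") % 2 == 0,  # even parity, low byte
    }
    return result, flags

# Wrapping the unsigned range sets ZF and CF but not OF:
print(lift_add64(2**64 - 1, 1))
```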
I had some ideas about binary diffing, but it's a difficult topic and I'm too much of a noob in ML to get to something working in a decent time frame.
I think something ABI-, compiler- and architecture-agnostic would be super cool and I started to build a training data set.
I wouldn't diff individual instructions though; I'd go for something more high-level, such as features of the CFG and the types of operations in the nodes.
Ghidriff: Ghidra Binary Diffing Engine, ghidra-patchdiff-correlator: https://news.ycombinator.com/item?id=38870593
Winlator: Android app that lets you to run Windows apps with Wine
Fallout 3 video: https://www.youtube.com/watch?v=9E4wnKf2OsI
800x600, around 20 fps, settings on high.
What about proton-ge wine with Android x86? https://github.com/GloriousEggroll/proton-ge-custom
(Edit) the Moonlight + LizardByte/Sunshine server part of Steam-Headless can do 4k 120fps FWIU; and there's already a Moonlight client for Android [TV]. https://github.com/Steam-Headless/docker-steam-headless
Why would you run Android x86 on an Android ARM device?
Why would you run DirectX games in WINE also with x86 to ARM translation in userspace on Android?
Because you own an ARM device. Emulation has exploded on ARM Android devices; see Yuzu, Dolphin, and so on.
"revng translates (i386, x86-64, MIPS, ARM, AArch64, s390x) binaries to LLVM IR" (2024) https://news.ycombinator.com/item?id=38975204
Generate Knowledge with Semantic Graphs and RAG
Is there different output each time you run this function; does it converge upon true statements predicated by premises?
Is there different output with different system prompts, then?
/? "system prompt" https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I haven't noticed that to be the case. Small prompt changes can change the output though as with all LLMs.
What about citations, and indicating provenance of derived knowledge?
neuml/txtai, neuml/paperai: https://github.com/neuml/paperai
"Wikidata, with 12B facts, can ground LLMs to improve their factuality" (2023) https://news.ycombinator.com/context?id=38309408
Good points, as you reference.
txtai can build citations for single record answers. https://neuml.hashnode.dev/build-rag-pipelines-with-txtai
I have it on my list to figure out how to cite multiple answer/multiple record citations.
Oxxcu, converting CO₂ into fuels, chemicals and plastics
While these technologies definitely seem to have a future, it's important to remember that creating fuel from CO2 requires at least as much energy as burning that fuel releases (even at 100% efficiency). So in the long term this is more of a "way to compress excess energy from the grid for light-weight applications" than a replacement for all our current fuel consumption. They seem to know that as well, since they are targeting aviation.
Creating synthetic fuels also seems like the only practical option in the table for seasonal storage. If solar energy is free in summer, even pathetic round trip efficiency could be worthwhile to provide winter reserves.
"Solar energy can now be stored for up to 18 years, say scientists" https://news.ycombinator.com/item?id=34027647 :
> /? MOST Molecular Solar Thermal Energy Storage
https://www.google.com/search?q=MOST%3A+Molecular+Solar+Ther... :
> ... Molecular solar thermal energy storage systems (MOST) offer emission-free energy storage where solar power is stored via valence isomerization in molecular photoswitches. These photoswitchable molecules can later release the stored energy as heat on-demand.
https://scholar.google.com/scholar?hl=en&as_sdt=0%2C44&q=MOS...
> The technology is based on a specially designed molecule of carbon, hydrogen and nitrogen that changes shape when it comes into contact with sunlight.
> It shape-shifts into an ‘energy-rich isomer’ - a molecule made up of the same atoms but arranged together in a different way. The isomer can then be stored in liquid form for later use when needed, such as at night or in the depths of winter.
> A catalyst releases the saved energy as heat while returning the molecule to its original shape, ready to be used again.
> Over the years, researchers have refined the system to the point that it is now possible to store the energy for an incredible 18 years
The Szilard-Chalmers MOST process.
And then at what efficiency? "How the gas turbine conquered the electric power industry" https://news.ycombinator.com/item?id=38307596#38314851
18 years of energy storage tops CAES Compressed Air Energy Storage, which is less lossy than batteries and ultracapacitors, and probably even molten salt, and maybe gravitational energy storage too?
Though, you could do CAES with captured CO2 and it would be less of an accelerant than standard compressed air. How many CO2 fire extinguishers can be filled and shipped off-site per day?
Can CO2 be made into QA'd [graphene] air filters for [onsite] [flue] capture?
Ask HN: How do I learn to build a wood-frame house?
I recently found out that I love building and doing manual labour, like digging or anything really. It's been a blessing, psychologically, to free my mind from SWE stuff.
Recently, my wife and I bought a piece of land to get out of rent. Now I want to build my own house (with professional help, for sure).
What are good resources to learn the wood structure part of it? Ideally, detailed videos showing how-tos, but anything would help!
From https://news.ycombinator.com/item?id=37175721#37188180 :
> FWIU Round homes fare best in windstorms: https://news.ycombinator.com/item?id=37175721#37188180
From https://news.ycombinator.com/item?id=37793288 :
> "Zero energy ready homes are coming" https://news.ycombinator.com/item?id=35060298
> "Ask HN: Why don't datacenters have passive rooflines like Net Zero homes?" https://news.ycombinator.com/item?id=37579505
So IMHO, round, with a roofline that causes passive heat exchange and ventilation, and Net Zero, with a heated slab, low VOCs, and a metal roof for rainwater collection and irrigation.
A stick frame house is built with standard dimensional lumber, with the walls nailed on the ground and raised.
A timber frame house is built with posts and beams made out of larger lumber.
Roof trusses are joined with metal (steel) bracket plates and typically arrive as flatbed-delivered stacks, prefabricated in a climate-conditioned warehouse.
Triangles, Arches, Circles and Spheres distribute load in structures.
Only certain tilings resist shearing.
Walls fall over if unsupported on their third axis.
Water fills behind nonporous (block) walls.
A polygon mesh is deformable, but a triangulated mesh is not.
Triangles' side length to angle ratios only have one solution: triangles can't shear without the side or joined angle breaking (and so roof trusses are typically triangular and subtended into smaller triangles).
To shear a rectangle, you push and pull on it (and its vertices remain connected).
Shearing is usually described as a linear transformation (but about which origin point? A linear transformation matrix about a single origin is actually insufficient, and rigid-body splined kinematics also omits GR).
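The triangle claim above is just SSS congruence: three side lengths determine the angles, so a triangular frame can't shear without a member stretching or a joint breaking. A quick numerical check via the law of cosines (illustrative):

```python
import math

def triangle_angles(a, b, c):
    """Law of cosines: the three side lengths of a triangle
    uniquely determine its three interior angles (SSS)."""
    A = math.acos((b*b + c*c - a*a) / (2*b*c))  # angle opposite side a
    B = math.acos((a*a + c*c - b*b) / (2*a*c))  # angle opposite side b
    return A, B, math.pi - A - B                # angles sum to pi

# A 3-4-5 right triangle: the angle opposite the 5 side is 90 degrees.
A, B, C = triangle_angles(3, 4, 5)
print(round(math.degrees(C), 6))  # 90.0
```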
The roof and stairs of a house are typically the strongest parts of a stick frame structure.
A square house twists underneath of a (relatively more static) triangular roof in high winds (with high shearing force due to wind).
Stick frame studs are placed '16" on center'.
Doorways and windows have framed headers with cripple studs that may be spaced less than 16" apart.
Add 2x8+ plated headers to insulate and structurally secure exterior doors and windows.
In the United States, a (2024) 2x4 is 1.5x3.5", and a 2x6 is 1.5x5.5".
With 2x6s or 2x8s, there is more room for higher R-value insulation.
Framing (construction) https://en.wikipedia.org/wiki/Framing_(construction)
I have saved woodworking videos in a playlist for later review with materials from my neighbor's house first: https://youtube.com/playlist?list=PLt_DvKGJ_QLau_vXQQnJKgTXQ...
Framing and (Hurricane proof) building code videos are accidentally interspersed, per traditional woodworking video procedure.
Also doing software and carpentry: "The Newbie Woodworker" on kickback table-saw safety: https://youtu.be/ZUZ8hRm7a8g?si=tDYX92TUQKPvNQip
Joinery software, Japanese joinery: https://news.ycombinator.com/item?id=36191589
Many older house framing videos show old methods that are not to code. For example, modern code requires that footers be bolted to the _____.
/? home framing terminology: https://www.google.com/search?q=home+framing+terminology
To be a building inspector, you train on the building code and then you label code violations.
Smart builders and contractors can hire and train their own preemptive inspectors to check the work before a 3rd-party home inspector does; for resale; for someone else, perhaps with an infant, to live in.
/? Building a house: https://www.youtube.com/results?sp=mAEA&search_query=Framing...
We saw a Pi running underwater at CES in Las Vegas
> there is a Raspberry Pi that has been running underwater in the lobby of HZO’s building for 525 days and counting. [...]
> We’re not sure which treatment our tiny green computer got — HZO produces a number of specialised coatings. Perhaps the Raspberry Pi got a parylene coating, as that seems to be a highly water-resistant and submersible option. They also offer plasma-applied coatings that are apparently a good low-cost option if you’re not going to be chucking your tech into the deep on a regular basis
Does the coating get ruined if you unplug and plug back in stuff such as the HDMI cable and power cord?
Yes
How so? The contacts themselves are probably fine without the coating, and which other parts would plugging and unplugging a cable into a port (re)expose?
If you pull the plug, you break the seal. Water will be able to make contact with the connectors and create shorts between the signals.
What if you let them dry before plugging them back in?
A conformal coating is like putting a circuit board in a plastic bag. In this application they applied it around the connectors. If you remove the connectors, the seal is broken. If you break the seal and submerge the board in water you will have a short.
Usually in these types of applications you use DEUTSCH connectors https://www.te.com/usa-en/products/brands/deutsch.html?tab=p... which have an o-ring seal, allowing the connector to be exposed to the elements while still being removable.
How would you do regenerative braking train couplers between engines and battery cars? (At present they have just an air hose to hold the brakes open, per Westinghouse, but no power wiring, data cabling, or spec yet.)
Would standard EV charger specs work for coupling battery train cars to engines and/or regenerative braking cars?
Maybe at least 1 Gbps and PoE+ to support a mesh network for sensors?
Presumably hyperloop teams have solutions for this?
Starlite
Would there be advantages to a tensile biocomposite with Starlite and/or Firepaste e.g. as a coating or blended?
Can Starlite be used as heat shielding for [biocomposite] civilian rockets, and how would the reentry emissions differ?
You're aware that capsules commonly use cork as ablative heat shielding, right?
If you want to do better, you'd need something like the Shuttle to get by without ablatives.
Half of recent US inflation due to high corporate profits, report finds
No. No. No. This is not how it works.
Inflation is ONLY caused by fiat currency not backed by anything real: the federal government writing checks for objects/services where the total amount expended exceeds the amount taken in via taxes. Literally any other explanation is incomplete.
When the retail prices of essential commodities increase, sellers increase their own prices in order to afford higher direct and indirect costs like gas and groceries and rent.
In commodity exchange licensed markets, shorts and other market pressures are expected to bring prices back down to equilibria or stabilities around such; but can or do retail consumer buying patterns bring retail prices back down?
> but can or do retail consumer buying patterns bring retail prices back down?
No, they do not and cannot, because these are just all secondary effects, not causes.
Imagine if we were all using Bitcoin. Imagine if one person on the Bitcoin network was allowed to produce Bitcoins without providing proof of work, while everyone else had to. This person began spending extravagantly, creating Bitcoins whenever they wanted to, while everyone else was forced to do proof of work. Everything downstream becomes a secondary effect, trying to deal with that person's spending.
That is how Fiat currencies work without backing.
The Bretton Woods agreement was not unique to the US. Populist farmers had wanted USD to be backed with silver, too (because they had silver on their farms). "Follow the Yellow Brick Road" in The Wizard of Oz is contextually about bimetallism and William Jennings Bryan, the Cowardly Lion.
So, if USD was suddenly also backed by silver (or lab-grown diamonds), how would that change the relative purchasing power of a dollar?
Instead it is said that we have a petrodollar (or "narco-oil") state with a large standing army to force international trade to use the most stable currency of trade; and how sensitive is our economic stability to the price of commodities like oil?
There were fewer economic calamities in frequency and severity while we were on the gold standard (when we had gold reserve-backed fiat currency), but less growth.
Anyways, which economic effects are secondary when most of economics is nonlinear, in that a change in quantity X results in a change in Y, which results in e.g. "back pressure" on X?
Which are particles and which are quasiparticles from which emergent dynamics can emerge; which are secondary effects? Also, describe international turmoil / [false] information, and commodity prices.
Conflict/Unrest => Price Action => Price Action in other markets => Unsustainable increases in CPI and reductions in PPP
You do realize that people (miners) add new bitcoins into existence every 10 minutes?
Sure there's currently a limit of "21 million bitcoins" but there's only 19 million right now and miners keep minting more so it's currently inflationary. I'm sure those miners won't collude and decide that at 21 million it'd sure be nice if they kept the ability to mint more.
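For what it's worth, the 21M figure isn't a standalone constant miners could quietly vote past; it falls out of the halving schedule, which every validating node enforces. In satoshi-precision integer math (a sketch of the schedule, not Bitcoin Core's actual code):

```python
# Block subsidy starts at 50 BTC and halves every 210,000 blocks;
# integer (satoshi) division eventually truncates it to zero.
subsidy = 50 * 100_000_000          # satoshis per block
total_satoshis = 0
while subsidy > 0:
    total_satoshis += subsidy * 210_000
    subsidy //= 2                   # the halving

# Just under 21,000,000 BTC ever, by construction:
print(total_satoshis / 100_000_000)
```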
All non-PQ (Post Quantum) chains will need to hard fork to support PQ.
Account addresses (hashes of public keys) will be longer with PQ, so form validation logic will need to be updated to accommodate longer strings.
Anyone can hardfork Bitcoin (or Litecoin) and thereby create a new asset. They don't have to recognize the existing Bitcoin blockchain like e.g. _ Cash and _ Diamond. (Bitcoin did not proactively or responsively protect its trademark.)
For there to be more than 21 million Bitcoin, they would need to hardfork.
Bitcoin is both inflationary and deflationary; while block rewards equitably distribute Bitcoin to miners and thereby increase the [Bitcoin] MSB (Monetary Supply Base), the amount of "burnt" and "lost to the depths" Bitcoin also continues to increase. (Presumably in part due to our shared general incapacity to manage sk/pk keypairs without losing them, even when there's money on it).
Calculus on Computational Graphs: Backpropagation (2015)
Nice blog. I'll be provocative/pedantic for no good reason and say that what's described isn't "calculus" per se, because you can't do calculus on discrete objects like a graph. However, you can define the derivative purely algebraically (as a linear operation which satisfies the Leibniz chain/product rule), which is more accurately what is being described.
>because you can't do calculus on discrete objects like a graph
Of course you can, what do you think shift operators and recurrence relations are? https://en.wikipedia.org/wiki/Finite_difference?#Calculus_of...
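The blog builds around the graph e = (a+b) * (b+1); reverse-mode differentiation on such a graph fits in a few lines (a minimal sketch of the technique, not the post's code):

```python
class Node:
    """A value in a computational graph; `parents` pairs each
    input node with the local derivative along that edge."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = list(parents)
        self.grad = 0.0

def add(x, y):
    return Node(x.value + y.value, [(x, 1.0), (y, 1.0)])

def mul(x, y):
    return Node(x.value * y.value, [(x, y.value), (y, x.value)])

def backprop(out):
    """Reverse-mode: walk the graph in reverse topological order,
    accumulating (upstream grad * local grad) along each edge."""
    order, seen = [], set()
    def topo(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p, _ in n.parents:
                topo(p)
            order.append(n)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local

# e = (a + b) * (b + 1) with a=2, b=1
a, b = Node(2.0), Node(1.0)
e = mul(add(a, b), add(b, Node(1.0)))
backprop(e)
print(e.value, a.grad, b.grad)  # 6.0 2.0 5.0
```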
Physicists Design a Way to Detect Quantum Behavior in Large Objects, Like Us
"Mass-Independent Scheme to Test the Quantumness of a Massive Object" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... :
> ABSTRACT: The search for empirical schemes to evidence the nonclassicality of large masses is a central quest of current research. However, practical schemes to witness the irreducible quantumness of an arbitrarily large mass are still lacking. To this end, we incorporate crucial modifications to the standard tools for probing the quantum violation of the pivotal classical notion of macrorealism (MR): while usual tests use the same measurement arrangement at successive times, here we use two different measurement arrangements. This yields a striking result: a mass-independent violation of MR is possible for harmonic oscillator systems. In fact, our adaptation enables probing quantum violations for literally any mass, momentum, and frequency. Moreover, coarse-grained position measurements at an accuracy much worse than the standard quantum limit, as well as knowing the relevant parameters only to this precision, without requiring them to be tuned, suffice for our proposal. These should drastically simplify the experimental effort in testing the nonclassicality of massive objects ranging from atomic ions to macroscopic mirrors in LIGO.
WebGPU is now available on Android
How do you run the task manager with Android Chrome?
Does Android Chrome have the same per-tab hover-card RAM use feature as desktop Chrome?
From https://news.ycombinator.com/item?id=37840416 :
>> From "Manifest V3, webRequest, and ad blockers" (2022) https://news.ycombinator.com/item?id=32953286 :
>> What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?
>> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
>> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
>> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU
Speedbump – a TCP proxy to simulate variable network latency
Many apps perform poorly with intermittent network connectivity, as in Disaster Relief scenarios.
More app developers could help others by testing with simulated intermittent connectivity.
From "Toxiproxy is a framework for simulating network conditions" (2021) https://news.ycombinator.com/item?id=29084277#29088775 :
> Many apps lack 'pending in outbox' functionality that we expect from e.g. email clients.
> - [ ] Who could develop a set of reference toxiproxy 'test case mutators' (?) for simulating typical #DisasterRelief connectivity issues?
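The core of a latency-injecting proxy like speedbump is small; a minimal sketch relaying bytes with an artificial delay (illustrative Python, not speedbump's actual Go implementation):

```python
import socket
import threading
import time

def delayed_relay(src: socket.socket, dst: socket.socket, delay: float) -> None:
    """Copy bytes from src to dst, sleeping `delay` seconds per
    chunk to simulate added one-way network latency."""
    while True:
        chunk = src.recv(4096)
        if not chunk:          # peer closed: propagate EOF
            break
        time.sleep(delay)
        dst.sendall(chunk)
    dst.close()

# Wire it between two in-process socket pairs to demo the added delay:
client, proxy_in = socket.socketpair()
proxy_out, server = socket.socketpair()
threading.Thread(target=delayed_relay,
                 args=(proxy_in, proxy_out, 0.05), daemon=True).start()

start = time.monotonic()
client.sendall(b"ping")
data = server.recv(4096)
elapsed = time.monotonic() - start
print(data, elapsed >= 0.05)  # b'ping' True
```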
My favorite is not filling buffers before sending the packet: then everything with a crappy internet connection (dropped packets + high latency = TCP retransmissions) is suddenly doing 120 kbps, because you're only sending 50-byte packets and every 10th one is getting lost. Meanwhile, that's a thread on your server not doing useful work.
Would you mind describing this problem more fully? I'm currently learning TCP more deeply and this sounds interesting but I don't quite understand the issue.
There are two choices you have when sending data:
1. Wait until you have enough data to send in at least one packet (maximize throughput).
2. Send data as soon as you have it, even if it is less than a packet's worth (minimize latency).
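These two choices map onto a real socket option: Nagle's algorithm implements choice 1 by default by coalescing small writes, and `TCP_NODELAY` switches a socket to choice 2 (standard-library sketch):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Default (Nagle on): small writes are coalesced into fuller
# packets, trading latency for throughput (choice 1).
# TCP_NODELAY disables Nagle: each write goes out immediately,
# trading throughput for latency (choice 2).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(bool(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)))  # True
sock.close()
```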
If you want to send a file, for example, you should use the first choice. A common or naive implementation might be to read the file in chunks of 9kb (about the size of a jumbo frame), send it to the socket, and then flush it. There are multiple problems with that approach.
1. Not everyone has jumbo frames, so if we had a common MTU of 1500 bytes on our link, that means you'd actually send ~6.25 packets worth of data.
2. Even if you were using jumbo frames, TCP headers use up that space. So, if you flush the entire frame's worth of data, you'd send something like 1.1 packets.
3. The filesystem might or might not give you the maximum bytes you request. If there is i/o pressure, you might only get back some random amount instead of the 9k you requested. So, if you flush the buffer to the TCP stack, you'd only send that random amount instead of what you assumed would be 9kb.
These mistakes generally send lots of small packets very quickly, which is fine when the link is fat and short. As soon as one packet gets lost, you're still sending a bunch of small packets, but now we have to stop and wait for the lost packet again before we can start handling more of them. So, if you are sending to someone on, say, airport/cafe wifi, they will have atrocious download speeds even though they have a pretty fat link (the amount of retransmissions due to wifi interference is a large % of the bandwidth). This is very similar to "head of line blocking" but on the link level.
I only had to learn about this fairly recently because my home is surrounded by a literal ton of wifi access points (over 20!), which causes quite a bit of interference. On wifi, I can get around 800mbps on the link, but about every 10th packet needs to be retransmitted due to having so many access points around me (not to mention my region uses most of the 5 GHz band for airport radar, so there are only a few channels available in that band). So, when applications/servers don't fill the buffers, I get about 50 kbps. When they do fill the buffers, I can get around 300mbps maximum, even though I have an 800mbps link.
Hope that helps.
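The fix for the naive sender above is to refill the buffer to a full segment before flushing. A sketch of the batching logic (the 1448-byte payload is an assumption: a 1500-byte MTU minus IP/TCP headers and the TCP timestamp option):

```python
def coalesce(reads, mss=1448):
    """Batch variable-sized reads into full MSS-sized segments so
    a lossy link isn't flooded with many tiny packets."""
    buf = b""
    for chunk in reads:
        buf += chunk
        while len(buf) >= mss:
            yield buf[:mss]       # a full segment: flush it
            buf = buf[mss:]
    if buf:
        yield buf                 # final partial segment

# Two 1000-byte reads become one full segment plus one remainder,
# instead of two under-filled packets:
segments = list(coalesce([b"x" * 1000, b"x" * 1000]))
print([len(s) for s in segments])  # [1448, 552]
```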
From "MOOC: Reducing Internet Latency: Why and How" https://news.ycombinator.com/item?id=37285586#37285830 : sqm-autorate, speedtest-netperf, netperf/iperf, flent dslreports_8dn
Would there be a benefit to autotuning with e.g. sqm-autorate in the suggested environments?
Memory Loss from TBI Reversed
> Importantly for diagnostic and treatment purposes, the researchers found that the memory loss attributed to head injury was not a permanent pathological event driven by a neurodegenerative disease. Indeed, the researchers could reverse the amnesia [non-invasively with lasers]
TIL infrared light stimulates neuronal growth, and blue and green light inhibit it.
Could an infrared [DLP] projector waveguide coherent beams?
FWIU, there are new two/multiple beam waveguiding methods for converging beyond occluding media.
Would a space-filling curve be a better way to quantize and thereby address specific neurons of brains, relative to loci; or is there that much functional specificity anyway?
Reading from and writing to brains affects chain of custody, so we'll probably need total recall for something like this; and an immutable ledger that stores admissible header/footer dates for documents.
And what about neuroantiinflammatories and neurogeneration in the hippocampus, while you're imaging with that resolution?
WhisperSpeech – An open source text-to-speech system built by inverting Whisper
Interested to see how it performs for Mandarin Chinese speech synthesis, especially with prosody and emotion. The highest quality open source model I've seen so far is EmotiVoice[0], which I've made a CLI wrapper around to generate audio for flashcards.[1] For EmotiVoice, you can apparently also clone your own voice with a GPU, but I have not tested this.[2]
[0] https://github.com/netease-youdao/EmotiVoice
[1] https://github.com/siraben/emotivoice-cli
[2] https://github.com/netease-youdao/EmotiVoice/wiki/Voice-Clon...
Have you released your flashcard app?
If you're interested, I have a small side project (https://imaginanki.com) for generating Anki decks with images + speech (via SDXL/Azure).
Some language learning resources From "Show HN: Open-source tool for creating courses like Duolingo" (2023) https://news.ycombinator.com/item?id=38317345 :
> ENH: Generate Anki decks with {IPA symbols, Greek letters w/ LaTeX for math and science,
SQLite 3.45 released with JSONB support
If anyone wants to try this out on macOS here's the fastest way I've found to try a new SQLite version there: https://til.simonwillison.net/sqlite/sqlite-version-macos-py...
Short version:
cd /tmp
wget 'https://www.sqlite.org/2024/sqlite-amalgamation-3450000.zip'
unzip sqlite-amalgamation-3450000.zip
cd sqlite-amalgamation-3450000
gcc -dynamiclib sqlite3.c -o libsqlite3.0.dylib -lm -lpthread
DYLD_LIBRARY_PATH=$PWD python3 -c "import sqlite3; print(sqlite3.sqlite_version)"
That prints "3.45.0" for me. If you have https://datasette.io/ installed you can then get a web UI for trying it out by running:
DYLD_LIBRARY_PATH=$PWD datasette
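A minimal check from Python that the new library loaded, plus a taste of the JSON functions (note that `jsonb()`, the new binary JSON representation, requires SQLite >= 3.45; `json_extract()` works on older builds too):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# reports the version of the loaded SQLite library, e.g. "3.45.0"
print(con.execute("SELECT sqlite_version()").fetchone()[0])

# json_extract() is available on any modern SQLite build;
# jsonb('{...}') needs 3.45.0 or later.
val = con.execute("""SELECT json_extract('{"a": 1}', '$.a')""").fetchone()[0]
print(val)
```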
If you change the version, URL, and the sha256 in conda-forge/sqlite-feedstock//recipe/meta.yaml and send a PR, it should build and then deploy the latest version so that you can just `mamba install -y sqlite libspatialite sqlite-utils` without also mamba installing gcc or clang. https://github.com/conda-forge/sqlite-feedstock/blob/main/re... https://github.com/conda-forge/sqlite-feedstock/blob/main/re...
Review: Mitigation measures to reduce tire and road wear particles
"Tire dust makes up the majority of ocean microplastics" (2023) https://news.ycombinator.com/item?id=37728005
Mitigation measures:
- Make tires out of dandelion rubber
- Remove tire walls from the ocean
- "Chauffeur mode" that lowers torque especially at initial acceleration so that the tires don't spin excessively
Raspberry Pi is manufacturing 70K Raspberry Pi 5s per week
Raspberry really needs to pick up the pace to compete with newer (albeit more expensive) SBCs with Intel N100 and similar processors. The TDPs are very comparable, but the x86_64 parts still blow it out of the water performance-wise. Also, software support is a non-issue on Intel platforms, which is still not the case for Arm platforms.
A Rockchip RK3588S with an NPU/TPU and an RTC battery is a good idea for safer complex SBC applications.
From https://news.ycombinator.com/item?id=38007967 :
> The RTk.GPIO is a Plug & Play USB Device which adds 28 x Raspberry Pi style GPIO pins to your computer
An RP2040 (RPi Pico) is also a USB 2x20 GPIO, with a uf2 bootloader and upgradeable firmware.
Conda-forge builds arm64 Linux and now MacOS packages from feedstocks.
There's usually not a maintained arm64 copy of containers, though, so you must build containers for arm64 yourself, e.g. with cross-compilation from a faster build machine, which distrobox makes really simple. https://news.ycombinator.com/item?id=38505448 ... https://github.com/89luca89/distrobox/blob/main/docs/useful_...
Penrose – Create diagrams by typing notation in plain text
Oh, I thought these would be to create spacetime diagrams. As in Penrose Diagrams: https://en.wikipedia.org/wiki/Penrose_diagram
For Penrose spacetime diagrams I found xhorizon [1] and einsteinpy [2]
1: https://github.com/xh-diagrams/xhorizon
2: https://github.com/einsteinpy/einsteinpy/issues/480
There's a 'spacetime' GH topic: https://github.com/topics/spacetime
/? spacetime diagrams site:github.com : https://www.google.com/search?q=spacetime+diagrams+site%3Agi...
Also, there's
Penrose graphical notation aka Tensor Diagrams: https://en.wikipedia.org/wiki/Penrose_graphical_notation :
> In mathematics and physics, Penrose graphical notation or tensor diagram notation is a (usually handwritten) visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971.[1] A diagram in the notation consists of several shapes linked together by lines.
[Figure: Penrose graphical notation (tensor diagram notation) of a matrix product state of five particles]
> The notation widely appears in modern quantum theory, particularly in matrix product states and quantum circuits. In particular, categorical quantum mechanics, which includes ZX-calculus, is a fully comprehensive reformulation of quantum theory in terms of Penrose diagrams, and is now widely used in quantum industry.
> The notation has been studied extensively by Predrag Cvitanović, who used it, along with Feynman's diagrams and other related notations in developing "birdtracks", a group-theoretical diagram to classify the classical Lie groups.[2] Penrose's notation has also been generalized using representation theory to spin networks in physics, and with the presence of matrix groups to trace diagrams in linear algebra.
(From a post about Tensor Networks this week: https://news.ycombinator.com/item?id=38922467 )
Git Branches as a Social Construct
`git push -f` and solipsistic shared reality; when you `push -f` everyone gets to vote on what chain of commit hashes is the real ${orgname}/${reponame}//${branch}
There is no formal consensus mechanism beyond push permission in git itself; and so what would be regressive centralization of technically decentralized [git] in order to enforce directory-level permissions like e.g. CODEOWNERS is actually necessary for quality control.
Eventually, GitHub and GitLab integrated code review like ReviewBoard and Gerrit (instead of mailing-list patchbombs and quilt); and code review approval from zero, one, or more authorized reviewers is now an optionally-required precondition for merging to a git branch, as a social construct.
DevSecOps says, Pull Requests (or Merge Requests) are merged only when (1) all of the automated tests pass; and (2) enough necessary reviewers have indicated approval.
Git doesn't tell you when it's necessary to have full test coverage and manual infosec review in development cycles that produce releases, and neither do Pull Requests.
https://westurner.github.io/hnlog/#comment-19552164 ctrl-f hubflow
It looks like DataSift's gitflow/hubflow docs are 404'ing, but the original nvie blog post [1] has the Git branching workflow diagrams, which the wpsharks/hubflow fork [3] of the datasift/gitflow fork [2] of nvie/gitflow [1] has a copy of in the README:
[1] https://github.com/nvie/gitflow
[2] https://github.com/datasift/gitflow
[3] https://github.com/wpsharks/hubflow?tab=readme-ov-file
https://learngitbranching.js.org/ is still a great resource, and it could work on mobile devices.
The math of VCS deltas and mutable and immutable content-addressed DAG nodes identified by 2^n bits describing repo/$((2*inf)) bits;
>> "ugit – Learn Git Internals by Building Git in Python" https://www.leshenko.net/p/ugit/
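In the ugit spirit, the content-addressed part is tiny; a minimal in-memory sketch (a dict stands in for `.git/objects`; names are mine):

```python
import hashlib

objects = {}  # stand-in for the .git/objects directory

def hash_object(data: bytes, type_: str = "blob") -> str:
    # git-style content addressing: the object id is the SHA-1 of a
    # "<type> <size>\x00" header plus the contents.
    store = f"{type_} {len(data)}".encode() + b"\x00" + data
    oid = hashlib.sha1(store).hexdigest()
    objects[oid] = store
    return oid

def get_object(oid: str) -> bytes:
    # split off the header at the first NUL byte
    _header, _, data = objects[oid].partition(b"\x00")
    return data

oid = hash_object(b"hello")
assert get_object(oid) == b"hello"
assert hash_object(b"hello") == oid  # same content, same address
```

Immutable DAG nodes (trees, commits) are then just objects whose contents reference other object ids; branches are the mutable part.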
SLSA.dev is a social construct atop e.g. git, which is really a low-level purpose-built tool and Perl and now Python porcelain.
jj (jujutsu) is a git-compatible VCS CLI: https://github.com/martinvonz/jj
"Ask HN: Best Git workflow for small teams" (2016) https://news.ycombinator.com/item?id=12941997
How much social construct atop git is necessary for e.g. the CPython project?
Python Devguide > Getting Started > [ Git bootcamp and cheat sheet , Lifecycle of a pull request ] https://devguide.python.org/getting-started/
Python Devguide > Development workflow > Development cycle > Branches: https://devguide.python.org/developer-workflow/development-c... :
> [ In-development (main) branch, Maintenance branches, Security branches, End-of-life branches ]
Git branches have lifecycles.
US tech innovation dreams soured by changed R&D tax laws
Wikihouse: Open-Source Houses
There is a pretty cool project for a complete open source village: https://www.opensourceecology.org/
Supposed to be all the machines you need to supply a whole village, made from low cost basic materials.
I think it's from 2014; not sure if it's still being worked on, or if anyone has actually used the blueprints in a real-world scenario.
TIL about The Liberator: The world's first open source compressed earth brick press. https://www.opensourceecology.org/back-to-compressed-earth-b...
A multiple-CEB unit that makes interlocking blocks that don't require mortar could build on work from this project.
Add'l notes on CEB, Algae, Sargassum, Hemp in the 2024 International and US Residential Building Code, LEGO-like Hempcrete block: https://news.ycombinator.com/item?id=37693225
FWIU Round homes fare best in windstorms: https://news.ycombinator.com/item?id=37175721#37188180
And curvy half walls one brick wide don't fall down
[CEB] "Crinkle crankle wall" https://en.wikipedia.org/wiki/Crinkle_crankle_wall
> A multiple-CEB unit that makes interlocking blocks that don't require mortar could build on work from this project.
They have a page about interlocking blocks, and according to it they are simply too expensive: you need a higher share of cement in the blocks to stabilize them.
Rkyv: A zero-copy deserialization framework for rust
rust_serialization_benchmark: https://github.com/djkoloski/rust_serialization_benchmark
apache/arrow-rs: https://github.com/apache/arrow-rs
From https://arrow.apache.org/faq/ :
> How does Arrow relate to Flatbuffers?
> Flatbuffers is a low-level building block for binary data serialization. It is not adapted to the representation of large, structured, homogenous data, and does not sit at the right abstraction layer for data analysis tasks.
> Arrow is a data layer aimed directly at the needs of data analysis, providing a comprehensive collection of data types required to analytics, built-in support for “null” values (representing missing data), and an expanding toolbox of I/O and computing facilities.
> The Arrow file format does use Flatbuffers under the hood to serialize schemas and other metadata needed to implement the Arrow binary IPC protocol, but the Arrow data format uses its own representation for optimal access and computation
The star of that benchmark is actually nanoserde.
1. outperforms a bunch of binary formats (as well as all other text formats)
2. has zero dependencies with fast compile times (even the derive macros are custom and don't use syn-family crates as build-time deps)
Annoying how they mix µs and ms inside the same timing column. Makes it harder to visually assess.
Mixing milliseconds down to picoseconds made the columns too wide. All of the benchmark data is also available as JSON in the benchmark_results directory if you wanted to drop it into sheets or some other analysis.
Programming Language for Ternary Computing
https://news.ycombinator.com/item?id=38677782 re: number / logical separable states:
sign * e^(x*yi*zπ)
Or maybe instead: sign * e^(x*yi*zπ*(a_1*inf))
sign * e^(x*i^y*π^z*(inf^a_1))
But that's not ternary at all. Python has `True or False or None`, which you conventionally test with `is`.
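A minimal sketch of treating `True/False/None` as Kleene three-valued logic, tested with `is` (the `tri_and` name and the Kleene framing are mine, not from the article):

```python
def tri_and(a, b):
    # Kleene strong conjunction: False dominates; None means "unknown"
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

print(tri_and(True, None))   # None
print(tri_and(False, None))  # False
print(tri_and(True, True))   # True
```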
The Tensor Network
Tensor network: https://en.wikipedia.org/wiki/Tensor_network :
> The wave function is encoded as a tensor contraction of a network of individual tensors.[3] The structure of the individual tensors can impose global symmetries on the wave function (such as antisymmetry under exchange of fermions) or restrict the wave function to specific quantum numbers, like total charge, angular momentum, or spin. It is also possible to derive strict bounds on quantities like entanglement and correlation length using the mathematical structure of the tensor network.[4] This has made tensor networks useful in theoretical studies of quantum information in many-body systems. They have also proved useful in variational studies of ground states, excited states, and dynamics of strongly correlated many-body systems. [5]
...
Tensor product network: https://en.wikipedia.org/wiki/Tensor_product_network
Tensor product model transformation: https://en.wikipedia.org/wiki/Tensor_product_model_transform...
HOSVD; Higher-Order Singular Value Decomposition
TP model transformation in control theory: https://en.wikipedia.org/wiki/TP_model_transformation_in_con...
...
https://tensornetwork.org/diagrams/ :
> Let us look at some example diagrams for familiar low-order tensors:
Tensor diagram = Penrose graphical notation > Tensor operations: https://en.wikipedia.org/wiki/Penrose_graphical_notation ... Trace Diagrams, Spin Network, [... Knots and Quantum Invariant theories, Chern-Simons Theory, Topology]
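The contractions those diagrams depict reduce to index sums; a minimal NumPy sketch of contracting a small three-tensor chain (an MPS-flavored toy with made-up values, not any particular library's API):

```python
import numpy as np

# v_i = sum_{j,k} A_ij B_jk C_k : three tensors joined by two contracted
# ("internal") lines, exactly what a three-node tensor diagram depicts.
A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0).reshape(2, 2)
C = np.array([1.0, -1.0])
v = np.einsum("ij,jk,k->i", A, B, C)
print(v)  # [-1. -5.]
```

For matrices this is just `A @ B @ C`; the diagram notation pays off when tensors have more than two legs.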
Can AI replace a co-founder?
Founders, I am building an AI co-founder to automate your efforts. We are starting with automating fundraising, which is one of the most crucial parts of an early-stage startup.
We imagine that an AI partner can further help you with brainstorming ideas, is good with tech, helps you structure your business, helps you automate the work, and has tools for different skills like lead generation and marketing. Most importantly, it can identify what's right and wrong with your process and provide optimal feedback on it.
What else should we add to the roadmap? Open to any support or advice, thanks!
- YC Startup Library > Cofounder: https://www.ycombinator.com/library/search?query=Cofounder
- Business plan > Business plans for start-ups > Typical structure for a business plan for a start-up venture: https://en.wikipedia.org/wiki/Business_plan#Business_plans_f...
- MetaGPT: https://github.com/geekan/MetaGPT :
> MetaGPT takes a one line requirement as input and outputs user stories / competitive analysis / requirements / data structures / APIs / documents, etc.
- https://news.ycombinator.com/item?id=29141796 ; "Co-Founder Equity Calculator"
- "Ask HN: What are your go to SaaS products for startups/MVPs?" (2020) https://news.ycombinator.com/item?id=23535828 ; FounderKit, StackShare
- From "Ask HN: Steps to forming a company?" https://news.ycombinator.com/item?id=19031826 :
>> USA Small Business Administration: "10 steps to start your business." https://www.sba.gov/starting-business/how-start-business/10-...
>> "Startup Incorporation Checklist: How to bootstrap a Delaware C-corp (or S-corp) with employee(s) in California" https://github.com/leonar15/startup-checklist
- Playbook
- Handbook
- "Business Development Unit"
Generate a pitch deck with manimGPT demonstrating our projected wild path to ultimate market share, parametrically
What-Ifs like causal.app, but with no understanding of where the technology is
Nominate software projects for ClusterFuzz due to metrics
Suggest sentiment interventions at operations people
Suggest major pivots and thus necessary major version bumps in our SemVer-versioned Business Plan
Bikeshed me on new names for said pivot, hypothetically; before we consider modifying the product mix
Write :JobPosting(s) to use as genai system prompts
Suggest new board members
Silicon Valley (HBO)
Various things, boss
DIY Book Scanner
I was interested to see how they solved automated page turning... but none of the designs appear to address this?
Sounds like a repetitive motion injury waiting to happen after you get done scanning Fountainhead.
Fully automatic ones do exist, but it's generally not done, so as not to risk damage to the books. And without OCR of page numbers you always risk missing pages.
https://www.youtube.com/watch?v=kvM-tjrS2-U
That said, a lot of automated processes in place do destroy the book by cutting the spine and scanning it.
"Ask HN: What's the best out-of-box Document OCR/Analyzing/recognition API?" (2024) https://news.ycombinator.com/item?id=38829242 ; BetterOCR :
> Better text detection by combining multiple OCR engines (EasyOCR, Tesseract, and Pororo)
FWIU there's a way to image multiple stacked sheets of ancient scrolls without unrolling them?
Buffett once bet $1M that he could beat a group of hedge funds over 10 years
Misleading headline.
Buffett bet $1M that an S&P 500 index fund (e.g. VOO) would beat a hedge fund portfolio selected by Ted Seides. That low-cost index funds generally perform better is well documented. Buffett's bet is common sense.
Reasons why:
- Lower cost matters more over the longer term.
- A well-diversified index fund captures every success, unlike an actively managed portfolio.
I'm not sure it was considered common sense when the bet was made. And when it settled there were various claims that Buffett got lucky in that stocks just went up over the 10 years; perhaps he wouldn't have won if there had been some kind of crash.
I’m not claiming he was wrong but I’m not sure one can really reduce it to those few points as if it makes it all obvious.
> perhaps he wouldn’t have won if there was some kind of crash.
But there was! That bet started in 2007, and 2008-2009 saw the biggest percentage drop in the S&P 500 in decades with the GFC. We're talking down to levels last seen in 1996, at the worst of it in March 2009.
The S&P500 is a backtesting reference benchmark.
You can compare portfolio performance through drawdown periods with backtesting tools.
Macroeconomic drawdowns are good times to have cash for acquisitions; instead of free government cheese.
"Tear Sheets: Definition and Examples in Finance, Vs. Prospectus" https://www.investopedia.com/terms/t/tearsheets.asp :
> While tear sheets date back to the old days when stockbrokers would rip individual pages out of the S&P summary book and send them to current or potential clients, most information is extracted online today. Therefore, any concise representation of a company's business fundamentals could be considered a tear sheet.
From https://news.ycombinator.com/item?id=24428206#24429801 :
> pyfolio.tears.create_interesting_times_tear_sheet measures algorithmic trading algorithm performance during "stress events" https://github.com/quantopian/pyfolio/blob/4b901f6d73aa02ceb... :
>> Generate a number of returns plots around interesting points in time, like [...]
From https://news.ycombinator.com/item?id=19111911 :
> pyfolio/examples/zipline_algo_example.ipynb: https://nbviewer.org/github/quantopian/pyfolio/blob/master/p... > "Worst Drawdown Periods"
Drawdown > Trading definitions: https://en.wikipedia.org/wiki/Drawdown_(economics)#Trading_d...
awesome-quant > Python > Trading & Backtesting: https://github.com/wilsonfreitas/awesome-quant#trading--back...
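The drawdown those tools report is just distance from the running peak; a minimal NumPy sketch over a toy equity curve (the numbers are made up):

```python
import numpy as np

equity = np.array([100.0, 110.0, 105.0, 90.0, 120.0, 95.0])
running_peak = np.maximum.accumulate(equity)       # high-water mark so far
drawdown = (equity - running_peak) / running_peak  # fractional loss from peak
print(round(drawdown.min(), 4))  # worst drawdown: 95 vs the 120 peak -> -0.2083
```

The "worst drawdown periods" in a tear sheet are the deepest and longest stretches where this series stays below zero.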
Bit-for-bit reproducible builds with Dockerfile
SOURCE_DATE_EPOCH specification: https://reproducible-builds.org/specs/source-date-epoch/
/? SOURCE_DATE_EPOCH: https://www.google.com/search?q=SOURCE_DATE_EPOCH
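Per the spec, build tools should prefer SOURCE_DATE_EPOCH over the current time for any timestamps they embed in output artifacts; a minimal sketch (the pinned epoch value here is arbitrary):

```python
import os
import time

os.environ["SOURCE_DATE_EPOCH"] = "1704067200"  # arbitrary pinned date

# A reproducible build uses the pinned epoch instead of "now" for any
# timestamp it embeds, so identical inputs yield bit-identical outputs.
build_time = int(os.environ.get("SOURCE_DATE_EPOCH", str(int(time.time()))))
print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_time)))  # 2024-01-01 00:00:00
```

In practice it's typically set from the last commit, e.g. `SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)`.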
MotorOS: a Rust-first operating system for x64 VMs
"Maestro: A Linux-compatible kernel in Rust" (2023) https://news.ycombinator.com/item?id=38852360#38857185 ; redox-os, cosmic-de , Motūrus OS; MotorOS
Scientists Destroy Illusion That Coin Toss Flips Are 50–50
"Discrete uniform distribution" https://en.wikipedia.org/wiki/Discrete_uniform_distribution
Randomness test: https://en.wikipedia.org/wiki/Randomness_test
Google/paranoid_crypto > docs/randomness_tests.md > Tests: https://github.com/google/paranoid_crypto/blob/main/docs/ran...
"10 Tbit/s physical random bit generation with chaotic microcomb" (2023) https://news.ycombinator.com/item?id=38044587 ; without DRBG, too
"51% Dynamical Bias in the Coin Toss [pdf]" (2007) https://news.ycombinator.com/item?id=38725981
"Researchers flip coins 350k times to find out if odds are 50/50" (2023) https://news.ycombinator.com/item?id=37958554 :
"Fair coins tend to land on the same side they started: Evidence from 350,757 flips" (2023) https://arxiv.org/abs/2310.04153
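The reported effect is small (~50.8% same-side in the 350,757-flip study); a quick simulation sketch of why it takes that many flips to resolve (the 0.508 figure is from the linked paper; everything else is assumption):

```python
import random

random.seed(0)
p_same = 0.508   # estimated probability of landing on the starting side
n = 100_000
same = sum(random.random() < p_same for _ in range(n))
print(same / n)  # close to 0.508; the ~0.8% edge is buried in sqrt(n) noise
```

At n flips the sampling noise is about 0.5/sqrt(n), so distinguishing 0.508 from 0.500 reliably needs n on the order of 10^4 to 10^5.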
SPH: Smoothed-Particle Hydrodynamics
- In this Hydrodynamics, it looks like `i` is a summation-index subscript (and there is no √-1)
- Fedi's is a superhydrodynamic quantum gravity; with Gross-Pitaevskii :
From "Light and gravitational waves don't arrive simultaneously" (2023) https://news.ycombinator.com/item?id=38056295 :
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
> In SQS (Superfluid Quantum Space), Quantum gravity has fluid vortices with Gross-Pitaevskii, Bernoulli's, and IIUC so also Navier-Stokes; so Quantum CFD (Computational Fluid Dynamics)
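Back to SPH itself: the subscript i indexes particles in kernel-weighted sums, rho_i = sum_j m_j W(|r_i - r_j|, h). A minimal 1-D NumPy density-estimate sketch with a Gaussian kernel (all values are toy assumptions):

```python
import numpy as np

def gaussian_kernel(r, h):
    # 1-D Gaussian smoothing kernel, normalized to integrate to 1
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

x = np.linspace(0.0, 1.0, 11)   # particle positions
m = np.full_like(x, 0.1)        # particle masses
h = 0.2                         # smoothing length

r = np.abs(x[:, None] - x[None, :])                     # pairwise distances
rho = (m[None, :] * gaussian_kernel(r, h)).sum(axis=1)  # density at each particle
print(rho.round(3))
```

Real SPH codes use compactly supported kernels (e.g. cubic splines) and neighbor lists so each sum touches only nearby particles.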
Researchers create first functional graphene semiconductor
"Ultrahigh-mobility semiconducting epitaxial graphene on silicon carbide" (2024) https://www.nature.com/articles/s41586-023-06811-0 :
> Semiconducting graphene plays an important part in graphene nanoelectronics because of the lack of an intrinsic bandgap in graphene [1]. In the past two decades, attempts to modify the bandgap either by quantum confinement or by chemical functionalization failed to produce viable semiconducting graphene. Here we demonstrate that semiconducting epigraphene (SEG) on single-crystal silicon carbide substrates has a band gap of 0.6 eV and room temperature mobilities exceeding 5,000 cm² V⁻¹ s⁻¹, which is 10 times larger than that of silicon and 20 times larger than that of the other two-dimensional semiconductors. It is well known that when silicon evaporates from silicon carbide crystal surfaces, the carbon-rich surface crystallizes to produce graphene multilayers [2]. The first graphitic layer to form on the silicon-terminated face of SiC is an insulating epigraphene layer that is partially covalently bonded to the SiC surface [3]. Spectroscopic measurements of this buffer layer demonstrated semiconducting signatures [4], but the mobilities of this layer were limited because of disorder [5]. Here we demonstrate a quasi-equilibrium annealing method that produces SEG (that is, a well-ordered buffer layer) on macroscopic atomically flat terraces. The SEG lattice is aligned with the SiC substrate. It is chemically, mechanically and thermally robust and can be patterned and seamlessly connected to semimetallic epigraphene using conventional semiconductor fabrication techniques. These essential properties make SEG suitable for nanoelectronics.
Ghidriff: Ghidra Binary Diffing Engine
https://clearbluejar.github.io/posts/ghidriff-ghidra-binary-...
Re: Diaphora bindiff and ghidra-patchdiff-correlator and awesome-ida-x64-olly-plugin: https://github.com/fr0gger/awesome-ida-x64-olly-plugin/blob/...
SpaceX Illegally Fired Workers Critical of Musk, Federal Agency Says
8 out of 13,000 employees wanted to circulate a letter complaining about Musk's tweets and the Labor board claims they were "illegally" fired over this. Which law specifically requires employers to permit every employee to spam the company with their personal grievances? This isn't union organization or whistleblowing. They could use a little more detail on how and why they believe this firing is illegal.
>You have the right to act with co-workers to address work-related issues
How do Musk’s personal statements/opinions outside of work count as work-related issues?
Genuinely asking, I don’t understand how these rules are enforced.
Apple has locked down internal communications about non work-related topics on internal slack and email in the wake of RTO discussions. If this is grounds for a lawsuit, I think other companies are probably open to these kinds of lawsuits too.
[deleted]
A x-platform package mgmt (apt/pacman/flatpak/etc.) wrapper with super powers
smartpm (-2012) had similar objectives: https://github.com/smartpm/smart
`wajig help` has a number of useful package management commands: https://github.com/gjwgit/wajig/tree/main/docs
`dnf history` and `conda env export --from-history` are useful here, too.
etckeeper; usrlog.sh, because bash history fails; and then into Ansible playbooks, because of shell quoting and logging
Recently I learned that rpm-ostree now supports OCI container images as host system images. https://news.ycombinator.com/item?id=38828040#38830000
Rpm-ostree automatically installs your (dnf) packages atop the root image, and then layers your /etc.
Container2wasm: Convert Containers to WASM Blobs
Do any GUI frameworks support WASM?
I've been looking for a way to run GUI applications remotely for a while, specifically on a wlroots compositor. Projects like this (maybe one day) and https://github.com/udevbe/greenfield are interesting since they essentially make GUI access universal.
container2wasm > Additional Resources: https://github.com/ktock/container2wasm#additional-resources :
vscode-container-wasm: https://github.com/ktock/vscode-container-wasm :
> VSCode extension for running containers on VSCode for the web (e.g. github.dev, [vscode.dev,]), leveraging container2wasm
Chromebooks can run vscode.dev and WASM.
Is this the best way to `git clone` and run `python -m this` in a Bash/ZSH shell on a Chromebook?
This could be useful with jupyter-repo2docker.
Maestro: A Linux-compatible kernel in Rust
There's also Kerla [1] (a monolithic kernel in Rust, aiming for Linux ABI compatibility), but that seems to have gone dormant for a few years.
Or Redox OS, which is still around: https://www.redox-os.org/. It has a microkernel design and is probably a bit more mature. It's also MIT licensed, so there is probably some opportunity for code sharing.
It seems that the project is dead. The repository has not received any commits in two years.
No it's not. See: https://gitlab.redox-os.org/redox-os/redox/
https://www.redox-os.org/news/development-priorities-2023-09...
The redox-os post mentions cosmic desktop, and future wayland support, which may now already be almost implemented?
The System76 blog appears to have updates regarding COSMIC DE: https://blog.system76.com/post/the-spirit-of-cosmic-december...
Components of Cosmic Desktop Rust-based Desktop Environment: https://github.com/pop-os/cosmic-epoch#components-of-cosmic-...
cosmic-comp/src/wayland/handlers https://github.com/pop-os/cosmic-comp/tree/master_jammy/src/...
Possible Meissner effect near room temperature: copper-substituted lead apatite
Some background: there were two Chinese teams publicly pursuing LK-99-derived room temperature superconductor, which I arbitrarily named "north China team" and "south China team". North China team was headed by Hongyang Wang (who lives in Beijing) and south China team was headed by Yao Yao (who lives in Guangzhou). They used different synthesis and different analysis, i.e. north China team used hydrothermal synthesis and used SQUID measurement, while south China team used solid state synthesis and used EPR measurement.
This is a joint paper of both teams. They reproduced results of each other (this is unclear in the paper, but stated in their behind-the-scene posts) and measured a clear sign of superconductivity. It is "near room temperature", because they are sure about 250 K (hence "near"), but not sure about 300 K. As for "possible", the behind-the-scene post makes it clear it is false modesty.
If you are interested, you definitely want to read behind-the-scene posts. Read them here: https://www.zhihu.com/question/637763289 (they are in Chinese). Hongyang Wang is 真可爱呆 and Yao Yao is 洗芝溪.
How does 250 K compare to current[1] superconductors?
[1] pun definitely intended.
250 K at ambient pressure would still be revolutionary, as it can be reached with dry ice. The highest critical temperature at ambient pressure had been something like 150 K, still above the boiling point of liquid nitrogen; that family of superconductors won the Nobel Prize in Physics in 1987.
The lowest temperature a freezer can achieve is typically around -23 degrees Celsius (-9 degrees Fahrenheit).
That's without laser cooling, near-field thermophotovoltaics (TPV), or thermoelectric solid-state refrigeration.
Infrared photons are thermal energy. Thermophotovoltaics and thermoelectrics convert work due to the thermal gradient into electricity.
Laser cooling: https://en.wikipedia.org/wiki/Laser_cooling
Cooling with LEDs in reverse: https://issuu.com/designinglighting/docs/dec_2022/s/17923182 :
"Near-field photonic cooling through control of the chemical potential of photons" (2019) https://www.nature.com/articles/s41586-019-0918-8 https://scholar.google.com/scholar?cites=8589611114160282602... :
> This demonstration of active nanophotonic cooling—without the use of coherent laser radiation—lays the experimental foundation for systematic exploration of nanoscale photonics and optoelectronics for solid-state refrigeration and on-chip device cooling.
Constraining dynamics of rotating black holes via the gauge symmetry principle
If gauge symmetry breaks in superfluids (ie. Bose-Einstein condensates); and there are superfluids at black hole thermal ranges; do gauge symmetry constraints break in [black hole] superfluids?
/? Is there high temperature superfluidity in black holes? (And what about low?) https://www.google.com/search?q=Is+there+high+temperature+su...?
Email addresses are not good 'permanent' identifiers for accounts
re: ORCID, schema.org/Person and schema.org/identifier, W3C DID: Decentralized Identifiers: https://news.ycombinator.com/item?id=28701355 :
> DIDs can replace ORCIDs - which you can also just generate a new one of - for academics seeking to group their ScholarlyArticles by a better identifier than a transient university email address.
DIDs are typically (or always) the public key part of a public/private (asymmetric) keypair.
> When would a DID be a better choice than a UUID? [or an email address]
Using a Markov chain to generate readable nonsense with 20 lines of Python
>> Making the prefix shorter tends to produce less coherent prose; making it longer tends to reproduce the input text verbatim. For English text, using two words to select a third is a good compromise; it seems to recreate the flavor of the input while adding its own whimsical touch.
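The quoted two-word-prefix scheme in roughly that many lines of Python (function and variable names are mine):

```python
import random
from collections import defaultdict

def markov(text: str, n_words: int = 30, seed: int = 0) -> str:
    # map each two-word prefix to the words observed to follow it
    words = text.split()
    table = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        table[(a, b)].append(c)
    random.seed(seed)
    a, b = words[0], words[1]
    out = [a, b]
    for _ in range(n_words):
        followers = table.get((a, b))
        if not followers:  # prefix only appears at the end of the input
            break
        c = random.choice(followers)
        out.append(c)
        a, b = b, c
    return " ".join(out)

print(markov("the quick brown fox jumps over the lazy dog and the quick brown cat"))
```

Shortening the prefix to one word makes the table choices wilder; lengthening it to three or more makes most prefixes unique, so the output parrots the input.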
Next word prediction; vector databases:
Vector database: https://en.wikipedia.org/wiki/Vector_database
Ask HN: What has quantum computing achieved so far?
As a lowly web developer I struggle to understand what concrete progress has been made in quantum computing. I understand there are papers that have shown results that no classical computer could achieve but what has translated into practical applications?
Quantum supremacy: https://en.wikipedia.org/wiki/Quantum_supremacy
"Scott’s Supreme Quantum Supremacy FAQ" (2019) https://news.ycombinator.com/item?id=21053405 https://www.scottaaronson.com/blog/?p=4317
/? quantum supremacy 2024: https://www.google.com/search?q=quantum+supremacy+2024
/? practical quantum advantage: https://www.google.com/search?q=quantum+advantage
Quantum annealing > D-Wave implementations : https://en.wikipedia.org/wiki/Quantum_annealing#D-Wave_imple... :
> In December 2015, Google announced that the D-Wave 2X outperforms both simulated annealing and Quantum Monte Carlo by up to a factor of 100,000,000 on a set of hard optimization problems.[35]
Timeline of quantum computing and communication > 2023: https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_... :
> 14 June [2023] – IBM computer scientists report that a quantum computer produced better results for a physics problem than a conventional supercomputer.[365][366] :
"Evidence for the utility of quantum computing before fault tolerance" (2023) : https://www.nature.com/articles/s41586-023-06096-3 :
> Quantum advantage can be approached in two steps: first, by demonstrating the ability of existing devices to perform accurate computations at a scale that lies beyond brute-force classical simulation, and second by finding problems with associated quantum circuits that derive an advantage from these devices. Here we focus on taking the first step and do not aim to implement quantum circuits for problems with proven speed-ups.
Quantum Algorithm Zoo lists speedups by algorithm; https://quantumalgorithmzoo.org/
https://westurner.github.io/hnlog/#q-tequila ctrl-f "tequila" :
From "Computational Chemistry Using PyTorch" https://news.ycombinator.com/item?id=36825353 :
> Tequila wraps Psi4, Madness, and/or PySCF for Quantum Chemistry with Expectation Values: https://github.com/tequilahub/tequila#quantumchemistry
And from https://github.com/tequilahub/tequila#quantum-backends :
> Quantum Backends currently supported by tequilahub/tequila: Qulacs, Qibo, Qiskit, Cirq (SymPy), PyQuil, QLM / myQLM
tequilahub/tequila-tutorials: https://github.com/tequilahub/tequila-tutorials
- tequila-tutorials/BasicUsage.ipynb: https://github.com/tequilahub/tequila-tutorials/blob/main/Ba...
- tequila-tutorials/Quantum_Calculator.ipynb : https://github.com/tequilahub/tequila-tutorials/blob/main/Qu...
Practical Q12 QIS STEM learning exercise ideas: https://news.ycombinator.com/item?id=38687045#38801630
Merkle Town: Explore the certificate transparency ecosystem
How CT works > "How CT fits into the wider Web PKI ecosystem": https://certificate.transparency.dev/howctworks/
Certificate Transparency > Tools for inspecting CT logs: https://en.wikipedia.org/wiki/Certificate_Transparency
Solar Cell Efficiency Table
Solar-cell efficiency: https://en.wikipedia.org/wiki/Solar-cell_efficiency
Ask HN: What's the best out-of-box Document OCR/Analyzing/recognition API?
I tried with Google Document AI and AWS Textract, they both seem a bit hard to use and AWS is especially pricy
"Show HN: BetterOCR combines and corrects multiple OCR engines with an LLM" (2023) https://news.ycombinator.com/context?id=38056243 :
junhoyeo/BetterOCR: https://github.com/junhoyeo/BetterOCR :
> Better text detection by combining multiple OCR engines (EasyOCR, Tesseract, and Pororo)
Thank you so much! We went with AWS finally because we have startup credits, but good to know the options and previous discussions here!
Bazzite – a SteamOS-like OCI image for desktop, living room, and handheld PCs
TIL about various things for rpm-ostree distros:
gnome-randr-rust: https://github.com/maxwellainatchi/gnome-randr-rust :
> `xrandr` for Gnome/wayland, on distros that don't support `wlr-randr`
Kernel-fsync: https://copr.fedorainfracloud.org/coprs/sentry/kernel-fsync/
gnome-vrr: https://copr.fedorainfracloud.org/coprs/kylegospo/gnome-vrr/ :
gsettings set org.gnome.mutter experimental-features "['variable-refresh-rate']"
obs-vkcapture: https://copr.fedorainfracloud.org/coprs/kylegospo/obs-vkcapt...
system76-scheduler: https://copr.fedorainfracloud.org/coprs/kylegospo/system76-s...
Model scale vs. domain knowledge in statistical forecasting of chaotic systems
"Model scale versus domain knowledge in statistical forecasting of chaotic systems" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> ABSTRACT: Chaos and unpredictability are traditionally synonymous, yet large-scale machine-learning methods recently have demonstrated a surprising ability to forecast chaotic systems well beyond typical predictability horizons. However, recent works disagree on whether specialized methods grounded in dynamical systems theory, such as reservoir computers or neural ordinary differential equations, outperform general-purpose large-scale learning methods such as transformers or recurrent neural networks. These prior studies perform comparisons on few individually chosen chaotic systems, thereby precluding robust quantification of how statistical modeling choices and dynamical invariants of different chaotic systems jointly determine empirical predictability. Here, we perform the largest to-date comparative study of forecasting methods on the classical problem of forecasting chaos: we benchmark 24 state-of-the-art forecasting methods on a crowdsourced database of 135 low-dimensional systems with 17 forecast metrics. We find that large-scale, domain-agnostic forecasting methods consistently produce predictions that remain accurate up to two dozen Lyapunov times, thereby accessing a long-horizon forecasting regime well beyond classical methods. We find that, in this regime, accuracy decorrelates with classical invariant measures of predictability like the Lyapunov exponent. However, in data-limited settings outside the long-horizon regime, we find that physics-based hybrid methods retain a comparative advantage due to their strong inductive biases.
"Can Machine Learning Predict Chaos? This Paper from UT Austin Performs a Large-Scale Comparison of Modern Forecasting Methods on a Giant Dataset of 135 Chaotic Systems" https://www.marktechpost.com/2023/12/25/can-machine-learning... :
From https://twitter.com/wgilpin0/status/1737934964757565848 :
> I found that large domain-agnostic models (Transformers, LSTM, etc) can forecast chaos really far into the future (>10 Lyapunov times). With enough training history, they outperform physics methods (next-gen reservoir computers, neural ODE, etc).
The art of high performance computing
I'm interested in what people think of the approach to teaching C++ used here. Any particular drawbacks?
I'm a very experienced Python programmer with some C, C++, and CUDA, doing application-level research in HPC environments (ML/DL). I'd really like to level up my C++ skills, and looking through book 3 it seems aimed at exactly the right level for me - it doesn't move too slowly and teaches best practices (per the author) rather than trying to be comprehensive.
> This book is notable for its coverage of MPI and OpenMP in C, Fortran, C++, and (for MPI) Python.
"The Art of HPC", volume 2 > "Parallel Programming for Science Engineering" https://theartofhpc.com/pcse/index.html
FWIW, MPI is only one way to do HPC in Python.
ipyparallel will run MPI jobs over tunnels you create yourself IIRC.
A chapter on dask-scheduler, CuDF, CuGraph (NetworkX), DaskML, and CuPy, and dask-labextension would be more current.
Dask doesn't handle data storage for you, so it's your responsibility to make sure that the data store(s) before each barrier are not the performance bottleneck.
Dask docs > High Performance Computers: https://docs.dask.org/en/stable/deploying-hpc.html
Sources of random may be the bottleneck. You don't know until you profile the job across the cluster.
Re: eBPF-based tracing tools: https://news.ycombinator.com/item?id=31688180
And then something about GitOps (and ChatOps), code review and revision, and project resource quotas
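A stdlib-only sketch of the task-parallel map pattern that ipyparallel and Dask schedule across a cluster (this is not their actual API; the squaring workload is a stand-in for real compute):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work(x):
    """Stand-in for a compute kernel; sleeps briefly to mimic real work."""
    time.sleep(0.01)
    return x * x

# Fan the tasks out over a small worker pool and gather results in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))
```

The cluster schedulers add the hard parts this sketch omits: data locality, resilience, and the inter-barrier storage bottlenecks mentioned above.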
Aloe vera plants turned into energy-storing supercapacitors
"All Plant-Based Compact Supercapacitor in Living Plants" https://onlinelibrary.wiley.com/doi/full/10.1002/smll.202307... :
> Abstract: Biomass-based energy storage devices (BESDs) have drawn much attention to substitute traditional electronic devices based on petroleum or synthetic chemical materials for the advantages of biodegradability, biocompatibility, and low cost. However, most of the BESDs are almost made of reconstructed plant materials and exogenous chemical additives which constrain the autonomous and widespread advantages of living plants. Herein, an all-plant-based compact supercapacitor (APCSC) without any nonhomologous additives is reported. This type of supercapacitor formed within living plants acts as a form of electronic plant (e-plant) by using its tissue fluid electrolyte, which surprisingly presents a satisfying electrical capacitance of 182.5 mF cm−2, higher than those of biomass-based micro-supercapacitors reported previously. In addition, all constituents of the device come from the same plant, effectively avoid biologically incompatible with other extraneous substances, and almost do no harm to the growth of plant. This e-plant can not only be constructed in aloe, but also be built in most of succulents, such as cactus in desert, offering timely electricity supply to people in extreme conditions. It is believed that this work will enrich the applications of electronic plants, and shed light on smart botany, forestry, and agriculture.
> From https://news.ycombinator.com/item?id=36965107 :
>>> Here's a discussion about the lower costs of hemp supercapacitors as compared with graphene super capacitors: https://news.ycombinator.com/item?id=16814022
> FWIU, Heat-treated hemp bast fiber is comparable to graphene electrodes, and at least was far less costly because bast fiber is otherwise a waste output (and graphene at least was hazardous to produce and doesn't have a natural dendritic branch structure).
> "Hemp Carbon Makes Supercapacitors Superfast" (2013) https://www.asme.org/engineering-topics/articles/energy/hemp... :
>> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.
Would hemp ultracapacitor anodes work for aloe vera -based super capacitors? What would be the production cost and functional advantages?
Quantum Leap in Graphite: Attoscience Lights the Way to Superconductivity
"Enhanced optical conductivity and many-body effects in strongly-driven photo-excited semi-metallic graphite" (2023) https://www.nature.com/articles/s41467-023-43191-5 :
> Abstract: The excitation of quasi-particles near the extrema of the electronic band structure is a gateway to electronic phase transitions in condensed matter. In a many-body system, quasi-particle dynamics are strongly influenced by the electronic single-particle structure and have been extensively studied in the weak optical excitation regime. Yet, under strong optical excitation, where light fields coherently drive carriers, the dynamics of many-body interactions that can lead to new quantum phases remain largely unresolved. Here, we induce such a highly non-equilibrium many-body state through strong optical excitation of charge carriers near the van Hove singularity in graphite. We investigate the system’s evolution into a strongly-driven photo-excited state with attosecond soft X-ray core-level spectroscopy. We find an enhancement of the optical conductivity of nearly ten times the quantum conductivity and pinpoint it to carrier excitations in flat bands. This interaction regime is robust against carrier-carrier interaction with coherent optical phonons acting as an attractive force reminiscent of superconductivity. The strongly-driven non-equilibrium state is markedly different from the single-particle structure and macroscopic conductivity and is a consequence of the non-adiabatic many-body state.
IBM and Top Universities to Advance Quantum Education for 40k Students
Q12: Quantum K12; K12 QIS
- "Future is quantum: universities look to train engineers for an emerging industry" (2023) https://news.ycombinator.com/item?id=38255226 ; QISkit tuts, QuantumQ, The Qubit Game, Quantum Dots as teachable systems,
- "Quantum in the Chips and Science Act of 2022" https://news.ycombinator.com/item?id=32421566 :
> The NSF Next Generation Quantum Leaders Pilot Program authorized by this legislation, and which builds upon NSF’s role in the Q-12 Education Partnership, will help the Nation develop a strong, diverse, and future-leaning domestic base of talent steeped in fundamental principles of quantum mechanics, the science that underlines a host of technologies
> #Q12 > QIS K-12 Framework: https://q12education.org/learning-materials/framework :
> The framework for K-12 quantum education outlined here is an expansion of the original QIS Key Concepts, providing a detailed route towards including QIS topics in K-12 physics, chemistry, computer science and mathematics classes. The framework will be released in sections as it is completed for each subject.
> As QIS is an emerging area of science connecting multiple disciplines, content and curricula developed to teach QIS should follow the best practices. The K-12 quantum education framework is intended to provide some scaffolding for creating future curricula and approaches to integrating QIS into physics, computer science, mathematics, and chemistry (mathematics and chemistry are not yet complete). The framework is expected to evolve over time, with input from educators and educational researchers.
From https://news.ycombinator.com/item?id=34574791 :
> "World Quantum Day: Meet our researchers and play The Qubit Game" ; 2023-01-29
> Additional Q12 (K12 QIS Quantum Information Science) ideas?:
> - Exercise: Port QuantumQ quantum puzzle game exercises to a quantum circuit modeling and simulation library like Cirq (SymPy) or qiskit or tequila: https://github.com/ray-pH/quantumQ
> - Exercise: Model fair random coin flips with qubit basis encoding in a quantum circuit simulator in a notebook
> - Exercise: Model fair (uniformly distributed) [2, 8, then] 6-sided die rolls with basis state embedding or amplitude embedding or better (in a quantum circuit simulator in a notebook)
> QISkit tuts
https://learning.quantum.ibm.com/catalog/courses
IBM Quantum > Hello World: https://docs.quantum.ibm.com/start/hello-world
https://westurner.github.io/hnlog/#story-38658931 ctrl-f "QISkit"
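The fair-coin-flip exercise above can even be sketched classically, by simulating a single qubit's two amplitudes directly rather than using a quantum backend like Cirq or Qiskit (a minimal illustration, not a circuit-library example):

```python
import math
import random

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (a, b) = amplitudes of |0>, |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure(state):
    """Collapse: return 0 with probability |a|^2, else 1."""
    a, _ = state
    return 0 if random.random() < abs(a) ** 2 else 1

plus = hadamard((1.0, 0.0))  # H|0> = (|0> + |1>)/sqrt(2): a fair "quantum coin"
flips = [measure(plus) for _ in range(1000)]
```

The die-roll exercise generalizes this to three qubits (8 basis states), rejecting or re-weighting the two unused outcomes for a fair 6-sided die.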
High-sensitivity terahertz detection by 2D plasmons in transistors
"Gate-readout and a 3D rectification effect for giant responsivity enhancement of asymmetric dual-grating-gate plasmonic terahertz detectors" (2023) https://www.degruyter.com/document/doi/10.1515/nanoph-2023-0... :
> [...] An antenna connected to the source and gate electrodes of a single-gate-type of these transistors feeds AC THz voltage between them, resulting in excitation of 2D plasmons in their channels and generation of rectified photocurrent by their hydrodynamic nonlinearities [11, 12]. Alternatively, we have developed the so-called grating-gate transistors [12–17] as a type of plasmonic THz detectors. The grating-gate structure serves as a deep-subwavelength coupler that enables direct, efficient, broadband conversion from the incident THz waves to the 2D plasmons, rather than a coupling through an integrated antenna as in the single-gate transistors. [...]
"Metasurface antenna to manipulate all fundamental characteristics of EM waves" (2023) https://news.ycombinator.com/item?id=38658967 :
> "Electrons turn piece of wire into laser-like light source" (2023) https://news.ycombinator.com/item?id=33490730 :
> "Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2023) https://doi.org/10.21203/rs.3.rs-1572967/v1
^^ FWIU this transmits laser and THz wavelengths?
Snomed CT Entity Linking Challenge
Tangentially, but also to the point,
Software Development IDEs have (optional) autocomplete;
Unstructured Medical Coding interfaces could also have autocomplete,
such that when you type `icd:` or `snomed:` it presents a search interface for that particular medical terminology / vocabulary / system of classification / categories.
GNU Health > Issue tracker > "Freetext ICD-10 references as URIs (e.g. icd10:A01)" (2013) https://lists.gnu.org/archive/html/health-dev/2013-12/msg000...
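A toy sketch of the `icd:` / `snomed:` autocomplete trigger described above; the vocabularies and codes here are tiny illustrative samples, not real terminology files:

```python
# Hypothetical sample vocabularies keyed by scheme prefix.
VOCABULARIES = {
    "icd": {"A01": "Typhoid and paratyphoid fevers",
            "A02": "Other salmonella infections"},
    "snomed": {"386661006": "Fever",
               "22298006": "Myocardial infarction"},
}

def complete(text):
    """If the last token looks like 'scheme:query', return matching (code, label) pairs."""
    parts = text.split()
    token = parts[-1] if parts else ""
    if ":" not in token:
        return []
    scheme, _, query = token.partition(":")
    vocab = VOCABULARIES.get(scheme, {})
    return [(f"{scheme}:{code}", label)
            for code, label in vocab.items()
            if code.startswith(query) or query.lower() in label.lower()]
```

A real implementation would query a terminology service (or a local SQLite export, as PyMedTermino does) instead of an in-memory dict.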
I'm not convinced that auto complete would add much value to clinical workflows. Clinicians typically don't enter the codes themselves. In order to write those unstructured narrative blocks they might type them in manually, fill in the blanks on a template, or dictate. But in any case the clinician typically isn't entering billing or clinical codes. Usually certified medical coders add codes afterwards using specialized interfaces, often with some kind of NLP assistance.
Yeah, so it's at that workflow where they're not getting linked data edges out of their own unstructured data input.
To prompt them at that time for whether or not this referenced thing in a dictated .txt file is actually a thing with a URI and have them confirm that those are the correct annotations for their input would save a lot of time and money.
Requisite study for Linked Data clinical coding / informatics:
- HIPAA and unstructured notes, HIPAA and Linked Data; Informed Consent; Precision Medicine
- Tokens; NLP, stemming, conceptual entity recognition
- The LODcloud; a great big graph of Linked Open Data (that our data does not yet link to, is siloed separately from, does not yet have references to existing URIs in)
- RDFa: RDF-in-html-Attributes
- JSON-LD: JSON Linked Data
- Schema.org/MedicalEntity ; /docs/schemas.html -> "Health and medical types" https://schema.org/docs/meddocs.html
- TTS Text-to-Speech, Speech-to-Text; Multimodal models with RAG, LORA, RLHF, Transformer Self Attention tensor Networks, Benchmarking OpenAI Whisper with relative performance metrics, ONNX
- FHIR for EHR data portability (JSON-LD,)
UX Improvements:
- More IDE-like unstructured data entry
- Auto-annotate this text field in place
- Auto-annotate this pasted text
- Auto-annotate this text field and display the linked data annotations
- Display the revisions of the unstructured note -> annotated linked data document
- Autocomplete with search from e.g. icd:
- Frequently used annotations per Clinician/Provider
- Recently used annotations per Clinician/Provider
- Hover over visually distinguished auto-recognized entities and approve/clarify/reject
- Indicate which prov:Agent added which annotations; human (name(s)), AI (signed git revision of repo URI)
- Indicate degree of confidence in annotation (note that AGI hypergraph systems have TruthValue and also AttentionValue, like attention networks; and that AGI systems may someday also support CDSS: Clinical Decision Support Systems and Health Decision Support Systems)
Right, but physicians currently don't have time to do that and don't get paid for it. For provider organizations it ends up being more profitable to hire certified coders to do the coding so that physicians can focus on treating patients.
Why aren't they trained in the above? And who will ask large EHR vendors for those features - who would do the asking, and is there a forms committee yet?
It doesn't make sense to train most physicians in coding. We already have a shortage of doctors. They need to focus on directly treating patients while lower paid staff handle other tasks.
Most EHRs already have features for building custom data entry template forms for common situations, which can allow for selecting specific codes. Large provider organizations typically have internal committees to design these forms in order to ensure some level of consistency in patient charts.
Hot take: Most clinicians should code at time of data entry in order to not play telephone with patients' medical information.
Transmission chain method > Application in studies of memory: https://en.wikipedia.org/wiki/Transmission_chain_method
https://en.wikipedia.org/wiki/False_memory_syndrome#Evidence... :
> Human memory is created and highly suggestible
Though nowhere is it shown that clinicians are even competent enough to add RDFa linked data annotations to their TXT unstructured notes with an encabulator.
It's clear that there's value in disambiguation; did they say "biomemetic" or "liquid metal" days ago on a dictaphone in a loud environment?
> Though nowhere is it shown that clinicians are even competent enough to add RDFa linked data annotations to their TXT unstructured notes with an encabulator.
You can add Linked Data (JSON-LD RDF) annotations with threaded markdown to document term vectors with W3C Web Annotations.
This [SNMOED-CT, ICD10-] competition could be scored with Web Annotations as the schema for the output.
W3C Web Annotations open source implementations:
- hypothesis/h: https://github.com/hypothesis/h
- https://github.com/linkeddata/dokieli
- https://www.w3.org/annotation/wiki/Implementations
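A minimal Web Annotation for the clinical-coding case might look like this; a sketch against the W3C Web Annotation model, where the note URL and character offsets are made up:

```python
import json

# Tag characters 105-110 of a (hypothetical) clinical note with a SNOMED CT
# concept URI, using the standard anno.jsonld context and a TextPositionSelector.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {"id": "http://snomed.info/id/386661006", "purpose": "tagging"},
    "target": {
        "source": "https://example.org/notes/42",
        "selector": {
            "type": "TextPositionSelector",
            "start": 105,
            "end": 110,
        },
    },
}
serialized = json.dumps(annotation, indent=2)
```

Scoring the competition's span predictions then reduces to comparing selectors and body IRIs.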
ElasticSearch's Term Vector optionally returns with_positions_offsets, with_positions_offsets_payloads: https://www.elastic.co/guide/en/elasticsearch/reference/curr...
Meilisearch has showMatchesPosition: https://www.meilisearch.com/docs/reference/api/search#show-m... :
> `showMatchesPosition` returns the location of matched query terms within all attributes, even attributes that are not set as searchableAttributes.
Meilisearch > Comparison to Alternatives; Algolia, ElasticSearch, Meilisearch, Typesense: https://www.meilisearch.com/docs/learn/what_is_meilisearch/c...
And then now instead, Vector search engines: TODO
Vector database: https://en.wikipedia.org/wiki/Vector_database
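A sketch of what a vector search engine does at its core: rank documents by cosine similarity of their embeddings. The 2-D vectors and labels here are illustrative toys, not real embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings": near-synonyms land near each other.
docs = {"fever": [0.9, 0.1], "infarction": [0.1, 0.9], "pyrexia": [0.85, 0.2]}
query = [0.88, 0.15]
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

Production vector databases add approximate-nearest-neighbor indexes (HNSW, IVF) so this ranking doesn't require a full scan.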
You're missing the point. We have a shortage of clinicians. Any extra time spent doing data entry is time not spent treating patients. As long as claims are getting paid, coded clinical data quality is a secondary concern. Changing this would require major structural realignments in the entire healthcare system, which obviously won't happen any time soon.
> The objective of this competition is to link spans of text in clinical notes with specific topics in the SNOMED CT clinical terminology. Participants will train models based on real-world doctor's notes which have been de-identified and annotated with SNOMED CT concepts by medically trained professionals. This is the largest publicly available dataset of labelled clinical notes, and you can be one of the first to use it!
NER: Named Entity Recognition: https://en.wikipedia.org/wiki/Named-entity_recognition
awesome-medical-coding-NLP: https://github.com/acadTags/Awesome-medical-coding-NLP
awesome-ehr-deep-learning: https://github.com/hurcy/awesome-ehr-deeplearning
awesome-ner: https://github.com/smiyawaki0820/awesome-ner
awesome-bioie > Research groups: https://github.com/caufieldjh/awesome-bioie#groups-active-in...
SNOMED-CT as RDF: https://sphn-semantic-framework.readthedocs.io/en/latest/ext...
...
SNOMED-CT is a Medical Terminology.
SNOMED-CT: https://en.wikipedia.org/wiki/SNOMED_CT
Traditionally, a Terminology Service provides a query endpoint over a schema like SNOMED-CT (which is a nonredistributable medical XML schema, which is challenging for Open Linked Data):
https://github.com/NCIP/lexevs
https://github.com/OpenConceptLab
https://openconceptlab.org/terminology-service/
https://github.com/MedevaKnowledgeSystems/pymedtermino/blob/... :
> For SNOMED CT, ICD10 and MedDRA, the data are not included (because they are not freely redistribuable) but they can be downloaded in XML format. PyMedTermino includes scripts for exporting these data into SQLite3 databases.
Show HN: AI generated coloring pages for kids
Paint by number: https://en.wikipedia.org/wiki/Paint_by_number
A prompt:
Generate a JS + SVG color-by-number coloring book page for children from the image at this URL: , such that when the user interacts with an SVG layer object for a segment of the coloring page, its background changes in any of a number of ways: solid fill, fill under the touch path bounded by the hull(s) of the segments corresponding to the selected numbered color, reveal the image underneath, or fill with chiaroscuro shading.
Include addition and subtraction math problem solution explanations and sample problems
Link to the most relevant (Khan Academy) sequences with lessons and exercises for the corresponding curricula
Include IPA characters, corresponding alphabetic letter phonemes, short words which feature such phonemes, and draw smiley faces on svg graphics depicting said words
Electronic soil boosts crop growth
Electroculture has a very long history dating back to the mid-18th century [1], but it's still a bit of a mystery. Quite fittingly, the man who pioneered it, Jean-Antoine Nollet, also discovered osmosis, which is probably why the method is effective.
The electrical current causes electrophoresis, which increases ion mobility and allows the roots to more easily absorb ions via electroosmosis, while also increasing ion exchange between insoluble soil particles and the water. On top of that, the ions coming off the electrodes probably contribute, since copper is a cofactor in many enzymes that affect photosynthesis, respiration, and signal transduction.
It doesn't always work, though, depending on the soil, the electrical current used (strength and duration), and the species. Done incorrectly, all it'll do is damage the root system and kill the plant.
"Electrical currents associated with arbuscular mycorrhizal interactions" (1995) https://scholar.google.com/scholar?cites=3517382204909176031...
(Edit) Pourbaix diagram > Applications > Concept of pe in environmental chemistry; "EH–pH diagram or a pE/pH diagram" : https://en.wikipedia.org/wiki/Pourbaix_diagram#Concept_of_pe...
TDS: https://en.wikipedia.org/wiki/Total_dissolved_solids
EC probe: https://en.wikipedia.org/wiki/Electrical_conductivity_meter
> arbuscular mycorrhizal interactions
This is REALLY interesting.
I was just looking at horizontal gene transfer, which is the idea of genes transferring between plant species via the mycorrhizal hyphae (the root highway between plant roots and mushroom mycelium).
https://news.ycombinator.com/threads?id=samstave#38756295
Look at the minerals which are transferred via the hyphae: (conductive)
>mycorrhizae, known as root fungi, form symbiotic associations with plant roots. In these associations, the fungi are actually integrated into the physical structure of the root. The fungi colonize the living root tissue during active plant growth. Through mycorrhization
>>the plant obtains phosphate and other minerals, such as zinc and copper, from the soil.
>The fungus obtains nutrients, such as sugars, from the plant root. Mycorrhizae help increase the surface area of the plant root system because hyphae, which are narrow, can spread beyond the nutrient depletion zone. <-- may lead to why the plants may seem far enough apart.
What's interesting to note would be the conductivity of the mycelial network entangled with the root system. I'd bet you could find exactly the right frequency based on plant species.
What would have been a truly interesting alternate history is if Tesla had prevailed.
Both through wireless power transfer and, related to this, his transfer of electricity through the ground.
I wonder whether, had that become an adopted technology, we would have seen an effect on the flora.
EDIT: I'd really like to know what the electrical impulses between a mycorrhizal network (gonna start calling it Mz) and a plant's root system are - they specifically connect with "vascular root" based species...
This seems to indicate plants with root systems which may be conductive.
The metaphysical Alex Grey in me wants to believe that the mycelium is just mycological neurons without a body, symbiotically parasitizing its way around in a Beneficial Dictator manner in which it is a net-positive viral biological entity. (But it's still a neuron.)
FWIU ground discharge is a significant liability; "must've been lodestone there".
Higher EC soil is presumably a better path to ground. Lightning current is too strong and damages plants.
Null hypothesis: There are no companion planting approaches that result in greater yield due to idk symbiotic Eh-Ph homeostasis
Three Sisters: Corn, Bean, Squash
Tomatoes and potatoes grow well together in a 5gal HDPE bucket with hole cut into the side to access the potatoes; no idea whether there is electrical production from the tomatoes waving in the wind
Companion planting > Mechanisms doesn't list soil EC or electricity Output given e.g. Eh-Ph and moisture: https://en.wikipedia.org/wiki/Companion_planting
Similar research:
- "Low current around roots boosts plant growth" (2023) https://news.ycombinator.com/item?id=38256137 :
"In situ self-induced electrical stimulation to plants: Modulates morphogenesis, photosynthesis and gene expression in Vigna radiata and Cicer arietinum" (2023) https://www.sciencedirect.com/science/article/abs/pii/S15675... https://scholar.google.com/scholar?cites=2047559626560052080...
"eSoil: Low power bioelectronic growth scaffold enhances crop seedlings growth" (2023) http://dx.doi.org/10.1073/pnas.2304135120
- "Electronic “soil” enhances crop growth" https://www.eurekalert.org/news-releases/1029701 :
> Barley seedlings grow on average 50% more when their root system is stimulated electrically through a new cultivation substrate. In a study published in the journal PNAS, researchers from Linköping University have developed an electrically conductive "soil" for soilless cultivation; eSoil
Looks like the PNAS DOI URL is 404'ing?
An AI-generated reference that just doesn't exist - a hallucination?
maybe: https://pubmed.ncbi.nlm.nih.gov/37494449/ or https://pubmed.ncbi.nlm.nih.gov/36935365/ found by searching for mentioned researcher: Stavrinidou
also from 2015: https://pubmed.ncbi.nlm.nih.gov/26702448/
The DOI PNAS URL works now! http://dx.doi.org/10.1073/pnas.2304135120
Also neat for eSoil: "Sensor-Free Soil Moisture Sensing Using LoRa Signals" (2022) https://news.ycombinator.com/item?id=38768947
"Bacterial sensors send a jolt of electricity when triggered" (2022) https://news.rice.edu/news/2022/bacterial-sensors-send-jolt-...
"Real-time bioelectronic sensing of environmental contaminants" (2022) https://www.nature.com/articles/s41586-022-05356-y ; bioelectric chemopresence sensors that release electricity when [...]
Knowledge Graph Reasoning Based on Attention GCN
"Snomed CT Entity Linking Challenge" https://news.ycombinator.com/item?id=38744177 :
> - Indicate degree of confidence in annotation (note that AGI hypergraph systems have TruthValue and also AttentionValue, like attention networks
From "AutoML-Zero: Evolving Code That Learns" https://news.ycombinator.com/item?id=23787359 :
> How does this compare to MOSES (OpenCog/asmoses) or PLN? https://github.com/opencog/asmoses https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22... (2006)
opencog/atomspace is a hypergraph for knowledge graphs with TruthValue and AttentionValue. https://github.com/opencog/atomspace
examples/python/create_atoms_simple.py: https://github.com/opencog/atomspace/blob/master/examples/py...
- [ ] Clone Atomspace hypergraph with RDFstar and SPARQLstar.
ONNX is a standard and also now an ecosystem for exchange of neural networks. https://en.wikipedia.org/wiki/Open_Neural_Network_Exchange
RDF HDT (Header, Dictionary, Triples) is fast to read but slow to write.
From https://news.ycombinator.com/item?id=35810320 :
> Is there a better way to publish Linked Data with existing tools like LaTeX, PDF, or Word? Which support CSVW? Which support RDF/RDFa/JSON-LD?
Similar tools for running SQL queries over CSVs in Python:
- pandas.dataframe.read_csv(), dask.dataframe.read_csv(), dask_cudf.read_csv(); .read_parquet()
- DuckDB / DuckDB-WASM: https://news.ycombinator.com/item?id=36502107 https://github.com/duckdb/duckdb https://github.com/duckdb/duckdb-wasm
- sqlite-utils > "Running queries directly against CSV or JSON" https://sqlite-utils.datasette.io/en/stable/cli.html#running... and optionally datasette / datasette-lite ?csv= for a GUI: https://github.com/simonw/datasette-lite#loading-csv-data
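In the same spirit, the stdlib alone can run SQL over a CSV by staging it into an in-memory SQLite table; a minimal sketch of the pattern sqlite-utils and DuckDB wrap in one command (the CSV content here is made up):

```python
import csv
import io
import sqlite3

# Toy CSV; in practice this would come from open("file.csv").
csv_text = "name,qty\napples,3\npears,5\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fruit (name TEXT, qty INTEGER)")
# DictReader rows bind directly as named parameters.
con.executemany("INSERT INTO fruit VALUES (:name, :qty)", rows)
total, = con.execute("SELECT SUM(qty) FROM fruit").fetchone()
```

DuckDB skips the staging step entirely (`SELECT * FROM 'file.csv'`), which is much faster for large files.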
PartCAD the first package manager for CAD models
build123d currently lists just bd-warehouse in the docs under Part Libraries: https://build123d.readthedocs.io/en/latest/external.html#par...
build123d is an open source parametric 2D and 3D CAD tool in Python that succeeds cadquery, and instead of jQuery/pyquery-like method chaining has `with:` context managers. https://build123d.readthedocs.io/en/latest/
bd-warehouse is a parametric parts library: https://bd-warehouse.readthedocs.io/en/latest/ :
> [...] With just a few lines of code, you can create parametric parts that are easily reviewable and version controlled using tools like git and GitHub. Documentation can be automatically generated from the source code of your designs, similar to the documentation you’re currently reading. Additionally, comprehensive test suites can automatically validate parts, ensuring that no flaws are introduced during their lifecycle.
> The benefits of adopting a full software development pipeline are numerous and extend beyond the scope of this text. ; git w/ tests and CI/CD
Ideas for open source (parametric) parts libraries:
- [ ] Dimensional lumber (1x4, 2x4, 2x10) and e.g. HempWood would be a great addition to a parts library.
- [ ] ldraw LEGO unofficial parts catalog: "LDraw.org Parts Update 2023-06" https://library.ldraw.org/updates?latest
- [ ] MicroBit
- [ ] RaspberryPi
- [ ] GlTF: "Google Earth 3D Models Now Available as Open Standard (GlTF)" (2023) ; land, buildings: https://news.ycombinator.com/item?id=35896176
- [ ] GlTF: planes: https://news.ycombinator.com/item?id=38268003 , trains: https://www.google.com/search?q=gltf+trains
- [ ] Humans with splines: https://github.com/killop/anything_about_game#humanstage , https://github.com/daz3d/DazToBlender ; Daz2Blender a few open source meshes
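The dimensional lumber idea above could start as small as a table of nominal-to-dressed sizes plus a parametric length. This sketch is illustrative only; the `Lumber` class and names are not from any existing parts library:

```python
# Hedged sketch of a parametric lumber "part": nominal names map to actual
# dressed cross-sections (inches), which a CAD parts library could expose
# as parameters.
from dataclasses import dataclass

# Nominal -> actual dressed dimensions for common softwood lumber (inches)
DRESSED = {"1x4": (0.75, 3.5), "2x4": (1.5, 3.5), "2x10": (1.5, 9.25)}

@dataclass
class Lumber:
    nominal: str
    length_in: float

    @property
    def section(self):
        return DRESSED[self.nominal]

    @property
    def volume_in3(self):
        t, w = self.section
        return t * w * self.length_in

stud = Lumber("2x4", 96)  # an 8-foot stud
```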
FWIW Conda-forge has signatures and builds with clang in CI according to feedstock conda recipes (which have a meta.yaml and optionally build.sh and/or build.bat, and a URL to watch for upstream changes).
https://SLSA.dev supports https://sigstore.dev artifact signatures.
Docker containers are ~= OCI containers which are stored in OCI container image repositories; which can host other artifacts with content signatures too.
Ask HN: How Does Relativity Apply to Rotational Motion?
I'm trying to get a clearer understanding of how relativity works. While I understand that movement is relative in terms of linear motion, I'm curious about how this applies to rotational motion. Specifically, does the absence of centrifugal force in a system indicate that there is absolutely no rotation occurring? In other words, can we use the lack of centrifugal forces as a definitive sign of non-rotation in absolute terms, or is this concept still subject to relativity?
In Superfluid Quantum Relativities (or Superfluid Quantum Gravity model(s)), the vorticity of the quasiparticle fluid presumably results from the nonlinear [complex] attractor system;
But your question is about whether there can be centrifugal force in a non-rotating system.
There are black holes that aren't rotating.
Gödel's dust fluid solutions to Relativity eventually satisfied the prevailing criteria of cosmological expansion and rotation.
Centrifugal force: https://en.wikipedia.org/wiki/Centrifugal_force
Inertial frame of reference: https://en.wikipedia.org/wiki/Inertial_frame_of_reference
If there are no straight-line paths through spacetime (which is necessarily curved if there is mass inertia to describe), aren't most spacetime paths relatively curved, and is that rotation?
All transformations in Minkowski space-time are rotations in space-time.
Geodesics in general relativity: https://en.wikipedia.org/wiki/Geodesics_in_general_relativit... :
> In Minkowski space there is only one geodesic that connects any given pair of events, and for a time-like geodesic, this is the curve with the longest proper time between the two events. In curved spacetime, it is possible for a pair of widely separated events to have more than one time-like geodesic between them. In such instances, the proper times along several geodesics will not in general be the same. For some geodesics in such instances, it is possible for a curve that connects the two events and is nearby to the geodesic to have either a longer or a shorter proper time than the geodesic. [11]
In superfluid spacetime, n-body gravity is most correctly predicted by a fluidic or a dust model with vorticity, which is like a rotation.
/? Vorticity curl: https://www.google.com/search?q=vorticity+curl :
- "Vorticity dynamics" (2016) https://www.researchgate.net/publication/300133454_Vorticity... :
> The vorticity field is the curl of the velocity field, and is twice the rotation rate of fluid particles. The vorticity field is a vector field, and vortex lines may be determined from a tangency condition similar to that relating streamlines to the fluid velocity field. However, vortex lines have several special properties and their presence or absence within a region of interest may allow certain simplifications of the field equations for fluid motion. In particular, vortex lines are carried by the flow and cannot end within the fluid, and this constrains their possible topology. Vorticity is typically present at solid boundaries and it may diffuse into the flow via the action of viscosity. Vorticity may be generated within a flow wherever there is an unbalanced torque on fluid elements, such as when pressure and density gradients are misaligned. The characteristics and geometry of a vortex line allow the velocity it induces at a distant location to be determined. Thus, multiple vortex lines that are free to move within a fluid may interact with each other. In a rotating coordinate frame, the observed vorticity depends on the frame's rotation rate.
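The "vorticity is twice the rotation rate" claim can be checked numerically for solid-body rotation, where the velocity field is u = (-ω·y, ω·x) and the curl ∂v/∂x − ∂u/∂y should come out to 2ω (a stdlib-only finite-difference sketch with a made-up sample point):

```python
# Numerical check: for solid-body rotation at rate omega, vorticity == 2*omega.
omega = 0.5

def u(x, y):
    # velocity components of solid-body rotation about the origin
    return -omega * y, omega * x

def vorticity(x, y, h=1e-6):
    # central finite differences for dv/dx - du/dy
    dv_dx = (u(x + h, y)[1] - u(x - h, y)[1]) / (2 * h)
    du_dy = (u(x, y + h)[0] - u(x, y - h)[0]) / (2 * h)
    return dv_dx - du_dy

w = vorticity(1.3, -0.7)
# w is approximately 2 * omega == 1.0 at any point in the flow
```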
Ask HN: Best blog tutorial explaining Assembly code?
https://news.ycombinator.com/item?id=23930335#23931373 ; the HLA book: https://en.wikipedia.org/wiki/High_Level_Assembly
Code a sum function, Compile artifact1; Mutate/Change a line, Compile artifact2; and bindiff artifact1 artifact2
HLA is easier to follow than diffing to see what ASM gcc builds from C without flags to simplify the output bytecode. https://news.ycombinator.com/item?id=36454485
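The compile-and-bindiff exercise above can be approximated entirely in Python with the stdlib dis module (diffing bytecode rather than machine code; the two sum functions are made up for illustration):

```python
# Code a sum function, disassemble; mutate a line, disassemble; diff the two.
import difflib, dis

def sum_v1(a, b):
    return a + b

def sum_v2(a, b):
    return a + b + 1  # the mutated line

asm1 = dis.Bytecode(sum_v1).dis().splitlines()
asm2 = dis.Bytecode(sum_v2).dis().splitlines()
diff = list(difflib.unified_diff(asm1, asm2, lineterm=""))
# diff is non-empty: the extra constant and addition show up as new instructions
```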
Learn X in Y minutes > MIPS Assembly: https://learnxinyminutes.com/docs/mips/
This is almost Hello World:
https://stackoverflow.com/questions/2246748/assembly-linking... ~:
cat hello.asm
# section .text
# jmp [rax]
nasm -f elf64 hello.asm
objdump -Sr hello.o
/? nasm helloworld:
https://www.google.com/search?q=nasm+helloworld :
- https://www.devdungeon.com/content/hello-world-nasm-assemble... : a succinct Hello World program; 32-bit
- [ ] port this to 64-bit
- "NASM Tutorial" https://cs.lmu.edu/~ray/notes/nasmtutorial/ ; a good "blog tutorial explaining Assembly code" per OT
"What is better "int 0x80" or "syscall" [or "sysenter" or VDSO] in 32-bit code on Linux?" https://stackoverflow.com/questions/12806584/what-is-better-... links to https://stackoverflow.com/questions/12776340/system-calls-im... which links to the vDSO Wikipedia article, which explains that vDSO supports ASLR and is more portable than `int 80` (which is the old 32bit way to syscall)
vDSO: https://en.wikipedia.org/wiki/VDSO
"Linux Assembly HOWTO" http://tldp.org/HOWTO/Assembly-HOWTO/
MASM only works on Windows. NASM works on Lin/Mac/Win; there's also a nasm.exe. And, like MASM, NASM assembles Intel ASM syntax (unlike GNU Assembler (`gas`))
/? nasm masm differences: https://www.google.com/search?q=nasm+masm+differences
/? Yasm Fasm:
x86 Assembly/x86 Assemblers: https://en.wikibooks.org/wiki/X86_Assembly/x86_Assemblers ; GAS, YASM; MASM, JWASM; NASM, FASM, YASM; HLA
WASM: WebAssembly > Specification > Wasm program > Code representation > C source code and corresponding WebAssembly https://en.wikipedia.org/wiki/WebAssembly#Code_representatio...
WASI: https://en.wikipedia.org/wiki/WebAssembly :
> WebAssembly System Interface (WASI) is a simple interface (ABI and API) designed by Mozilla intended to be portable to any platform.[84] It provides POSIX-like features like file I/O constrained by capability-based security.[85][86] There are also a few other proposed ABI/APIs.[87][88]
[88]: webassembly/wasm-c-api: https://github.com/WebAssembly/wasm-c-api
/? WASM tutorial: https://www.google.com/search?q=wasm+tutorial
/? WASM helloworld: https://www.google.com/search?q=wasm+helloworld
Learn X in Y minutes > WebAssembly: https://learnxinyminutes.com/docs/wasm/
WebAssembly/WABT: The WebAssembly Binary Toolkit: https://github.com/WebAssembly/wabt
https://docs.docker.com/desktop/wasm/ :
docker run \
--runtime=io.containerd.wasmedge.v1 \
--platform=wasi/wasm \
secondstate/rust-example-hello
> Docker Desktop downloads and installs the following runtimes that you can use to run Wasm workloads: io.containerd.slight.v1
io.containerd.spin.v2
io.containerd.wasmedge.v1
io.containerd.wasmtime.v1
io.containerd.lunatic.v1
io.containerd.wws.v1
io.containerd.wasmer.v1
Do all of these have WASI?
React Jam just started, making a game in 13 days with React
React is not traditionally used for making games, but that's part of the fun and the challenge. React Jam is particularly relevant for React devs who wanna make a game, but don't wanna learn Unity, Unreal, or some other game engine.
This is the 3rd React Jam. We got a lot of requests to do a React Jam over the winter holidays so we've organized one. We chose to make it a bit longer (13 days instead of 10 days) to help account for being busy with family events etc.
>> React is not traditionally used for making games, but that's part of the fun and the challenge.
From https://news.ycombinator.com/item?id=38437821 :
> MS Flight Simulator cockpits are built with MSFS Avionics Framework which is React-like and MIT licensed:
https://github.com/microsoft/msfs-avionics-mirror/tree/main/...
preactjs may or may not be faster: https://preactjs.com/
Million.js is faster than preact, and lists a number of references under Acknowledgements: https://github.com/aidenybai/million#acknowledgments
> We use a novel approach to the virtual DOM called the block virtual DOM. You can read more on what the block virtual DOM is with Virtual DOM: Back in Block and how we make it happen in React with Behind the block().
React API reference > Components > Profiler: https://react.dev/reference/react/Profiler
From the React Developer Tools browser extension docs: https://react.dev/learn/react-developer-tools :
> Now, if you visit a website built with React, you will see the Components and Profiler panels.
The Redux DevTools extension also inspects apps written without React+Redux; Angular, Cycle, Ember, Fable, Freezer, Mobx, PureScript, Reductive, and Aurelia apps: https://github.com/reduxjs/redux-devtools/blob/main/extensio...
Cool, didn't know that MS Flight Simulator cockpits used a React-like framework. Thanks for sharing!
Np.
From https://news.ycombinator.com/context?id=35887168 re: ipyflow I learned about ReactiveX for Python (RxPY) https://rxpy.readthedocs.io/en/latest/ .
https://github.com/ipyflow/ipyflow :
> IPyflow is a next-generation Python kernel for Jupyter and other notebook interfaces that tracks dataflow relationships between symbols and cells during a given interactive session, thereby making it easier to reason about notebook state.
FWIU e.g. panda3d does not have a react or rxpy-like API, but probably does have a component tree model?
https://news.ycombinator.com/item?id=38527552 :
>> It actually looks like pygame-web (pygbag) supports panda3d and harfang in WASM
> Harfang and panda3d do 3D with WebGL, but FWIU not yet agents in SSBO/VBO/GPUBuffer
Ask HN: Do LLMs need a context window?
Excuse me for this potentially dumb question, but..
Why don't we train LLMs with user inputs at each step instead of keeping the model static and feeding the whole damn history everytime?
I think I may have a clue as to why actually: Is it because this would force us to duplicate the model for every user (since their weights would diverge) and company like OpenAI deem it too costly?
If so, will the rise in affordability of local models downloaded by individuals enable the switch for the continuous training approach soon enough or am I forgetting something?
You can't train an LLM in real time feasibly. Training and inference are two different things.
OpenAI has a separate service for finetuning ChatGPT and it is not speedy, and that's likely with shenanigans.
Even if the user input at every step is just normal conversation sentences? By that I mean very short, not MB of text.
Yes, it would take several times as much memory (e.g. the AdamW optimizer uses 8 bytes per parameter). Plus, your latest sentences would just get mixed into the model, and it wouldn't have a concept of the current conversation.
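The memory overhead is easy to put numbers on. A back-of-envelope sketch, assuming the commonly cited 2 fp32 optimizer states (8 bytes) per parameter for AdamW; the model sizes are illustrative:

```python
# Extra training memory from AdamW's optimizer state alone, in GB.
def adamw_optimizer_state_gb(n_params, bytes_per_param=8):
    return n_params * bytes_per_param / 1e9

# e.g. a hypothetical 7B-parameter model needs ~56 GB just for optimizer state,
# on top of weights, gradients, and activations.
seven_b = adamw_optimizer_state_gb(7e9)
```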
There are very different approaches that might behave more like what you want, maybe RWKV.
Thanks for your suggestion about RWKV. I'll read up on that.
https://github.com/BlinkDL/RWKV-LM#rwkv-discord-httpsdiscord... lists a number of implementations of various versions of RWKV.
https://github.com/BlinkDL/RWKV-LM#rwkv-parallelizable-rnn-w... :
> RWKV: Parallelizable RNN with Transformer-level LLM Performance (pronounced as "RwaKuv", from 4 major params: R W K V)
> RWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly compute the hidden state for the "RNN" mode.
> So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state).
> Our latest version is RWKV-6,
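The "you only need the hidden state at position t" property is what removes the context window. A toy sketch (not RWKV's actual equations; the decayed-sum update rule is invented) showing that a recurrent state summarizes history, so feeding tokens incrementally matches replaying the whole transcript:

```python
# Toy recurrence: state after each token summarizes the whole history.
def step(state, token, decay=0.9):
    # hypothetical update rule: exponentially decayed running summary
    return decay * state + (1 - decay) * token

def run(tokens, state=0.0):
    for tok in tokens:
        state = step(state, tok)
    return state

# Continuing from a saved state equals replaying the full history:
s_incremental = run([3.0], state=run([1.0, 2.0]))
s_replayed = run([1.0, 2.0, 3.0])
```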
Sensor-Free Soil Moisture Sensing Using LoRa Signals (2022)
"Sensor-free Soil Moisture Sensing Using LoRa Signals" (2022) https://dl.acm.org/doi/abs/10.1145/3534608 :
> Abstract: Soil moisture sensing is one of the most important components in smart agriculture. It plays a critical role in increasing crop yields and reducing water waste. However, existing commercial soil moisture sensors are either expensive or inaccurate, limiting their real-world deployment. In this paper, we utilize wide-area LoRa signals to sense soil moisture without a need of dedicated soil moisture sensors. Different from traditional usage of LoRa in smart agriculture which is only for sensor data transmission, we leverage LoRa signal itself as a powerful sensing tool. The key insight is that the dielectric permittivity of soil which is closely related to soil moisture can be obtained from phase readings of LoRa signals. Therefore, antennas of a LoRa node can be placed in the soil to capture signal phase readings for soil moisture measurements. Though promising, it is non-trivial to extract accurate phase information due to unsynchronization of LoRa transmitter and receiver. In this work, we propose to include a low-cost switch to equip the LoRa node with two antennas to address the issue. We develop a delicate chirp ratio approach to cancel out the phase offset caused by transceiver unsynchronization to extract accurate phase information. The proposed system design has multiple unique advantages including high accuracy, robustness against motion interference and large sensing range for large-scale deployment in smart agriculture. Experiments with commodity LoRa nodes show that our system can accurately estimate soil moisture at an average error of 3.1%, achieving a performance comparable to high-end commodity soil moisture sensors. Field studies show that the proposed system can accurately sense soil moisture even when the LoRa gateway is 100 m away from the LoRa node, enabling wide-area soil moisture sensing for the first time
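The two-antenna trick can be illustrated with a toy sketch (not the paper's actual chirp-ratio math; all values are invented): the unknown transceiver clock offset adds the same phase to both antenna readings, so differencing them cancels it, leaving only the soil-dependent path difference:

```python
# Toy model: phase offset from unsynchronized clocks cancels in the difference.
import math

true_path_phase = {"antenna_a": 0.7, "antenna_b": 1.9}  # made-up values, radians
clock_offset = 2.3  # unknown, but identical for both antennas

measured_a = (true_path_phase["antenna_a"] + clock_offset) % (2 * math.pi)
measured_b = (true_path_phase["antenna_b"] + clock_offset) % (2 * math.pi)

phase_difference = (measured_b - measured_a) % (2 * math.pi)
# equals the true path-phase difference 1.2 rad, independent of clock_offset
```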
Fedora 40 Plans To Unify /usr/bin and /usr/sbin
If you don't need root to run it, don't install it in */sbin/.
Packages are installed into /usr/bin and /usr/sbin.
Locally-built software (e.g. `./configure --prefix=/usr/local; make; make install`; /usr/local is the default prefix) is installed into /usr/local/bin and /usr/local/sbin, which should still always require elevated privileges to write to.
~/.local/bin is convenient but dangerous, because a process should not be able to overwrite itself on disk (unless the user has allowed it to auto-update itself, instead of using a package manager with out-of-band public key retrieval)
This is all probably in the discussions,
A. "Audit all binaries run as root" (TODO: look up the probable CIS control URL for this; look for disk-resident programs that run with elevated privileges)
A.1. Find setuid and setgid files:
sudo \
find / -type f \( -perm -4000 -o -perm -2000 \) -exec ls -ld '{}' \; \
2> find-setuid.errors \
> find-setuid.files
A.2. Find setcap binaries
A.3. Find extended filesystem attribute permissions with e.g. getfacl
A.4. List everything in */sbin that only root users should need to run (*)
A.5. Find everything that's `chown root; chmod o-x`;
(When you remove the e(x)ecute bit from a directory, that user/group/other can't list the files in the directory; the resulting permission errors are logged to stderr by e.g. `find`.)
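The A.1 setuid/setgid audit can also be done from Python with stat(2) mode bits (equivalent in spirit to the `find -perm -4000 -o -perm -2000` invocation above; function names are illustrative):

```python
# Scan a tree for regular files with setuid/setgid bits set.
import os, stat

def is_setuid_or_setgid(path):
    st = os.lstat(path)  # don't follow symlinks, like find -type f on the entry
    return stat.S_ISREG(st.st_mode) and bool(st.st_mode & (stat.S_ISUID | stat.S_ISGID))

def scan(root):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            try:
                if is_setuid_or_setgid(p):
                    hits.append(p)
            except OSError:
                pass  # unreadable entries, cf. 2> find-setuid.errors
    return hits
```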
Nowadays we can just run an SBOM tool on everything in every */?bin directory and check those.
If "merge sbin into bin" isn't already changed in the FHS spec, I'm -0 to -1 on the idea and don't see really any benefits.
Filesystem Hierarchy Standard: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
Disadvantage: if /sbin and /bin are merged, users would inadvertently exec() programs they don't need to run; logging those exec()s produces more binary enumeration log messages to parse.
Disadvantage: Other distros won't merge /sbin and /bin, so the paths in log messages in support issues and mailing lists would be less universally searchable
Challenges with unsupervised LLM knowledge discovery
"Challenges with unsupervised LLM knowledge discovery" (2023) https://arxiv.org/abs/2312.10029 :
> Abstract: We show that existing unsupervised methods on large language model (LLM) activations do not discover knowledge -- instead they seem to discover whatever feature of the activations is most prominent.
- NewsArticle about / that repeats a ScholarlyArticle: "A New Research from Google DeepMind Challenges the Effectiveness of Unsupervised Machine Learning Methods in Knowledge Elicitation from Large Language Models" (2023) https://www.marktechpost.com/2023/12/20/a-new-research-from-...
A sweater made from new aerogel fiber tests warmer than one made from down
https://news.ycombinator.com/item?id=38743949 :
> Potential aerogel application: low-VOC washable waterproof sleeping pads with high R value
What about microplastic shedding when washed?
Did you look up the composition of the 'synthetic' material? If you did (https://www.science.org/doi/10.1126/science.adj8013), you would find that the fibre is composed of chitosan (https://en.wikipedia.org/wiki/Chitosan), a sugar, which is treated with N,N-Dimethylformamide (https://en.wikipedia.org/wiki/Dimethylformamide), a solvent.
So it appears that this is not a plastic. Maybe you could check to see if this fibre is actually biodegradable?
It's actually referred to as a bioplastic in the linked Wikipedia article and is a solid, polymerizing organic molecule with "plastic"-like properties -- and, apparently, known biological activity.
I think it's a valid question that its status as not being petroleum derived does not necessarily answer meaningfully.
From the wiki article.
“Once discarded, chitosan-constructed objects are biodegradable and non-toxic.[66] … Pigmented chitosan objects can be recycled,… Unlike other plant-based bioplastics (e.g. cellulose, starch)”
Polar bear fur-inspired sweater is thinner than a down jacket – and just as warm
> The synthetic fibre is an aerogel coated with polyurethane and is flexible, washable and wearable.
"Biomimetic, knittable aerogel fiber for thermal insulation textile" (2023) https://www.science.org/doi/10.1126/science.adj8013 https://scholar.google.com/scholar_lookup?title=&journal=Sci....
If lab-grown meat is possible, lab-grown fur should be too. A lab-grown fur with some sort of way to verify that it's lab-grown with a jeweler's scope or similar might be a viable product?
FWIU arctic survival folks can explain the types of fur, and why no synthetic is even really comparable.
From "xPrize Wildfire Competition" (2023) https://news.ycombinator.com/item?id=35660231 re: aerogels and hydrogels:
>> Problem: #airogel made of CO² is an excellent insulator that's useful for many applications; but it needs structure, so foam+airogel but that requires smelly foam
>> Possible solution: Cause structure to form in the airogel.
"Superelastic and Ultralight Aerogel Assembled from Hemp Microfibers" (2023) https://onlinelibrary.wiley.com/doi/full/10.1002/adfm.202300... ; flax might be sufficient, too
Potential aerogel application: low-VOC washable waterproof sleeping pads
"Are there any closed cell foam sleeping pads with an R-value of 3 to 4? Tips on hacking non-inflating pads to make them more winter proof?" (2023) r/Ultralight https://www.reddit.com/r/Ultralight/comments/16h19h1/are_the...
What is the difference in R-value of (sleeping pads) with these materials?
- Add high R-value reflective tape to the ground side
- Add a mylar emergency blanket -like windshield sunscreen below the pad
- Double the (closed cell foam) pads
- Add aerogel to the (closed cell foam) manufacturing process
- Add aerogel to XPS Extruded Polystyrene
Criteria: Recyclability, Washability, Waterproofness, Mass, R-Value
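For comparing the pad options above, a useful rule of thumb is that R-values of stacked layers add in series (a sketch with made-up per-layer values, not measured ones):

```python
# R-values of stacked insulating layers add (series thermal resistance).
def stacked_r_value(layers):
    return sum(layers)

ccf_pad = 2.0  # hypothetical R-value for one closed-cell foam pad
doubled = stacked_r_value([ccf_pad, ccf_pad])          # doubling the pad: R 4.0
with_reflective = stacked_r_value([ccf_pad, 0.5])      # made-up reflective layer
```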
Most 16-year-olds don't have servers in their rooms
Now that the iPad kid generation is growing up, I fear most don't even know what a file system is. The post desktop world is real.
Abstractions are comfortable, but they might reduce the number of techie teenagers like OP in the future, and that worries me a lot
My nephew, a sharp kid who started his freshman year at college in September, saw me working on my computer a year ago.
He asked what I was doing and I said, “making a website”. He asked, reasonably, how do you do that?
Not wanting to get into details I said “it’s just writing some files that chrome can then read”. He asked, “what’s a file”?
I told him, but I was floored. He’s not every kid, but it’s had me thinking about how younger folks (I’m 38) will see computers.
K12 CS Framework only mentions "file" 4 times: https://k12cs.org/wp-content/uploads/2016/09/K%E2%80%9312-CS...
Computer file: https://en.wikipedia.org/wiki/Computer_file
File system: https://en.wikipedia.org/wiki/File_system
ArXiv now offers papers in HTML format
Since the article doesn't link to any example HTML article, here's a random link:
https://browse.arxiv.org/html/2312.12451v1
It's cool that it has a dark mode. Didn't see a toggle but renders in the system mode.
Overall this will make arXiv a lot more accessible on mobile.
And here's the PDF of the same paper for comparison: https://arxiv.org/pdf/2312.12451.pdf
The contrast is massive. I'm much more likely to read the html version; that PDF is deeply off-putting in some hard to define way. Maybe it's the two columns, or the font, or the fact that the format doesn't adjust to fit different screen sizes.
This is very interesting, because for me it's just the opposite. In particular the two column layout is just more readable and approachable for me. The PDF version also allows for a presentation just as the authors intended. I guess it's good that they offer both now.
Do you work extensively with LaTeX?
Two columns is good, albeit annoying on mobile. But the font. The typeface kills me, and almost every LaTeX-generated document sports it.
Hilariously, I would probably tolerate the HTML version a lot better if it had the font from the PDF (and FWIW, the answer for me is "no: I don't work with LaTeX at all... I just read a lot of papers").
https://github.com/neilpanchal/spinzero-jupyter-theme /fonts/{cmu-text,cmu-mono} :
> "Computer Modern" is used for body text to give it a professional/academic look
For anyone who needs it, arxiv-vanity is amazing: https://www.arxiv-vanity.com/
arxiv-sanity-lite: https://github.com/karpathy/arxiv-sanity-lite
From Nand to Tetris (2017)
Similar: "Show HN: Tetris, but the blocks are ARM instructions that execute in the browser" https://news.ycombinator.com/item?id=37086102
New bill to bar wealthy colleges in the US from accepting federal student loans
Seems like this would only bar working-class and lower-class applicants with high ambitions and impeccable high school records from the ability to attend top universities. True, they could attend Honors programs from state schools instead. But it still doesn't feel right.
Honors colleges are a massive scam. They give the student more work and far more costs for only a small fraction of the prestige associated with top universities.
They also usually prevent one from applying AP credits or IB credits to college courses, as their "honors" versions of classes are not considered equivalent to any AP class. More money for mom and dad to pay because their kid got waitlisted and later rejected from Berkeley.
"Is college worth it? A return-on-investment analysis" (2021) https://news.ycombinator.com/item?id=29007377 :
>>> Why would people make an investment with insufficient ROI (Return on Investment)?
>> Insufficient information.
>> College Scorecard [1] is a database with a web interface for finding and comparing schools according to a number of objective criteria. CollegeScorecard launched in 2015. It lists "Average Annual Cost", "Graduation Rate", and "Salary After Attending" on the search results pages. When you review a detail page for an institution, there are many additional statistics; things like: "Typical Total Debt After Graduation" and "Typical Monthly Loan Payment"
Which curricular assignments have actual career ROI for cash-strapped students?
PostgreSQL 16 Bi-Directional Logical Replication
/? postgres replication https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
"Pgactive: Active-Active Replication Extension for PostgreSQL on Amazon RDS" (2023) https://news.ycombinator.com/item?id=37838223
/? "pgactive" site:github.com https://www.google.com/search?q=%22pgactive%22+site%3Agithub...
cloudnative-pg/cloudnative-pg: https://github.com/cloudnative-pg/cloudnative-pg :
> primary/standby
cloudnative-pg > Logical replication #13: https://github.com/cloudnative-pg/cloudnative-pg/issues/13
/? logical replication:
https://www.google.com/search?q=logical+replication
pgadmin docs > Publication Dialog; logical replication: https://www.pgadmin.org/docs/pgadmin4/development/publicatio...
https://github.com/dalibo/pg_activity#faq ; `pip install pg_activity[psycopg]` :
> FAQ: I can't see my queries only TPS is shown
(How) Do any ~pg_top tools delineate logical replication activity?
pgcenter > PostgreSQL statistics [virtual tables] (and also /proc) https://github.com/lesovsky/pgcenter#postgresql-statistics
...
"Show HN: ElectricSQL, Postgres to SQLite active-active sync for local-first apps" (2023) from the creators of CRDTs https://news.ycombinator.com/item?id=37590257
electric-sql/electric: https://github.com/electric-sql/electric :
> ElectricSQL is a local-first software platform that makes it easy to develop high-quality, modern apps with instant reactivity, realtime multi-user collaboration and conflict-free offline support.
> Local-first is a new development paradigm where your app code talks directly to an embedded local database and data syncs in the background via active-active database replication. Because the app code talks directly to a local database, apps feel instant. Because data syncs in the background via active-active replication it naturally supports multi-user collaboration and conflict-free offline
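The "conflict-free" part of active-active sync usually comes from CRDTs. A toy grow-only counter sketch (illustrating merge commutativity, not ElectricSQL's actual implementation; replica names are made up):

```python
# G-counter CRDT sketch: per-replica counts merge by element-wise max,
# so merge order doesn't matter (commutative, associative, idempotent).
def merge(a, b):
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

def value(counter):
    return sum(counter.values())

replica1 = {"r1": 3}           # r1 incremented 3 times while offline
replica2 = {"r1": 1, "r2": 2}  # stale view of r1, plus its own 2 increments
merged = merge(replica1, replica2)
# value(merged) == 5, regardless of which replica merges first
```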
"SQLedge: Replicate Postgres to SQLite on the Edge" (2023) https://news.ycombinator.com/item?id=37067980 ; gh-ost, dolt, sqldiff
/?hnlog ctrl-f Consistenc, Consensus :
- "A few notes on message passing" https://news.ycombinator.com/item?id=26535969 :
> The C in CAP theorem is for Consistency [3][4]. [...]
> [3] https://en.wikipedia.org/wiki/Consistency_model
> [4] https://en.wikipedia.org/wiki/CAP_theorem
XetCache: Improve Jupyter notebook reruns by caching cells
It may be better to just start with managing caching with code.
In the standard library, the @functools.cache and @functools.cached_property function and method decorators do LRU caching in RAM only. https://docs.python.org/3/library/functools.html
Dask docs > "Automatic Opportunistic Caching": https://docs.dask.org/en/stable/caching.html#automatic-oppor... ; dask.cache.Cache(size_bytes:int) ... "Cache tasks, not expressions"
Pickles are not a safe way to deserialize data; there is no data-only pickling protocol.
So caching arbitrary cell objects (or e.g. stack frames) to disk, as pickles at least, creates risk of code injection if the serialized data contains executable code objects.
Similarly, the file permissions on e.g. rr traces. https://en.wikipedia.org/wiki/Rr_(debugging)
Dataclasses in the standard library help with object serialization, but not with caching cell outputs containing code objects.
Apache Arrow and Parquet also require schema for efficient serialization.
LRU: Least-Recently Used
MRU: Most-Recently Used
Out-of-order execution in notebooks may or may not have wasted cycles of human time and CPU time. If the prompt numbers aren't sequential, what you ran in a notebook is not necessarily what others will get when running that computation graph in order; and so it's best practice to "Restart and Run All" to test the actual control flow before committing or pushing.
There are ways to run and test notebooks in order on or before git commit (and compare their output with the output from the previous run) like software with tests.
I agree about being explicit with this, though I will warn about using the python functools caching, because it changes the logic of your program. Because it doesn't copy data, any return that's mutable is risky. It's also not obvious this happens, and even less obvious if you're not keenly aware of this kind of issue being likely.
Does any cache imply invariance, which DbC tools like icontract can be used to check for (though really only within one program execution)?
Plenty of caches are built that have this property. Streamlit for example offers two different caching helpers, one is faster but only safe if you don't mutate the return value and one is slower but safer as it makes a copy. Diskcache, my go-to, appears to be safe with regards to this too.
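The mutation hazard described above is easy to demonstrate with stdlib functools: the cache hands back the same object every call, so mutating one "copy" silently corrupts the cache (the `defaults` function is made up for illustration):

```python
# functools.cache returns the cached object itself, not a copy.
import functools

@functools.cache
def defaults():
    return ["a", "b"]

first = defaults()
first.append("oops")  # mutates the object stored in the cache
second = defaults()
# second is the same list object as first, and now includes "oops"
```

A diskcache- or Streamlit-style "safe" cache would deep-copy the value on each get instead.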
Regardless of cache implementation, if the notebook input cells are run in a different sequence like 0,1,2,2,2,3, the output with caching is not necessarily the same as without caching, and not necessarily the same as for 0,1,2,2,3; caching alone does not achieve functional idempotency. In fact that's many more paths that can diverge when the interest is parametric reproducibility. (And notebooks are already a strange abstraction for computation graphs complex enough to justify e.g. dask.cache.Cache(); there are reasons to not always enable dask.cache.Cache() globally, if the argument is whether auto-caching is safe enough to be the default behavior in notebooks.)
Can we stop bringing up "pickles are not a safe way to deserialize data" for things that do not matter, like caching on local machine?
Yes, pickle can be exploited by someone with ~/ write access. But you know what else someone with that access can do? Append to .bashrc, add to a desktop startup script, create a pth file in the local python dist, create usercustomize, append code into any locally installed libraries, add code to the config of many editors...
I've seen so much effort wasted because people rejected pickle for being insecure in contexts where it doesn't matter at all.
If there is a privilege escalation, or just an OS exec() as the app process user in a container with no interactive shell, no desktop, root-owned site-packages configured at image build time, and/or a pex zipapp or similar with no ~/.local (`type -a python; python -m site`), then it can still write() to the on-disk cache files, or to a database with schema and field-length limits that doesn't store code.
I am having trouble parsing your reply, looks like some parts are missing. Can you rephrase?
We were talking about jupyter notebooks, which are typically used for exploratory work, and thus are executed either on local desktop under default user account, or in containers with pretty open access controls. My point was that under those circumstances, it is OK to assume any cache data is trusted.
Your reply somehow talks about secured containers and privilege escalation, but I cannot figure out how it is connected to running Jupyter on a local laptop, nor the threat model you are worried about.
The issue comes when cells take many minutes or even hours to run (intentionally or not). The ideal is indeed sequential, and this helps me a lot with maintaining the sequential ordering as it simplifies and speeds up the “restart and run all” process.
Oh I understand the issue.
E.g. dask-labextension does not implicitly do dask.cache for you.
How are the objects serialized? Are code objects serialized to files on disk? What is the permission umask on such files, and what directory (/var/cache) should they be selinux-labeled in (when it is running code from a disk cache instead of the source)? Because if you can write to those cache files, you control the execution flow of the notebook (which is already unreproducibly out-of-order without consideration).
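A quick POSIX check of what mode cache files actually get under a given umask (a sketch; 0o077 is an illustrative value, not any library's default):

```python
import os
import stat
import tempfile

os.umask(0o077)  # new files readable/writable only by the owner
path = os.path.join(tempfile.mkdtemp(), "cache.pkl")
with open(path, "wb") as f:
    f.write(b"cached bytes")

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600 under the umask above (on POSIX)
```

If the effective mode is group- or world-writable, anyone matching that class controls what the notebook loads next.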
Dockerfile and Containerfile also cache outputs as layers.
`podman build --layers` is the default: https://docs.podman.io/en/latest/markdown/podman-build.1.htm...
containers/common/docs/Containerfile.5.md: https://github.com/containers/common/blob/main/docs/Containe...
The podman manpages are more thorough than the docker manpages.
To further illustrate considerations when adding caching to what are supposed to be reproducible [development / science] workflows,
GNU Make, for example, caches outputs according to their mtime. But, specifically, newer gen build systems like e.g. Blaze and Bazel do not use mtimes to cache build artifacts.
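The two staleness models can be sketched in Python; `stale_by_mtime` is Make's rule, `stale_by_hash` is the content-digest rule that Bazel-style systems use (function names here are illustrative, not from either tool):

```python
import hashlib
import os

def stale_by_mtime(src, out):
    # Make-style: rebuild if the output is missing or the source is newer.
    return not os.path.exists(out) or os.path.getmtime(src) > os.path.getmtime(out)

def stale_by_hash(src, recorded_digest):
    # Bazel-style: rebuild only if the content digest changed;
    # touching the file without changing bytes does not trigger a rebuild.
    with open(src, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() != recorded_digest
```

Note the divergence: `touch src` makes the mtime rule rebuild but leaves the hash rule satisfied, which is exactly why mtime caches can both over- and under-rebuild.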
Container builders like `docker build` and `podman build` archive the filesystem after each Containerfile instruction completes, as layers named by a hash of the Containerfile instruction, e.g. hash(`RUN sh examplescript.sh`) -> "abcdef12342", which you can run like any other container image instance: `docker run --rm -it "abcdef12342" /bin/sh -c "find /"`
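A hypothetical sketch of instruction-keyed layer caching (real builders hash more than this: the parent layer, build args, and the files referenced by COPY/ADD all feed the cache key):

```python
import hashlib

def layer_id(parent_id, instruction):
    # Illustrative only: key each layer by its parent's id plus the
    # instruction text, so identical instruction chains hit the cache.
    h = hashlib.sha256((parent_id + "\n" + instruction).encode())
    return h.hexdigest()[:12]

base = layer_id("", "FROM alpine:3.19")
run1 = layer_id(base, "RUN sh examplescript.sh")

# Same instruction over the same parent -> same id -> cache hit:
assert run1 == layer_id(base, "RUN sh examplescript.sh")
# Any change upstream invalidates everything after it:
assert run1 != layer_id(layer_id("", "FROM alpine:3.20"), "RUN sh examplescript.sh")
```

This is also why `RUN sh examplescript.sh` can go stale: the instruction text is unchanged even when the script's contents change.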
Caching implies additional test case parameters, debugging, and potential failure case unreproducibility:
"Flush the cache, Run it again, and Compare the output"
"Turn off the caching, Run it again, and Compare"
"What was supposed to have cleared the cache?"
"Why does it cache everything; how did the cache fill the disk?"
Encouraging students to understand the 1D wave equation
Wave equation: https://simple.wikipedia.org/wiki/Wave_equation https://en.wikipedia.org/wiki/Wave_equation
- "second order PDE partial differential equation in physics"
- Range: [-1,1]
Wave function: https://simple.wikipedia.org/wiki/Wave_function https://en.wikipedia.org/wiki/Wave_function
- quantum probability amplitude; |Ψ|² gives a PDF (probability density function)
- Range: [0,1] + [0,1]i
Bloch sphere / 'unit sphere': https://en.wikipedia.org/wiki/Bloch_sphere :
- Range: [0,1]x + [0,1]y + [0,1]i
> Wave function: https://simple.wikipedia.org/wiki/Wave_function
The formula for finding the wave function (i.e., the probability wave), is below:
i ℏ ∂/∂t Ψ(x, t) = Ĥ Ψ(x, t)
where i is the imaginary unit, Ψ(x, t) is the wave function, ħ is the reduced Planck constant, t is time, x is position in space, and Ĥ is a mathematical object known as the Hamiltonian operator. The reader will note that the symbol ∂/∂t denotes that the partial derivative of the wave function is being taken.
I love these kinds of Simple English pages where the author has evidently thought that "simple" is supposed to mean "summary".

https://schema.org/speakable :
> Indicates sections of a Web page that are particularly 'speakable' in the sense of being highlighted as being especially appropriate for text-to-speech conversion.
- [ ] [Simple] Wikipedia doesn't yet have a schema:speakable attribute on any of the schema:Articles,
but Simple Wikipedia's is interesting for reference
Regen Braking Algo for Parallel Hydraulic Hybrid Vehicles with Fuzzy Q-Learning
"Regenerative Braking Algorithm for Parallel Hydraulic Hybrid Vehicles Based on Fuzzy Q-Learning" (2023) https://www.mdpi.com/1996-1073/16/4/1895 https://scholar.google.com/scholar?cites=1652172894059893570... :
> Abstract: [...] By considering the impact of fuzzy controllers having better robustness, adaptability, and fault tolerance, a fuzzy control strategy is employed in this paper to accomplish the regenerative braking force distribution on the front axle. A regenerative braking model is created on the Simulink platform using the braking force distribution indicated above, and experiments are run under six specific operating conditions: New European Driving Cycle (NEDC), World Light-Duty Vehicle Test Cycle (WLTC), Federal Test Procedure 72 (FTP-72), Federal Test Procedure 75 (FTP-75), China Light-Duty Vehicle Test Cycle-Passenger (CLTC-P), and New York City Cycle (NYCC). The findings demonstrate that in six typical cycling road conditions, the energy saving efficiency of electric vehicles has greatly increased, reaching over 15%. The energy saving efficiency during the WLTC driving condition reaches 25%, and it rises to 30% under the FTP-72, FTP-75, and CLTC-P driving conditions. Furthermore, under the NYCC road conditions, the energy saving efficiency exceeded 40%. Therefore, our results verify the effectiveness of the regenerative braking control strategy proposed in this paper.
FWIU e.g. delivery vans and garbage trucks stop and start all day, with multiple $2k/job brake pad repairs per year. Is there a garbage truck sim too?
/? "hydraulic regenerative braking" system https://www.google.com/search?q=%22hydraulic+regenerative+br...
"Garbage Trucks Go Green" (2010) https://www.technologyreview.com/2010/08/03/201733/garbage-t... :
> A new kind of hybrid technology for heavy-duty vehicles uses hydraulics, rather than batteries, to store energy. Trucks using the system promise to be 30 percent more fuel-efficient than conventional vehicles.
"Ann Arbor Saves with Hydraulic Hybrid Recycling Trucks" (2011) https://www.government-fleet.com/79387/ann-arbor-saves-with-... :
> This regeneration of braking energy improves fuel economy by 15 percent, saving the City almost 1,800 gallons of fuel each year.
> The hydraulic regenerative braking system also led to savings in brake maintenance by increasing the life span of brakes. Normally, brakes are replaced several times per year, but in the year that Ann Arbor has been using the trucks, the brakes have not had to be replaced. This resulted in an annual savings of almost $12,000, [for four such 2010 vehicles] the Clean Energy Coalition stated.
Marginal gross annual savings per recycling truck:
(1800x + 12000y) / 4 = 450x + 3000y
Potential savings given an even more efficient hydraulic regenerative braking system in a recycling truck scenario: x_before * 0.15 = 1800, so x_before = 12,000 gallons/year;
x_before * 0.314 = x_314
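Back-of-envelope in Python, assuming the cited 15% economy improvement corresponds to the 1,800 gallons/year figure:

```python
# Implied baseline annual fuel use, if a 15% saving = 1800 gal/yr:
baseline = 1800 / 0.15        # 12000 gallons/year
# Hypothetical more-efficient system at 31.4% savings:
saved_at_31_4 = baseline * 0.314
print(round(saved_at_31_4))   # 3768 gallons/year
```

That plus the roughly $3,000/truck/year in brake maintenance is the marginal figure worth comparing against the system's purchase premium.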
Wayland vs. X – Overview
I suffer from the limitations of both X11 and Wayland. Some things work in one and not the other; it's a trade-off. In Wayland I can use different scaling for each monitor (in exchange for a LOT of RAM). But in X11 I can use Barrier to share my mouse / keyboard with my other computer. So I use one when I have a second monitor and have to restart my session when I use a second computer :'(
Please fix this!
It will be fixed on some Waylands with libei, but it will never be fixed on other waylands that find libei distasteful (like sway). And that's the waylands in a nutshell: many different things, never "wayland" always "waylands".
libei looks useful. But IDK why libei is necessary to run Barrier with Wayland?
For client systems, couldn't there just be a virtual /dev/input/XYZ that Barrier forwards events through?
And for host systems, it looks like xev only logs input events when the window is focused.
Is xeyes still broken on Wayland, and how to fix it so that it would work with Barrier?
With Barrier, when the mouse cursor reaches a screen boundary, the keyboard and mouse input are then passed to a different X session on another box until the cursor again crosses a screen boundary rule.
Barrier is a fork of Synergy's open core: https://github.com/debauchee/barrier
libei: https://libinput.pages.freedesktop.org/libei/
> For client systems, couldn't there just be a virtual /dev/input/XYZ that Barrier forwards events through
You seem to be asking for kernel thing, not something Wayland. But let's assume you wanted a protocol on top of Wayland to do this:
Why can't arbitrary Wayland clients always see the mouse pointer, snoop on the keyboard, and inject keypresses? Because of security, by design.
Why is it taking a bunch of work to provide that capability in an access-controlled way? Because it's a bunch of work. For example, quoting libei: "However, the events are distinguishable inside the compositor to allow for fine-grained access control on which events may be emulated and when emulation is permitted."
Why the simplest explanation isn't always the best – PNAS
Occam's Razor: https://en.wikipedia.org/wiki/Occam%27s_razor :
> In philosophy, Occam's razor [Latin: novacula Occami] is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. It is also known as the principle of parsimony or the law of parsimony [Latin: lex parsimoniae].
Balance puzzle: https://en.wikipedia.org/wiki/Balance_puzzle :
> A balance puzzle or weighing puzzle is a logic puzzle about balancing items—often coins—to determine which holds a different value, by using balance scales a limited number of times. These differ from puzzles that assign weights to items, in that only the relative mass of these items is relevant.
Logical connective: https://en.wikipedia.org/wiki/Logical_connective
Quantum logic > Quantum logic as the logic of observables > The logic of classical mechanics && The propositional lattice of a quantum mechanical system: https://en.wikipedia.org/wiki/Quantum_logic#Quantum_logic_as...
Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord :
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
Quantum nonlocality: https://en.wikipedia.org/wiki/Quantum_nonlocality ; the dog on the Moon ate my homework 10 monotonic light years ago instantaneously due to nonlocality
And then Quantum Causal Inference (and Quantum Statistical Mechanics) as a or the sufficient means with which to prove truth, epistemologically
And then emergent dynamics in quasiparticle fluid phenomena unpredicted by curl further increase the degree of the nonlocal, entangled, superfluid, nonlinear complex adaptive system; and again the simplest explanation was wrong; it was the butterfly flapping its wings that caused the nonlocal dog to eat the homework about the supercooled and so superfluidic helium in space
> Quantum logic
May be sufficient to describe all of the actual relations
Cirq docs > Named Topologies > LineTopology, TiltedSquareLattice https://quantumai.google/cirq/named_topologies
QISkit Lattice models > LineLattice, SquareLattice, HyperCubicLattice, TriangularLattice: https://qiskit.org/ecosystem/nature/tutorials/10_lattice_mod...
But how do we do CFD (Computational Fluid Dynamics) on a QC with quantum logical decomposition, given the comparatively tiny lattices of current mortal quantum computers? Given that the known degrees of complexity in fluid problems with rotation and expansion - like Gödel's fluid solutions to GR - exceed the available topologies and counts of EC (error-corrected) qubits?
How complex could the relations between arbitrary things be, given terminology of Quantum discord; nonlocal entanglement and non-entanglement relations and Occam's Razor and fluids without emergence?
[deleted]
Paying Netflix $0.53/H, etc.
While the numbers might be a bit hit and miss, the comparison with individual creators is fascinating and definitely hits upon something that keeps me from subscribing to patreons/etc.
I don't have just one creator who I would consistently want to support the most, there are probably 10-20 who I would like to support, however I'm not going to pay 10x of them $5/h for content when YouTube is 1x $0.50/h (and has their content). To me, content creation is always going to need to be bundled at some level because I want diversity that no one creator is ever going to provide, and when bundling the price typically reduces because the bundling correctly recognises that consumption does not scale up.
> To me, content creation is always going to need to be bundled at some level
W3C Web Monetization does micropayments with <$0.01 tx fee, but not KYC or AML.
It is an open spec: https://webmonetization.org/specification/
Micropayments: https://en.wikipedia.org/wiki/Micropayment
Upvoting, subscribing, and word of mouth get creators more viewers and thus ad revenue.
Show HN: WebGPU Particles Simulation
This is a small particles simulation that can run either with WebGPU (GPU Compute) or with the CPU, and it can be changed in real time
"WebGL Water" (2023) https://news.ycombinator.com/item?id=38370118 :
> Three.js interactive webgl particle wave simulator: https://threejs.org/examples/webgl_points_waves.html
> [Curl, Vorticity,]
From https://news.ycombinator.com/item?id=31049970 re: Deep Learning / CFD: Computational Fluid Dynamics:
> google/jax-cfd https://github.com/google/jax-cfd#other-awesome-projects lists "Other differentiable CFD codes compatible with deep learning"
WebGL Water is from 2010. Interestingly the author went on to make esbuild. Also one of my favorite utilities "diskitude" (just because it's 10 kb, not necessarily because it's the best disk analyzer).
Must've copied the year from the cited HN post.
Geiss (1998; now on GitHub) and Smoke (2002) https://www.geisswerks.com/
Smoke does 2D Navier-Stokes optionally as a wallpaper or a screensaver. There are current mobile apps that work as wallpapers.
ProjectM is an open source MilkDrop implementation, which supports input [audio] waveforms for music data visualization; IIRC it's already ported to WebGL but not yet WebGPU.
IDK how slow Navier-Stokes or Gross-Pitaevskii would be in WASM with WebGL and/or WebGPU
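For a sense of the per-frame CPU cost, here is a minimal Jos Stam-style "stable fluids" diffusion step in Python (a sketch of the kind of kernel such demos run each frame; not the Smoke demo's actual code, and grid size `N` is illustrative):

```python
# Gauss-Seidel diffusion step for a 2D density grid (Jos Stam, "Stable Fluids").
N = 16

def diffuse(x, x0, diff, dt, iters=20):
    # Implicitly solves (I - a*Laplacian) x = x0; unconditionally stable.
    a = dt * diff * N * N
    for _ in range(iters):
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                x[i][j] = (x0[i][j] + a * (x[i - 1][j] + x[i + 1][j]
                                           + x[i][j - 1] + x[i][j + 1])) / (1 + 4 * a)
    return x

field = [[0.0] * N for _ in range(N)]
src = [[0.0] * N for _ in range(N)]
src[N // 2][N // 2] = 1.0          # a puff of density in the middle
field = diffuse(field, src, diff=0.1, dt=0.1)
```

The GPU versions (WebGL fragment shaders, WebGPU compute) run the same relaxation per texel in parallel, which is why they stay real-time at resolutions where this pure-Python loop would not.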
"Machine Learning for Fluid Dynamics Playlist" by Steve Brunton https://news.ycombinator.com/item?id=34574514
Reindeer can see UV light
Translating that abstract into English:
> The reindeer is the only mammal with a color-shifting retroreflector behind the retina and the only ruminant with a lichen-dominated diet. These puzzling traits coexist with eye fluid that transmit up to 60% of ultraviolet (UV) light, enough to excite the cones responsible for color vision. It is unclear why any day-active circum-Arctic mammal would benefit from UV visual sensitivity, but it could improve detection of UV-absorbing lichens against a background of UV-reflecting snows, especially during the extended twilight hours of winter. To explore this idea and advance our understanding of reindeer visual ecology, we recorded the reflectance spectra of several ground-growing, shrubby lichens in the diets of reindeer living in Cairngorms National Park, Scotland.
Ruminant (i.e.: animals that ruminate) is one word I didn't know in English yet but has no simpler synonym: the thing that cows do where they eat their food multiple times. Apparently reindeer are ruminants.
Lichen is another word that I didn't know other than from Minecraft. The actual definition is this (Merriam Webster): "any of numerous complex plantlike organisms made up of an alga or a cyanobacterium and a fungus growing in symbiotic association on a solid surface (such as on a rock or the bark of trees)".
Now, if someone could shed (UV?) light on why the paper's title is chosen the way it is (how is this useful to anyone? When you get this as a search result, how are you going to tell whether it's relevant to what you're looking for? Why not use the title that the HN submission¹ used?), I'd be rather interested
¹ "Reindeer can see UV light" at the time of writing
Ruminant: an herbivorous, even-toed, hoofed mammal (suborder Ruminantia and Tylopoda) that has a complex 3- or 4-chambered stomach. The word "ruminant" comes from the Latin ruminare, which means "to chew over again". Related to the English word ruminate, "think deeply about something"
KNF and JADAM have organic farming methods for ruminants.
FWIU if you feed land cows seaweed it reduces or eliminates their methane output.
I seem to be unfamiliar with all of your acronyms. Dictionary service for anyone else:
FWIU=From What I Understand; KNF=Korean Natural Farming; JADAM seems to be an uppercase-stylized brand name from what I can tell after some searching (https://en.jadam.kr about page: `He established JADAM (meaning "People that resemble nature") in 1991, and opened website www.JADAM.kr`)
Possible to detect an industrial civilization in geological record? (2018)
Ironworking at least produces identifiable traces in ice cores; which may actually be more hydrology and climate science than geology.
Ice core: https://en.wikipedia.org/wiki/Ice_core
For most of the history of earth there hasn't been permanent ice.
I don't think we even have any ice samples as old as three million years.
What?
Ice core > Ice core data > Dating: https://en.wikipedia.org/wiki/Ice_core#Dating :
> The dating of ice sheets has proved to be a key element in providing dates for palaeoclimatic records. According to Richard Alley, "In many ways, ice cores are the 'rosetta stones' that allow development of a global network of accurately dated paleoclimatic records using the best ages determined anywhere on the planet".[43]
/? ice core samples oldest: https://www.google.com/search?q=ice+core+samples+oldest :
- NSF Ice Core Facility > About Ice Cores: https://icecores.org/about-ice-cores#:~:text=The%20oldest%20.... :
> The oldest continuous ice core records extend to 130,000 years in Greenland, and 800,000 years in Antarctica.
- "Record-shattering 2.7-million-year-old ice core reveals start of the ice ages" (2017) https://www.science.org/content/article/record-shattering-27.... :
> Clues to ancient atmosphere found in bubbles trapped in Antarctic samples
- "World's oldest ice core could stretch back 5 million years" (2022) https://newatlas.com/environment/worlds-oldest-ice-core-5-mi....
I'm not sure why you posted most of those links, but it looks like I was a year or so out of date on my ice lore.
If a civilization develops stoneworking, metalworking, leaded gasoline, nuclear power, etc., that's all probably recorded in ice cores (which apparently preserve air bubbles up to 5 million years old).
What would be a more cost effective way to identify signs of prior civilization?
Read the emissions off of black holes that old (or rather, black hole accretion disks, which are apparently modified Lorentzian attractors with superfluidity and Hawking radiation),
Fly there and perform multiple geologically and geographically covering miles-deep sediment sample studies,
Fly there and drill ice cores,
How does Base32 (or any Base2^n) work exactly?
Radix: https://en.wikipedia.org/wiki/Radix
- "Golden ratio base is a non-integer positional numeral system" (2023) https://news.ycombinator.com/item?id=37969716
Number systems > Classification: https://en.wikipedia.org/wiki/Number#Classification ; N natural numbers, Z[±] integers, Q rationals, R reals, C complex numbers (with complex conjugate exponents), Infinity or Infinities; N ⊆ Z ⊆ Q ⊆ R ⊆ C
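To the title question: Base2^n encodings regroup the bit stream into n-bit chunks and map each chunk to one of 2^n symbols. A minimal Python sketch using the RFC 4648 Base32 alphabet, with `=` padding omitted for brevity:

```python
# Base32 = base-2^5: 8-bit bytes regrouped into 5-bit chunks.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"  # RFC 4648

def b32encode_nopad(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    bits += "0" * (-len(bits) % 5)  # zero-pad the bit string to a multiple of 5
    return "".join(ALPHABET[int(bits[i:i + 5], 2)]
                   for i in range(0, len(bits), 5))

print(b32encode_nopad(b"hi"))  # NBUQ (matches base64.b32encode up to '=' padding)
```

Because 5 bits don't divide 8 evenly, real Base32 pads output to 8-symbol groups with `=`; Base64 (2^6) and Base16 (2^4) work identically with 6- and 4-bit chunks.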
***
i^(2x) = e^(iπx)
Also, perhaps this is a better representation for a continuum of reals: e^(x*yi*zπ)
But then there's no zero; only quantities approaching zero. e^(x*yi*zπ) * e^(a*bi*cπ)
But then there are still no negative numbers, so: sign * e^(x*yi*zπ) * e^(a*bi*cπ)
Where `sign` is in {-1,0,1}, or maybe just this would be the ultimate radix: sign * e^(x*yi*zπ)
But then represent infinity, or infinity_expr1 (because e.g. 1/x != 2/x except at x=0)

Mystery of the quantum lentils: Are legumes exchanging secret signals?
What else varies with the lagged code?
(Edit)
From "Low current around roots boosts plant growth" (2023) https://news.ycombinator.com/item?id=38268128 :
> Is there nonlinearity due to the observer effect in this system?
> From how many meters away can a human walking in a forest be detected with such an organic signal network?
> FWIU mycorrhizae networks all broadcast on the same channel? Is it full duplex; are they transmitting and receiving simultaneously?
Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord :
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
From https://news.ycombinator.com/item?id=37226121#37226160 :
> ("This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity." https://news.ycombinator.com/item?id=37226121#37226160 )
https://westurner.github.io/hnlog/#story-37987061 ... Ctrl-F "plenoptic"; the plenoptic function describes all photons in a closed system
Intel proposes XeGPU dialect for LLVM MLIR
https://discourse.llvm.org/t/rfc-add-xegpu-dialect-for-intel... :
> XeGPU dialect models a subset of Xe GPU’s unique features focusing on GEMM performance. The operations include 2d load, dpas, atomic, scattered load, 1d load, named barrier, mfence, and compile-hint. These operations provide a minimum set to support high-performance MLIR GEMM implementation for a wide range of GEMM shapes. XeGPU dialect complements Arith, Math, Vector, and Memref dialects. This allows XeGPU based MLIR GEMM implementation fused with other operations lowered through existing MLIR dialects.
[deleted]
Misra C++:2023
Great royalty-free alternative to MISRA C: [1] Embedded System development Coding Reference guide - Software Reliability Enhancement Center, JAPAN
[1] https://www.ipa.go.jp/publish/qv6pgp00000011mh-att/000065271...
awesome-safety-critical > Coding Guidelines: https://awesome-safety-critical.readthedocs.io/en/latest/
Rust SAST and DAST tools would be great for all, too.
From https://news.ycombinator.com/item?id=35565960 :
> Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools: https://news.ycombinator.com/item?id=24511280 https://analysis-tools.dev/tools?languages=cpp
Metasurface antenna to manipulate all fundamental characteristics of EM waves
"A universal metasurface antenna to manipulate all fundamental characteristics of electromagnetic wave" (2023) https://www.nature.com/articles/s41467-023-40717-9 :
> Abstract: Metasurfaces have promising potential to revolutionize a variety of photonic and electronic device technologies. However, metasurfaces that can simultaneously and independently control all electromagnetics (EM) waves’ properties, including amplitude, phase, frequency, polarization, and momentum, with high integrability and programmability, are challenging and have not been successfully attempted. Here, we propose and demonstrate a microwave universal metasurface antenna (UMA) capable of dynamically, simultaneously, independently, and precisely manipulating all the constitutive properties of EM waves in a software-defined manner. Our UMA further facilitates the spatial- and time-varying wave properties, leading to more complicated waveform generation, beamforming, and direct information manipulations. In particular, the UMA can directly generate the modulated waveforms carrying digital information that can fundamentally simplify the architecture of information transmitter systems. The proposed UMA with unparalleled EM wave and information manipulation capabilities will spark a surge of applications from next-generation wireless systems, cognitive sensing, and imaging to quantum optics and quantum information science.
Similar: "Photonic chip transforms lightbeam into multiple beams with different properties" (2022) https://news.ycombinator.com/item?id=36580151 :
"Universal visible emitters in nanoscale integrated photonics" (2022) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-7-8...
Though maybe also,
"Electrons turn piece of wire into laser-like light source" (2023) https://news.ycombinator.com/item?id=33490730 :
"Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2023) https://doi.org/10.21203/rs.3.rs-1572967/v1
Do black holes have singularities?
I think it's interesting that "black hole" is the least ambiguous phrase in the title of this paper. What even would be a "singularity," if we're talking about something beyond "this is where all our equations blow up to infinity?" What would it mean, then, for a black hole to "have" such a physical thing?
Other people have said already in the comments that many physicists insist that "singularities" aren't a physical thing, anyway. But, that doesn't really remove much of the ambiguity, because there's still the question of what exactly it means that the equations blow up to infinity.
Who knows? All I know is that I happened to be thinking about black holes at one point in grad school while I was also taking differential topology, and I realized that a rotating black hole would "smear out" a singularity at its "center" (whatever that means) due to frame dragging, resulting in a "ring singularity." And, lo and behold, there is such a thing, and it sort of works vaguely the way I thought it would: https://en.wikipedia.org/wiki/Ring_singularity
Mustn't ring black holes also be fluid attractor system solutions?
https://news.ycombinator.com/item?id=38370118
https://westurner.github.io/hnlog/ Ctrl-F "n-body"
"Mathematicians Found 12,000 Solutions to the Notoriously Hard Three-Body Problem" (2023) https://news.ycombinator.com/item?id=37959364 https://westurner.github.io/hnlog/#comment-37959364 :
> Are all of the identified solutions (also) fluid attractor systems; and - if correct - shouldn't a theory of superfluid quantum gravity predict all of the n-body gravity solutions already discovered with non-fluidic numerical solutions? [...]
> A sufficient theory of quantum gravity must describe n-body gravity within Bose-Einstein Condensates and also quantum levitation.
...
> N-body gravity solutions with fluid vortices should predict all existing numerical n-body outcomes?
> That so many things in space look fluidic - how many spiral arms are there on a nebula, [or a vortex due to a drain] all existing visual representations of black holes look like fluids, merging neutron stars look like emergent patterns from curl, too
Shouldn't there be a corresponding solution with vorticity?
"Do Black Holes Have Singularities?" (2023) https://news.ycombinator.com/item?id=38501546 :
> SQS and SQG do purport to describe the interior topology of black holes.
DeepMind AI outdoes human mathematicians on unsolved problem
Code is available! They have a few different discovered solutions.
"Mathematical discoveries from program search with large language models" (2023) https://www.nature.com/articles/s41586-023-06924-6 :
> Abstract: Large Language Models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations) which can result in them making plausible but incorrect statements [1,2]. This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pre-trained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best known results in important problems, pushing the boundary of existing LLM-based approaches [3]. Applying FunSearch to a central problem in extremal combinatorics — the cap set problem — we discover new constructions of large cap sets going beyond the best known ones, both in finite dimensional and asymptotic cases. This represents the first discoveries made for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve upon widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.
"DeepMind AI outdoes human mathematicians on unsolved problem" (2023) https://www.nature.com/articles/d41586-023-04043-w :
> Large language model improves on efforts to solve combinatorics problems inspired by the card game Set.
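A toy sketch of the FunSearch outer loop in Python: hand-written heuristics stand in for LLM-proposed programs, and the evaluator is the online bin-packing objective named in the abstract (all names and parameters here are illustrative, not DeepMind's code):

```python
import random

def evaluate(heuristic, items, bin_cap=10):
    # Systematic evaluator: score a candidate program on online bin packing.
    bins = []
    for item in items:
        fits = [b for b in bins if b + item <= bin_cap]
        if fits:
            chosen = heuristic(fits, item)
            bins[bins.index(chosen)] = chosen + item
        else:
            bins.append(item)
    return -len(bins)  # fewer bins is better

# Two candidate "programs"; FunSearch would have an LLM propose these.
first_fit = lambda fits, item: fits[0]   # first bin with room
best_fit = lambda fits, item: max(fits)  # fullest bin that still fits

rng = random.Random(0)
items = [rng.randint(1, 9) for _ in range(50)]
print(evaluate(first_fit, items), evaluate(best_fit, items))
```

The point from the abstract survives even in the toy: the search is over *programs* (the heuristic functions), so the winning candidate is inspectable, not just a raw solution.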
FM Based Long-Wave IR Detection and Imaging at Room Temperature
"Frequency Modulation Based Long-Wave Infrared Detection and Imaging at Room Temperature" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202309298 :
> Abstract: Detection of long wave infrared (LWIR) light at room temperature is a long-standing challenge due to the low energy of photons. A low-cost, high-performance LWIR detector or camera that operates under such conditions is pursued for decades. Currently, all available detectors operate based on amplitude modulation (AM) and are limited in performance by AM noises, including Johnson noise, shot noise, and background fluctuation noise. To address this challenge, a frequency modulation (FM)-based detection technique is introduced, which offers inherent robustness against different types of AM noises. The FM-based approach yields an outstanding room temperature noise equivalent power (NEP), response time, and detectivity (D). This result promises a novel uncooled LWIR detection scheme that is highly sensitive, low-cost, and can be easily integrated with electronic readout circuitry, without the need for complex hybridization.
"Researcher discovers new technique for photon detection" (2023) https://phys.org/news/2023-12-technique-photon.html :
> "Our FM-based approach yields an outstanding room temperature noise equivalent power, response time and detectivity," Chanda says. "This general FM-based photon detection concept can be implemented in any spectral range based on other phase-change materials."
> "Our results introduce this novel FM-based detector as a unique platform for creating low-cost, high-efficiency uncooled infrared detectors and imaging systems for various applications such as remote sensing, thermal imaging and medical diagnostics," Chanda says. "We strongly believe that the performance can be further enhanced with proper industry-scale packaging."
SMERF: Streamable Memory Efficient Radiance Fields
We built SMERF, a new way for exploring NeRFs in real-time in your web browser. Try it out yourself!
Over the last few months, my collaborators and I have put together a new, real-time method that makes NeRF models accessible from smartphones, laptops, and low-power desktops, and we think we’ve done a pretty stellar job! SMERF, as we like to call it, distills a large, high quality NeRF into a real-time, streaming-ready representation that’s easily deployed to devices as small as a smartphone via the web browser.
On top of that, our models look great! Compared to other real-time methods, SMERF has higher accuracy than ever before. On large multi-room scenes, SMERF renders are nearly indistinguishable from state-of-the-art offline models like Zip-NeRF and a solid leap ahead of other approaches.
The best part: you can try it out yourself! Check out our project website for demos and more.
If you have any questions or feedback, don’t hesitate to reach out by email (smerf@google.com) or Twitter (@duck).
"Researchers create open-source platform for Neural Radiance Field development" (2023) https://news.ycombinator.com/item?id=36966076
NeRF Studio > Included Methods, Third-party Methods: https://docs.nerf.studio/#supported-methods
Neural Radiance Field: https://en.wikipedia.org/wiki/Neural_radiance_field
Ubuntu 24.04 LTS will enable frame pointers by default
Call stack > Structure > Stack and Frame pointers: https://en.wikipedia.org/wiki/Call_stack#Stack_and_frame_poi...
What do the Coding Guidelines listed in e.g. awesome-safety-critical say about Frame pointers? https://awesome-safety-critical.readthedocs.io/en/latest/#co...
(Edit)
/? "cert" "frame pointer" https://www.google.com/search?q=%22cert%22+%22frame+pointer%... :
- Stack buffer overflow > Exploiting stack buffer overflows: https://en.m.wikipedia.org/wiki/Stack_buffer_overflow :
> In figure C above, when an argument larger than 11 bytes is supplied on the command line foo() overwrites local stack data, the saved frame pointer, and most importantly, the return address
What about the Top 25?
/? site:cwe.mitre.org "frame pointer" https://www.google.com/search?q=site%3Acwe.mitre.org+%22fram... :
- CWE-121: Stack-based Buffer Overflow https://cwe.mitre.org/data/definitions/121.html
This is closer to a better approach for security, debuggability, and performance IMHO:
https://news.ycombinator.com/item?id=38138010 :
> gdb on Fedora auto-installs signed debuginfo packages with debug symbols; Fedora hosts a debuginfod server for their packages (which are built by Koji) and sets `DEBUGINFOD_URLS=`
> Without debug symbols, a debugger has to read unlabeled ASM instructions (or VM opcodes (or an LL IR)).
Low-frequency sound can reveal that a tornado is on its way
(1976) https://www.straightdope.com/21341329/how-to-detect-tornados...
Modern TVs, not so much. A quick glance doesn't show anyone working to make an RTLSDR tornado detector, which prolly means I just missed it. Seems like it shouldn't be too hard.
Looks like there are RPi infrasound monitors specifically for geologic seismology;
"Raspberry Shake", "RS&BOOM" https://manual.raspberryshake.org/boom.html#technical-specif... https://shop.raspberryshake.org/infrasound/
FWIU infrasound may indicate Earthquake, Fire, Tornado; and probably IDK Stampede?
Infrasound: https://en.wikipedia.org/wiki/Infrasound
FWIU the Phononic wave signal is embedded in the Photonic wave signal, and so it may be possible to create even less expensive infrasound sensors;
"Quantum light sees quantum sound: phonon/photon correlations" (2023) https://news.ycombinator.com/item?id=37793791
Artificial intelligence systems found to excel at imitation, but not innovation
https://news.ycombinator.com/item?id=12999516 :
"CogPrime: An Integrative Architecture for Embodied Artificial General Intelligence" (2012) > "Competencies and Tasks on the Path to Human-Level AI": https://wiki.opencog.org/w/CogPrime_Overview#A_CogPrime_Thou... :
> [Perception, Actuation, Memory, Learning, Reasoning, Planning, Attention, Motivation, Emotion, Modeling Self and Other, Social Interaction, Communication, Quantitative, Building/Creative ] http://wiki.opencog.org/w/CogPrime_Overview#Competencies_and...
And then, "A CogPrime Thought Experiment: Build Me Something I Haven’t Seen Before"
For the arts,
On creating something sufficiently novel,
EDA tools solve part of the problem in chip design, for example; furthermore sometimes with logic instead of imitation.
Will any LLM ever output a formally verified design and implementation without significant tree filtering, even if trained solely on formally verified code?
Synthesis without understanding, and worse without ethics.
The Linux Scheduler: A Decade of Wasted Cores (2016) [pdf]
> Abstract: As a central part of resource management, the OS thread scheduler must maintain the following, simple, invariant: make sure that ready threads are scheduled on available cores. As simple as it may seem, we found that this invariant is often broken in Linux. Cores may stay idle for seconds while ready threads are waiting in runqueues. In our experiments, these performance bugs caused many-fold performance degradation for synchronization-heavy scientific applications, 13% higher latency for kernel make, and a 14-23% decrease in TPC-H throughput for a widely used commercial database. The main contribution of this work is the discovery and analysis of these bugs and providing the fixes. Conventional testing techniques and debugging tools are ineffective at confirming or understanding this kind of bugs, because their symptoms are often evasive. To drive our investigation, we built new tools that check for violation of the invariant online and visualize scheduling activity. They are simple, easily portable across kernel versions, and run with a negligible overhead. We believe that making these tools part of the kernel developers' tool belt can help keep this type of bug at bay.
Natural gas migrates and pools under permafrost, methane emissions skyrocket
> Permafrost, ground that remains below zero degrees Celsius for two years or more, [...]
> Permafrost in the highlands is drier and more permeable, while permafrost in the lowlands is more ice-saturated. The rocks beneath are often fossil fuel sources, releasing methane which is sealed off by the permafrost. However, even where there is continuous permafrost, some geographical features may allow gas to escape. [...]
> An unexpectedly frequent finding: The scientists emphasized that gas accumulations were much more common than expected. Of 18 hydrocarbon exploration wells drilled in Svalbard, eight showed evidence of permafrost and half of these struck gas accumulations.
> "All the wells that encountered gas accumulations did so by coincidence—by contrast, hydrocarbon exploration wells that specifically target accumulations in more typical settings had a success rate far below 50%," said Birchall.
"Permafrost trapped natural gas in Svalbard, Norway" (2023) https://www.frontiersin.org/articles/10.3389/feart.2023.1277... :
> Permafrost is widespread in the High Arctic, including the Norwegian archipelago of Svalbard. The uppermost permafrost intervals have been well studied, but the processes at its base and the impacts of the underlying geology have been largely overlooked. More than a century of coal, hydrocarbon, and scientific drilling through the permafrost in Svalbard shows that accumulations of natural gas trapped at the base of permafrost are common. These accumulations exist in several stratigraphic intervals throughout Svalbard and show both thermogenic and biogenic origins. The gas, combined with the relatively young permafrost age, is evidence of ongoing gas migration throughout Svalbard. The accumulation sizes are uncertain, but one case demonstrably produced several million cubic metres of gas over 8 years. Heavier gas encountered in two boreholes on Hopen may be situated in the gas hydrate stability zone. While permafrost is demonstrably ice-saturated and acting as seal to gas in lowland areas, in the highlands permafrost is more complex and often dry and permeable. Svalbard shares a similar geological and glacial history with much of the Circum-Arctic, suggesting that sub-permafrost gas accumulations are regionally common. With permafrost thawing in the Arctic, there is a risk that the impacts of releasing of methane trapped beneath permafrost will lead to positive climatic feedback effects.
US to slash powerful planet-warming methane by nearly 80% from oil and gas
"US announces rule to slash powerful planet-warming methane by nearly 80% from oil and gas" (2023) https://www.cnn.com/2023/12/02/climate/cop28-methane-announc... :
> The rule will crack down on methane leaks from industry in several ways. In a major new development, it will end routine flaring of the natural gas that is a byproduct of drilling oil wells and will phase in a requirement for that gas to be captured instead of burned. The rule will also require stringent leak monitoring of oil and gas wells and compressors, and cut down on leaks from equipment like pumps, storage tanks and controllers.
> It will also rely on independent, third-party monitoring – using satellites and other remote-sensing technology – to find very large methane leaks.
US agency will not reinstate $900M subsidy for Starlink
Why should the taxpayer subsidize a business run by one of the world’s richest men?
The US Government has demanded that Starlink MUST CARRY and provide service to support foreign nations.
Foreign nations think that they're going to use Starlink to commit further genocide; that they run the show for this American company.
"MUST DENY"
If you cut off Internet service to emergency service personnel (with the red crosses) here, is that a war crime?
Transparent wood could soon find uses in smartphone screens, insulated windows
I wonder about its strength. Like, how thick would a piece of this transparent wood have to be, at 60' by 10', to withstand the pressure of 18,000 cubic feet of water?
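A back-of-envelope answer to the load part of that question (the thickness question needs material data). One point worth noting: the 18,000 cubic feet is mostly a red herring, since hydrostatic pressure depends only on water depth, not volume. The assumptions below (pane height equals water depth, fresh water) are mine:

```python
# Back-of-envelope sketch (assumptions mine, not from the thread): hydrostatic
# load on a 60 ft x 10 ft pane. Pressure depends only on depth, here taken as
# the pane height; the tank volume behind it doesn't matter.
FT_TO_M = 0.3048
RHO, G = 1000.0, 9.81                  # water density (kg/m^3), gravity (m/s^2)

width_m, height_m = 60 * FT_TO_M, 10 * FT_TO_M
area_m2 = width_m * height_m
peak_pressure = RHO * G * height_m             # Pa, at the bottom edge (~30 kPa)
total_force = (peak_pressure / 2) * area_m2    # linear profile -> mean = peak/2
```

So the pane carries on the order of 0.8 MN total, concentrated toward the bottom edge; whether transparent wood can take that at a given thickness depends on its flexural strength and the edge supports.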
A company developed "transparent aluminum" a few years ago [1]. It's actually a ceramic composite.
[1] https://hackaday.com/2018/04/03/whats-the-deal-with-transpar...
Gorilla Glass: https://en.wikipedia.org/wiki/Gorilla_Glass :
> Gorilla Glass faces varying competition from close equivalents, including AGC Inc.'s Dragontrail and Schott AG's Xensation and synthetic sapphire.
"A New Wonder Material Is 5x Lighter—and 4x Stronger—Than Steel" (2023) https://www.popularmechanics.com/science/a44725449/new-mater... :
"High-strength, lightweight nano-architected silica" (2023) https://www.sciencedirect.com/science/article/pii/S266638642...
[Zirconium] carbide ceramics
Silicon carbide: https://en.wikipedia.org/wiki/Silicon_carbide
2DPA-1 polyaramide: https://www.google.com/search?q=2DPA-1
https://news.mit.edu/2022/polymer-lightweight-material-2d-02... :
> However, in the new study, Strano and his colleagues came up with a new polymerization process that allows them to generate a two-dimensional sheet called a polyaramide. For the monomer building blocks, they use a compound called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can grow in two dimensions, forming disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.
> “Instead of making a spaghetti-like molecule, we can make a sheet-like molecular plane, where we get molecules to hook themselves together in two dimensions,” Strano says. “This mechanism happens spontaneously in solution, and after we synthesize the material, we can easily spin-coat thin films that are extraordinarily strong.”
> Because the material self-assembles in solution, it can be made in large quantities by simply increasing the quantity of the starting materials. The researchers showed that they could coat surfaces with films of the material, which they call 2DPA-1.
FWIU, melamine is somewhat fire-suppressive due to the nitrogen.
Most other melamine products are made with formaldehyde? There is formaldehyde-free plywood for lower VOCs.
But what about fire rating and recyclability after damage due to severe weather?
Gorilla Glass is hydrophobic and oleophobic IIRC
Additional features for [transparent wood and/or the above in a supportive lattice layer] windows:
Variable opacity,
Photovoltaic output, and consummate wiring to the panel,
Thermophotovoltaic output,
Thermoelectric output from the thermal gradient,
DisplayPort, HDMI, USB-C video input,
Contactless multitouch,
Mirror mode; variable reflectivity,
WiFi: Chromecast video/audio in, Miracast or similar audio/video in, Pipewire audio in/out, Opacity/Reflectance/Volume control, Smart assistant support,
Audio out,
Sensor data; temp, humidity, light level, brokenness, contactless multitouch
The best WebAssembly runtime may be no runtime
"Take existing C code, compile it to WebAssembly, transpile it back to C, and you get the same code, but sandboxed." Nice!
Just to clarify: compared to wasm2c, w2c2 does not (yet) have sandboxing capabilities, so it assumes the translated WebAssembly module is trustworthy. The main "goal" of w2c2 so far has been to allow porting applications and libraries to as many systems as possible.
https://gvisor.dev/docs/architecture_guide/platforms/ :
> gVisor requires a platform to implement interception of syscalls, basic context switching, and memory mapping functionality. Internally, gVisor uses an abstraction sensibly called Platform.
Chrome sandbox: https://chromium.googlesource.com/chromium/src/+/refs/heads/...
Firefox sandbox: https://wiki.mozilla.org/Security/Sandbox
Chromium sandbox types summary: https://github.com/chromium/chromium/blob/main/docs/linux/sa...
Minijail: https://github.com/google/minijail :
> Minijail is a sandboxing and containment tool used in ChromeOS and Android. It provides an executable that can be used to launch and sandbox other programs, and a library that can be used by code to sandbox itself.
Chrome vulnerability reward amounts: https://bughunters.google.com/about/rules/5745167867576320/c...
Systemd has SystemCallFilter= to limit processes to certain syscall: https://news.ycombinator.com/item?id=36693366
Nerdctl: https://github.com/containerd/nerdctl
Nerdctl, podman, and podman-remote do rootless containers.
Flatpack field hospitals that can be airdropped to disaster zones
The red ResponsePod tents are designed for temperature extremes, 100mph wind, and rain.
They're not very disposable.
Cardboard tents are more sustainable than single-use tents.
Med tents, sheets, and scrubs could be made of antimicrobial hemp fabric.
Pop-up Hub tents like ShiftPod, Gazelle, and the Coleman Instant are quick to set up and sturdier than replaceable and splintable tent poles.
- [ ] ENH: Tent pole splint and/or spares
Tunnel tents also have high ceilings.
Round structures like [geodesic] domes fare better in wind than sharp corners due to wind shearing force.
Geodesic dome hub kits work with local lumber or other.
Hexayurt: https://en.wikipedia.org/wiki/Hexayurt :
> A hexayurt is a simplified disaster relief shelter design.[2] It is based on a hexagonal geodesic geometry adapted to construction from standard 4x8 foot sheets of factory made construction material, built as a yurt. [3] It was invented by Vinay Gupta. [4] Hexayurts are common at Burning Man. [5]
Is there a passive thermal roofline container modification for latrines and dispensaries? TODO link
- [ ] Heat pumps for disaster relief shipping containers
In searching for the shipping container by the ResponsePod folks, I just found an awning specifically for the area between multiple shipping containers in a corral configuration; prefabricated container shelter.
I couldn't find the link. It's a standard sized shipping container emergency response disaster relief dispensary with HVAC but not yet heat pumps FWIU.
- [ ] An open set of plans and modular interface specs for disaster relief shipping containers
Ask HN: Is there a Hacker News takeout to export my comments / upvotes, etc.?
Like the title says wondering if there is an equivalent of Google takeout for HN? Or how you guys are doing it?
Thanks.
There are a few tests for this script, which isn't packaged: https://github.com/westurner/dlhn/ https://github.com/westurner/dlhn/tree/master/tests https://github.com/westurner/hnlog/blob/master/Makefile
Ctrl-F of the one document in a browser tab works, but isn't regex search (or `grep -i -C`) without a browser extension.
Dogsheep / datasette has a SQLite query Web UI
HackerNews/API: https://github.com/HackerNews/API
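The official Firebase API is enough for a takeout-style export of your own submissions (upvotes/favorites aren't exposed publicly). A minimal sketch; the endpoints are the real ones, but the fetching is left to the caller so this stays offline-testable:

```python
# Sketch of a takeout-style export against the official HN Firebase API.
# /v0/user/<name>.json returns a "submitted" list of item ids;
# /v0/item/<id>.json returns each item. Fetch with urllib/requests.
import json

API = "https://hacker-news.firebaseio.com/v0"

def user_url(username: str) -> str:
    return f"{API}/user/{username}.json"

def item_url(item_id: int) -> str:
    return f"{API}/item/{item_id}.json"

def own_comments(user_payload: str, items: dict) -> list:
    """Filter a user's 'submitted' ids down to already-fetched comment items."""
    submitted = json.loads(user_payload).get("submitted", [])
    return [items[i] for i in submitted
            if i in items and items[i].get("type") == "comment"]
```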
Android adds ggml lib to AICore
From "PyTorch for WebGPU" (2023) https://news.ycombinator.com/item?id=36009478 :
> Fwiw it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
Cyborg cockroach could be the future of earthquake search and rescue
What about aerial infrared for (news,) helicopters and quadcopters?
How to cost that out for disaster relief?
https://universemagazine.com/en/how-nasa-helps-find-people-t... :
> Scientists affiliated with NASA developed a device called FINDER (Finding Individuals for Disaster and Emergency Response) ten years ago, which can accomplish this task rapidly. FINDER is a microwave radar capable of sensing the smallest movements through the debris.
https://spinoff.nasa.gov/FINDER-Finds-Its-Way-into-Rescuers-...
Also, "Phonon Signatures in Photon Correlations" (2023) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
$7T annually invested in nature-negative activities
Is there a "suggest a solution" [to nature-negative business externalities] capability, and what could it be trained from?
https://www.globalreporting.org/how-to-use-the-gri-standards... :
> The GRI Standards help organizations understand their impacts on the economy, environment, and society - including those on human rights. This increases accountability and enhances transparency on their contribution to sustainable development.
Do (UN SDG GlobalGoals aligned) [GRI] Sustainability Reports help firms identify where their business and production methods are the problem, and how could they be improved?
> The findings are based on an analysis of global financial flows, revealing that private nature-negative finance flows amount to US$5 trillion annually, 140 times larger than the US$35 billion of private investments in nature-based solutions. The five industries channeling most of the negative financial flows – construction, electric utilities, real estate, oil and gas, and food and tobacco – represent 16% of overall investment flows in the economy but 43% of nature-negative flows associated with the destruction of forests, wetlands, and other natural habitats.
Lithium-ion battery recycling goes large
"Rinse and Repeat: An Easy New Way to Recycle Batteries is Here" (2023) https://newscenter.lbl.gov/2023/02/01/an-easy-new-way-to-rec... :
> Their product, called the Quick-Release Binder, makes it simple and affordable to separate the valuable materials in Li-ion batteries from the other components and recover them for reuse in a new battery. [...]
> The new Quick-Release Binder is made from two commercially available polymers, polyacrylic acid (PAA) and polyethylenimine (PEI), that are joined together through a bond between positively charged nitrogen atoms in PEI and negatively charged oxygen atoms in PAA. When the solid binder material is placed in alkaline water containing sodium hydroxide (Na+OH–), the sodium ion pops into the bond site, breaking the two polymers apart. The separated polymers dissolve into the liquid, freeing any electrode components embedded within.
> The binder can be used to make anodes and cathodes, and is about one-tenth the price of two of the most commonly used commercial binders.
'A-team' of math proves a critical link between addition and sets
The maths goes over my head, but this paragraph was very interesting:
> Tao then kicked off an effort to formalize the proof in Lean, a programming language that helps mathematicians verify theorems. In just a few weeks, that effort succeeded. Early Tuesday morning of December 5, Tao announced that Lean had proved the conjecture without any “sorrys” — the standard statement that appears when the computer can’t verify a certain step. This is the highest-profile use of such verification tools since 2021, and marks an inflection point in the ways mathematicians write proofs in terms a computer can understand. If these tools become easy enough for mathematicians to use, they might be able to substitute for the often prolonged and onerous peer review process, said Gowers.
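For anyone unfamiliar with the "sorrys" mentioned in the quote, here is what they look like at the smallest possible scale (a toy Lean 4 example, not from the actual formalization):

```lean
-- A complete Lean 4 proof: nothing left for the kernel to complain about.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Replacing the proof term with `sorry` still compiles, but Lean warns that
-- the declaration "uses sorry" -- the placeholder the article refers to.
theorem add_comm'' (a b : Nat) : a + b = b + a := sorry
```

"Without any sorrys" means every such placeholder in the project was eventually replaced by a real proof term that the kernel checked.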
I helped a very little with the formalization (I filled in some trivial algebraic manipulations, and I was there for some Lean technical support).
It's exciting to see how quickly the work was done, but it's worth keeping in mind that a top mathematician leading a formalization effort is very exciting, so he could very easily scramble a team of around 20 experienced Lean users.
There aren't enough experienced Lean users to go around yet for any old project, so Gowers's point about ease of use is an important one.
Something that was necessary for the success of this was years of development that had already been done for both Lean and mathlib. It's reassuring that mathlib is being developed in such a way that new mathematics can be formalized using it. Like usual though, there was plenty missing. I think this drove a few thousand more lines of general probability theory development.
> so he could very easily scramble a team of around 20 experienced Lean users
What have you to say about methods such as these:
"LeanDojo: Theorem Proving with Retrieval-Augmented Language Models" (2023) https://arxiv.org/abs/2306.15626 https://leandojo.org/
https://news.ycombinator.com/from?site=leandojo.org
( https://westurner.github.io/hnlog/#story-38435908 Ctrl-F "TheoremQA" (to find the citation and its references in my local personal knowledgebase HTML document with my comments archived in it), manually )
"TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/wenhuchen/TheoremQA#leaderboard (they check LLM accuracy with Wolfram Mathematica)
"Large language models as simulated economic agents (2022) [pdf]" https://news.ycombinator.com/item?id=34385880 :
> Can any LLM do n-body gravity? What does it say when it doesn't know; doesn't have confidence in estimates?
From https://news.ycombinator.com/item?id=38354679 :
> "LLMs cannot find reasoning errors, but can correct them" (2023) https://news.ycombinator.com/item?id=38353285
> "Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
That being said, guess-and-check followed by fitting a causal explanation may or may not be the standard practice of science, so
https://news.ycombinator.com/item?id=38124505 https://westurner.github.io/hnlog/#comment-38124505 :
> What does Mathlib have for SetTheory, ZFC, NFU, and HoTT?
> Do any existing CAS systems have configurable axioms?
Does LeanDojo have configurable axioms?
From https://news.ycombinator.com/item?id=38527844 :
> TIL i^4x == e^2iπx ... But SymPy says it isn't so (as the equality relation automated test assertion fails); and GeoGebra plots it as a unit circle and a line, but SymPy doesn't have a plot_complex() function to compare just one tool's output with another.
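The identity in that quote does hold numerically on the principal branch, even when a CAS declines to simplify the symbolic difference to zero: i^(4x) = e^(4x · log i) = e^(4x · iπ/2) = e^(2iπx). A quick check with the stdlib (no SymPy needed):

```python
# Numeric check (principal branch) of the identity the quoted comment tests:
# i**(4x) = exp(4x * log i) = exp(4x * i*pi/2) = exp(2*i*pi*x).
import cmath

def lhs(x: float) -> complex:
    return 1j ** (4 * x)        # complex power uses the principal branch

def rhs(x: float) -> complex:
    return cmath.exp(2j * cmath.pi * x)
```

The symbolic equality test can still fail in a CAS because `==` there often means structural equality, and branch-cut bookkeeping keeps simplifiers from rewriting one side into the other.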
That's a lot of links to take in, and I don't do really anything with ML, but feel free to head over to https://leanprover.zulipchat.com/ and start a discussion in the Machine Learning for Theorem Proving stream!
My observation at the moment is that we haven't seen ML formalize a cutting-edge math paper and that it did in fact take a lot of experience to pull it off so quickly, experience that's not yet encoded in ML models. Maybe one day.
Something that I didn't mention is that Terry Tao is perhaps the most intelligent, articulate, and conscientious person I have ever interacted with. I found it very impressive how quickly he absorbed the Lean language, what goes into formalization, and how to direct a formalization project. He could have done this whole thing on his own I am sure. No amount of modern ML can replace him at the helm. However, he is such an excellent communicator that he could have probably gotten well-above-average results from an LLM. My understanding is that he used tools like ChatGPT to learn Lean and formalization, and my experience is that what you get from these tools is proportional to the quality of what you put into them.
Communication skills, Leadership skills, Research skills and newer tools for formal methods and theorem proving.
The example prompts in the "Teaching with AI" OpenAI blog post are paragraphs of solution specification; far longer than the search queries that an average bear would take the time to specify.
https://openai.com/blog/teaching-with-ai
https://blog.khanacademy.org/khan-academys-7-step-approach-t...
Is there yet an approachable "Intro to Arithmetic and beyond with Lean"? What additional resources for learning Lean and Mathlib were discovered or generated but haven't been added to the docs?
https://news.ycombinator.com/context?id=38522544 : AlphaZero self-play, LLMs and Lean, "Q: LLM and/or an RL agent trained on mathlib and tests" https://github.com/leanprover-community/mathlib/issues/17919... -> Proof Assistance SE: https://proofassistants.stackexchange.com/
Perhaps to understand LLMs and application, e.g. the Cuttlefish algorithm fills in an erased part of an image; like autocomplete; so, can autocomplete and guess and check (and mutate and crossover; EA methods and selection, too) test [math] symbolic expression trees against existing labeled observations that satisfy inclusion criteria, all day and night in search of a unified model with greater fitness?
> If these tools become easy enough for mathematicians to use, they might be able to substitute for the often prolonged and onerous peer review process,
List of long mathematical proofs: https://en.wikipedia.org/wiki/List_of_long_mathematical_proo... :
> This is a list of unusually long mathematical proofs. Such proofs often use computational proof methods and may be considered non-surveyable.
> As of 2011, the longest mathematical proof, measured by number of published journal pages, is the classification of finite simple groups with well over 10000 pages. There are several proofs that would be far longer than this if the details of the computer calculations they depend on were published in full.
Non-surveyable proof: https://en.wikipedia.org/wiki/Non-surveyable_proof :
> In the philosophy of mathematics, a non-surveyable proof is a mathematical proof that is considered infeasible for a human mathematician to verify and so of controversial validity.
I wonder how much of mathematics is locked away from us as it requires a level of intelligence we may never have. Why do we assume proofs should be simple little equations or even just a few pages? Elegance? Why should the universe be so elegant? Is it because elegant structures are more efficient and use less energy?
Edit: I know some proofs like Fermat's last theorem were like dozens of pages or more, but I realize that is still within the comprehension of a well trained and gifted human being.
Take for example the Standard Model Lagrangian (which is a sum of nonlinear fields and used to predict some but not all of particle physics (i.e. n-body gravity, superfluids, antimatter or not, Copenhagen interpretation or not, etc.)),
[ LaTeX rendering of the Standard Model Lagrangian, and other as-yet unintegrated equations ]
How elegant are these? Is it ever proven that they are of minimal complexity in their respective domains piece-wisely?
Learning of Entropy, I had hoped you know. But then that's just classical Shannon entropy, and not quite the types of quantum entropy described in for example the Quantum discord Wikipedia article, and then that's still not quite quantum fluidic complexity (with other field effects discounted, of course); so is there an elegant quantum fluidic thermodynamic basis for it all and emergence?
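For reference, the classical quantity that comment starts from is just H(p) = -Σᵢ pᵢ log₂ pᵢ, which is a few lines of stdlib Python (the quantum variants like discord are not captured by this):

```python
# Classical Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i),
# with the convention 0 * log2(0) = 0.
import math

def shannon_entropy(probs) -> float:
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A fair coin gives 1 bit; a certain outcome gives 0; a uniform 4-way distribution gives 2 bits.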
Quasiparticles display emergent behavior.
Virtual particles have or haven't mass independent of 2-body gravity.
Gravity alone sometimes produces mass, or photons at least.
Regardless, things have curl and Divergence.
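Those two quantities are easy to evaluate numerically for any vector field with central differences (pure-stdlib sketch; the example field is mine). The radial field F(x, y, z) = (x, y, z) has div F = 3 and curl F = 0 everywhere:

```python
# Central-difference divergence and curl of a 3D vector field f: R^3 -> R^3,
# illustrated on F(x, y, z) = (x, y, z), for which div F = 3 and curl F = 0.
H = 1e-5

def F(x, y, z):
    return (x, y, z)

def divergence(f, p, h=H):
    x, y, z = p
    return ((f(x + h, y, z)[0] - f(x - h, y, z)[0]) +
            (f(x, y + h, z)[1] - f(x, y - h, z)[1]) +
            (f(x, y, z + h)[2] - f(x, y, z - h)[2])) / (2 * h)

def curl(f, p, h=H):
    x, y, z = p
    dFz_dy = (f(x, y + h, z)[2] - f(x, y - h, z)[2]) / (2 * h)
    dFy_dz = (f(x, y, z + h)[1] - f(x, y, z - h)[1]) / (2 * h)
    dFx_dz = (f(x, y, z + h)[0] - f(x, y, z - h)[0]) / (2 * h)
    dFz_dx = (f(x + h, y, z)[2] - f(x - h, y, z)[2]) / (2 * h)
    dFy_dx = (f(x + h, y, z)[1] - f(x - h, y, z)[1]) / (2 * h)
    dFx_dy = (f(x, y + h, z)[0] - f(x, y - h, z)[0]) / (2 * h)
    return (dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy)
```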
And so Quantum Chaos: what can it predict? Can it do quantum gravity effects in Superfluids at scale?
Progress in quantum CFD would require a different architecture to prove low error of a model for predictions in superfluids.
And so how many error-corrected qubits exist to simulate a gas fluid in space (in microgravity) is the limiting factor in checking the sufficiency of a grander unified model from here, too.
And also for how long qubits can be stored; we can save an integer and a float for longer than human timescale with error-corrected distributed storage networks, but we can't store the output wave function(s*) from quantum computer simulations for more than a second.
So prove it means QC, and that's not what I see here.
NetworkX – Network Analysis in Python
igraph, cugraph, and graph-tool are far superior for a wide variety of reasons
/? "networkx" "igraph" "cugraph" site:github.com inurl:awesome https://www.google.com/search?q=%22networkx%22+%22igraph%22+... :
- https://github.com/johnhany/awesome-list#graph lists a few Tensorflow and Pytorch + graphs applications
CuGraph docs > List of Supported and Planned Algorithms: https://docs.rapids.ai/api/cugraph/stable/graph_support/algo...
https://github.com/rapidsai/cugraph#news :
> NEW! nx-cugraph, a NetworkX backend that provides GPU acceleration to NetworkX with zero code change. :
pip install nx-cugraph-cu11 --extra-index-url https://pypi.nvidia.com
export NETWORKX_AUTOMATIC_BACKENDS=cugraph
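The "zero code change" claim means the NetworkX side stays ordinary NetworkX. A small sketch (it runs on CPU as-is; with nx-cugraph installed and the environment variable above set, the same calls should be dispatched to the GPU backend):

```python
# Ordinary NetworkX code; with nx-cugraph installed and
# NETWORKX_AUTOMATIC_BACKENDS=cugraph set, these calls are dispatched
# to the GPU backend with no source changes. Here it runs on CPU.
import networkx as nx

G = nx.karate_club_graph()          # classic 34-node test graph
pr = nx.pagerank(G, alpha=0.85)     # backend-dispatchable algorithm
top = max(pr, key=pr.get)           # highest-ranked node
```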
I would be interested in one that is properly type annotated. None of the options here (NetworkX, igraph, cugraph) are.
https://news.ycombinator.com/item?id=36922924 :
> pytype (Google) [1], PyAnnotate (Dropbox) [2], and MonkeyType (Instagram) [3] all do dynamic / runtime PEP-484 type annotation type inference [4] to generate type annotations.
Hypothesis generates tests from type annotations; and icontract and pycontracts do runtime type checking.
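The runtime-checking idea (icontract/PyContracts style) can be sketched with the stdlib alone: read a function's PEP 484 annotations at decoration time and check call arguments against them. The decorator name and example function are mine, and this only handles plain classes, not generics:

```python
# Stdlib-only sketch of runtime type checking against PEP 484 annotations.
# Only simple class annotations are checked; generics/Optionals are skipped.
import functools
import typing

def checked(fn):
    hints = typing.get_type_hints(fn)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = dict(zip(fn.__code__.co_varnames, args), **kwargs)
        for name, value in bound.items():
            want = hints.get(name)
            if isinstance(want, type) and not isinstance(value, want):
                raise TypeError(f"{name}={value!r} is not {want.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@checked
def shortest_path_cost(weight: float, hops: int) -> float:
    return weight * hops
```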
DNA-folding nanorobots can manufacture limitless copies of themselves
Self-assembly of nanoparticles: https://en.wikipedia.org/wiki/Self-assembly_of_nanoparticles
Molecular assembler: https://en.wikipedia.org/wiki/Molecular_assembler
Reversible optical data storage below the diffraction limit (2023)
"Reversible optical data storage below the diffraction limit" (2023) https://www.nature.com/articles/s41565-023-01542-9 :
> Abstract: Colour centres in wide-bandgap semiconductors feature metastable charge states that can be interconverted with the help of optical excitation at select wavelengths. The distinct fluorescence and spin properties in each of these states have been exploited to show storage of classical information in three dimensions, but the memory capacity of these platforms has been thus far limited by optical diffraction. Here we leverage local heterogeneity in the optical transitions of colour centres in diamond (nitrogen vacancies) to demonstrate selective charge state control of individual point defects sharing the same diffraction-limited volume. Further, we apply this approach to dense colour centre ensembles, and show rewritable, multiplexed data storage with an areal density of 21 Gb inch–2 at cryogenic temperatures. These results highlight the advantages for developing alternative optical storage device concepts that can lead to increased storage capacity and reduced energy consumption per operation.
"Optical data storage breakthrough increases capacity of diamonds by circumventing the diffraction limit" (2023) https://phys.org/news/2023-12-optical-storage-breakthrough-c... :
> [...] This is possible by multiplexing the storage in the spectral domain.
> "It means that we can store many different images at the same place in the diamond by using a laser of a slightly different color to store different information into different atoms in the same microscopic spots," said Delord, a postdoctoral research associate at CCNY. "If this method can be applied to other materials or at room temperature, it could find its way to computing applications requiring high-capacity storage." [...]
> "What we did was control the electrical charge of these color centers very precisely using a narrow-band laser and cryogenic conditions," explained Delord. "This new approach allowed us to essentially write and read tiny bits of data at a much finer level than previously possible, down to a single atom."
> Optical memory technologies have a resolution defined by what's called the "diffraction limit," that is, the minimum diameter that a beam can be focused to, which approximately scales as half the light beam wavelength (for example, green light would have a diffraction limit of 270 nm).
> "So, you cannot use a beam like this to write with a resolution smaller than the diffraction limit because if you displace the beam less than that, you would impact what you already wrote. So normally, optical memories increase storage capacity by making the wavelength shorter (shifting to the blue), which is why we have 'Blu-ray' technology," said Delord.
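The half-wavelength rule of thumb quoted above is easy to sketch; a minimal example (using the article's approximation; the full Abbe limit is λ/(2·NA), so the values below assume NA ≈ 1):

```python
# Approximate diffraction-limited spot diameter, d ~ lambda / 2
# (the article's rule of thumb; the full Abbe limit is lambda / (2 * NA)).

def diffraction_limit_nm(wavelength_nm: float, numerical_aperture: float = 1.0) -> float:
    """Smallest resolvable spot diameter in nanometres."""
    return wavelength_nm / (2.0 * numerical_aperture)

for name, wl in [("green (532 nm)", 532), ("Blu-ray violet (405 nm)", 405)]:
    print(f"{name}: ~{diffraction_limit_nm(wl):.0f} nm")
```

A 532 nm green beam gives ~266 nm, consistent with the "about 270 nm" figure in the quote; shifting bluer shrinks the limit, which is the Blu-ray rationale above.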
https://news.ycombinator.com/item?id=35617859 :
>> https://thedebrief.org/impossible-photonic-breakthrough-scie... :
>> For decades, that [Abbe diffraction] limit has operated as a sort of roadblock to engineering materials, drugs, or other objects at scales smaller than the wavelength of light manipulating them. But now, the researchers from Southampton, together with scientists from the universities of Dortmund and Regensburg in Germany, have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
>> According to that research, the key to confining light below the previous impermeable Abbe diffraction limit was accomplished by “storing a part of the electromagnetic energy in the kinetic energy of electric charges."
"Real-space nanophotonic field manipulation using non-perturbative light–matter coupling" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-1-1...
"Optical tweezers"
> What differentiates the CCNY optical storage approach from others is that it circumvents the diffraction limit by exploiting the slight color (wavelength) changes existing between color centers separated by less than the diffraction limit.
> "By tuning the beam to slightly shifted wavelengths, it can be kept at the same physical location but interact with different color centers to selectively change their charges—that is to write data with sub-diffraction resolution," said Monge, a postdoctoral fellow at CCNY who was involved in the study as a Ph.D. student at the Graduate Center, CUNY.
> Another unique aspect of this approach is that it's reversible. "One can write, erase, and rewrite an infinite number of times," Monge noted.
Curl 8.2.0 supports --ca-native and --proxy-ca-native with OpenSSL 3.2 on Windows
From "curl: add --ca-native and --proxy-ca-native" https://github.com/curl/curl/pull/11049#issuecomment-1528118... :
> It looks like according to their CHANGES for OpenSSL 3.1 they've added SSL_CERT_URI and for OpenSSL 3.2 they've added SSL_CERT_PATH and are going to deprecate SSL_CERT_DIR (which could do both but had some parsing problem, still I don't get why they would deprecate it for paths). [...]
> curl reads SSL_CERT_DIR (note it's ignored for [Schannel,]) and sets that as the path. I don't know if OpenSSL is now reading the environment itself but the URI is org.openssl.winstore:// not capieng. If you have a master build then try SSL_CERT_URI=org.openssl.winstore:// curl ... and if that doesn't work try curl --capath "org.openssl.winstore://" ...
"OpenSSL Announces Final Release of OpenSSL 3.2.0" https://news.ycombinator.com/item?id=38392887 https://github.com/openssl/openssl/blob/openssl-3.2.0/NEWS.m... :
> Support for using the Windows system certificate store as a source of trusted root certificates
> This is not yet enabled by default and must be activated using an environment variable. This is likely to become enabled by default in a future feature release
openssl/openssl > "Add support for Windows CA certificate store" https://github.com/openssl/openssl/pull/18070/files
How should OS System Cert Store(s) be supported on Linux platforms with OpenSSL and e.g. Curl?
PEP-0543 had TLSConfiguration(..., trust_store=DEFAULT:TrustStore) https://peps.python.org/pep-0543/
class TrustStore() https://peps.python.org/pep-0543/#trust-store
And a CipherSuite() class with params and a heading for each of a number of cipher suites; OpenSSL (*), SecureTransport (MacOS,), SChannel (Windows), NSS (Firefox,); tlsdb https://peps.python.org/pep-0543/#cipher-suites
Curl 8_2_0 release: https://github.com/curl/curl/releases/tag/curl-8_2_0
Curl 8.2.0 changes.html: https://curl.se/changes.html#8_2_0
Why does Gnome fingerprint unlock not unlock the keyring?
> And that's the difference. When you perform TouchID authentication, the secure enclave can decide to release a secret that can be used to decrypt your keyring. We can't easily do this under Linux because we don't have an interface to store those secrets.
> The secret material can't just be stored on disk - that would allow anyone who had access to the disk to use that material to decrypt the keyring and get access to the passwords, defeating the object.
I'll provide a broader explanation: you cannot encrypt your key with a fingerprint.
A genuine "throw away the key" lock is only possible when the decryption key is completely erased from memory during the screen lock, and you cannot use your fingerprint or face image as a key by itself.
A screen lock using a stored secret is inherently incapable of providing encryption at rest. It's like a password on a sticky note.
A "throw away the key" lock is only possible with a password or a smartcard.
If you trust the hardware then it's entirely possible to tie the release of an encryption key to a fingerprint validated by that hardware. Hardware-backed keys are widely used (the entire WebAuthn ecosystem is predicated upon them being trustworthy), and having that hardware validate a fingerprint rather than merely physical presence is an improvement.
https://news.ycombinator.com/item?id=33311523 :
> [WebAuthn, TPM, U2F/FIDO2, Seahorse,]
> tpm-fido: https://github.com/psanford/tpm-fido :
>> tpm-fido is FIDO token implementation for Linux that protects the token keys by using your system's TPM. tpm-fido uses Linux's uhid facility to emulate a USB HID device so that it is properly detected by browsers.
TPM > TPM software libraries: https://en.wikipedia.org/wiki/Trusted_Platform_Module#TPM_so...
TPM > Virtualization; virtual TPM devices: https://en.wikipedia.org/wiki/Trusted_Platform_Module#Virtua...
WebAuthn: https://en.wikipedia.org/wiki/WebAuthn
Dutch astronomers prove last piece of gas feedback-feeding loop of black hole
I don't understand this article at all. Is it some kind of discovery that gas that exists somewhere can be attracted to a supermassive black hole, or a body with mass of any type? I don't see the relevance to that the gas was once ejected by the black hole; it's well known that mass attracts.
Also:
> Supermassive black holes at the centers of galaxies have long been known to emit enormous amounts of energy. This causes the surrounding gas to heat up and flow far away from the center. This, in turn, makes the black hole less active and lets cool gas, in theory, flow back.
This is a very strange way of saying that the immense friction in the accretion disks of black holes creates immense heat and light, is it not? As far as I know, the actual energy black holes emit in the form of Hawking radiation is minute and undetectable with current sensors.
I think what they mean is that matter falling into a black hole emits A LOT of energy.
One of the less well known facts about black holes is that they are the best known method to convert mass into energy. A black hole can be used to recover up to 40% of infalling mass as energy.
Also, technically, Hawking radiation is not the only way to get energy out of a black hole. You can extract energy from a rotating black hole. Normally, rotational energy contributes to the total mass of the black hole, but you can extract that energy, robbing the black hole of some of its rotational energy and consequently reducing its total mass. Essentially, it can be used to accelerate objects (this is called the Penrose process).
When two black holes collide, a portion of their mass is radiated as gravitational waves. This has been verified by LIGO showing that the resulting black hole has less mass than the sum of mass of black holes before collision.
I am also pretty sure that a charged black hole could act as a fantastic battery, except there is no known mechanism that could create a significantly charged one.
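The figures above can be pinned down with standard Kerr-metric results; a sketch (the "up to 40%" is the ~42% radiative efficiency of accretion onto an extremal Kerr hole, set by the specific energy at the innermost stable circular orbit, and the Penrose process is bounded by the irreducible mass):

```latex
% Radiative efficiency of accretion: \eta = 1 - E_{\mathrm{ISCO}} / (m c^2)
\eta_{\text{Schwarzschild}} = 1 - \sqrt{8/9} \approx 5.7\%
\qquad
\eta_{\text{extremal Kerr}} = 1 - \frac{1}{\sqrt{3}} \approx 42.3\%

% Penrose process: an extremal Kerr hole has irreducible mass
% M_{\mathrm{irr}} = M/\sqrt{2}, so the extractable rotational energy is
\frac{\Delta E_{\max}}{M c^2} = 1 - \frac{M_{\mathrm{irr}}}{M}
                              = 1 - \frac{1}{\sqrt{2}} \approx 29.3\%
```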
Does the OT potentially confirm models of superfluid quantum space that involve Bernoulli's principle, low pressure and vorticity, and the Gross-Pitaevskii equation?
https://news.ycombinator.com/item?id=38370118 :
> > "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017) :
>> [...] Vorticity is interpreted as spin (a particle's internal motion). Due to non-zero, positive viscosity of the SQS, and to Bernoulli pressure, these vortices attract the surrounding quanta, pressure decreases and the consequent incoming flow of quanta lets arise a gravitational potential. This is called superfluid quantum gravity*
Also, https://news.ycombinator.com/item?id=38009426 :
> Can distorted photonic crystals help confirm or reject current theories of superfluid quantum gravity?
"Closing the feedback-feeding loop of the radio galaxy 3C 84" (2023) https://www.nature.com/articles/s41550-023-02138-y
Intuitive guide to convolution
IMO 3blue1brown presents this in a very easy to understand way: https://www.youtube.com/watch?v=KuXjwB4LzSA
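For reference, the operation that video visualizes; a self-contained sketch of the discrete "flip-and-slide" sum, which is also how polynomial coefficients multiply:

```python
def convolve(a, b):
    """Full discrete convolution: c[k] = sum_i a[i] * b[k - i]."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Convolving coefficient lists multiplies the corresponding polynomials:
# (1 + 2x + 3x^2) * (x + 0.5x^2) = x + 2.5x^2 + 4x^3 + 1.5x^4
print(convolve([1, 2, 3], [0, 1, 0.5]))  # [0.0, 1.0, 2.5, 4.0, 1.5]
```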
I wonder if we will ever be at a stage where LLM can generate videos like that as an ELI5
https://github.com/360macky/generative-manim :
> Generative Manim is a prototype of a web app that uses GPT-4 to generate videos with Manim. The idea behind this project is taking advantage of the power of GPT-4 in programming, the understanding of human language and the animation capabilities of Manim to generate a tool that could be used by anyone to create videos. Regardless of their programming or video editing skills.
"TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/wenhuchen/TheoremQA#leaderboard
How do you score memory retention and video watching comprehension? The classic educators' optimization challenge
"Khan Academy’s 7-Step Approach to Prompt Engineering for Khanmigo" https://blog.khanacademy.org/khan-academys-7-step-approach-t...
"Teaching with AI" https://openai.com/blog/teaching-with-ai
Photomath: https://en.wikipedia.org/wiki/Photomath
"Pix2tex: Using a ViT to convert images of equations into LaTeX code" > latex2sympy: https://news.ycombinator.com/item?id=38138020
SymPy Beta is a WASM fork of SymPy Gamma: https://github.com/eagleoflqj/sympy_beta
SymPy Gamma: https://github.com/sympy/sympy_gamma
TIL i^4x == e^2iπx
import unittest
import sympy as sy

test = unittest.TestCase()
x = sy.symbols("x")
# Structural equality (==) compares expression trees, so it can fail here;
# .equals() checks mathematical equivalence instead:
test.assertTrue((sy.I**(4*x)).equals(sy.E**(2*sy.I*sy.pi*x)))
https://www.wolframalpha.com/input/?i=I%5E%284x%29+%3D%3D+e%...
S2n-TLS – A C99 implementation of the TLS/SSL protocol
"Continuous formal verification of Amazon s2n" (2018) https://link.springer.com/chapter/10.1007/978-3-319-96142-2_...
https://scholar.google.com/scholar?cites=2686812922904040715...
But formal methods (and TLA+ for distributed computation) don't eliminate side channels.
There have been some attempts at formally verifying lack of timing attacks (to the extent allowed by hardware). This is the one I know the most about: https://dl.acm.org/doi/pdf/10.1145/3314221.3314605 but there are likely others
Also on the side-channel topic, for example:
E. Prouff and M. Rivain, "Masking against Side-Channel Attacks: A Formal Security Proof", EUROCRYPT 2013, LNCS 7881
S. Dziembowski and K. Pietrzak, "Leakage-Resilient Cryptography", 10.1109/FOCS.2008.56
Side-channel: https://en.wikipedia.org/wiki/Side-channel_attack
Time complexity > Constant time: https://en.wikipedia.org/wiki/Time_complexity#Constant_time
"Masking against side-channel attacks: A formal security proof" (2013) https://link.springer.com/chapter/10.1007/978-3-642-38348-9_... https://scholar.google.com/scholar?cites=1479355492097437276...
"Leakage-Resilient Cryptography" (2008) https://ieeexplore.ieee.org/abstract/document/4690963 https://scholar.google.com/scholar?cites=5581902451405085906...
"FaCT: A DSL for Timing-Sensitive Computation" https://dl.acm.org/doi/pdf/10.1145/3314221.3314605 citations in gscholar: https://scholar.google.com/scholar?cites=1570199926308101856...
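To make the "constant time" link above concrete; a minimal Python illustration (real constant-time guarantees require language- and compiler-level support, which is what FaCT provides; `hmac.compare_digest` is the Python standard library's best-effort constant-time comparison):

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    """Naive comparison: the early exit leaks, via timing, how many
    leading bytes matched -- the classic remote timing side channel."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"s3cret-token"
# compare_digest examines every byte regardless of where mismatches occur,
# so its running time does not depend on the secret's contents.
print(hmac.compare_digest(secret, b"s3cret-token"))  # True
print(hmac.compare_digest(secret, b"guess-token!"))  # False
```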
Show HN: Demo of Agent Based Model on GPU with CUDA and OpenGL (Windows/Linux)
Demo of agent based model on GPU with CUDA and OpenGL (Windows/Linux)
- Agent instances on GPU memory
- Uses SSBO for instanced objects (with GLSL 450 shaders)
- CUDA OpenGL interop
- Renders with the GLFW3 window manager
- Dynamic camera views in OpenGL (pan, zoom with mouse)
- Libraries installed using vcpkg
(https://github.com/KienTTran/ABMGPU)
Could this work in WebGL and/or WebGPU (and/or WebNN) with or without WASM in a browser?
https://stackoverflow.com/questions/48228192/webgl-compute-s...
https://github.com/conda-forge/glfw-feedstock/blob/main/reci...
pyglfw: https://github.com/conda-forge/pyglfw-feedstock/blob/main/re...
- [ ] glfw recipe for emscripten-forge: https://github.com/emscripten-forge/recipes/tree/main/recipe...
Emscripten porting docs > OpenGL ES 2.0/3.0 *, glfw: https://emscripten.org/docs/porting/multimedia_and_graphics/...
WebGPU API > GPUBuffer: https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API
gpuweb/gpuweb: https://github.com/gpuweb/gpuweb
https://news.ycombinator.com/item?id=38355444 :
> It actually looks like pygame-web (pygbag) supports panda3d and harfang in WASM
Harfang and panda3d do 3D with WebGL, but FWIU not yet agents in SSBO/VBO/GPUBuffer.
SSBO: Shader Storage Buffer Object: https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Ob...
/? WebGPU compute: https://www.google.com/search?q=webgpu+compute
"WebGPU Compute Shader Basics" https://webgpufundamentals.org/webgpu/lessons/webgpu-compute...
Basically possible. You have to remove all the CUDA code and port the C++ code to use GLFW with a web-specific library. The shaders can be reused.
Inoculating soil with mycorrhizal fungi can increase plant yield: study
Something I thought was really cool is that in Korean Natural Farming [1] there is a technique where you go to the forest near your farm and gather a bunch of decomposing leaves and other matter from the forest floor, then take it home and throw it in a tub of (I think) starchy water to feed the microorganisms in the material you collected. You incubate this for a while and then spread the water over your soil. The motivation for this practice is the idea that your local forest's microbial community has self-selected to grow well in your particular location, so those bacteria will be very helpful. I think it's an excellent innovation. It is both very simple for poor farmers all over the world to do and based on good scientific principles.
Mycorrhiza in soil help plant roots absorb nutrients.
Mycorrhiza: https://en.wikipedia.org/wiki/Mycorrhiza
Leaf mold: https://en.wikipedia.org/wiki/Leaf_mold
KNF > Indigenous microorganisms: https://en.wikipedia.org/wiki/Korean_natural_farming#Indigen...
FWIU JWA is very similar to Castile soap?
From https://news.ycombinator.com/item?id=37171603 :
> A (soap-like) surfactant like JADAM Wetting Agent (JWA) which causes the applied treatments to stick to the plants might reduce fertilizer runoff levels; but Nitrogen-based fertilizer alone does not regenerate all of the components of topsoil. https://www.google.com/search?q=jadam+jwa
https://youtube.com/@JADAMORGANIC
> Mycorrhizae fungus in the soil help get nutrients to plant roots, and they need to be damp in order to prevent soil from turning to dirt due to solar radiation and oxidation. https://youtube.com/@soilfoodwebschool
Yield and Soil Fertility are valuable criteria to optimize for; with multi-criteria optimization.
Crop yield: https://en.wikipedia.org/wiki/Crop_yield
Soil fertility > Soil depletion: https://en.wikipedia.org/wiki/Soil_fertility#Soil_depletion
GDlog: A GPU-accelerated deductive engine
Datalog related things really seem to have momentum these days.
Interesting that the application areas seem so different. This talks mostly about specialized source code analysis applications, whereas eg in Clojure circles it's used in normal application databases. I wonder if there'd be a way to use this as a backend for eg XTDB or Datalevin.
Yeah, there are a ton of substantively different approaches to modern Datalogs, targeting different applications.
To start off: Datalog is distinguished from traditional SQL in its focus on heavily-recursive reachability-based reasoning. With respect to expressivity, you can see Datalog as CDCL/DPLL restricted to boolean constraint propagation (i.e., Horn clauses). Operationally, you can think of this as: tight range-indexed loops which are performing insertion/deduplication into an (indexed) relation-backing data structure (a BTree/trie/etc...). In SQL, you don't know the query a-priori, so you can't just index everything--but in Datalog, you know all of the rules up-front and can generate indices for everything. This ubiquitous indexing enables the state-of-the-art work we see with Datalog in static analysis (DOOP, cclyzer), security (ddisasm), etc...
Our group targets tasks like code analysis and these big graph problems because we think they represent the most computationally-complex, hard problems that we are capable of doing. The next step here is to scale our prototypes (a handful of rules) to large, realistic systems--some potential applications of that are, e.g., raw feature extraction for binaries when you do ML over binary corpuses (which otherwise require, e.g., running IDA) on the GPU (rather than IDA on the CPU), medical reasoning (accelerating MediKanren), and (hopefully) probabilistic programming (these neuro-symbolic applications).
By contrast, I think work which takes a more traditional Databases approach (CodeQL, RDFox, ...) focus a little less on ubiquitous high-performance range-indexed insertion in a tight loop, and focus a little more on supporting robust querying and especially operating on streams. There is some very cool related work there in differential dataflow (upon which differential Datalog is built). There is a solver there named DDlog (written in Rust) which takes that approach. Our in-house experiments show that DDlog is often a constant factor slower than Souffle on GPUs, and we did not directly compare against DDlog in this paper--I expect the results would be roughly similar to Souffle.
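The fixpoint-with-ubiquitous-indexing evaluation described above can be sketched in a few lines; a minimal semi-naive evaluation of the canonical reachability rule `path(x,z) :- path(x,y), edge(y,z)` (set-backed and sequential, standing in for the indexed BTree/trie structures and GPU parallelism these engines use):

```python
def reachability(edges):
    """Semi-naive Datalog: path(x,y) :- edge(x,y).
                           path(x,z) :- path(x,y), edge(y,z)."""
    # Index the edge facts by source node up front -- in Datalog the rules
    # are known a priori, so everything can be indexed.
    by_src = {}
    for a, b in edges:
        by_src.setdefault(a, set()).add(b)

    path = set(edges)   # all derived facts
    delta = set(edges)  # facts new in the last iteration
    while delta:
        new = {(x, z)
               for (x, y) in delta          # join only against *new* facts
               for z in by_src.get(y, ())}
        delta = new - path                  # deduplicate against the relation
        path |= delta
    return path

print(sorted(reachability({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```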
"Introduction to Datalog" re: Linked Data https://news.ycombinator.com/context?id=34808887
pyDatalog/examples/SQLAlchemy.py: https://github.com/baojie/pydatalog/blob/master/pyDatalog/ex...
GH topics > datalog: https://github.com/topics/datalog
datalog?l=rust: https://github.com/topics/datalog?l=rust ... Cozo, Crepe
Crepe: https://github.com/ekzhang/crepe :
> Crepe is a library that allows you to write declarative logic programs in Rust, with a Datalog-like syntax. It provides a procedural macro that generates efficient, safe code and interoperates seamlessly with Rust programs.
Looks like there's not yet a Python grammar for the treeedb tree-sitter: https://github.com/langston-barrett/treeedb :
> Generate Soufflé Datalog types, relations, and facts that represent ASTs from a variety of programming languages.
Looks like roxi supports n3, which adds `=>` "implies" to the Turtle lightweight RDF representation: https://github.com/pbonte/roxi
FWIW rdflib/owl-rl: https://owl-rl.readthedocs.io/en/latest/owlrl.html :
> simple forward chaining rules are used to extend (recursively) the incoming graph with all triples that the rule sets permit (ie, the “deductive closure” of the graph is computed).
ForwardChainingStore and BackwardChainingStore implementations w/ rdflib in Python: https://github.com/RDFLib/FuXi/issues/15
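A toy version of the forward chaining described in the owl-rl docs quoted above; this hypothetical sketch applies just two RDFS rules (rdfs9 and rdfs11) to a plain set of triples until fixpoint, whereas owlrl implements the full RDFS/OWL-RL rule sets over rdflib graphs:

```python
def rdfs_closure(triples):
    """Naive forward chaining: apply rules until no new triples are derived."""
    SUB, TYPE = "rdfs:subClassOf", "rdf:type"
    g = set(triples)
    changed = True
    while changed:
        changed = False
        derived = set()
        for (s, p, o) in g:
            for (s2, p2, o2) in g:
                # rdfs11: (A subClassOf B), (B subClassOf C) -> (A subClassOf C)
                if p == SUB and p2 == SUB and o == s2:
                    derived.add((s, SUB, o2))
                # rdfs9: (x type A), (A subClassOf B) -> (x type B)
                if p == TYPE and p2 == SUB and o == s2:
                    derived.add((s, TYPE, o2))
        new = derived - g
        if new:
            g |= new
            changed = True
    return g

g = rdfs_closure({("Cat", "rdfs:subClassOf", "Mammal"),
                  ("Mammal", "rdfs:subClassOf", "Animal"),
                  ("tom", "rdf:type", "Cat")})
print(("tom", "rdf:type", "Animal") in g)  # True
```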
Fast CUDA hashmaps
Gdlog is built on CuCollections.
GPU HashMap libs to benchmark: Warpcore, CuCollections,
https://github.com/NVIDIA/cuCollections
https://github.com/NVIDIA/cccl
https://github.com/sleeepyjack/warpcore
/? Rocm HashMap
DeMoriarty/DOKsparse: https://github.com/DeMoriarty/DOKSparse
/? SIMD hashmap
Google's SwissTable: https://github.com/topics/swisstable
rust-lang/hashbrown: https://github.com/rust-lang/hashbrown
CuPy has arrays but not yet hashmaps, or (GPU) SIMD, FWICS?
NumPy does SIMD: https://numpy.org/doc/stable/reference/simd/
google/highway: https://github.com/google/highway
xtensor-stack/xsimd: https://github.com/xtensor-stack/xsimd
GH topics > HashMap: https://github.com/topics/hashmap
Yeah, IncA (compiling to Souffle) and Ascent (I believe Crepe is not parallel, though also a good engine) are two other relevant cites here. Apropos linked data, our group has an MPI-based engine which is built around linked facts (subsequently enabling defunctionalization, ad-hoc polymorphism, etc..), which is very reminiscent of the discussion in the first link of yours: https://arxiv.org/abs/2211.11573
TIL about Approximate Reasoning.
"Approximate Reasoning for Large-Scale ABox in OWL DL Based on Neural-Symbolic Learning" (2023) > Parameter Settings of the CFR [2023 ChunfyReasoner] and NMT4RDFS [2018] in the Experiments. https://www.researchgate.net/figure/Parameter-Settings-of-th...
"Deep learning for noise-tolerant RDFS reasoning" (2018) > NMT4RDFS: http://www.semantic-web-journal.net/content/deep-learning-no... :
> This paper documents a novel approach that extends noise-tolerance in the SW to full RDFS reasoning. Our embedding technique— that is tailored for RDFS reasoning— consists of layering RDF graphs and encoding them in the form of 3D adjacency matrices where each layer layout forms a graph word. Each input graph and its entailments are then represented as sequences of graph words, and RDFS inference can be formulated as translation of these graph words sequences, achieved through neural machine translation. Our evaluation on LUBM1 synthetic dataset shows 97% validation accuracy and 87.76% on a subset of DBpedia while demonstrating a noise-tolerance unavailable with rule-based reasoners.
NMT4RDFS: https://github.com/Bassem-Makni/NMT4RDFS
...
A human-generated review article with an emphasis on standards; with citations to summarize:
"Why do we need SWRL and RIF in an OWL2 world?" [with SPARQL CONSTRUCT, SPIN, and now SHACL] https://answers.knowledgegraph.tech/t/why-do-we-need-swrl-an...
https://spinrdf.org/spin-shacl.html :
> From SPIN to SHACL In July 2017, the W3C has ratified the Shapes Constraint Language (SHACL) as an official W3C Recommendation. SHACL was strongly influenced by SPIN and can be regarded as its legitimate successor. This document explains how the two languages relate and shows how basically every SPIN feature has a direct equivalent in SHACL, while SHACL improves over the features explored by SPIN
/? Shacl datalog https://www.google.com/search?q=%22shacl%22+%22datalog%22
"Reconciling SHACL and Ontologies: Semantics and Validation via Rewriting" (2023) https://scholar.google.com/scholar?q=Reconciling+SHACL+and+O... :
> SHACL is used for expressing integrity constraints on complete data, while OWL allows inferring implicit facts from incomplete data; SHACL reasoners perform validation, while OWL reasoners do logical inference. Integrating these two tasks into one uniform approach is a relevant but challenging problem.
"Well-founded Semantics for Recursive SHACL" (2022) [and datalog] https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A169...
"SHACL Constraints with Inference Rules" (2019) https://arxiv.org/abs/1911.00598 https://scholar.google.com/scholar?cites=1685576975485159766...
Datalog > Evaluation: https://en.wikipedia.org/wiki/Datalog#Evaluation
...
VMware/ddlog: Differential datalog
> Bottom-up: DDlog starts from a set of input facts and computes all possible derived facts by following user-defined rules, in a bottom-up fashion. In contrast, top-down engines are optimized to answer individual user queries without computing all possible facts ahead of time. For example, given a Datalog program that computes pairs of connected vertices in a graph, a bottom-up engine maintains the set of all such pairs. A top-down engine, on the other hand, is triggered by a user query to determine whether a pair of vertices is connected and handles the query by searching for a derivation chain back to ground facts. The bottom-up approach is preferable in applications where all derived facts must be computed ahead of time and in applications where the cost of initial computation is amortized across a large number of queries.
From https://community.openlinksw.com/t/virtuoso-openlink-reasoni... https://github.com/openlink/virtuoso-opensource/issues/660 :
> The Virtuoso built-in (rule sets) and custom inferencing and reasoning is backward chaining, where the inferred results are materialised at query runtime. This results in fewer physical triples having to exist in the database, saving space and ultimately cost of ownership, i.e., less physical resources are required, compared to forward chaining where the inferred data is pre-generated as physical triples, requiring more physical resources for hosting the data.
FWIU it's called ShaclSail, and there's a NotifyingSail: org.eclipse.rdf4j.sail.shacl.ShaclSail: https://rdf4j.org/javadoc/3.2.0/org/eclipse/rdf4j/sail/shacl...
"GDlog: A GPU-Accelerated Deductive Engine" (2023) https://arxiv.org/abs/2311.02206 :
> Abstract: Modern deductive database engines (e.g., LogicBlox and Soufflé) enable their users to write declarative queries which compute recursive deductions over extensional data, leaving their high-performance operationalization (query planning, semi-naïve evaluation, and parallelization) to the engine. Such engines form the backbone of modern high-throughput applications in static analysis, security auditing, social-media mining, and business analytics. State-of-the-art engines are built upon nested loop joins over explicit representations (e.g., BTrees and tries) and ubiquitously employ range indexing to accelerate iterated joins. In this work, we present GDlog: a GPU-based deductive analytics engine (implemented as a CUDA library) which achieves significant performance improvements (5--10x or more) versus prior systems. GDlog is powered by a novel range-indexed SIMD datastructure: the hash-indexed sorted array (HISA). We perform extensive evaluation on GDlog, comparing it against both CPU and GPU-based hash tables and Datalog engines, and using it to support a range of large-scale deductive queries including reachability, same generation, and context-sensitive program analysis . Our experiments show that GDlog achieves performance competitive with modern SIMD hash tables and beats prior work by an order of magnitude in runtime while offering more favorable memory footprint.
Towards accurate differential diagnosis with large language models
Differential diagnosis > Machine differential diagnosis: https://en.wikipedia.org/wiki/Differential_diagnosis
CDSS: Clinical Decision Support System: https://en.wikipedia.org/wiki/Clinical_decision_support_syst...
Treatment decision support: https://en.wikipedia.org/wiki/Treatment_decision_support :
> Treatment decision support consists of the tools and processes used to enhance medical patients’ healthcare decision-making. The term differs from clinical decision support, in that clinical decision support tools are aimed at medical professionals, while treatment decision support tools empower the people who will receive the treatments
AI in healthcare: https://en.wikipedia.org/wiki/Artificial_intelligence_in_hea...
[deleted]
Paper vs. devices: Brain activation differences during memory retrieval (2021)
Algorithmically and physically-biologically, Spreading activation has a (constant?) decay term: https://en.wikipedia.org/wiki/Spreading_activation
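A minimal sketch of spreading activation with such a decay factor; the graph, weights, and the constant decay here are illustrative, following the common formulation (each node fires once, passing activation × decay × edge weight to its neighbours, subject to a firing threshold):

```python
def spread_activation(graph, source, decay=0.85, threshold=0.1):
    """graph: {node: [(neighbour, edge_weight), ...]}.
    Returns activation levels after spreading from `source`."""
    activation = {source: 1.0}
    fired = set()
    frontier = [source]
    while frontier:
        node = frontier.pop()
        if node in fired or activation.get(node, 0.0) < threshold:
            continue
        fired.add(node)  # each node fires at most once
        for neighbour, weight in graph.get(node, []):
            a = activation.get(neighbour, 0.0) + activation[node] * decay * weight
            activation[neighbour] = min(a, 1.0)  # cap, per the usual formulation
            frontier.append(neighbour)
    return activation

g = {"fire": [("smoke", 0.9), ("heat", 0.8)], "smoke": [("smell", 0.7)]}
print(spread_activation(g, "fire"))
```

Activation falls off multiplicatively with each hop, which is the decay term the Wikipedia article describes.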
An easy-sounding problem yields numbers too big for our universe
The concept of reachability is pretty interesting in general as well.
"Can a spaceship with a certain delta-v reach another planet within an n-body system? (And if so, what is the fastest-to target/most resource preserving acceleration schedule?)" - apparently necessitates brute force, practically not computable on long time scales due to the chaos inherent in n-body systems (https://space.stackexchange.com/questions/64392/escaping-ear..., https://en.wikipedia.org/wiki/N-body_problem)
"Can a math proof be reached (within a certain number of proof steps) from the axioms?" - equivalent to the halting problem in most practical systems (https://math.stackexchange.com/questions/3477810/estimating-...)
"Can a demoscene program with a very limited program size visualize (or codegolf program output) something specific?" - asking for nontrivial properties like this usually requires actually running each program, and there are unfathomably many short programs (https://www.dwitter.net/ is a good example of this)
"In cookie-clicker games, is it possible to go above a certain score within a certain number of game ticks using some sequence of actions?" - in all but the simplest and shortest games (like https://qewasd.com), this is at least not efficiently (optimally) solvable using MILP and the like, as the number of possible action sequences increases exponentially
And yet, despite these being really hard (or in the general case, impossible) problems, humans use some heuristics to achieve progress
Quantum Discord: https://en.wikipedia.org/wiki/Quantum_discord
Quantum nonlocality: https://en.wikipedia.org/wiki/Quantum_nonlocality
Butterfly effect -> Quantum chaos: https://en.wikipedia.org/wiki/Quantum_chaos
-> Perturbation theory: https://en.wikipedia.org/wiki/Perturbation_theory_(quantum_m...
But then entropy in fluids,
Satisfiability > Model Theory: https://en.wikipedia.org/wiki/Satisfiability
Goal programming: https://en.wikipedia.org/wiki/Goal_programming
And, ultimately,
Self play (AlphaZero, ) https://en.wikipedia.org/wiki/Self-play
"Q: LLM and/or an RL agent trained on [Lean mathlib] and tests" https://github.com/leanprover-community/mathlib/issues/17919
Unsupervised speech-to-speech translation from monolingual data
Looking forward to the debate about real-time translators censoring or altering people's speech.
Also the debate about whether all human speech need be piped through such a preemptive filter. (actually not looking forward to this one). Suddenly everything that anyone says will be couched with "it is important to consult a professional to ensure safety and compliance with local regulations".
> Looking forward to the debate about real-time translators censoring or altering people's speech.
[Obama's] "Anger translator" (2012) https://www.youtube.com/results?sp=mAEA&search_query=anger+t...
A citizen ostensibly forfeits their right to sue for defamation when they become a public figure; but counter-non-fraud isn't fraud either then eh.
Say "that's not enhanced" just like the old one please.
Eye-safe laser technology to diagnose traumatic brain injury in minutes
"Window into the mind: Advanced handheld spectroscopic eye-safe technology for point-of-care neurodiagnostic" (2023) https://www.science.org/doi/10.1126/sciadv.adg5431
Tiny black holes could theoretically be used as a source of power: study
"Using black holes as rechargeable batteries and nuclear reactors" (2023) https://arxiv.org/abs/2210.10587 :
> Abstract: This paper proposes physical processes to use a Schwarzschild black hole as a rechargeable battery and nuclear reactor. As a rechargeable battery, it can at most transform 25% of input mass into available electric energy in a controllable and slow way. We study its internal resistance, efficiency of discharging, maximum output power, cycle life and totally available energy. As a nuclear reactor, it realizes an effective nuclear reaction `α particles+black hole→positrons+black hole` and can transform 25% mass of α-particle into the kinetic energy of positrons. This process amplifies the available kinetic energy of natural decay hundreds of times. Since some tiny sized primordial black holes are suspected to have an appreciable density in dark matters, the result of this paper implies that such black-hole-originated dark matters can be used as reactors to supply energy.
Isn't there a potential for net relative displacement and so thus couldn't (microscopic) black holes be a space drive?
Is there a known inverse transformation for Hawking radiation, and isn't there such radiation from all things?
Don't black holes store a copy of everything, like reflections from water droplets?
PBS Spacetime estimates that there are naturally occurring microscopic black holes every 30 km on Earth. https://news.ycombinator.com/item?id=33483002 https://westurner.github.io/hnlog/#comment-33483002
And ER=EPR: https://twitter.com/westurner/status/964069567290073089
What are the critical conditions for naturally-occurring and lab- or particle-collider-made black holes? Are there safety concerns, and how would that be perceived?
> Are there safety concerns, and how would that be perceived?
What scale black hole would affect Auroras and e.g. the ionosphere and greater magnetosphere?
Aurora > Causes: https://en.wikipedia.org/wiki/Aurora
Clang now makes binaries an original Pi B+ can't run
It sounds like clang running on the RPi 1 generates code that doesn't run. Usually compilers default to targeting whatever ISA it's running on but that doesn't seem to be the case here.
Usually compilers default to the ISA they were built on. In the case of Clang this is controlled by LLVM_DEFAULT_TARGET_TRIPLE cmake option - maybe a weird mix of options occurred where Clang for armv6 was built on armv7 but the default triple was not adjusted correctly.
Current docs about this option:
> LLVM target to use for code generation when no target is explicitly specified. It defaults to “host”, meaning that it shall pick the architecture of the machine where LLVM is being built. If you are building a cross-compiler, set it to the target triple of your desired architecture.
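If the built-in default triple is wrong for the machine, the target can be pinned explicitly instead. A sketch (flag values are my assumption for the original Pi B+'s ARM1176 core, i.e. ARMv6 + VFPv2 hard-float):

```shell
# Inspect the compiler's built-in default:
clang --print-target-triple

# Cross-compile for the original Pi explicitly rather than relying on the default:
clang --target=arm-linux-gnueabihf -march=armv6 -mfpu=vfp -mfloat-abi=hard \
    -o hello hello.c

# Or, when building Clang itself for armv6 hosts, pin the default triple:
cmake -DLLVM_DEFAULT_TARGET_TRIPLE=armv6-unknown-linux-gnueabihf ../llvm
```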
raspberrypi.com/documentation/computers/linux_kernel.html#cross-compiling-the-kernel: https://www.raspberrypi.com/documentation/computers/linux_ke...
89luca89/distrobox: https://github.com/89luca89/distrobox #quick-start
89luca89/distrobox/blob/main/docs/useful_tips.md#using-a-different-architecture: https://github.com/89luca89/distrobox/blob/main/docs/useful_...
lukechilds/dockerpi: https://github.com/lukechilds/dockerpi : RPi 1, (2,3,) in QEMU emulating ARM64 on x86_64
E.g. the Fedora Silverblue rpm-ostree distro has "toolbox" by default because most everything should be in a container
containers/toolbox: https://github.com/containers/toolbox
From https://containertoolbx.org/distros/ :
> Distro support: By default, Toolbx creates the container using an OCI image called `<ID>-toolbox:<VERSION-ID>`, where <ID> and <VERSION-ID> are taken from the host’s `/usr/lib/os-release`. For example, the default image on a Fedora 36 host would be `fedora-toolbox:36`.
> This default can be overridden by the `--image` option in `toolbox create`, but operating system distributors should provide an adequately configured default image to ensure a smooth user experience.
The compiler arch flags should probably also be correctly specified in a "toolbox" container used for cross-compilation.
There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).
Ask HN: Can qubits be written to crystals as diffraction patterns?
Coherence time is the operative limit on state-of-the-art quantum computing (QC).
Holographic data storage already solves for binary digital data storage.
Aren't diffraction patterns due to lattice irregularities effective wave functions?
Presumably holography would already have solved for quantum data storage if diffraction were a sufficient analog of a wave function?
No
What is the limit?
Is a diffraction pattern a wave function?
E.g. a scratched microscope slide is all wave functions.
Can (crystal,) lattices be constructed and/or modified to store diffraction patterns with sufficient coherence over time?
Do Black Holes Have Singularities?
It was an interesting read, though mostly it says what we already know: the mathematics of black holes often produce singularities, but we have no evidence that they exist. The thing is, whatever is behind the event horizon of a black hole might as well not exist; we can't ever report observations of it, even in principle.
If someday we know what's inside the event horizon, it will emerge from a robust theory of quantum gravity that can be tested at a lower limit.
Wow, I'm like a broken record today. [1]
SQS and SQG do purport to describe the interior topology of black holes.
Shouldn't it be possible to infer the state and relative position of matter/information/energy in a black hole from the Hawking radiation and/or the post-end-stage positions after "dissolution" of such phenomena in the quantum foam?
There's no positive proof of the irreversibility of such thermodynamic transformations.
How many possible subsequent positions of matter could there be after a microscopic or supermassive black hole reaches "critical condition 2"?
From https://news.ycombinator.com/item?id=38452488 :
> Isn't there a potential for net relative displacement and so thus couldn't (microscopic) black holes be a space drive?
> * Is there a known inverse transformation for Hawking radiation, and isn't there such radiation from all things? *
> Don't black holes store a copy of everything, like reflections from water droplets?
> PBS Spacetime estimates that there are naturally occurring microscopic black holes every 30 km on Earth.
Electricity flows like water in 'strange metals,' and physicists don't know why
"Shot noise in a strange metal" (2023) https://www.science.org/doi/10.1126/science.abq6100 :
> Abstract: Strange-metal behavior has been observed in materials ranging from high-temperature superconductors to heavy fermion metals. In conventional metals, current is carried by quasiparticles; although it has been suggested that quasiparticles are absent in strange metals, direct experimental evidence is lacking. We measured shot noise to probe the granularity of the current-carrying excitations in nanowires of the heavy fermion strange metal YbRh2Si2. When compared with conventional metals, shot noise in these nanowires is strongly suppressed. This suppression cannot be attributed to either electron-phonon or electron-electron interactions in a Fermi liquid, which suggests that the current is not carried by well-defined quasiparticles in the strange-metal regime that we probed. Our work sets the stage for similar studies of other strange metals.
Strange metal -> Fermi liquid theory > Non-Fermi liquid: https://en.wikipedia.org/wiki/Fermi_liquid_theory#Non-Fermi_... :
> The term non-Fermi liquid, also known as "strange metal", [20] is used to describe a system which displays breakdown of Fermi-liquid behaviour.
Google DeepMind's new AI tool helped create more than 700 new materials
"Millions of new materials discovered with deep learning" (2023) https://deepmind.google/discover/blog/millions-of-new-materi...
"Scaling deep learning for materials discovery" (2023) https://www.nature.com/articles/s41586-023-06735-9
Optical effect advances quantum computing with atomic qubits to a new dimension
Does anybody know how the coupling between the individual qubits is achieved/configured?
You can find the paper here, maybe that helps: https://arxiv.org/abs/1902.05424
"Scalable multilayer architecture of assembled single-atom qubit arrays in a three-dimensional Talbot tweezer lattice" (2023) https://arxiv.org/abs/1902.05424
Paperless-Ngx v2.0.0
I haven't been using it too much yet but I am really impressed by paperless-ngx so far. It just works(TM) and the auto-tagging functionality is surprisingly good, even with just a few documents in it.
Does anyone have a good scanner recommendation though? I am eyeing the Brother ADS-1700W since it seems to be recommended often, but I would really like to use the "scan to webhook" feature (it's 2023 after all) instead of SMTP or whatever other options I would have with the Brother.
Recommendation: https://www.quickscanapp.com/
I am using iPhone as a scanner and it automatically scans, OCRs, uploads and ingests to the paperless-ngx instance, even remotely using tailscale.
The iPhone camera is more than good enough for scanning documents.
I don't have an iPhone, but on Android there is the "Paperless Mobile" app (https://github.com/astubenbord/paperless-mobile), which can be used to scan as well. There are just some documents that I would prefer to have in proper and consistent "document scanner"-quality; I am always having a hard time with lighting using those phone scanners (although Paperless Mobile is one of the better ones I have used).
Would a document capture camera with a [ring] light also work?
Those still have the speed disadvantage of a phone camera and need more space than a compact document scanner, I'd imagine. I guess a ring light for my phone would be an improvement; using the builtin flash usually leads to very uneven lighting in the scan.
The Nineteenth-Century Banjo
"Tales from the Acoustic Planet, Vol. 3: Africa Sessions" (2009) [2] is the soundtrack for Béla Fleck's "Throw Down Your Heart" (2009) [1] rockumentary
[2] https://en.wikipedia.org/wiki/Tales_from_the_Acoustic_Planet...
Banjo: https://en.wikipedia.org/wiki/Banjo
Bluegrass; traditional and progressive feature the banjo: https://en.wikipedia.org/wiki/Bluegrass_music :
> These divisions center on the longstanding debate about what constitutes "Bluegrass Music". A few traditional bluegrass musicians do not consider progressive bluegrass to truly be "bluegrass", some going so far as to suggest bluegrass must be [...]
Having enjoyed "electro blues", for some years I had been searching on YT for "electro bluegrass" without success but today it seems the genre is finally populated. (although from what I have found there's still plenty of opportunity to discover the proper cross between high lonesome and EDM)
Miniaturized technique to generate precise wavelengths of visible laser light
https://news.ycombinator.com/item?id=36580174 :
> "Universal visible emitters in nanoscale integrated photonics" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-7-8... :
>> Abstract: Visible wavelengths of light control the quantum matter of atoms and molecules and are foundational for quantum technologies, including computers, sensors, and clocks. The development of visible integrated photonics opens the possibility for scalable circuits with complex functionalities, advancing both science and technology frontiers. We experimentally demonstrate an inverse design approach based on the superposition of guided mode sources, allowing the generation and complete control of free-space radiation directly from within a single 150 nm layer Ta2O5, showing low loss across visible and near-infrared spectra [...]
Electrocaloric material makes refrigerant-free solid-state fridge scalable
This is just more efficient Peltier coolers right? Or is this some other effect?
Peltier effect generates spatially-separated hot and cold sides. Electrocaloric effect generates temporally-separated hot and cold periods. The Peltier effect is simpler to harness into a refrigeration unit (put the cold stuff on the cold side, dissipate heat from the hot side), but has lower potential efficiency.
Now we just need to combine it with thermal transistors* on the front and back sides to gate and pump the heat in one direction. Conduct -> Cool -> Insulate -> Heat -> Conduct -> Cool... (while doing the opposite on the heat-sinking side, of course)
(*from 3 weeks ago on HN) https://news.ycombinator.com/item?id=38259991
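The gate/pump cycle above can be sketched as a toy discrete-time model (all numbers are invented for illustration; `K` is a lumped heat-exchange factor per half-cycle, not a physical conductance):

```python
T_SINK = 300.0  # heat-sink temperature (K)
DT_EC = 3.0     # adiabatic electrocaloric temperature change (K)
K = 0.5         # fraction of the temperature difference exchanged per half-cycle

def run_cycles(n, t_load=300.0):
    """Field ON -> gate conducts to sink; field OFF -> gate conducts to load."""
    t_ec = T_SINK
    for _ in range(n):
        t_ec += DT_EC                  # apply field: EC material heats up
        t_ec += K * (T_SINK - t_ec)    # gate open to sink: reject heat
        t_ec -= DT_EC                  # remove field: EC material cools down
        dq = K * (t_load - t_ec)       # gate open to load: absorb heat
        t_ec += dq
        t_load -= 0.1 * dq             # load has ~10x the heat capacity
    return t_load

print(run_cycles(200))  # drifts below the 300 K start, toward T_SINK - DT_EC
```

With these toy numbers the load temperature converges to T_SINK - DT_EC, which is the point of gating: without the thermal switches the hot and cold halves of each cycle would cancel out.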
Perhaps a black hole could do some of the Heat phase, at least. From https://news.ycombinator.com/item?id=38450636 :
"Using black holes as rechargeable batteries and nuclear reactors" (2023) https://arxiv.org/abs/2210.10587
Is this the same one as last week?
https://news.ycombinator.com/item?id=38359089 :
> "High cooling performance in a double-loop electrocaloric heat pump" (2023) https://www.science.org/doi/10.1126/science.adi5477
> Electrocaloric effect: https://en.wikipedia.org/wiki/Electrocaloric_effect
Innovative Method to Efficiently Harvest Low-Grade Heat for Energy
"Enhancing Efficiency of Low-Grade Heat Harvesting by Structural Vibration Entropy in Thermally Regenerative Electrochemical Cycles" (2023) https://doi.org/10.1002/adma.202303199 :
> Abstract: The majority of waste-heat energy exists in the form of low-grade heat (<100 °C), which is immensely difficult to convert into usable energy using conventional energy-harvesting systems. Thermally regenerative electrochemical cycles (TREC), which integrate battery and thermal-energy-harvesting functionalities, are considered an attractive system for low-grade heat harvesting. Herein, the role of structural vibration modes in enhancing the efficacy of TREC systems is investigated. How changes in bonding covalency, influenced by the number of structural water molecules, impact the vibration modes is analyzed. It is discovered that even small amounts of water molecules can induce the A1g stretching mode of cyanide ligands with strong structural vibration energy, which significantly contributes to a larger temperature coefficient (ɑ) in a TREC system. Leveraging these insights, a highly efficient TREC system using a sodium-ion-based aqueous electrolyte is designed and implemented. This study provides valuable insights into the potential of TREC systems, offering a deeper understanding of the intrinsic properties of Prussian Blue analogs regulated by structural vibration modes. These insights open up new possibilities for enhancing the energy-harvesting capabilities of TREC systems.
Charlie Munger has died
One of the greats of investing. And a value investor. It's all about the profits, not the growth.
Munger is gone, Bogle is gone, Buffett is 93. Who takes up the mantle of value investing now?
You have to remember that Bogle/Munger/Buffett all gained prominence when value investing wasn't a thing and investing of any kind was wildly out of reach for the common man. Today anyone can go online and buy VTI in minutes. Every financial advisor and 401k plan recommends index funds by default, and it is how the vast majority of people and organizations store their wealth. It doesn't need any more cheerleaders or icons. It has simply become synonymous with investing at large.
Value investing is “an investment paradigm that involves investing in stocks that are overlooked by the market and are being traded below their true worth”.
Correct me if wrong, but I don’t think index funds come under that paradigm.
It depends on how the index is constructed. A market cap index cannot be value investing. A market sector index is almost surely not value investing (unless that entire sector is undervalued).
An index constructed specifically using value measures as the criteria for inclusion can be (at least arguably so).
Click on "value indexes" here: https://www.crsp.org/indexes/ to see some underlying value indexes, and funds like this one track the Large Cap version of it: https://fundresearch.fidelity.com/mutual-funds/summary/92290... (perhaps not surprising, the fund's largest holding is Berkshire B shares)
XBRL filings have the information needed to screen with value investing criteria. GFinance's old stock screener's UI was great.
https://github.com/openlink/Virtuoso-RDFIzer-Mapper-Scripts/...
/? query XBRL https://www.google.com/search?q=query+xbrl
https://github.com/topics/xbrl
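As a sketch of what screening fundamentals pulled from XBRL filings might look like (tickers and figures below are made up for illustration; a real screen would query an XBRL API):

```python
# Hypothetical fundamentals, as might be extracted from XBRL filings:
companies = [
    {"ticker": "AAA", "price": 50, "eps": 10, "book_per_share": 60},
    {"ticker": "BBB", "price": 120, "eps": 2, "book_per_share": 30},
    {"ticker": "CCC", "price": 20, "eps": 4, "book_per_share": 25},
]

def value_screen(rows, max_pe=15.0, max_pb=1.0):
    """Classic Graham-style filters: low price/earnings and price/book."""
    hits = []
    for r in rows:
        pe = r["price"] / r["eps"]
        pb = r["price"] / r["book_per_share"]
        if pe <= max_pe and pb <= max_pb:
            hits.append(r["ticker"])
    return hits

print(value_screen(companies))  # AAA (P/E 5, P/B ~0.83) and CCC pass
```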
But then a fund, index fund, or index ETF also wouldn't be complete without ethical review of sustainable competitive advantage, given e.g. GRI + #GlobalGoal sustainability reports.
When you own enough of a company to bring in a new team.
- [ ] ENH: pandas_datareader: add XBRL support from one or more APIs
https://pandas-datareader.readthedocs.io/en/latest/remote_da...
Microsoft open-sources ThreadX
This was "Azure RTOS", bought by Microsoft in haste after Amazon acquired FreeRTOS.
Bill Lamie left to start PX5 and work on a new lightweight embedded RTOS and took most of the talent with him. If Microsoft is doing this, they're pretty much walking away from their roadmap for Azure RTOS and IoT nodes along those lines.
I call it a win, ThreadX had a lot more ecosystem behind it than FreeRTOS ever did. And it does run on things other than Raspberry Pis. Renesas used to give it away for free if you bought their SoCs.
But are there devs for the acquired platform now?
The article says ThreadX used to be what the Intel Management Engine (ME) ran on? How do I configure the ME / AMT VNC auth in there?
ThreadX has been around for 25 years. You licensed it and put it in your embedded system. It's in a lot of products.
But how many people agreed to sign an NDA in order to cluster fuzz such a critical low-level firmware binary blob component?
Is this like another baseband processor?
The development of ThreadX has nothing at all to do with RPi and VideoCore. It's a software component that was used to develop a larger architecture.
Is it like another Intel ME, phone modem firmware, etc? Absolutely!
Everything from your x64 CPU to microSD to credit cards (not the readers; readers run antiquated Android and soon known-bad Chromium) runs some form of weird slow proprietary RTOS. It is what it is. I bet it takes ~a century, with the help of superhuman AI, to make those run open and verified code. The situation is improving too, slowly, because using buggy proprietary code is not a goal but a means.
It's ok to be disgusted about the status quo, but that is not necessarily worth your time; IIRC one of the original complaints by RMS on the state of software freedom that led to the Free Software Movement was about some HP printer running a buggy custom OS. Even the point at which people said "enough is enough" goes back that far.
Gosh, there's so much wrong here.
> weird slow proprietary
They're often not weird: a simple single-task runner with a few libraries to handle common tasks and cryptographic operations. Very simple, lightweight, and they generally share a common high-level architecture (there's not much variation in an RTOS).
They're often not slow, they're minimalist OSes - barely qualifying as an OS if at all - designed to run a single task, with time guarantees, and to get out of the way. In fact, if it's a single task you need to run, they're faster than any general purpose OS - by design!
They're often not proprietary - a handful of RTOS with huge market penetration used in billions of devices (and now ThreadX) - are open source and have permissive licenses. What IS often proprietary about them are BSPs, but that's a whole separate issue. Yes, there are a lot of proprietary ones out there, but as a blanket statement, it's simply not true.
> readers run antiquated Android
Many use a stripped down version of AOSP, which has become a de facto standard BSP, yes. But many, many others do not (usually a flavor of embedded linux, or an RTOS).
> about some HP printer running buggy custom OS
It was a Xerox printer, and it was because he was frustrated from adding existing job management and notification features he had written to the new printer.
The IME ran on Minix.
You've got that reversed. IME runs MINIX now, it used to run on ThreadX.
Linux (1991) started as a fork of MINIX (1987) by Tanenbaum.
History of Linux: https://en.wikipedia.org/wiki/History_of_Linux
MINIX: https://en.wikipedia.org/wiki/Minix
Redox OS: https://en.wikipedia.org/wiki/Redox_(operating_system) :
> Redox is a Unix-like microkernel operating system written in the programming language Rust, which has a focus on safety, stability, and performance. [4][5][6] Redox aims to be secure, usable, and free.
> Linux (1991) started as a fork of MINIX (1987) by Tanenbaum.
That is not true and never was.
It was in part bootstrapped on Minix but it contains no Minix code at all and was built with GNU tools.
No, that's definitely a fork (or a clone), albeit with significant differences, including the regression to a macrokernel.
That the MINIX code was replaced before release does not make it not a fork.
Nope.
Fork: take the existing code, make your own version and start modifying. That does not apply here.
Torvalds did not take any Minix code; one of the reasons he did his own was that the licence agreement on Minix prevented distribution of modified versions. At the time Freax/Linux got started, people were distributing patch sets to Minix to add 286 memory management, 386 handling and so on, because they could not distribute modified versions.
The Linux kernel started out as 100% new original code. I was there; I watched the mailing lists and the USEnet posts as it happened. It's the year I started paying for my own personal online account and email, after 4Y in the industry.
The origins of Torvalds' kernel were as a homegrown terminal emulator. He wanted it to be able to do downloads in the background while he worked in a different terminal session. This required timeslicing. He tried and found it was complicated, so he started implementing a very simple little program that time-sliced between 2 tasks, one printing "AAAAAA..." to the console and the other printing "BBBBB..."
This is all documented history, which it seems you have not read.
You are wrong.
Furthermore:
> including the regression to a macrokernel.
This indicates that you are not aware of the differences between Minix 1, 2 and 3.
Minix 3 (2005) is a microkernel.
Minix 1 (1987) was not and does not support an MMU. It runs on the 8086 and 68000 among other things.
Linux (1991) was originally a native 80386 OS, the first x86-32 device and a chip that was not yet on sale the year that Minix 1 was first published.
Summary:
Linux is not a fork of Minix and is unrelated to Minix code.
Intel Management Engine: https://en.wikipedia.org/wiki/Intel_Management_Engine
Designing a SIMD Algorithm from Scratch
SIMD is pretty intuitive if you’ve used deep learning libraries, NumPy, array based or even functional languages
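For example, a NumPy whole-array expression maps directly onto the SIMD model: one operation applied across many lanes at once (ufuncs dispatch to SIMD kernels where the CPU supports them):

```python
import numpy as np

a = np.arange(8, dtype=np.float32)       # one "vector register" worth of lanes
b = np.full(8, 2.0, dtype=np.float32)

scalar = [x * y + 1.0 for x, y in zip(a, b)]   # lane-at-a-time loop
vector = a * b + 1.0                           # whole-array, SIMD-friendly

assert np.allclose(scalar, vector)
```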
NumPy roadmap: https://numpy.org/neps/roadmap.html :
> Improvements to NumPy’s performance are important to many users. We have focused this effort on Universal SIMD (see NEP 38 — Using SIMD optimization instructions for performance) intrinsics which provide nice improvements across various hardware platforms via an abstraction layer. The infrastructure is in place, and we welcome follow-on PRs to add SIMD support across all relevant NumPy functions
"NEP 38 — Using SIMD optimization instructions for performance" (2019) https://numpy.org/neps/nep-0038-SIMD-optimizations.html#nep3...
NumPy docs > CPU/SIMD Optimizations: https://numpy.org/doc/stable/reference/simd/index.html
std::simd: https://doc.rust-lang.org/std/simd/index.html
"Show HN: SimSIMD vs SciPy: How AVX-512 and SVE make SIMD nicer and ML 10x faster" (2023-10) https://news.ycombinator.com/item?id=37808036
"Standard library support for SIMD" (2023-10) https://discuss.python.org/t/standard-library-support-for-si...
Automatic vectorization > Techniques: https://en.wikipedia.org/wiki/Automatic_vectorization#Techni...
SIMD: Single instruction, multiple data: https://en.wikipedia.org/wiki/Single_instruction,_multiple_d...
Category:SIMD computing: https://en.wikipedia.org/wiki/Category:SIMD_computing
Vectorization: Introduction: https://news.ycombinator.com/item?id=36159017 :
> GPGPU > Vectorization, Stream Processing > Compute kernels: https://en.wikipedia.org/wiki/General-purpose_computing_on_g...
Model Correctly Predicts High-Temperature Superconducting Properties
"Superconductivity studied by solving ab initio low-energy effective Hamiltonians for carrier doped CaCuO2, Bi2Sr2CuO6, Bi2Sr2CaCu2O8, and HgBa2CuO4," https://link.aps.org/doi/10.1103/PhysRevX.13.041036
Building a Small REPL in Python
But then what about tests?
https://github.com/4dsolutions/python_camp/pull/4/files :
    import unittest
    from io import StringIO
    from unittest.mock import patch

    class TestCalculatorREPL(unittest.TestCase):

        @patch("sys.stdin", StringIO("1"))
        @patch("sys.stdout", new_callable=StringIO)
        def test_repl(self, mock_stdout):
            ...  # run the REPL loop here, then assert on mock_stdout.getvalue()
Ooh, this is interesting. Thanks for the tip. The rest of the project is well-tested but I've been cowboy developing the REPL.
The `hanging-punctuation property` in CSS
I’ve been tending towards the opinion that features like this are a bad idea: that we might be better served by a general “this is prose” signal that triggers things like Knuth-Plass line-breaking, partially-hung punctuation (not full, please not full, it’s awful), conservative hyphenation, maybe even spacing and stretching tweaks to reduce the need of hyphenation (like the CTAN microtype package provides), whatever else the user agent supports. Bundle it all up in an all/auto/none property that defaults to auto, and let browsers choose heuristics for normal content. (OK, so I wouldn’t limit it quite that hard, I’d make it a little more like font-variant, but I do believe the default should be heuristic-based rather than disabled.)
> a general “this is prose” signal that triggers things like […] conservative hyphenation
At the least, that requires specifying the language of the text, probably even down to subtle differences such as those between UK and US English (although those can probably be handled very conservatively most of the time, as long words are fairly rare in English compared to, say, Dutch or German).
`hyphens: auto` is already language-aware. (It’s just that currently it either does nothing or too much, because the first-fit line breaking algorithm is lousy for hyphenation.)
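A minimal opt-in along these lines, as the properties stand today (`hanging-punctuation` currently only ships in Safari; `text-wrap: pretty` is a newer heuristic hint, and both are safely ignored elsewhere):

```css
article {
  hanging-punctuation: first last; /* hang opening/closing quotes into the margin */
  hyphens: auto;                   /* language-aware; requires lang= on the markup */
  text-wrap: pretty;               /* ask the engine for better line-breaking heuristics */
}
```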
My toddler loves planes, so I built her a radar
This is cool in concept, but FlightRadar24 has a built-in Augmented Reality feature that works really well.
https://www.flightradar24.com/blog/show-us-your-best-augment...
Also, if I were to build my own local copy, I'd use an RTLSDR to get the ADSB packets direct and base my app on tar1090. https://github.com/wiedehopf/tar1090
MSFS (MS Flight Simulator) has real-time Flight and Weather data and works in Steam's Proton fork of WINE on Linux.
FWICS there are third-party open source tools for adding live Flight data and logical behaviors to flight simulator applications.
https://fslivetrafficliveries.com/user-guide/ :
> FSLTL is a free standalone real-time online traffic overhaul and VATSIM model-matching solution for MSFS.
(... TIL about FlyPadOS3 EFB: An EFB is intended primarily for cockpit/flightdeck or cabin use. For large and turbine aircraft, FAR 91.503 requires the presence of navigational charts on the airplane. If an operator's sole source of navigational chart information is contained on an EFB, the operator must demonstrate the EFB will continue to operate throughout a decompression event, and thereafter, regardless of altitude. https://docs.flybywiresim.com/fbw-a32nx/feature-guides/flypa...)
https://twinfan.gitbook.io/livetraffic/ :
> LiveTraffic is a plugin for the flight simulator X-Plane to show real-life traffic, based on publicly available live flight data, as additional planes within X-Plane. [...]
> I spent an awful lot of time dealing with the inaccuracies of the data sources, see [Limitations]. There are only timestamps and positions. Heading and speed is point-in-time info but not a reliable vector to the next position. There is no information on pitch or bank angle, or on gear or flaps positions. There is no info where exactly a plane touched or left ground. There are several data feeders, which aren't in synch and contradict each other.
...
"Google Earth 3D Models Now Available as Open Standard (GlTF)" (2023) ; land, buildings: https://news.ycombinator.com/item?id=35896176
https://developers.google.com/maps/documentation/tile/3d-til... :
> Photorealistic 3D Tiles are a 3D mesh textured with high resolution imagery. They offer high-resolution 3D maps in many of the world's populated areas. They let you power next-generation, immersive 3D visualization experiences to [...]
GMaps WebGL overlay API: https://developers.google.com/maps/documentation/javascript/...
...
From "GraphCast: AI model for weather forecasting" (2023) https://news.ycombinator.com/item?id=38267794 :
> TIL about Raspberry-NOAA and pywws in researching and summarizing for a comment on "Nrsc5: Receive NRSC-5 digital radio stations using an RTL-SDR dongle" (2023) https://news.ycombinator.com/item?id=38158091
...
"Show HN: I wrote a multicopter simulation library in Python" (2023) https://news.ycombinator.com/item?id=38255362 :
> [ X-Plane Plane Maker, Juno: New Origins (and also Hello Engineer), MS Flight Simulator cockpits are built with MSFS Avionics Framework which is React-based, [Multi-objective gym + MuJoCo] for drone simulation, cfd and helicopters ]
...
"DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2020) https://github.com/Code-and-Response/DroneAid https://news.ycombinator.com/item?id=22707347 ... https://github.com/Call-for-Code/Project-Catalog
Global talks to cut plastic waste stall as industry and environment groups clash
What are some of the solutions to plastic pollution?
(Edit)
What are some solutions to the internal and external costs of plastic production, distribution, consumption, and waste?
Solutions to cut down on pollution, or to clean up the plastic pollution that has accumulated in the environment for decades?
Gated suppression of light-driven proton transport through graphene electrodes
Perhaps tangentially,
"Cheap proton batteries compete with lithium on energy density" (2023) https://news.ycombinator.com/item?id=36926123 :
"Enhancement of the performance of a proton battery" (2023) https://www.sciencedirect.com/science/article/abs/pii/S03787... :
> Abstract: The present paper reports on experiments to improve theoretical understanding of the basic processes underlying the operation of a ‘proton battery’ with activated carbon as a hydrogen storage electrode. Design changes to enhance energy storage capacity and power output have been identified and investigated experimentally. Key changes made were heating of the overall cell to 70 °C, and replacement of the oxygen-side gas diffusion layer with a much thinner titanium-fibre sheet. A very substantial increase in reversible hydrogen storage capacity to 2.23 wt%H (598 mAh g−1, 882 J g−1) was achieved. This capacity is nearly three times that of the earlier design, and more than double the highest electrochemical hydrogen storage using an acidic electrolyte previously reported. It is hypothesised that the main cause of the major gain in storage is an enhanced water formation reaction on the O-side through reduced flooding. In addition, an alternative mode of discharging a proton battery has been discovered that allows direct generation of hydrogen gas from the hydrogenated carbon material, by a ‘hydrogen-pump’ type of reaction. The hydrogen gas evolved is high purity, and thus may ultimately create opportunities for use of this storage technology in hydrogen supply chains for fuel cell vehicles. [And probably other applications as well]
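As a sanity check on the quoted figures (my arithmetic, not the paper's): 2.23 wt% hydrogen, at one electron per H atom, corresponds via Faraday's constant to roughly the quoted capacity:

```python
F = 96485.0       # Faraday constant, C/mol
M_H = 1.008       # molar mass of atomic hydrogen, g/mol
wt_frac = 0.0223  # 2.23 wt% H

mAh_per_g = wt_frac / M_H * F / 3.6   # C/g -> mAh/g
print(round(mAh_per_g))               # ~593, close to the quoted 598 mAh/g
```

The small gap to 598 mAh/g may come from how the paper normalizes mass (e.g. per mass of carbon electrode rather than total).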
(Edit) A combined proton battery + graphene proton hydrolysis unit should probably keep the hydrolysis module as a separate part for replaceability?
"Gate-controlled suppression of light-driven proton transport through graphene electrodes" (2023) https://www.nature.com/articles/s41467-023-42617-4 :
> Abstract: Recent experiments demonstrated that proton transport through graphene electrodes can be accelerated by over an order of magnitude with low intensity illumination. Here we show that this photo-effect can be suppressed for a tuneable fraction of the infra-red spectrum by applying a voltage bias. Using photocurrent measurements and Raman spectroscopy, we show that such fraction can be selected by tuning the Fermi energy of electrons in graphene with a bias, a phenomenon controlled by Pauli blocking of photo-excited electrons. These findings demonstrate a dependence between graphene’s electronic and proton transport properties and provide fundamental insights into molecularly thin electrode-electrolyte interfaces and their interaction with light.
"Graphene proton transport could revolutionize renewable energy" (2023) https://interestingengineering.com/science/graphene-proton-t... :
> Scientists have found a way to speed up proton transport across graphene using light. The innovation could open up new avenues to producing green hydrogen.
AI system self-organises to develop features of brains of complex organisms
If I understand this correctly, they added a distance property to a NN and set the training to penalize increased distance between nodes, simulating a physical constraint. Under these conditions, 'hubs' emerged which facilitated connections across distance. The other observation was that "the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations".
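A minimal sketch of such a wiring-cost penalty, with hypothetical 2D coordinates and weights (not the paper's actual formulation): the extra loss term grows with both connection strength and physical distance, so training is pushed toward short, local wiring.

```python
import math

def wiring_cost(weights, coords, lam=0.1):
    """Sum of |w_ij| * euclidean_distance(i, j), scaled by lam.

    Added to the task loss, this penalizes long-range connections,
    which is what encourages local wiring (and, per the paper,
    lets hub nodes emerge to bridge distant regions)."""
    cost = 0.0
    for (i, j), w in weights.items():
        (xi, yi), (xj, yj) = coords[i], coords[j]
        cost += abs(w) * math.hypot(xi - xj, yi - yj)
    return lam * cost

# Two units 5 apart (a 3-4-5 triangle) joined by a weight of 2.0:
coords = {0: (0.0, 0.0), 1: (3.0, 4.0)}
weights = {(0, 1): 2.0}
print(wiring_cost(weights, coords))  # 0.1 * 2.0 * 5.0 = 1.0
```

In a real training loop this term would simply be added to the task loss before backpropagation.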
The work suggests that existing approaches to neural network architecture would benefit from more closely emulating the operation of the brain in this regard.
> existing approaches to neural network architecture would benefit from more closely emulating the operation of the brain in this regard.
From https://news.ycombinator.com/item?id=38334538#38336861 :
> Which NN architectures could be sufficient to simulate the entire human brain with spreading activation in 11 dimensions?
Vtracer: Next-Gen Raster-to-Vector Conversion
Maybe Facebook's Segment Anything could replace the first clustering step?
I had a similar idea the other day after fighting with Inkscape tracing! The problem with auto tracing is the lack of content awareness, so it's just shapes and colors, leading to strange objects that require lots of tinkering.
I'm going to try it: use Segment Anything to get object masks, trace each object separately, and combine from there!
> Comparing to Potrace which only accept binarized inputs (Black & White pixmap), VTracer has an image processing pipeline which can handle colored high resolution scans. tl;dr: Potrace uses a O(n^2) fitting algorithm, whereas vtracer is entirely O(n).
What is the Big-O of the algorithm with Segment Anything or other segmentation approaches?
Potrace: https://en.wikipedia.org/wiki/Potrace
The Ctrl-L to Simplify Inkscape feature attempts to describe the same path with fewer points/bezier curves.
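Inkscape's Simplify and vector tracers both reduce point counts; the Ramer-Douglas-Peucker algorithm is the textbook version of that idea (a sketch of the concept, not Inkscape's actual implementation):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop points closer than epsilon
    to the chord between the endpoints, recursing on the rest."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the endpoint chord.
        px, py = p
        num = abs((y1 - y0) * px - (x1 - x0) * py + x1 * y0 - y1 * x0)
        den = math.hypot(y1 - y0, x1 - x0)
        return num / den if den else math.hypot(px - x0, py - y0)

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]
    return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)

# A nearly straight polyline collapses to its endpoints:
line = [(0, 0), (1, 0.01), (2, -0.01), (3, 0)]
print(rdp(line, 0.1))  # [(0, 0), (3, 0)]
```

Genuine corners survive: `rdp([(0, 0), (1, 1), (2, 0)], 0.5)` keeps all three points.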
Could this approach also help with 3d digitization?
TIL about https://github.com/fogleman/primitive from "Comparison of raster-to-vector conversion software" https://en.wikipedia.org/wiki/Comparison_of_raster-to-vector... which does already list vtracer (2020)
visioncortex/vtracer: https://github.com/visioncortex/vtracer
Vector graphics https://en.wikipedia.org/wiki/Vector_graphics
Rotoscoping: https://en.wikipedia.org/wiki/Rotoscoping
Sprite (computer graphics) https://en.wikipedia.org/wiki/Sprite_(computer_graphics)
E.g. pygame-web can do SVG sprites, so you don't have to do pixel art, and sprite scaling just works.
2.5D: https://en.wikipedia.org/wiki/2.5D
3D scanning: https://en.wikipedia.org/wiki/3D_scanning
"Why Cities: Skylines 2 performs poorly" (2023) ... No AutoLOD / Level of Detail (LOD) https://news.ycombinator.com/item?id=38160089
Wavetale is a 3D game with extensive and visually impressive vector graphics.
AI is currently just glorified compression
Relevant paper: https://arxiv.org/abs/2311.13110
"White-Box Transformers via Sparse Rate Reduction" (2023) ; https://arxiv.org/abs/2311.13110 https://scholar.google.com/scholar?cites=1536453281127121652... :
> Abstract: In this paper, we contend that the objective of representation learning is to compress and transform the distribution of the data, say sets of tokens, towards a mixture of low-dimensional Gaussian distributions supported on incoherent subspaces. The quality of the final representation can be measured by a unified objective function called sparse rate reduction. From this perspective, popular deep networks such as transformers can be naturally viewed as realizing iterative schemes to optimize this objective incrementally. Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens. This leads to a family of white-box transformer-like deep network architectures which are mathematically fully interpretable. Despite their simplicity, experiments show that these networks indeed learn to optimize the designed objective: they compress and sparsify representations of large-scale real-world vision datasets such as ImageNet, and achieve performance very close to thoroughly engineered transformers such as ViT. Code is at [ https://github.com/Ma-Lab-Berkeley/CRATE.
"Bad numbers in the “gzip beats BERT” paper?" (2023) https://news.ycombinator.com/context?id=36766633
"78% MNIST accuracy using GZIP in under 10 lines of code" (2023) https://news.ycombinator.com/item?id=37583593
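The core trick in those gzip papers is normalized compression distance (NCD) plus nearest-neighbor classification; a toy stdlib sketch (not the papers' exact setup or data):

```python
import zlib

def C(s: bytes) -> int:
    """Compressed length, a rough proxy for Kolmogorov complexity."""
    return len(zlib.compress(s))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: small when a and b share structure,
    because compressing their concatenation is nearly free."""
    return (C(a + b) - min(C(a), C(b))) / max(C(a), C(b))

def classify(x: bytes, train: dict) -> str:
    """1-nearest-neighbor under NCD, as in the 'gzip beats BERT' idea."""
    nearest = min(train, key=lambda t: ncd(x, t))
    return train[nearest]

train = {b"a" * 60: "A", b"xyz" * 20: "X"}
print(classify(b"a" * 59, train))  # "A"
```

The "compression is intelligence" framing in the thread is exactly this: shared structure means shared compressed representation.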
Ask HN: Name names and thank open source maintainers of small projects!
Are there some small open source projects you really love? Take some time to thank the maintainers today. If they have a discussion forum where they welcome feedback, go thank them there. If there's nothing like that you can find, do it here. Post a comment. That's all it takes!
Mention the project(s) you love, why you love them, name the authors/maintainers you want to thank and thank them!
Try to find examples of small open source projects that you benefit from. Utilities, libraries, services, games, amusement software, everything counts!
The big projects like curl, Linux, etc. get a lot of limelight from everyone. Let us try to appreciate the small projects too that you benefit. I'm sure the authors/maintainers will appreciate this gesture!
Happy Thanksgiving!
WebGL Water
The illustrations here could probably also be so modeled: https://physics.aps.org/articles/v16/196 https://news.ycombinator.com/item?id=38369731
Newer waveguide approaches for example with dual or additional beams could also be so visualized.
Three.js interactive webgl particle wave simulator: https://threejs.org/examples/webgl_points_waves.html
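A 1D analogue of these wave demos, as a minimal leapfrog finite-difference sketch with toy parameters (the WebGL demos' actual shaders differ):

```python
def step(u, u_prev, c2=0.5):
    """One leapfrog step of the 1D wave equation with fixed ends.
    c2 is (c*dt/dx)^2 and must be <= 1 for stability (CFL condition)."""
    u_next = u[:]
    for i in range(1, len(u) - 1):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + c2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    return u_next

n = 21
u_prev = [0.0] * n
u = [0.0] * n
u[n // 2] = 1.0  # a single displaced point, like a water drop
for _ in range(5):
    u, u_prev = step(u, u_prev), u
print(u)  # the disturbance spreads symmetrically from the center
```

The WebGL version is the same update run per-texel on the GPU, with extra terms for damping and caustics.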
From https://news.ycombinator.com/item?id=38028794 re: a new ultrasound wave medical procedure:
> "Quantum light sees quantum sound: phonon/photon correlations" (2023) https://news.ycombinator.com/item?id=37793765 ; the photonic channel actually embeds the phononic field
Phonon: https://en.wikipedia.org/wiki/Phonon :
> Phonons can be thought of as quantized sound waves, similar to photons as quantized light waves.[2] However, photons are fundamental particles that can be individually detected, whereas phonons, being quasiparticles, are an emergent phenomenon. [3]
> The study of phonons is an important part of condensed matter physics. They play a major role in many of the physical properties of condensed matter systems, such as thermal conductivity and electrical conductivity, as well as in models of neutron scattering and related effects.
Electron behavior is also fluidic in Superfluids (e.g. Bose-Einstein Condensates).
SQS Superfluid Quantum Space
"Can we make a black hole? And if we could, what could we do with it?" (2022) https://news.ycombinator.com/item?id=31383784 :
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017) :
> [...] Vorticity is interpreted as spin (a particle's internal motion). Due to non-zero, positive viscosity of the SQS, and to Bernoulli pressure, these vortices attract the surrounding quanta, pressure decreases and the consequent incoming flow of quanta lets arise a gravitational potential. This is called superfluid quantum gravity.
And it's n-body and fluidic.
Curl, Spin, and Vorticity;
Vorticity: https://en.wikipedia.org/wiki/Vorticity
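Vorticity is the curl of the velocity field; for rigid-body rotation v = (-y, x) it equals 2 (twice the rotation rate) everywhere. A quick finite-difference check on that toy field:

```python
def vorticity(vx, vy, x, y, h=1e-5):
    """z-component of curl: dvy/dx - dvx/dy, by central differences."""
    dvy_dx = (vy(x + h, y) - vy(x - h, y)) / (2 * h)
    dvx_dy = (vx(x, y + h) - vx(x, y - h)) / (2 * h)
    return dvy_dx - dvx_dy

# Rigid-body rotation: v = (-y, x)  =>  vorticity 2 everywhere.
vx = lambda x, y: -y
vy = lambda x, y: x
print(vorticity(vx, vy, 1.0, 2.0))  # ~2.0
```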
From https://news.ycombinator.com/item?id=31049970 https://westurner.github.io/hnlog/#comment-31049970 ... CFD, jax-cfd, :
> Thus our best descriptions of emergent behavior in fluids (and chemicals and fields) must presumably be composed at least in part from quantum wave functions that e.g. Navier-Stokes also fit for; with a fitness function.
From "Light and gravitational waves don't arrive simultaneously" https://news.ycombinator.com/item?id=38056295 :
> TLDR; In SQS (Superfluid Quantum Space), Quantum gravity has fluid vortices with Gross-Pitaevskii, Bernoulli's, and IIUC so also Navier-Stokes; so Quantum CFD (Computational Fluid Dynamics).
Show HN: Neum AI – Open-source large-scale RAG framework
Over the last couple months we have been supporting developers in building large-scale RAG pipelines to process millions of pieces of data.
We documented our approach in an HN post (https://news.ycombinator.com/item?id=37824547) a couple weeks ago. Today, we are open sourcing the framework we have developed.
The framework focuses on RAG data pipelines and provides scale, reliability, and data synchronization capabilities out of the box.
For those newer to RAG, it is a technique to provide context to Large Language Models. It consists of grabbing pieces of information (i.e. pieces of news articles, papers, descriptions, etc.) and incorporating them into prompts to help contextualize the responses. The technique goes one level deeper in finding the right pieces of information to incorporate. The search for relevant information is done through the use of vector embeddings and vector databases.
Those pieces of news articles, papers, etc. are transformed into a vector embedding that represents the semantic meaning of the information. These vector representations are organized into indexes where we can quickly search for the pieces of information that most closely resembles (from a semantic perspective) a given question or query. For example, if I take news articles from this year, vectorize them, and add them to an index, I can quickly search for pieces of information about the US elections.
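The retrieval step described above, sketched with toy bag-of-words "embeddings" standing in for a real embedding model, over a hypothetical three-document corpus:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: word counts. A real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "results of the US elections this year",
    "a new recipe for sourdough bread",
    "quarterly earnings for the chip maker",
]
index = [(d, embed(d)) for d in docs]  # a real pipeline uses a vector DB

def search(query: str, k: int = 1):
    """Return the k documents semantically closest to the query."""
    ranked = sorted(index, key=lambda de: cosine(embed(query), de[1]),
                    reverse=True)
    return [d for d, _ in ranked[:k]]

print(search("US elections"))  # ['results of the US elections this year']
```

The framework's pipelines do this at scale: chunking, embedding via a model, and storing/querying through a vector database instead of an in-memory list.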
To help achieve this, the Neum AI framework features:
Starting with built-in data connectors for common data sources, embedding services and vector stores, the framework provides modularity to build data pipelines to your specification.
The connectors support pre-processing capabilities to define loading, chunking and selecting strategies to optimize content to be embedded. This also includes extracting metadata that is going to be associated to a given vector.
The generated pipelines support large scale jobs through a high throughput distributed architecture. The connectors allow you to parallelize tasks like downloading documents, processing them, generating embedding and ingesting data into the vector DB.
For data sources that might be continuously changing, the framework supports data scheduling and synchronization. This includes delta syncs where only new data is pulled.
Once data is transformed into a vector database, the framework supports querying of the data including hybrid search using the available metadata added during pre-processing. As part of the querying process, the framework provides capabilities to capture feedback on retrieved data as well as run evaluations against different pipeline configurations.
Try it out and if interested in chatting more about this shoot us an email founders@tryneum.com
DAIR.AI > Prompt Engineering Guide > Technics > Retrieval Augmented Generation (RAG) https://www.promptingguide.ai/techniques/rag
SEC charges Kraken for operating as an unregistered securities exchange
I feel like this won't end up going well for the SEC. This whole methodology of telling exchanges "I don't know, you figure it out" when they ask for clarity and then turning around and suing them for not figuring it out is extremely shaky legal ground.
Matt Levine I think put it best: if Bernie Madoff were to go to the SEC and ask them "I don't know how to run a Ponzi scheme legally, can you please update the guidelines to make it easier", of course the SEC is going to refuse. And the situation with cryptocurrencies seems to be broadly similar: there's actually a pretty clear answer as to what would need to be done to be fully above-the-board and legal, it's just not what the cryptocurrency people want, so they want the SEC to make it easier for them.
And given that we've seen exchange after exchange fail to perform basic tasks like "don't commingle customer funds," I have a hard time feeling any sympathy for cryptocurrency companies here.
> And given that we've seen exchange after exchange fail to perform basic tasks like "don't commingle customer funds," I have a hard time feeling any sympathy for cryptocurrency companies here.
There is nothing per se nefarious about co-mingling customer funds, provided that you are otherwise compliant with the law.
Banks, for instance, don't just co-mingle customer funds, they invest those funds on their own behalf and reap the profits for themselves. Sometimes a bank will share a portion of its profit with its customers, in the form of interest; more often, the bank pays little or no interest, and actually charges the customer fees. A bank will risk its customers' money, and its customers will pay for that privilege.
Kraken has been operating under the money transmitter licensing scheme for a decade, and like banks, money transmitters don't have any legal requirement to segregate customer funds—although they do have the responsibility to maintain sufficient cash balances or liquid investments to cover all of what they owe to customers.
Whether Kraken is breaking the law is something that will likely be decided by a court. The SEC asserts that Kraken has broken the law, but it is not up to the SEC to decide—it is up to the courts.
It is not illegal to operate a spot commodities exchange without approval from the SEC, unless those commodities are also the kinds of securities the trading of which require SEC approval. It is also not illegal to operate as an unlicensed broker-dealer of non-security commodities, or as an unlicensed custodian of non-security commodities. The SEC only has jurisdiction over securities.
It is not in the slightest bit clear yet that the crypto tokens for sale on Kraken are in fact securities of any kind. The SEC asserts that they are, but at this point it is just an assertion. The SEC will have to win in court.
> For what its worth, banks don't just co-mingle customer funds, they invest those funds on their own behalf and reap the profits for themselves.
And that is why banks are regulated, must register, follow certain rules etc.
> Kraken has been operating under the money transmitter licensing scheme for a decade
Yeah, but that doesn't give them a license to operate a securities exchange, or a bank. How many other money transmitters (that are not registered securities exchanges or banks) have 'tokens for sale' like Kraken?
> How many other money transmitters (that are not registered securities exchanges or banks) have 'tokens for sale' like Kraken?
Quite a few. In fact, all of the major centralized exchanges operating in the US are authorized to do so because they are licensed money transmitters (with the exception of some that might be operating under the NYS "Trust" licensing scheme), and none of them are broker-dealers regulated by the SEC, because until very recently the SEC has taken the position that broker-dealers are not allowed to sell crypto assets.
For years the money transmitter licensing scheme was understood to be the correct (and sufficient) licensing scheme under which a crypto exchange could legally operate in the United States. It is only since the beginning of the Biden administration that the SEC has taken the position that crypto exchanges have an obligation to register with the SEC. The Ripple lawsuit was filed at the tail end of the Trump administration (after the election), but it was not targeted at exchanges. It's also worth noting that the judge in the Ripple case found that exchange-traded XRP tokens are not securities. In other words, the SEC's assertion of authority over exchange-traded Ripple tokens was explicitly denied.
There is good reason to believe that SEC will be unsuccessful in asserting even broader authority over all tokens that are available on Kraken.
In order for such a specific claim against Kraken to be heard, isn't it necessary to first establish that such assets are themselves securities?
Am I crazy to call this vexatious harassment?
If P then Q:
If {x,y,z} are securities, [then {Exchanges A, B, and C} have provided securities exchange services for assets {x,y,z} without the requisite license and thus owe a civil fine.]
But how is a suit against Exchange A the appropriate forum to hear whether assets {x, y, or z} are securities?
Given that - presumably - assets {x,y,z} are not yet ruled to be securities, there was not sufficient cause or standing to make a claim of bad faith or intent to provide exchange services for unregistered securities.
Exchange A operated in good faith, pursued the requisite state and federal procedures for assessing whether or not such assets were securities, and specifically does not intend to sell securities.
Should there be an is_this_a_security() function of a US government regulatory agency, defendants would be required to request such review before listing said specific types of assets.
Uhh, what? I've always considered Kraken to be the most rules-abiding exchange - more so than Coinbase. They're quite shrewd in what they're willing to list.
I wonder if SEC is charging Coinbase soon, too?
I mean, Kraken is not a registered securities exchange.
They offer many cryptocurrencies.
The SEC has indicated that it considers most, probably all, cryptocurrencies to be securities.
The writing has been on the wall for a while -- Coinbase got a Wells letter, basically a "lawsuit is coming" warning, months ago.
The Coinbase Wells letter was in reference to their interest bearing offerings
Presumably Coinbase and Kraken had to register as banks to offer FDIC-insured accounts (and debit cards)?
Non-Security Deposits are interest-bearing products that are not securities.
Non-Security Deposits: CD Certificates of Deposit, MMA Money Market Accounts, Treasury Bills, Savings accounts, Checking Accounts
Do banks require SEC registration to offer interest-bearing Non-Security Deposit products?
Have banks ever been required to qualify interest-bearing products as securities contracts, after qualifying each product for list in each US State of operation?
They absolutely are not banks or none of this would be happening.
This also is about unregistered securities and not interest bearing accounts although that too is an issue because they're not banks.
edit: They're Money Services Businesses and that's it. They might have some state-level lending licenses but I'm fuzzy on that.
FDIC: https://en.wikipedia.org/wiki/Federal_Deposit_Insurance_Corp... :
> The Federal Deposit Insurance Corporation (FDIC) is a United States government corporation supplying deposit insurance to depositors in American commercial banks and savings banks.
https://www.sifma.org/resources/general/firms-guide-to-the-c.... :
> Any broker-dealer that is a member of a national securities exchange or Financial Industry Regulatory Authority (FINRA) and handles orders must report to CAT. Eligible securities include NMS stocks, listed options, and over-the-counter (OTC) equity securities.
Interledger Protocol works with any type of ledger, has a defined messaging spec, and has multi-hop audit trails: https://westurner.github.io/hnlog/#comment-36503888
Coinbase and Kraken are not registered as banks, and do not offer FDIC insured accounts.
> Do banks require SEC registration to offer interest-bearing Non-Security Deposit products?
No, because they are registered and regulated as banks.
USD deposits have FDIC protection of $250K per depositor per insured bank (x2 for joint accounts), or the balance of the account, whichever is lower, since the Great Recession [1] (before that it was $100K per account).
[1] https://en.wikipedia.org/wiki/Federal_Deposit_Insurance_Corp...
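As arithmetic, per-account coverage is just the smaller of the balance and the cap (a simplification; actual FDIC coverage also depends on ownership category):

```python
def fdic_coverage(balance: float, cap: float = 250_000.0) -> float:
    """Insured amount: the balance, up to the cap.
    The cap was 100_000 before the 2008 increase."""
    return min(balance, cap)

print(fdic_coverage(300_000))  # 250000.0 -- the 50K above the cap is uninsured
print(fdic_coverage(40_000))   # 40000.0  -- fully covered
```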
In 1999, GLBA [2] changed the 1933 Glass-Steagall rule [3] that had prevented banks from investing Savings deposits in order to ensure that they would have enough to prevent another run. (As depicted in "It's a Wonderful Life" (1946); Clarence the angel or Mr. Potter's Potterville)
I'm not sure that it's anywhere explicitly stated that the banks' socialist FDIC corporation justified allowing investing of savings deposits. They created a large shared prepaid credit line for themselves in order to operate safely.
Banks invest in non-securities; without any agreement for future performance.
Banks invest in treasuries, which are tokenizable non-security deposits.
(Some time later, the dotcom boom busted and the US went to war/oil/defense instead of clean energy (like the solar panels that were on the roof until 1980 (due to the oil crisis CPI hostage situation, when it became necessary to defensively meddle in the ME with blowback left for Obama to handle, and not pay for)))
[2] https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bl...
[3] https://en.wikipedia.org/wiki/Glass%E2%80%93Steagall_legisla...
https://help.coinbase.com/en/coinbase/other-topics/other/cli... :
> How is client cash stored at Coinbase? The vast majority of Coinbase client cash is stored in FDIC-insured bank accounts and U.S. government money market funds to keep it safe and liquid. Like all assets on Coinbase, we hold client cash 1:1 and your assets are your assets.
https://support.kraken.com/hc/en-us/articles/360001372126-Ar... :
> Are balances stored on Kraken insured? Cryptocurrency exchanges do not qualify for deposit insurance programs because exchanges are not savings institutions. Exchanges are not even meant to be cryptocurrency wallets.
https://www.investopedia.com/kraken-vs-coinbase-5120700 says that Kraken ended staking services in the US in February 2023.
There is yet no FDIC protection for any stablecoin, and yet no CBDC (just FedNow), but US banks are specifically allowed to provide crypto custody services.
Show HN: New visual language for teaching kids to code
Pickcode is a new language and editor for getting kids started with coding. The code editing experience is totally structured, where you select choices from menus rather than typing. I made Pickcode after experiences teaching kids both block coding (Scratch, App Inventor) and Python. To me, block coding is too far removed from regular coding for kids to make the connection. Pickcode provides a much clearer transition path for students to Python/JS/Java. Our target market is middle/early high school kids, and that’s who we’ve tested the product with during development.
On the site, you can do tutorials to make chatbots, animated drawings, and 2D games. We have a full Intro to Pickcode course, as well as an Intro to Python course where you make regular console programs with a regular text editor. There are 30 or so free lessons accessible with an account, and the rest are paywalled for $5/month.
For professional programmers, the editor is probably pretty frustrating to use (no vim keybindings!), but I hope it’s at least interesting to play with from a UI perspective. If you have kids aged 10-14, I’d love any feedback you have from trying it out with them. I love talking to users, reach out at charlie@pickcode.io!
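A toy illustration of the structured-editing idea, where menu choices map to generated Python (hypothetical templates, not Pickcode's actual internals): the learner only ever picks items and fills slots, yet the output is real Python they can read.

```python
# Each menu choice maps to a Python template; kids pick, never type syntax.
TEMPLATES = {
    "say":    'print({text!r})',
    "repeat": 'for _ in range({n}):',
}

def build(choices):
    """Turn a list of (menu_item, kwargs) selections into Python source.
    Block-opening templates (ending in ':') indent what follows."""
    lines = []
    indent = ""
    for item, kwargs in choices:
        lines.append(indent + TEMPLATES[item].format(**kwargs))
        if TEMPLATES[item].endswith(":"):
            indent += "    "
    return "\n".join(lines)

program = build([("repeat", {"n": 2}), ("say", {"text": "hi"})])
print(program)
# for _ in range(2):
#     print('hi')
```

Because the output is ordinary Python, the "transition path" is literal: the student can run the generated code in any interpreter.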
awesome-python-in-education > Interactive Environments: https://github.com/quobit/awesome-python-in-education#intera...
'Electrocaloric' heat pump could transform air conditioning
> The use of environmentally damaging gases in air conditioners and refrigerators could become redundant if a new kind of heat pump lives up to its promise. A prototype, described in a study published last week in Science [1], uses electric fields and a special ceramic instead of alternately vaporizing a refrigerant fluid and condensing it with a compressor to warm or cool air.
"High cooling performance in a double-loop electrocaloric heat pump" (2023) https://www.science.org/doi/10.1126/science.adi5477
[deleted]
Electrocaloric effect: https://en.wikipedia.org/wiki/Electrocaloric_effect
An equation co-written with AI reveals monster rogue waves form 'all the time'
> Until the new study, many experts believed the majority of rogue waves formed when two waves combined into a single, massive mountain of water. Based on the new equation, however, it appears the biggest influence is owed to “linear superposition.” First documented in the 1700’s, such situations occur when two wave systems cross paths and reinforce one another, instead of combining. This increases the likelihood of forming massive waves’ high crests and deep troughs. Although understood to exist for hundreds of years, the new dataset offers concrete support for the phenomenon and its effects on wave patterns.
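Linear superposition in miniature: where two wave trains cross in phase, their heights simply add, so the largest crest approaches the sum of the two amplitudes. A toy numeric check (not the paper's equation):

```python
import math

def sea(x, t):
    """Two crossing wave systems; their heights add where they overlap."""
    return 1.0 * math.sin(x - t) + 1.5 * math.sin(0.8 * x + t)

# Scan a coarse space-time grid for the largest crest. By linear
# superposition it can approach, but never exceed, 1.0 + 1.5 = 2.5 --
# taller than either wave system alone.
crest = max(sea(i * 0.01, j * 0.1) for i in range(628) for j in range(63))
print(crest)
```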
AI Proxy to swap in any LLM while using OpenAI's SDK
How does this compare to LocalAI? https://github.com/mudler/LocalAI
The AI Proxy from the post is for using multiple local and remote LLMs over the OpenAI API (along with API key management apparently), while LocalAI is only for using local LLMs.
promptfoo and ChainForge do multi-LLM comparisons and benchmarking: https://news.ycombinator.com/item?id=37447885
NVK reaches Vulkan 1.0 conformance
Can anyone ELI5 this for me?
Does this mean NVidia GPUs don't need the proprietary driver anymore? Does this put NVidia on par with Radeon via amdgpu/RADV regarding OSS Linux support?
Yes, this is indeed a replacement for the proprietary driver. However, Vulkan 1.0 is the most basic version of Vulkan. Right now we are at 1.3, plus there are many Vulkan extensions which need to be implemented aside from the core version.
In other words, this is a good start, but with only Vulkan 1.0 you won't be able to use something like DXVK, for running DirectX games with Proton/Wine.
Vulkan > History: https://en.wikipedia.org/wiki/Vulkan
Ask HN: What might Aaron Swartz have said about AI today?
I only had the pleasure of meeting Aaron once, at the YC open house after the first startup school in Cambridge. I was pitching sort of a competitor to Infogami and he helpfully whipped out his Sidekick and showed me a bunch of stuff. For the first and only time in my life, I was immediately struck by the thought of “now this is a kid who understands things.” His later work only reaffirmed my view and, though I could only watch from afar, he was critical to building a different kind of world. His blog was always insightful and a source of value.
Often, since the announcement of ChatGPT, I’ve wondered “what might Aaron have thought about this?”
Perhaps those of you who had the good fortune to know him better might share anything he might have said about AI or knowledge silos or the nature of information or free will or anything related?
Trying to find a link to the story of Aaron et al. (with declared intent) generating fraudulent ScholarlyArticles, submitting them to journals, and measuring the journal acceptance rate.
I see US vs Aaron, but no link to the ScholarlyArticle about - was it Markov chains, in like 2007? - submission of ScholarlyArticles and journal acceptance rates.
I mean, a reddit submission with markdown from nbconvert is basically a ScholarlyArticle if there's review and an IRB or similar.
Ask HN: What's the state of the art for drawing math diagrams online?
I'm interested in having high-quality math diagrams on a personal website. I want the quality to be comparable to TikZ, but the workflows are cumbersome and it doesn't integrate with MathJax/KaTeX.
Ideally I would be able to produce the diagrams in JS with KaTeX handling rendering the labels, but this doesn't seem to exist (I'm a software engineer so I'm wondering if I should try to make it...). Nice features also include having the diagram being controllable by JS or animatable, but that's not a requirement.
What are other people using?
Things I've considered:
TikZ options:
* TikZ exported to SVG
* Writing the TikZ in something else, e.g. I found this library PyTikZ, which is old but which I could update; that way at least I don't have to wrangle TikZ's horrible syntax much myself. I could theoretically write a JS version of this.
* Maybe the same thing, JS -> TikZ, but also run TikZ in WebAssembly so that the whole thing lives in the browser.
* Writing TikZ but ... having ChatGPT do it so I don't have to learn the antiquated syntax.
Non-TikZ options:
* InkScape
* JSXGraph, but it isn't very pretty
* ???
Thanks for your help!
Somewhat related but focused on animated diagrams, Manim: https://github.com/3b1b/manim From 3Blue1Brown, used in their maths videos.
Also maybe Mathbox. https://github.com/unconed/mathbox From Steve Wittens / Acko.net. ( See also https://acko.net/blog/mathbox2/ )
Manim-sideview VSCode extension w/ snippets and live preview: https://github.com/Rickaym/Manim-Sideview
Manim example gallery: https://docs.manim.community/en/stable/examples.html
From https://news.ycombinator.com/item?id=38019102 re: Animated AI, ManimML:
> Manim, Blender, ipyblender, PhysX, o3de, [FEM, CFD, [thermal, fluidic,] engineering]: https://github.com/ManimCommunity/manim/issues/3362
It actually looks like pygame-web (pygbag) supports panda3d and harfang in WASM, too; so manim with pygame for the web.
What we need is a modernized toolchain for Asymptote[0] that can run in the browser in realtime like MathJax, it has much nicer syntax than TikZ.
ipython-asymptote [1][2] probably supports Jupyter Retro (now built on the same components as JupyterLab) but not yet JupyterLite with the pyodide WASM kernel:
emscripten-forge builds things with emscripten to WASM packages. [4]
JupyterLite supports micropip (`import micropip; await micropip.install(["pandas",])`).
Does micromamba work in JupyterLite notebooks?
"DOC: How to work with emscripten-forge in JupyterLite" https://github.com/emscripten-forge/recipes/issues/699
[1] https://github.com/jrjohansson/ipython-asymptote/tree/master
[2] examples: https://notebook.community/jrjohansson/ipython-asymptote/exa...
LLMs cannot find reasoning errors, but can correct them
If this is the case, then just run it X times till error rate drops near 0. AGI solved.
This is called (algorithmic) convergence: does the model stably converge upon one answer which it believes is most correct? And after how much time and how many resources?
Convergence (evolutionary computing) https://en.wikipedia.org/wiki/Convergence_(evolutionary_comp...
Convergence (disambiguation) > Science, technology, and mathematics https://en.wikipedia.org/wiki/Convergence#Science,_technolog...
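Under an (unrealistic for LLMs) independence assumption, majority voting over repeated runs does drive the error rate down - Condorcet's jury theorem in miniature:

```python
from math import comb

def p_majority_correct(p: float, n: int) -> float:
    """Probability that a majority of n independent runs, each correct
    with probability p, returns the right answer; n odd."""
    k = n // 2 + 1  # votes needed for a majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A 70%-accurate model, voted over more and more runs:
for n in (1, 3, 11, 101):
    print(n, round(p_majority_correct(0.7, n), 4))
```

The catch for "AGI solved" is the independence assumption: repeated LLM samples share the same weights and biases, so their errors are correlated and the vote converges to the model's systematic answer, not necessarily the correct one.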
OpenAI staff threaten to quit unless board resigns
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
They could make ChatGPT++
“Microsoft Chat 365”
Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.
At least in this forum can we please stop calling something that is not even close to AGI, AGI. It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs are called machine learning.
And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?
Because LLMs just mimic human communication based on massive amounts of human generated data and have 0 actual intelligence at all.
It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.
One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional
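The material conditional is easy to check mechanically, since "if P then Q" is equivalent to (not P) or Q:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if P then Q" is logically equivalent to (not P) or Q.
    return (not p) or q

# The conditional is false in exactly one row: P true, Q false.
table = {(p, q): implies(p, q) for p, q in product([True, False], repeat=2)}
```

Note the quoted "argument" above doesn't have this form anyway; its conclusion restates an unstated assumption rather than following from the premises.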
Does it do logical reasoning or inference before presenting text to the user?
That's a lot of waste heat.
(Edit) Or is it just next-word prediction?
"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285
"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
[deleted]
The human brain builds structures in 11 dimensions, discover scientists
Which NN architectures could be sufficient to simulate the entire human brain with spreading activation in 11 dimensions?
- citing the same paper: https://news.ycombinator.com/item?id=18218504
NetworkX does clique identification [1] in memory, and it looks like CuGraph does not yet have a parallel implementation [2]
[1] https://networkx.org/documentation/stable/reference/algorith...
[2] CuGraph docs > List of Supported and Planned Algorithms: https://docs.rapids.ai/api/cugraph/stable/graph_support/algo...
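For reference, NetworkX's `find_cliques` enumerates maximal cliques with the Bron-Kerbosch algorithm; a minimal pure-Python sketch of the basic version (without the pivoting optimization NetworkX uses):

```python
def bron_kerbosch(R, P, X, adj, out):
    # Basic Bron-Kerbosch: R is the growing clique, P the candidates,
    # X the already-processed vertices. Report R when P and X are empty.
    if not P and not X:
        out.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

def maximal_cliques(edges):
    """All maximal cliques of an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    out = []
    bron_kerbosch(set(), set(adj), set(), adj, out)
    return sorted(out)
```

Worst-case runtime is exponential (there can be exponentially many maximal cliques), which is why a parallel GPU implementation would be attractive.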
Cryptographers solve decades-old privacy problem
It's an exciting time to be working in homomorphic encryption!
Homomorphic encryption and zero knowledge proofs are the most exciting technologies for me for the past bunch of years (assuming they work (I'm not qualified enough to know)).
Having third parties compute on encrypted, private data, and return results without being able to know the inputs or outputs is pretty amazing.
Can you give an example of a useful computation someone would do against encrypted data?
Trying to understand where this would come in handy.
Grading students' notebooks on their own computers without giving the answers away.
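As a toy illustration of the principle (not the construction in the article, and emphatically not secure): textbook RSA with tiny primes and no padding is multiplicatively homomorphic, so a third party can multiply two ciphertexts without learning either plaintext. Fully homomorphic schemes support both addition and multiplication:

```python
# Toy homomorphic property demo. NOT secure: tiny primes, no padding.
p, q = 1009, 1013
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 17, 23
# A third party multiplies the ciphertexts without ever seeing a or b:
product_ct = (enc(a) * enc(b)) % n
assert dec(product_ct) == (a * b) % n
```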
https://news.ycombinator.com/item?id=37981190 :
> How can they be sure what's using their CPU?
Firefox, Chrome: <Shift>+<Escape> to open about:processes
Chromebook: <Search>+<Escape> to open Task Manager
An automatic indexing system for Postgres
> A fundamental decision we've made for the pganalyze Indexing Engine is that we break down queries into smaller parts we call "scans". Scans are always on a single table, and you may be familiar with this concept from reading an EXPLAIN plan. For example, in an EXPLAIN plan you could see a Sequential Scan or Index Scan, both representing a different scan method for the same scan on a given table.
Sequential scan == Full table scan: https://en.wikipedia.org/wiki/Full_table_scan
Yes, and a neat thing about indexes: sometimes it’s faster to do a sequential scan than load an index into memory.
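A runnable illustration of the scan concept, using Python's bundled SQLite rather than Postgres (the planners differ, but the sequential-scan vs. index-scan distinction is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, str(i)) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry a human-readable detail string last.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM t WHERE id = 500")   # full (sequential) scan
con.execute("CREATE INDEX t_id ON t (id)")
after = plan("SELECT * FROM t WHERE id = 500")    # now an index scan
```

In Postgres you'd see the analogous `Seq Scan` node become an `Index Scan` in `EXPLAIN` output.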
[deleted]
Show HN: Open-source tool for creating courses like Duolingo
I'm launching UneeBee, an open-source tool for creating interactive courses like Duolingo:
GitHub repo: https://github.com/zoonk/uneebee Demo: https://app.uneebee.com/
It's pretty early-stage, so there's a lot of things to improve. Everything on this project is going to be public, so you can check the roadmap on GitHub too: https://github.com/orgs/zoonk/projects/11
I'm creating this project because I love Duolingo and I wanted the same kind of experience to learn other things as well.
But I think this could be useful to other people too. I'll soon launch three products using UneeBee:
- Wikaro: Focused on enterprise. It allows companies to have their own white-label Duolingo. I think this is going to be great for onboarding and internal training.
- Educasso: Focused on schools. It will allow teachers to easily create interactive lessons compliant with the local school curriculum. I want to make it in a way that saves teachers' time, so they can focus more on their students rather than lesson planning.
- Wisek: Marketplace for interactive courses where creators will be able to earn money creating those courses.
I'm not sure this is going to work out but, worst case scenario, I'll have products that I can use myself because I'm a terrible learner using traditional ways. Interactive learning is super useful to me, so I hope it will be to other people too.
If you have some spare time, please give me your brutal feedback. I really want to improve this product, so no need to be nice - just let me know your thoughts. :)
PS. I'm also launching it on Product Hunt: https://www.producthunt.com/posts/uneebee
Notes from LitNerd (YC S21) re: IPA, "Duolingo's language notes all on one page", Sozo's vowel and consonant videos, and Captionpop's synced YouTube videos with subtitles in multiple languages: https://news.ycombinator.com/item?id=28309645
Spaced repetition and Active recall testing like or with Mnemosyne or Anki probably boost language retention like they increase flashcard recall: https://en.wikipedia.org/wiki/Anki_(software)
ENH: Generate Anki decks with {IPA symbols, Greek letters w/ LaTeX for math and science, Nonregional (Midland American) English}
Google translate has IPA for some languages.
"The English Pronunciation / International Phonetic Alphabet Anki Deck" https://www.towerofbabelfish.com/ipa-anki-deck/
"IPA Spanish & English Vowels & Consonants" https://ankiweb.net/shared/info/3170059448
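Anki's scheduler descends from SuperMemo's SM-2 algorithm; a simplified sketch of one SM-2 review step (Anki's actual scheduling differs in details):

```python
def sm2_review(quality, reps, interval, ease):
    """One simplified SM-2 review step. quality: 0-5 self-grade.
    Returns (reps, interval_days, ease) for the next review."""
    if quality < 3:                      # failed recall: start the card over
        return 0, 1, ease
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Ease factor drifts with grade quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ease
```

Three successful reviews at quality 4 from a fresh card (ease 2.5) yield intervals of 1, 6, then 15 days, which is the exponential spacing that drives the retention gains.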
I don't know Elixir and so the hypothetical contribution barrier for nontrivial commits includes learning Erlang / Elixir.
The LearnXinYMinutes tuts are succinct and on GitHub, open to PRs that fix typos, reorder the language-learning sequence, and/or add content with comments.
LearnXinYMinutes > Elixir: https://learnxinyminutes.com/docs/elixir/
FWIW, elixir is now my favorite language. Learning elixir is one of the things I thank past self for doing!
How the gas turbine conquered the electric power industry
Nit pick:
The article explains why many years have passed from the development of efficient steam turbines until the development of efficient gas turbines, due to the differences between the Rankine Cycle and the Brayton cycle.
Even if Americans like to name the Joule cycle the Brayton cycle, the American name does not have any justification.
While George B. Brayton patented his engine in 1872, James Prescott Joule had already published in 1851, 21 years earlier, a scientific paper (“On the Air-Engine”) describing what is now named the Joule cycle, a.k.a. the Brayton cycle.
While the Brayton patent contained very little information, the paper published by Joule was very important: it described in great detail how to design an engine using this thermodynamic cycle and why it is useful.
Moreover, in 1859, 13 years before Brayton, William John Macquorn Rankine published a very influential manual (“A Manual of the Steam Engine and other Prime Movers”) in which all the thermodynamic cycles used in engines known at that time were classified, and he already referred to this cycle as Joule's cycle.
Therefore there is no doubt about the priority of the term "Joule cycle" over "Brayton cycle".
An interesting fact that is usually not mentioned in most manuals is that at equal maximum temperature and maximum pressure (which are typically limited by the materials used for the engine), the Joule cycle is more efficient than either the Atkinson cycle or the Otto cycle, so the fact that it is the easiest thermodynamic cycle to approximate in a gas turbine is favorable for its efficiency.
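For the simpler fixed-ratio comparison, the air-standard textbook efficiencies can be computed directly (note these formulas hold the pressure/compression ratio fixed; the equal-peak-temperature-and-pressure comparison described above requires a fuller cycle analysis):

```python
# Air-standard ideal-cycle thermal efficiencies (gamma = cp/cv for air).
gamma = 1.4

def brayton_joule_eta(pressure_ratio):
    # Ideal Joule/Brayton cycle: eta = 1 - r_p^(-(gamma-1)/gamma)
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

def otto_eta(compression_ratio):
    # Ideal Otto cycle: eta = 1 - r^(-(gamma-1))
    return 1.0 - compression_ratio ** (-(gamma - 1.0))
```

At a pressure ratio of 10, the ideal Joule cycle gives about 48% thermal efficiency; an Otto cycle at compression ratio 8 gives about 56% on its own ideal terms, which is why the comparison basis (equal ratio vs. equal peak conditions) matters.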
https://news.ycombinator.com/item?id=33431427 :
> FWIU, heat engines are useful with all thermal gradients: pipes, engines, probably solar panels and attics; "MIT’s new heat engine beats a steam turbine in efficiency" (2022) https://www.freethink.com/environment/heat-engine
"Thermophotovoltaic efficiency of 40%" (2022) https://www.nature.com/articles/s41586-022-04473-y https://scholar.google.com/scholar?cites=1419736444024563175...
"Capturing Light From Heat at 40% Efficiency, NREL Makes Big Strides in Thermophotovoltaics" (2022) https://www.nrel.gov/news/program/2022/capturing-light-from-.... :
> The 41%-efficient TPV device is a tandem cell—a photovoltaic device built out of two light-absorbing layers stacked on top of each other and each optimized to absorb slightly different wavelengths of light. The team achieved this record efficiency through the usage of high-performance cells optimized to absorb higher-energy infrared light when compared to past TPV designs. This design builds on previous work from the NREL team.
> Another crucial design feature leading to the high efficiency is a highly reflective gold mirror at the back of the cell. Much of the emitted infrared light has a longer (less energetic) wavelength than what the cell's active layers can absorb. This back surface reflector bounces 93% of that unabsorbed light back to the emitter, where it is reabsorbed and reemitted, improving the overall efficiency of the system. Further improvements to the reflectance of the back reflector could drive future TPV efficiencies close to or above 50%.
Thermoelectric effect: https://en.wikipedia.org/wiki/Thermoelectric_effect
Thermophotovoltaic energy conversion: https://en.wikipedia.org/wiki/Thermophotovoltaic_energy_conv...
Thermophotonics: https://en.wikipedia.org/wiki/Thermophotonics
Gas turbine: https://en.wikipedia.org/wiki/Gas_turbine :
> gross thermal efficiency exceeds 60%. [100] (2011)
GE-7HA https://www.ge.com/news/press-releases/ha-technology-now-ava... (2017) :
> that its largest and most efficient gas turbine, the HA, is now available at more than 64 percent efficiency in combined cycle power plants, higher than any other competing technology today.
How do TPV operating and lifecycle costs differ from gas turbine's costs?
TODO; though/also - after a gas turbine or solid-state TPV cell array - you have to store electricity, which is lossy and inefficient:
An electric motor's efficiency is not necessarily the same as its generator efficiency in reverse.
Gravitational Potential Energy
CAES Compressed Air Energy Storage:
Solar thermal energy: https://en.wikipedia.org/wiki/Solar_thermal_energy :
> Electrical conversion efficiency: Of all of these technologies the solar dish/Stirling engine has the highest energy efficiency. A single solar dish-Stirling engine installed at Sandia National Laboratories National Solar Thermal Test Facility (NSTTF) produces as much as 25 kW of electricity, with a conversion efficiency of 31.25%. [66]
Szilard-Chalmers MOST process: https://news.ycombinator.com/item?id=34027647 ...18 years at what conversion efficiency?
A PCIe Coral TPU Finally Works on Raspberry Pi 5
Would an HBM3E HAT make TPUs more useful with a Raspberry Pi 5, or not yet?
Jetson Nano (~$149)
Orin Nano (~$499, 32 tensor cores, 40 TOPS)
AGX Orin (200-275 TOPS)
NVIDIA Jetson > Origins: https://en.wikipedia.org/wiki/Nvidia_Jetson#Versions
TOPS for NVIDIA [Orin] Nano [AGX] https://connecttech.com/jetson/jetson-module-comparison/
Coral Mini-PCIe ($25; ? tensor cores, 4 TOPS (int8); 2 TOPS per watt)
TPUv5 (393 TOPS)
Tensor Processing Unit (TPU) https://en.wikipedia.org/wiki/Tensor_Processing_Unit
AI Accelerator > Nomenclature: https://en.wikipedia.org/wiki/AI_accelerator
NVIDIA DLSS > Architecture: https://en.wikipedia.org/wiki/Deep_learning_super_sampling#A... :
> DLSS is only available on GeForce RTX 20, GeForce RTX 30, GeForce RTX 40, and Quadro RTX series of video cards, using dedicated AI accelerators called Tensor Cores. [23][28] Tensor Cores are available since the Nvidia Volta GPU microarchitecture, which was first used on the Tesla V100 line of products.[29] They are used for doing fused multiply-add (FMA) operations that are used extensively in neural network calculations for applying a large series of multiplications on weights, followed by the addition of a bias. Tensor cores can operate on FP16, INT8, INT4, and INT1 data types.
Vision processing unit: https://en.wikipedia.org/wiki/Vision_processing_unit
Versatile Processor Unit (VPU)
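The FMA pattern described in the DLSS quote (accumulate weight-input products, then add a bias) can be sketched minimally in pure Python; this illustrates only the arithmetic that tensor cores fuse in hardware at FP16/INT8/INT4 precision, not any NVIDIA API:

```python
def dense_layer(W, x, b):
    """y = W @ x + b, written out as the multiply-accumulate-plus-bias
    pattern that fused multiply-add hardware executes in one step."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]
```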
Wikidata, with 12B facts, can ground LLMs to improve their factuality
Can it though?
LLMs are currently trained on actual language patterns, and pick up facts that are repeated consistently, not one-off things -- and within all sorts of different contexts.
Adding a bunch of unnatural "From Wikidata, <noun> <verb> <noun>" sentences to the training data, severed from any kind of context, seems like it would run the risk of:
- Not increasing factual accuracy because there isn't enough repetition of them
- Not increasing factual accuracy because these facts aren't being repeated consistently across other contexts, so they result in a walled-off part of the model that doesn't affect normal writing
- And if they are massively repeated, all sorts of problems with overtraining and learning exact sentences rather than the conceptual content
- Either way, introducing linguistic confusion to the LLM, thinking that making long lists of "From Wikidata, ..." is a normal way of talking
If this is a technique that actually works, I'll believe it when I see it.
(Not to mention that I don't think most of the stuff people are asking LLMs for is represented in Wikidata. Wikidata-type facts are already pretty decently handled by regular Google.)
Well that's not actually how it works - they are just getting a model (WikiSP & EntityLinker) to write a query that responds with the fact from Wikidata. Did you read the post or just the headline?
Besides, let's not forget that humans are also trained on language data, and although humans can also be wrong, if a human memorised all of Wikidata (by reading sentences/facts in 'training data') it would be pretty good in a pub-quiz.
Also, we obviously can't see anything inside how OpenAI train GPT, but I wouldn't be surprised if sources with a higher authority (e.g. wikidata) can be given a higher weight in the training data, and also if sources such as wikidata could be used with reinforcement learning to ensure that answers within the dataset are 'correctly' answered without hallucination.
Ah, I did misunderstand how it worked, thanks -- I was looking at the flow chart and just focusing on the part that said "From Wikidata, the filming location of 'A Bronx Tale' includes New Jersey and New York" that had an arrow feeding it into GTP-3...
I'm not really sure how useful something this simple is, then. If it's not actually improving the factual accuracy in the training of the model itself, it's really just a hack that makes the whole system even harder to reason about.
The objectively true data part?
Also there's Retrieval Augmented Generation (RAG) https://www.promptingguide.ai/techniques/rag :
> For more complex and knowledge-intensive tasks, it's possible to build a language model-based system that accesses external knowledge sources to complete tasks. This enables more factual consistency, improves reliability of the generated responses, and helps to mitigate the problem of "hallucination".
> Meta AI researchers introduced a method called Retrieval Augmented Generation (RAG) to address such knowledge-intensive tasks. RAG combines an information retrieval component with a text generator model. RAG can be fine-tuned and its internal knowledge can be modified in an efficient manner and without needing retraining of the entire model.
> RAG takes an input and retrieves a set of relevant/supporting documents given a source (e.g., Wikipedia). The documents are concatenated as context with the original input prompt and fed to the text generator which produces the final output. This makes RAG adaptive for situations where facts could evolve over time. This is very useful as LLMs's parametric knowledge is static.
> RAG allows language models to bypass retraining, enabling access to the latest information for generating reliable outputs via retrieval-based generation.
> Lewis et al., (2021) proposed a general-purpose fine-tuning recipe for RAG. A pre-trained seq2seq model is used as the parametric memory and a dense vector index of Wikipedia is used as non-parametric memory (accessed using a neural pre-trained retriever). [...]
> RAG performs strong on several benchmarks such as Natural Questions, WebQuestions, and CuratedTrec. RAG generates responses that are more factual, specific, and diverse when tested on MS-MARCO and Jeopardy questions. RAG also improves results on FEVER fact verification.
> This shows the potential of RAG as a viable option for enhancing outputs of language models in knowledge-intensive tasks.
So, with various methods, I think having ground facts in the process somehow should improve accuracy.
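A minimal sketch of the retrieve-then-generate flow described in the quote, with a toy word-overlap retriever standing in for the dense vector index (function names here are illustrative, not from any library):

```python
def retrieve(query, docs, k=2):
    # Toy retriever: rank documents by word overlap with the query.
    # (Real RAG uses a neural retriever over a dense vector index.)
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    # Concatenate retrieved documents as context ahead of the question;
    # this combined prompt is what gets fed to the generator model.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The key property is that the knowledge lives in the retrieved documents, so it can be updated without retraining the model.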
[deleted]
Is Delaware the cheapest place to incorporate?
I am living in Taiwan and want to create a startup. The business will be mostly open source and likely to have low to no revenue.
I see that US states like Colorado have no franchise tax. But I also saw posts here that Delaware is usually ultimately cheaper.
What is the recommendation for a company to manage an open source project? Sure it might be worth money, but likely not, so I would like to keep money tight.
thanks!
There are many Open Source Software foundations that specialize in stewarding open source intellectual property and also open governance. Linux Foundation, Apache Software Foundation, [...]
Very few software licenses accept liability, including open source software licenses. Is that conscionable? Service Level Agreements (99% uptime and ZenDesk email-in customer support or better etc) cost money.
E.g. LegalZoom (no affiliation) has affiliate attorneys in many states, including Delaware.
It may or may not be common for open source software projects to register their trademark and/or DBA (Doing Business As) in each state: of operation, of labor law applicability (especially if there are remote workers).
GitHub (now Microsoft owned) supports FUNDING.yml files to display sponsor buttons for projects: https://docs.github.com/en/repositories/managing-your-reposi...
"Sponsors is expanding" (2023-10) https://github.blog/2023-10-03-sponsors-is-expanding/ :
> GitHub Sponsors now supports 103 regions!
E.g. WebMonetization.org supports the W3C Interledger spec (ILP Protocol), which can connect traditional and digital asset ledgers. GitHub supports a number of ~payments/donations providers but not yet any w/ Interledger FWICS?
> Did you know? We recently launched the ability for self-serve enterprise customers to allow member organizations to easily create sponsorships. Today, more than nine in 10 companies use open source software in at least some capacity. Knowing this, we enhanced our invoice process for organizations, making it easier for organizations to sign up and request invoicing as a payment method for sponsorships.
> Additionally, we are making it easier for self-serve enterprise customers to grant their member organization permission to create sponsorships
From the GH Sponsors FAQ re a Matching Fund https://github.blog/2019-06-12-faq-with-the-github-sponsors-... :
> Can’t people just steal money from the matching fund?: We have a rigorous vetting process for the sponsored developers who receive the match. If you happened to see the application form at github.com/sponsors, you’ll notice we ask a lot of questions that support this process. We’re also introducing more measures—including an extensive identity verification and antifraud program in partnership with Stripe—as we grow the program this summer.
YouTube may face criminal complaints in EU for using ad-block detection scripts
Why do you pretend that you are entitled to free video storage and bandwidth from YouTube? You haven't been slighted. That service costs money to provide.
No, you don't have a right to free service either. Do you pay your other bills while you demand free rendered services from these companies?
Having a disability or similar does not entitle you to an unsubsidized Times Square with no ads.
(NASA is running their own streaming network and competing; it can be done. The EU should try to run competing free video streaming businesses before shoving preferred American companies around with anti-competitive claims. The EU hasn't run a video hosting business that's been prevented from competing by the success, existence, and approved mergers and acquisitions of American media companies; and so EU video hosting businesses haven't and can't have been anti-competitively disadvantaged.)
I also run ad blockers for various justifiable reasons; but I don't tell myself that I have a right to free shtuff.
How the heck can you require only Netflix to host 30% local EU content and also demand free video streaming service with no ads?
> Why do you pretend that you are entitled to free video storage and bandwidth from YouTube?
Because they offered it for free.
YouTube can close the doors any time. If they want my money, they can make a service offering that meets my needs. They could charge content providers for bandwidth and storage and meter it with assisted ad-support networks. They could charge a price I'm willing to pay.
But they don't and I will not accept any argument that consuming resources they put into the public sphere for free use means I am under any moral obligation to either give them money or facilitate them making money off of my traffic.
The only time ads worked was when Google made them an unobtrusive part of search. They dominate literally every piece of software I use now. I'm sorry but I say burn it all to the ground. I will either pay for or build its replacement.
You don't want ad-blockers? Shut it down. I was doing the internet before there was a need for them.
> Because they offered it for free.
The Information Service Provider offered it for "free with ads".
Do you otherwise support paying creators for their work, if not through YouTube's system for compensating creators?
My Service Provider License Agreement states that I am able to circumvent any ad technology when using my own devices. It is free. The "with ads" part is someone else's opinion.
If you don't want ad-blockers, shut it down.
You're not going to shut them down, it sounds like they shut you down. Just pony up the $100/year. It's the cheapest television has ever been. This is hacker news, not hobo news. If you've been using the Internet this long, then you should remember the outrageous amounts of money people were paying for cable television back in the 90's. And that still had ads.
Most content creators are either doing it for free, or getting a fraction of a cent on the dollar. As far as I'm concerned, that makes it communication infrastructure, not service provision. YouTube is providing the infrastructure; the creators are providing the service.
As I think that infrastructure should be publically owned, I'm happy to do my bit for nationalization, and use adblock.
Youtube may very well charge for whatever they want. But pretending to offer the service for free and looking at your information (in the form of browser capabilities and other PII), is like you entering a store and the owner going through your wallet just because you want to browse around
I really want to understand the psychology of the people who show up and comment like the OP did.
Does he work for Google? For Youtube? More than a few people here on HN must, right? But there are so many like him, it can't be the explanation all the time. Does he worship big tech companies to the point that if he shills for them, he believes that career success will rain down on him, like some sort of occupational cargo culting thing? Is he some amateur Deviant Art person, who shills for obscene copyright maximalism for similar reasons? Is it just that he does the one sort of 4chan-level shitposting that people can get away with on HackerNews, for the same reason the 4chan people do their thing there (whatever that is)?
It's weird. It's both the one topic you can see that stuff here on, where it isn't wished away to the cornfield. And the people doing it seem confident they can get away with it.
> the psychology of
Here's this: "I hate ads. My time is valuable. I want free things. What I do is justifiable. Especially when you consider moral relativism and their practices."
And this: "I hate war, but the corporate media of the 2000s sold it to me and then didn't pay the bill; so we need citizen media."
I don't think we're entitled to free video streaming; maybe because I remember how much it costs to host an .mp4 without HLS on a shared hosting account without nginx-rtmp-module (C and ffmpeg, not yet Rust) and how long it takes to encode video without buying custom hardware video encoding accelerator cards and low-volume ASIC/FPGA TLS load balancer accelerators because now video over HTTPS, and because I don't want to pull media from your MediaGoblin tube site.
I don't support artists suing fans listening from the streets.
Artists are entitled to proceeds from their work if that's how they want to run the show.
I do support paying artists with the audio fingerprinting that YouTube pioneered.
(As an artist and a visual artist - it doesn't matter what kind - I don't want to ruin YouTube with payout demands; but if musical artists are due their cut for their plays, then visual artists are too. No musical artists have yet stood up for the plight of visual artists. Nobody has yet determined how to pay everyone on a production with a smart contract that gives them their fair cut for their contribution to the collaborative art project.)
It costs money to encode, host, and moderate video, live video, comments, and live chats.
It costs money to stream video.
Good content costs money to create, in an ad hominem-d influencer-affected landscape devoid of critical thinking and media literacy.
Artists don't get paid when you stream for free without ads or premium subscription.
How can we pay artists and content producers and privately bootstrapped infrastructure if the marginal cost of a stream is not offset by the marginal returns of a stream?
Content creators have real costs.
--
Copyleft is my decision as an artist. Open Source is my decision as a developer who can't donate their services to charity.
--
Anti-competitive anti-competition context:
What's not fair, anticompetitively? Tying, Bundling, Exclusive agreements, Price fixing, Colluding cartels (for non-essential commodities), Bribery, Kickbacks, Becoming a lobbyist without waiting a fair amount of time first,
What is fair? Selling to the highest bidder. Approved mergers and acquisitions. Strategies against hostile corporate takeover. Taking the bank's money and your creditworthiness and bootstrapping. Penny-pinching to scale and gain market share. Appeasing shareholders/owners. Charging people when they use hours of free services per week.
I'm interested that music is your hook here and ensuring artists are paid for their work (which I also agree with), but you stated before that you also use ad blockers, so how do you support those non video/audio content producers whose written or other media you consume with ad blockers? (I assume you agree that content creation is also a worthwhile endeavour, otherwise you wouldn't be bothering to read the online content that requires you to use ad blocking.)
What is different to you between these scenarios?
I can otherwise support artists and their lifestyles or not by subscribing, buying their music, tickets to live shows, jam cruise, merch, and free word of mouth; free mentions
That they've agreed to sell their content on a network with ads (which in particular enables low-income folks to be fans) does not entitle me to free shtuff from them; though I may also have a justified medical reason for tuning out ads entirely unless they're funny.
That's not the issue here. The issue is that YT is (arguably) using illegal measures to enforce their own rights. Just because someone infringes on your rights doesn't make it legal for you to infringe back on theirs.
You do not have the right to stream video for free from YouTube.com.
That they offer an ad-supported service does not entitle us to an ad-free service.
Limiting playback without ads is not beyond the rights of the information service provider.
I'm not contesting the legitimacy of YT fighting ad blockers. I'm merely pointing out that the claim here is that they are doing so using illegal means (by using what can be legally seen as spyware without the user's consent, which is illegal in the EU, according to the plaintiff).
In other words, the issue is not that YT fights back, it's how they do it.
Maybe it's a lack of technical understanding.
Are you familiar with kegerator systems with usage quotas? ("Free as in beer")
How can a computerized kegerator system (or a bartender) limit a person to a specific number of drafts from the tap? Is that spyware, or what you agree to when you draw from their kegs?
This is a 1 dimensional view of things.
Maybe I can simplify it for you with this question:
Does Google have the ability to legally compel you to only use Chrome?
I'm guessing you would say no - antitrust and whatnot. So the next follow-up is: does Google have the ability to tell blind people they cannot use screen readers? Or that people on Linux can't browse the site in lynx?
Again, I am guessing the answer is no - there are anti-competitive, antitrust, and 230(c) reasons why. Legally, ad blocking is fine to do. Here's a great article going over it: https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?art...
HOWEVER - I disagree with the premise that YouTube can't try to stop adblockers - they just need to do so in a way that doesn't specifically target a user. Twitch did a system where they would not send the video stream to you until the advertisement was done playing (which was embedded in the feed itself) - so if you blocked the ad somehow, you would just look at a black screen for 15-30 seconds. This, in my opinion, would be completely compliant.
> would not send the video stream to you until the advertisement was done playing
Is this really what consumers prefer?
Logged-in users necessarily carry state in some way such that they are identifiable as a logged-in user. "Session cookies" (and 'super cookies' etc) are standard practice for tracking which users are logged in.
YouTube does not and has not required login to view creators' videos and shorts.
And now don't they - just YouTube, hopefully - have to require login for their TOS to be a recognized agreement that authorizes determining whether the user is logged in and isn't stealing hours of free services?
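For what it's worth, the standard mechanism for carrying logged-in state is a signed session token; a minimal stdlib sketch (the SECRET value and function names are illustrative, and real systems use a framework or a server-side session store):

```python
import hmac, hashlib

SECRET = b"server-side secret"  # illustrative; never hardcode a real key

def make_session_cookie(user_id: str) -> str:
    # Sign the user id so the client can't forge a logged-in state.
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_session_cookie(cookie: str):
    # Recompute the signature and compare in constant time.
    user_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None
```

The point is that identifying logged-in users requires no hidden detection script at all; the cookie itself carries the state.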
Nobody claims to be entitled to free video, just a reasonable amount of advertising; which is not what I (for example) get when I go on YouTube.
Google already has a definitive solution: close down free access behind registration + a (reasonable) payment. Why are they still serving free content? Why don't they get the money from the viewers directly?
I think the reason is that they are inflating the number of viewers to everyone (content creators, stakeholders, etc.). You can fake viewers; you can't fake revenue. So Google wants us to see more advertising so that they can claim that the number of ad views increased, to earn more.
Why are you/they trying to force YouTube to require login to view video?!
Isn't this about privacy!? How can free video plays have privacy if login is required to prevent freeloading hours of free service that others pay for?
That would be a significant pivot away from free video that democratizes video, and from video URLs that people share to walled garden video URLs.
I wouldn't mind watching some ads from time to time. However, these guys have gotten too greedy with their advertisements and "buy premium" shit popping up every now and then. Heck, my experience without a blocker was two freaking 15-second inserts every 5 minutes of video.
I think a lot of that is the (musical) artists/businesspeople demanding a higher (and most reasonable) cut, so we all get more ads.
Just think how expensive YouTube would be if artists started demanding royalties for works that match their video fingerprints (in addition to the payouts according to audio fingerprints that YouTube pioneered).
Encoding, moderation, storage, and bandwidth cost money. I'm sure the YouTube financials and margin are posted.
A video streaming service can afford to operate without ads only if: _____.
You understand this if you've ever tried to host (multiply-reencoded) video on a shared hosting service with a bandwidth quota for $10-$20 a month.
https://WebMonetization.org/ is one proposed solution to advertising-supported media.
AI chemist finds molecule to make oxygen on Mars after sifting through millions
From the article:
> The AI chemist used a robot arm to collect samples from the Martian meteorites, then it employed a laser to scan the ore. From there, it calculated more than 3.7 million molecules it could make from six different metallic elements in the rocks — iron, nickel, manganese, magnesium, aluminum and calcium.
> Within six weeks, without any human intervention, the AI chemist selected, synthesized and tested 243 of those different molecules.
I expected something less automated. It took a few readings to see that the robotic research labs from the computer games I used to play are now here.
"Automated synthesis of oxygen-producing catalysts from Martian meteorites by a robotic AI chemist" (2023) https://www.nature.com/articles/s44160-023-00424-1 :
> Living on Mars requires the ability to synthesize chemicals that are essential for survival, such as oxygen, from local Martian resources. However, this is a challenging task. Here we demonstrate a robotic artificial-intelligence chemist for automated synthesis and intelligent optimization of catalysts for the oxygen evolution reaction from Martian meteorites. The entire process, including Martian ore pretreatment, catalyst synthesis, characterization, testing and, most importantly, the search for the optimal catalyst formula, is performed without human intervention. Using a machine-learning model derived from both first-principles data and experimental measurements, this method automatically and rapidly identifies the optimal catalyst formula from more than three million possible compositions. The synthesized catalyst operates at a current density of 10 mA cm−2 for over 550,000 s of operation with an overpotential of 445.1 mV, demonstrating the feasibility of the artificial-intelligence chemist in the automated synthesis of chemicals and materials for Mars exploration.
Terraforming: https://en.wikipedia.org/wiki/Terraforming
Ethics of terraforming: https://en.wikipedia.org/wiki/Ethics_of_terraforming
Terraforming of Mars: https://en.wikipedia.org/wiki/Terraforming_of_Mars :
> Mars doesn't have an intrinsic global magnetic field, but the solar wind directly interacts with the atmosphere of Mars, leading to the formation of a magnetosphere from magnetic field tubes.[14] This poses challenges for mitigating solar radiation and retaining an atmosphere.
> The lack of a magnetic field, its relatively small mass, and its atmospheric photochemistry, all would have contributed to the evaporation and loss of its surface liquid water over time.[15] Solar wind–induced ejection of Martian atmospheric atoms has been detected by Mars-orbiting probes, indicating that the solar wind has stripped the Martian atmosphere over time. For comparison, while Venus has a dense atmosphere, it has only traces of water vapor (20 ppm) as it lacks a large, dipole-induced, magnetic field.[14][16][15] Earth's ozone layer provides additional protection. Ultraviolet light is blocked before it can dissociate water into hydrogen and oxygen. [17]
Oxygen evolution: https://en.wikipedia.org/wiki/Oxygen_evolution
Fast Symbolic Computation for Robotics
https://github.com/sympy/sympy/issues/9479 suggests that multivariate inequalities are still unsolved in SymPy, though it looks like https://github.com/sympy/sympy/pull/21687 was merged in August. This probably isn't implemented in C++ in SymForce yet?
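For reference, SymPy already handles the univariate case symbolically; a minimal sketch (plain SymPy, not SymForce's API):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Solve a univariate inequality as a set of intervals rather than
# a relational expression (relational=False).
solution = sp.solve_univariate_inequality(x**2 - 4 > 0, x, relational=False)
# solution is the union of the open intervals (-oo, -2) and (2, oo)
```

The multivariate case (e.g. constraints over several symbols at once) is what the linked issue tracks.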
Is my toddler a stochastic parrot?
Language acquisition > See also: https://en.wikipedia.org/wiki/Language_acquisition
Phonological development: https://en.wikipedia.org/wiki/Phonological_development
Imitation > Child development: https://en.wikipedia.org/wiki/Imitation#Child_development
https://news.ycombinator.com/item?id=33800104 :
> "The Everyday Parenting Toolkit: The Kazdin Method for Easy, Step-by-Step, Lasting Change for You and Your Child" https://www.google.com/search?kgmid=/g/11h7dr5mm6&hl=en-US&q...
> "Everyday Parenting: The ABCs of Child Rearing" (Kazdin, Yale,) https://www.coursera.org/learn/everyday-parenting
> Re: Effective praise and Validating parenting [and parroting]
US surgeons perform first whole eye transplant
Pretty incredible, though I am doubtful of the optic nerve regeneration because of the absolutely insane density of the nerve fiber. Seems like something that will be beyond the grasp of science for the foreseeable future, but the possibility of the unexpected is exciting.
> I am doubtful of the optic nerve regeneration because of the absolutely insane density of the nerve fiber. Seems like something that will be beyond the grasp of science for the foreseeable future
It's been done quite successfully in mice [0]. Last I checked, it was being tested on primates. The method relies on activating the Yamanaka factors used in stem cell research.
Your link is about gene therapy in the eyes of mice, and is specifically a method designed as an alternative to transplant:
> “This new approach, which successfully reverses multiple causes of vision loss in mice without the need for a retinal transplant, represents a new treatment modality in regenerative medicine.”
And that's just retinal transplant, much less whole-eye transplant.
The link provided is also about a method to produce optic nerve regeneration, regardless of whether there has been a transplant or not. Unless you have a reason to believe that it would not work in the case of a transplant.
Retina or optic nerve: how do the regenerative methods differ?
Visual system > System overview: https://en.wikipedia.org/wiki/Visual_system :
> Mechanical: Together, the cornea and lens refract light into a small image and shine it on the retina. The retina transduces this image into electrical pulses using rods and cones. The optic nerve then carries these pulses through the optic canal. Upon reaching the optic chiasm the nerve fibers decussate (left becomes right). The fibers then branch and terminate in three places. [1][2][3][4][5][6][7]
> Neural: Most of the optic nerve fibers end in the lateral geniculate nucleus (LGN).
https://news.ycombinator.com/item?id=36912925 , ... :
- "Direct neuronal reprogramming by temporal identity factors" (2023) https://www.pnas.org/doi/10.1073/pnas.2122168120#abstract
- "Retinoid therapy restores eye-specific cortical responses in adult mice with retinal degeneration" (2022) https://www.cell.com/current-biology/fulltext/S0960-9822(22)...
- "Genetic and epigenetic regulators of retinal Müller glial cell reprogramming" (2023) https://www.sciencedirect.com/science/article/pii/S266737622...
- https://en.wikipedia.org/wiki/Tissue_nanotransfection#Techni... Ctrl-F "neurons"
Regeneration in humans > Induced regeneration: https://en.wikipedia.org/wiki/Regeneration_in_humans#Induced...
Thermal transistors handle heat with no moving parts
"Test Processor With New Thermal Transistors Cools Chip Without Moving Parts" https://www.tomshardware.com/news/test-processor-with-new-th... :
> Compared to normal cooling methods, the experimental transistors were 13 times better.
"Electrically gated molecular thermal switch" (2023) https://www.science.org/doi/10.1126/science.abo4297 :
> Abstract: Controlling heat flow is a key challenge for applications ranging from thermal management in electronics to energy systems, industrial processing, and thermal therapy. However, progress has generally been limited by slow response times and low tunability in thermal conductance. In this work, we demonstrate an electronically gated solid-state thermal switch using self-assembled molecular junctions to achieve excellent performance at room temperature. In this three-terminal device, heat flow is continuously and reversibly modulated by an electric field through carefully controlled chemical bonding and charge distributions within the molecular interface. The devices have ultrahigh switching speeds above 1 megahertz, have on/off ratios in thermal conductance greater than 1300%, and can be switched more than 1 million times. We anticipate that these advances will generate opportunities in molecular engineering for thermal management systems and thermal circuit design.
> Can switch at 1 MHz
> Can be switched "more than 1 million times"
Seems like longevity is a potential issue. Definitely could be useful for a few applications (especially temperature control), though I'm not really sure about a pure cooling application.
Firmware Software Bill of Materials (SBoM) Proposal
Back in the day, we used to embed strings into the translation units that would report the original name and version of the file. One could use the 'strings' command to get detailed information about what files were used in a binary and which version! DVCS (git) broke that, so most people don't remember.
Knowing which version of a file made it into a binary still doesn't really help you, though. The compiler used (if any), the version of the compiler and linker, and even the settings / flags used affect the output and -- in some cases -- could convert an otherwise secure program into something exploitable.
A Software BoM sounds like a "first step" towards documenting a supply chain, but I'm not sure it's in the right direction.
This feels like this might actually be a use-case for a blockchain or a Merkle Tree.
Consider: A file exists in a git repository under a hash, which theoretically (excluding hash collisions) uniquely identifies the file. Embed the file hashes in the executable along with a repository URL and you essentially know which files were used to build the executable. Sign the executable to ensure it's not tampered with, then upload the hash of the executable to a blockchain.
If your executable is a compiler, then when someone else builds an executable then they can embed the hash of the compiler into the executable to link the binary back to the specific compiler build that made the binary. The compiler could even include into the binary the flags used to modify the compiler behavior.
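As a sketch of the hashing half of that idea, assuming git-style content addressing (the `merkle_root` pairing scheme here is illustrative, not any particular standard):

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    # Git hashes a blob as SHA-1 over "blob <len>\0" + contents,
    # so identical bytes always yield the same identifier.
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

def merkle_root(leaf_hashes: list[str]) -> str:
    # Hash pairs of leaves upward until a single root remains;
    # duplicate the last leaf when a level has an odd count.
    level = leaf_hashes
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [hashlib.sha1((a + b).encode()).hexdigest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

# git's well-known empty-blob hash:
empty = git_blob_hash(b"")  # 'e69de29bb2d1d6434b8b29ae775ad8c2e48c5391'
root = merkle_root([git_blob_hash(b"main.c contents"),
                    git_blob_hash(b"util.c contents")])
```

Embedding `root` (plus a repository URL) in the binary would let anyone re-derive and check it from the sources.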
>This feels like this might actually be a use-case for a blockchain or a Merkle Tree.
A few years ago, a similar idea for firmware binary security[0] had been explored by Google as a possible application of their Trillian[1] distributed ledger, which is based on Merkle Trees.
I don't know if they've advanced adoption of Trillian for firmware; however, the website lists Go packaging[2], Certificate Transparency[3], and SigStore[4] as current applications.
[0] https://github.com/google/trillian-examples/tree/master/bina...
[2] https://go.googlesource.com/proposal/+/master/design/25530-s...
Sigstore artifact signature verification may be part of a SLSA secure software supply chain workflow.
slsa-framework/slsa-github-generator > Generate [signed] provenance metadata : https://github.com/slsa-framework/slsa-github-generator#gene... :
> Supply chain Levels for Software Artifacts, or SLSA (salsa), is a security framework, a check-list of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure in your projects, businesses or enterprises.
> SLSA defines an incrementally-adoptable set of levels which are defined in terms of increasing compliance and assurance. SLSA levels are like a common language to talk about how secure software, supply chains and their component parts really are.
The impossible Quantum Drive that defies known laws of physics reached space
> “I don’t know of any other purely electric drives ever tested in space,” Mansell told The Debrief, including the controversial EMDrive, which, he noted, relies on a completely different technology but also claims to produce thrust without propellant. “If so, this will be the first time a purely electric, “non-conventional” drive will have ever been tested in space!”
Nvidia H200 Tensor Core GPU
The H200 GPU die is the same as the H100's, but it's using a full set of faster 24GB memory stacks:
https://www.anandtech.com/show/21136/nvidia-at-sc23-h200-acc...
This is an H100 141GB, not new silicon like the Nvidia page might lead one to believe.
It is remarkable how much GPU compute is limited by memory speed.
What would make [HBM3E] GPU memory faster?
High Bandwidth Memory > HBM3E: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#HBM3E
Compared to HBM3, you mean?
The memory makers bump up the speed the memory itself is capable of through manufacturing improvements. And I guess the H100 memory controller has some room to accept the faster memory.
More technically, I suppose.
Is the error rate due to quantum tunneling at so many nanometers still a fundamental limit to transistor density and thus also (G)DDR and HBM performance per unit area, volume, and charge?
https://news.ycombinator.com/item?id=38056088 ; a new QC and maybe in-RAM computing architecture like HBM-PM: maybe glass on quantum dots in synthetic DNA, and then still wave function storage and transmission; scale the quantum interconnect
Is melamine too slow for >= HBM RAM?
My understanding is that while quantum tunneling defines a fundamental limit to miniaturization of silicon transistors we are still not really near that limit. The more pressing limits are around figuring out how to get the EUV light to consistently draw denser and denser patterns correctly.
From https://news.ycombinator.com/item?id=35380902 :
> Optical tweezers: https://en.wikipedia.org/wiki/Optical_tweezers
> "'Impossible' photonic breakthrough: scientist manipulate light at subwavelength scale" https://thedebrief.org/impossible-photonic-breakthrough-scie... :
>> But now, the researchers from Southampton, together with scientists from the universities of Dortmund and Regensburg in Germany, have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined
FWIU, quantum tunneling is regarded as error to be eliminated in digital computers; but may be a sufficient quantum computing component: cause electron-electron wave function interaction and measure. But there is zero or 1 readout in adjacent RAM transistors. Lol "Rowhammer for qubits"
"HBM4 in Development, Organizers Eyeing Even Wider 2048-Bit Interface" (2023) https://news.ycombinator.com/item?id=37859497
Low current around roots boosts plant growth
Fungal mycelial networks can form an underground network capable of transmitting electricity from plant to plant.
I wonder if, by tuning into just the right frequency plants are already using for communication, we could send more “grow, please” signals.
There is a Swiss startup which is doing something like this. They have created a plant sensor that taps into the electrical signals of plants, and use AI to develop an understanding of plant communication. The use case seems to be early diagnosis of stress rather than manipulation of plants, but who knows some day the same understanding can be used to 'control' plants: https://vivent.ch/
Is there nonlinearity due to the observer effect in this system?
From how many meters away can a human walking in a forest be detected with such an organic signal network?
FWIU mycorrhizae networks all broadcast on the same channel? Is it full duplex; are they transmitting and receiving simultaneously?
Show HN: I wrote a multicopter simulation library in Python
* [Documentation](https://multirotor.readthedocs.io/en/latest/)
* [Source code](https://github.com/hazrmard/multirotor)
* [Demo/Quickstart](https://multirotor.readthedocs.io/en/latest/Quickstart.html)
There are many simulation libraries out there. For example AirSim using Unreal Engine, several implementations in Unity3D, Matlab toolboxes. I wanted a simple hackable codebase with which to experiment.
So, I wrote this. Propellers, motors, batteries, and the airframe are their own components and can be mixed and matched. The code lets you create any number of propellers, and an optimization function learns a PID controller for that vehicle. Additionally, there are convenience functions for 3D visualization and sensor measurements.
Please let me know what you think :)
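For anyone curious what the learned controller is doing in principle, here's a minimal PID sketch driving a toy one-axis altitude model (illustrative only; these are not the library's actual classes, and the gains/dynamics are made up):

```python
class PID:
    """Minimal PID controller: proportional + integral + derivative terms."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy dynamics: thrust drives vertical velocity, with linear drag.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
altitude, velocity = 0.0, 0.0
for _ in range(2000):                      # 20 simulated seconds, Euler steps
    thrust = pid.update(10.0, altitude)    # target altitude: 10 m
    velocity += (thrust - 0.5 * velocity) * 0.01
    altitude += velocity * 0.01
```

The optimizer in a library like this would search over `kp`, `ki`, `kd` to minimize tracking error for the assembled vehicle.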
Could it output drones for existing sims?
X-Plane Plane Maker: https://developer.x-plane.com/manuals/planemaker/
Juno: New Origins (and also Hello Engineer)
MS Flight Simulator cockpits are built with MSFS Avionics Framework which is React-based: https://docs.flightsimulator.com/html/Introduction/SDK_Overv...
https://news.ycombinator.com/item?id=37619564 :
> [Multi-objective gym + MuJoCo] for drone simulation
> Idea: Generate code like BlenderGPT to generate drone rover sim scenarios and environments like the Moon and Mars
https://news.ycombinator.com/item?id=36052833 :
> awesome finite element analysis https://www.google.com/search?q=awesome+finite+element+analy...
Also: awesome-cfd
https://news.ycombinator.com/item?id=31049608 :
> Numerical methods in fluid mechanics: https://en.wikipedia.org/wiki/Numerical_methods_in_fluid_mec...
Re: X-plane output. As it stands, no. But I believe a translation script can be made to output the vehicle's properties into a different format. Currently it is a python dataclass.
Thank you for linking to other resources. I will take a look at them.
Np. Interesting field.
"How to create an aircraft [for MS Flight Simulator]" https://docs.flightsimulator.com/html/mergedProjects/How_To_...
Gymnasium w/ MuJoCo or similar would probably be most worthwhile in terms of research,
GlTF: https://en.m.wikipedia.org/wiki/GlTF
/? gltf msfs https://www.google.com/search?q=gltf+msfs
https://github.com/AsoboStudio/glTF-Blender-IO-MSFS :
> Microsoft Flight Simulator glTF 2.0 Importer and Exporter for Blender
MSFS docs on the Blender plugin: https://docs.flightsimulator.com/html/Asset_Creation/Blender...
There must be fluid simulation in MSFS and XPlane because they model helicopter flight characteristics.
I don't think either model things like solar thermal effect upon wings' material properties yet.
GraphCast: AI model for weather forecasting
To call this impressive is an understatement. Using a single GPU, it outperforms models that run on the world's largest supercomputers. Completely open sourced - not just model weights. And fairly simple training / input data.
> ... with the current version being the largest we can practically fit under current engineering constraints, but which have potential to scale much further in the future with greater compute resources and higher resolution data.
I can't wait to see how far other people take this.
It builds on top of supercomputer model output and does better at the specific task of medium term forecasts.
It is a kind of iterative refinement on the data that supercomputers produce — it doesn’t supplant supercomputers. In fact the paper calls out that it has a hard dependency on the output produced by supercomputers.
"BLD,ENH: Dask-scheduler (SLURM,)," https://github.com/NOAA-EMC/global-workflow/issues/796
Dask-jobqueue https://jobqueue.dask.org/ :
> provides cluster managers for PBS, SLURM, LSF, SGE and other [HPC supercomputer] resource managers
Helpful tools for this work: Dask-labextension, DaskML, CuPY, SymPy's lambdify(), Parquet, Arrow
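Of those, lambdify() is the one that turns symbolic math into fast vectorized NumPy functions, which is handy for post-processing gridded model output; a generic sketch (not any workflow's actual code):

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x)**2 + sp.cos(x)**2     # symbolically identical to 1
f = sp.lambdify(x, expr, 'numpy')      # compile to a NumPy-backed callable

# Evaluates element-wise over whole arrays, no Python-level loop:
values = f(np.linspace(0.0, 3.0, 7))
```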
GFS: Global Forecast System: https://en.wikipedia.org/wiki/Global_Forecast_System
TIL about Raspberry-NOAA and pywws in researching and summarizing for a comment on "Nrsc5: Receive NRSC-5 digital radio stations using an RTL-SDR dongle" (2023) https://news.ycombinator.com/item?id=38158091
Future is quantum: universities look to train engineers for an emerging industry
I don't see how you can really get a decent grasp of quantum with an undergrad. The standard American physics curriculum has some quantum in sophomore year with Modern Physics, and then Quantum in Junior/Senior year. But you can't exactly skip mechanics, E&M and all of the mathematics (Calc 1-3, Diff EQ, Partial Diff EQ, Linear Algebra) you need a background in. So you pretty much need 2 years of prep to really start learning. Even if you add some specific technology courses around the engineering, how do you get around this undergraduate program not being a Physics or Applied Physics degree, without throwing the baby out with the bathwater?
You can do applied quantum logic in an afternoon (with e.g. colab and cirq, qiskit, and/or tequila) but then how much math is necessary; what is a "real conjugate"?
In the same way you can "do ML" without knowing linear algebra and probability theory. Such people can barely extend anything, let alone design new models from scratch.
E.g. Quantum embedding isn't yet taught to undergrads, and can be quickly explained to folks interested in the field, who might not be deterred by laborious newspaper summarizations, and who might pursue this strategic and critical skill.
How many ways are there to roll a 6-sided die with qubits and quantum embedding?
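As a toy answer, one standard construction puts three qubits in equal superposition and rejection-samples the two unwanted outcomes; a classical simulation sketch with NumPy only (no cirq; the scheme, not the code, is the standard part):

```python
import numpy as np

rng = np.random.default_rng(42)

def quantum_die_roll() -> int:
    # H applied to each of 3 qubits gives equal amplitude over 8 basis states.
    amplitudes = np.full(8, 1 / np.sqrt(8))
    probabilities = np.abs(amplitudes) ** 2        # Born rule: each 1/8
    while True:
        outcome = int(rng.choice(8, p=probabilities))  # simulated measurement
        if outcome < 6:                             # reject |110> and |111>
            return outcome + 1                      # die faces 1..6

rolls = [quantum_die_roll() for _ in range(1000)]
```

On real hardware the rejection loop means re-preparing and re-measuring the register until an outcome below 6 appears.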
It took years for tech to rid itself of the socially-broken nerd stereotypes that pervaded early digital computing as well.
How can we get enough people into QIS Quantum fields to supply demand for new talent?
How many people need to design new models from scratch though
"When are we ever going to need maths?" said the high schooler.
You use the skills you have.
That's not true, many people willfully eject information that they have no use for immediately upon leaving school.
While I somewhat regret selling most of my college textbooks back, I feel that cramming for non-applied tests and quizzes was something I needed to pay them to make me do.
TIL about memory retention; spaced repetition interval training and projects with written communications components in application
> Instead [of the Bohr model], Morello uses a real-world example in his teaching — a material called a quantum dot, which is used in some LEDs and in some television screens. “I can now teach quantum mechanics in a way that is far more engaging than the way I was taught quantum mechanics when I was an undergrad in the 1990s,” he says.
> Morello also teaches the mathematics behind quantum mechanics in a more computer-friendly way. His students learn to solve problems using matrices that they can represent using code written for the Python programming language, rather than conventional differential equations on paper.
From https://news.ycombinator.com/item?id=30782678 :
>> This "Quantum Computing for Computer Scientists" video https://youtu.be/F_Riqjdh2oM explains classical and quantum operators as just matrices. What are other good references?
Unfortunately the QuantumQ game doesn't yet have the matrix forms of the quantum logical operators in the (open source) game docs.
Would be a helpful resource, in addition to the Quantum logic wikipedia page and numpy and/or SymPy without cirq:
A Manim presentation demonstrating that quantum logical operator matrices are Bloch sphere rotations, are reversible, and why we restrict operators to the category of unitary transformations
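A minimal NumPy check of those claims (Hadamard and Pauli-X as example gates):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                  # Pauli-X ("quantum NOT")

identity = np.eye(2)
h_unitary = np.allclose(H.conj().T @ H, identity)   # U†U = I (unitary)
h_reversible = np.allclose(H @ H, identity)          # H is its own inverse
x_reversible = np.allclose(X @ X, identity)          # so is X
```

Unitarity is exactly what makes every gate reversible: applying U† undoes U.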
> His colleagues at the UNSW are also developing laboratory courses to give students hands-on experience with the hardware in quantum technologies. For example, they designed a teaching lab to convey the fundamental concept of quantum spin, a property of electrons and some other quantum particles, using commercially available synthetic diamonds known as nitrogen vacancy centres
Some blue LEDs contain sapphire, which is even more macrostate-entanglable than diamond. Lol: https://news.ycombinator.com/item?id=36356444
"The Qubit Game (2022)" https://news.ycombinator.com/item?id=34574791 :
> Additional Q12 (K12 QIS Quantum Information Science) ideas?:
Cirq and other QIS libraries can implement _repr_svg_ e.g. for nice quantum circuit diagrams from code: https://github.com/quantumlib/Cirq/issues/2313
A Manim walkthrough that flies from top-down to low flyover with the wave states at each point in the circuit would be neat. Do classical circuit simulators simulate backwards, nonlinear flow of current?
Research achieves photo-induced superconductivity on a chip
> Their work, now published in Nature Communications, also shows that the electrical response of photo-excited K3C60 is not linear, that is, the resistance of the sample depends on the applied current. This is a key feature of superconductivity, validates some of the previous observations and provides new information and perspectives on the physics of K3C60 thin films.
"Superconducting nonlinear transport in optically driven high-temperature K3C60" (2023) https://www.nature.com/articles/s41467-023-42989-7 :
> Abstract: Optically driven quantum materials exhibit a variety of non-equilibrium functional phenomena, which to date have been primarily studied with ultrafast optical, X-Ray and photo-emission spectroscopy. However, little has been done to characterize their transient electrical responses, which are directly associated with the functionality of these materials. Especially interesting are linear and nonlinear current-voltage characteristics at frequencies below 1 THz, which are not easily measured at picosecond temporal resolution. Here, we report on ultrafast transport measurements in photo-excited K3C60. Thin films of this compound were connected to photo-conductive switches with co-planar waveguides. We observe characteristic nonlinear current-voltage responses, which in these films point to photo-induced granular superconductivity. Although these dynamics are not necessarily identical to those reported for the powder samples studied so far, they provide valuable new information on the nature of the light-induced superconducting-like state above equilibrium Tc. Furthermore, integration of non-equilibrium superconductivity into optoelectronic platforms may lead to integration in high-speed devices based on this effect.
Autonomous lab discovers best-in-class quantum dot in hours instead of years
> The goal in this study was to find the doped perovskite quantum dot with the highest "quantum yield," or the highest ratio of photons the quantum dot emits (as infrared or visible wavelengths of light) relative to the photons it absorbs (via UV light).
"Smart Dope: A Self-Driving Fluidic Lab for Accelerated Development of Doped Perovskite Quantum Dots," (2023) https://onlinelibrary.wiley.com/doi/10.1002/aenm.202302303
> Abstract: Metal cation-doped lead halide perovskite (LHP) quantum dots (QDs) with photoluminescence quantum yields (PLQYs) higher than unity, due to quantum cutting phenomena, are an important building block of the next-generation renewable energy technologies. However, synthetic route exploration and development of the highest-performing QDs for device applications remain challenging. In this work, Smart Dope is presented, which is a self-driving fluidic lab (SDFL), for the accelerated synthesis space exploration and autonomous optimization of LHP QDs. Specifically, the multi-cation doping of CsPbCl3 QDs using a one-pot high-temperature synthesis chemistry is reported. Smart Dope continuously synthesizes multi-cation-doped CsPbCl3 QDs using a high-pressure gas-liquid segmented flow format to enable continuous experimentation with minimal experimental noise at reaction temperatures up to 255°C. Smart Dope offers multiple functionalities, including accelerated mechanistic studies through digital twin QD synthesis modeling, closed-loop autonomous optimization for accelerated QD synthetic route discovery, and on-demand continuous manufacturing of high-performing QDs. Through these developments, Smart Dope autonomously identifies the optimal synthetic route of Mn-Yb co-doped CsPbCl3 QDs with a PLQY of 158%, which is the highest reported value for this class of QDs to date. Smart Dope illustrates the power of SDFLs in accelerating the discovery and development of emerging advanced energy materials.
Munich court tells Netflix to stop using H.265 video coding to stream UHD
I support open-source alternatives to H.265, which would be fairly advantageous.
I don't think governments are good at regulating technical standards for industry.
And so this is a wash.
This isn't the German government trying to regulate technical standards of an industry; this is a German court upholding a patent. Which you can of course argue about all you want, but I fail to see the relevance of your comment here.
Cathode-Retro: A collection of shaders to emulate the display of an NTSC signal
Seems like a good spot to mention https://github.com/Swordfish90/cool-retro-term
Cool Retro Terminal is a nice accessory for when doing recording or screenshots - cause it looks cool. Can't use it as my daily driver tho.
And enough settings in there you can make it look like your favourite old one.
A similar theme for JupyterLab/JupyterLite would be cool
jupyterlab_miami_nights is real nice, too https://anaconda.org/conda-forge/jupyterlab_miami_nights
DI's Synthwave station somewhat matches the decade: https://www.di.fm/synthwave
Lighter almost solarized red for terminal text is also a decent terminal experience IMHO
Show HN: Open-source digital stylus with six degrees of freedom
Very cool. The use of a webcam really makes me wonder if there's a future where our regular single ~78° FOV webcams are going to be replaced by dual (stereo) fisheye webcams that can:
- Enable all sorts of new UX interactions (gestures with eye tracking)
- Enable all sorts of new peripheral interactions (stylus like this, but also things like a steering wheel for racing games)
- Enable 3D 180° filming for far more flexible webcam meetings, including VR presence, etc.
The idea of being able to use the entire 3D space in front of your computer display as an input method feels like it's coming, and using a webcam the way OP describes feels like it's a little step in that direction.
I thought this was around the corner years ago when Intel and partners had RealSense modules being built into laptops but it seems like all the players have shifted focus to more enterprise and industrial markets.
Wii Remote (2006), Wiimote Whiteboard (2007), Kinect (2010), Leap Motion (2010; Ultraleap since 2019),
There are infrared depth cameras in various phones and laptop cameras now.
[VR] Motion controllers: https://en.wikipedia.org/wiki/Motion_controller#Gaming
Inertial navigation system: https://en.wikipedia.org/wiki/Inertial_navigation_system
Inertial measurement unit : https://en.wikipedia.org/wiki/Inertial_measurement_unit :
> An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. When the magnetometer is included, IMUs are referred to as IMMUs.[1]
Moasure does displacement estimation with inertial measurement (in a mobile app w/ just accelerometer or also compass sensor data?) IIUC: https://www.moasure.com/
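A back-of-the-envelope sketch of that kind of inertial displacement estimate, i.e. naively double-integrating accelerometer samples (real products fuse gyro/magnetometer data and correct drift, which this ignores):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 1.0, dt)              # 100 samples over one second
accel = np.where(t < 0.5, 1.0, -1.0)     # 1 m/s^2 forward, then brake

velocity = np.cumsum(accel) * dt         # first integration: m/s
position = np.cumsum(velocity) * dt      # second integration: m
# symmetric accelerate/brake profile ends at rest, 0.25 m displaced
```

In practice any accelerometer bias also gets integrated twice, so position error grows quadratically with time; that's why pure dead reckoning drifts so fast.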
/? wireless gesture recognition RSSI: https://scholar.google.com/scholar?q=wireless+gesture+recogn...
/? wireless gesture recognition RSSI site:github.com : https://www.google.com/search?q=wireless+gesture+recognition...
Awesome-WiFi-CSI-Sensing > Indoor Localization: https://github.com/Marsrocky/Awesome-WiFi-CSI-Sensing#indoor...
3D Scanning > Technology, Applications: https://en.wikipedia.org/wiki/3D_scanning#Technology
Are there a limited set of possible-path-corresponding diffraction patterns that NIRS (Near-Infrared Spectroscopy) could sense and process to make e.g. a magic pencil with pressure sensitivity, too?
/q.hnlog "quantum navigation": https://news.ycombinator.com/item?id=36222625#36250019 :
> Quantum navigation maps such signal sources such that inexpensive sensors can achieve something like inertial navigation FWIU?
From https://news.ycombinator.com/context?id=36249897 :
> Can low-cost lasers and Rydberg atoms e.g. Rydberg Technology solve for [space-based] matter-wave interferometry? [...] Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform and/or other methods?
Because the digitizer
Getting the Lorentz transformations without requiring an invariant speed (2015)
It's curious that this simple proof (which doesn't require an invariant speed, i.e. Maxwell's equations) was discovered after Einstein's proposal, which depends on the invariant-speed assumption. I wonder how the history of physics would have gone if someone had proposed this before Einstein. The maths needed for this derivation is quite simple, so I guess Newton or some mathematician before Einstein could have proposed special relativity.
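For reference, derivations of this kind (going back to von Ignatowsky, 1910) use only the relativity principle, homogeneity, and isotropy to obtain a one-parameter family of transformations:

```latex
x' = \gamma\,(x - vt), \qquad t' = \gamma\,(t - K v x), \qquad \gamma = \frac{1}{\sqrt{1 - K v^2}}
```

Here K is a universal constant left to experiment: K = 0 recovers the Galilean transformations, while K = 1/c^2 gives the Lorentz transformations, with the invariant speed 1/\sqrt{K} emerging from the group structure rather than being postulated.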
(nonlinear) retrocausality: https://news.ycombinator.com/item?id=38047149
https://news.ycombinator.com/item?id=28402527 :
/? electrodynamic engineering in the time domain, not in the 3-space EM energy density domain https://www.google.com/search?q=electrodynamic+engineering+i...
"Electromagnetic forces in the time domain" (2022) https://opg.optica.org/oe/fulltext.cfm?uri=oe-30-18-32215&id... :
> [...] On looking through the literature, we notice that several previous studies undertook the analysis of the optical force in the time domain, but at a certain point always shifted their focus to the time average force [67–69] or, alternatively, use numerical approaches to find the force in the time domain [44,70–75]. To the best of our knowledge, only a few publications conducted analytical studies of the optical force evolution. Very recent paper employs the signal theory to derive the imaginary part of the Maxwell stress tensor, which is responsible for the oscillating optical force and torque [76]. The optical force is studied under two-wave excitation acting on a half-space [40] and on cylinders [77], and a systematic analytical study of the time evolution of the optical force has not yet been reported.
If mass warps space and time nonlinearly per relevant confirmations of General Relativity, and there is observable retrocausality and also indefinite causal order, is forcing time to be the frame of reference, and to be the constant frame of reference necessary or helpful for the OT problem and otherwise?
Ask HN: What are good books on SW architecture that don't sell microservices?
There have been multiple discussions of how the microservices movement did more harm than good, how a modular monolith can be a much better option.
I wish there was a comprehensive book (ideally) that is practical, pragmatic, doesn't advocate the use of microservices just because it is cool, etc.
Some books that are often recommended have "microservices" in their names which is a pretty bad start.
For example, I am thinking of how two services should communicate (I am unfortunately guilty of having more services than I really needed). There are multiple options and the choice depends on factors like synchronous vs asynchronous, so I would like to read a detailed analysis of all the tradeoffs and considerations. Ideally, from authors that really know what they're talking about.
Patterns of Enterprise Application Architecture, Martin Fowler, and Enterprise Integration Patterns, Gregor Hohpe.
both predate microservices, and both are still very useful today.
"Patterns of Distributed Systems (2022)" https://martinfowler.com/articles/patterns-of-distributed-sy... and notes regarding: https://news.ycombinator.com/item?id=36504073
Computation of the n'th digit of pi in any base in O(n^2) (1997)
Squinting at this, I wonder if it's at all valid to say that the existence of a quadratic time algorithm to calculate pi has anything to do with the fact that the implicit formula of a circle is made up of quadratic terms.
In other words, if pi basically sums up the most important fact about a circle's geometry, then it's reasonable to expect that geometry to be represented somehow in the important facts about algorithms that calculate pi.
That's an interesting concept. I think similar spigot algorithms are known for other transcendentals, and I suspect if you compared them you would not find a general trend of deep connections between algorithmic complexity and the geometric features of the corresponding value. What would you look for in the spigot algorithm for e, or log 2?
I suppose e's connection to hyperbolic geometry might suggest a relationship with the implicit formula x^2 - y^2 = 1. And I guess log2's behavior would be very much connected to that since it only differs from the natural log by a constant factor.
But I know I'm reaching here. I just like fantasizing about math :).
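The linked result is about digit-extraction ("spigot") algorithms; the best-known concrete instance is the hexadecimal BBP formula, which yields the n-th hex digit of pi without computing any earlier digits. A minimal sketch (float rounding limits how far n can go):

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (1-indexed), via the
    Bailey-Borwein-Plouffe formula; needs no earlier digits."""
    def frac_series(j):
        # fractional part of sum_{k>=0} 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):  # terms with non-negative exponent: modular pow
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n               # rapidly vanishing tail
        while True:
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return int(16 * x)

print("".join(f"{pi_hex_digit(i):X}" for i in range(1, 9)))  # pi = 3.243F6A88...
```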
Is there unitarity, symmetry, or conservation when x^2 ± y^2 = 1?
We square complex amplitudes to make them real.
https://twitter.com/westurner/status/967970148509503488 :
> "Partly because, mathematically, wavefunctions are vectors in a L^2 Hilbert space, which is complex-valued. Squaring the amplitude, rather Ψ∗Ψ=|Ψ|^2 is one way to ensure that you get real-valued probabilities, which is also related to the fact that […]" https://physics.stackexchange.com/questions/280748/why-do-we...
Apple’s first online store played a crucial role in the company’s resurgence
Somewhere between Macintosh & iPod + iTunes and MacOS (was: OS X (Unix, Bash)) I think Apple was saved.
And the white cabling with the silhouettes
This is exactly how I remember and have experienced it, in that order.
MacOS became big because of the included Unix. Developers flocked. And as Ballmer once said... developers, developers, developers!
The Developer story is key, I think, in addition to content CDNs and SAST/DAST for apps.
iPod Linux: https://en.wikipedia.org/wiki/IPodLinux
Rockbox Firmware for an Archos and then a Nano w/ color and no WiFi: https://en.wikipedia.org/wiki/Rockbox
podman run --rm -it docker.io/busybox
xcode-select -p
# installing homebrew installs the xcode CLI tools:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install podman
brew install --cask podman-desktop
https://mac.install.guide/commandlinetools/3.html
How to add a Software repository to an OS. Software repository; SLSA, Sigstore, DevSecOps: https://en.wikipedia.org/wiki/Software_repository
W/ Ansible:
osx_defaults_module: https://docs.ansible.com/ansible/latest/collections/communit...
homebrew_tap_module: https://docs.ansible.com/ansible/latest/collections/communit...
homebrew_module: https://docs.ansible.com/ansible/latest/collections/communit...
homebrew_cask_module https://docs.ansible.com/ansible/latest/collections/communit...
From my upgrade_mac.sh: https://github.com/westurner/dotfiles/blob/develop/scripts/u... :
upgrade_macos() {
softwareupdate --list
softwareupdate --download
softwareupdate --install --all --restart
}
From https://twitter.com/mitsuhiko/status/1720410479141487099 :
> GitHub Actions currently charges $0.16 *per minute* for the macOS M1 Runners. That comes out to $84,096 for 1 machine year
The GitHub Actions runner (actions/runner) is written in C#; it fetches jobs from GitHub Actions and posts the results back to the Pull Request that spawned the build.
nektos/act is what Gitea Actions uses to run GitHub Actions workflow YAML build definitions: https://github.com/nektos/act
From https://twitter.com/MatthewCroughan/status/17200423527675700... :
> This is the macOS Ventura installer running in 30 VMs, in 30 #nix derivations at once. It gets the installer from Apple, automates the installation using Tesseract OCR and TCL Expect scripts. This is to test the repeatability. A single function call `makeDarwinImage`.
With a multi-stage Dockerfile/Containerfile, you can have a dev environment like xcode or gcc+make in the first stage that builds the package; in the second stage the package is installed and tested; and then the package is signed and published to a package repo / app store / OCI container image repository.
Continuous integration: https://en.wikipedia.org/wiki/Continuous_integration
Is there a good way to do automated testing like pytest+Hypothesis+tox w/ e.g. the Swift programming language? OSS-Fuzz is built upon ClusterFuzz.
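For reference, the pytest+Hypothesis style mentioned here is property-based testing: generate many random inputs and assert an invariant, rather than enumerating fixed cases. A stdlib-only sketch of the pattern (Hypothesis adds input-generation strategies and failure shrinking on top of this):

```python
import random

def sorted_is_idempotent(xs):
    # the property under test: sorting twice equals sorting once
    return sorted(sorted(xs)) == sorted(xs)

random.seed(0)  # reproducible pseudo-random inputs
for _ in range(100):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 20))]
    assert sorted_is_idempotent(xs)
print("ok")
```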
SLSA now specifies builders for signing things correctly in CI builds with keys in RAM on the build workers.
"Build your own SLSA 3+ provenance builder on GitHub Actions" https://slsa.dev/blog/2023/08/bring-your-own-builder-github
140-year-old ocean heat tech could supply islands with limitless energy
> Known as ocean thermal energy conversion or ‘OTEC,’ the technology was first invented in 1881 by French physicist Jacques Arsene d’Arsonval. He discovered that the temperature difference between sun-warmed surface water and the cold depths of the ocean could be harnessed to generate electricity. [...]
> For OTEC to work it requires a temperature difference between hot and cold water of around 20 degrees Celsius. This can only be found in the tropics, which is not a problem in itself.
OTEC: Ocean thermal energy conversion: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio...
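The ~20 °C requirement follows directly from the Carnot bound; a back-of-envelope check (the 25 °C / 5 °C water temperatures are illustrative assumptions, and real OTEC plants achieve only a fraction of this ideal):

```python
# Ideal (Carnot) efficiency limit for an OTEC heat engine running between
# sun-warmed surface water and cold deep water.
T_hot = 273.15 + 25.0    # K, tropical surface water (assumed)
T_cold = 273.15 + 5.0    # K, deep ocean water (assumed)

eta_carnot = 1.0 - T_cold / T_hot
print(f"Carnot limit: {eta_carnot:.1%}")  # ~6.7%
```

A few percent of ideal efficiency is why OTEC needs enormous water flows per megawatt, though the temperature difference itself is free and continuous.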
We used to build steel mills near cheap power. Now we build datacenters
To push waste heat to a building across the street, it's usually necessary to add heat at the source in order to more efficiently transfer the thermal energy to the receiver.
OTOH other synergies that require planning and/or zoning:
- Algae plants can capture waste CO2.
- Datacenters produce steam-sterilized water that's usually tragically not fed back into water treatment.
- Smokestacks produce CO2, Nitrogen, and other flue gases that are reusable by facilities like Copenhill and probably for production of graphene or similar smokestack air filters.
Facebook Is Ending Support for PGP Encrypted Emails
> Once a hacker gains access to a Facebook account, they can proceed to activate email encryption.
> This renders recovery emails sent to the user’s email address unreadable, as only the hacker has the encryption keys.
So: PGP encrypted emails were rarely used, except to lock out the legit user after account was compromised.
Github asks you to log in again to add SSH keys in, this could've been similar
They're just looking for excuses
A lot of account compromise is due to reused passwords so I'm not sure that's a complete solution.
Sending a PGP-encrypted email with a verification link to activate the feature should solve that.
Vacuum in optical cavity can change material magnetic state wo laser excitation
Why Cities: Skylines 2 performs poorly
For a bit of reference, a full frame of Crysis (benchmark scene) was around 300k vertices or triangles (memory is fuzzy), so 3-10 log piles depending on which way my memory is off and how bad the vertex/triangle ratio is in each.
Author here: I never bothered counting the total vertices used per frame because I couldn't figure out an easy way to do it in Renderdoc. However someone on Reddit measured the total vertex count with ReShade and it can apparently reach hundreds of millions and up to 1 billion vertices in closeups in large cities.
Edit: Checked the vert & poly counts with Renderdoc. The example scene in the article processes 121 million vertices and over 40 million triangles.
> The issues are luckily quite easy to fix, both by creating more LOD variants and by improving the culling system
How many polygons are there with and without e.g. AutoLOD/InstaLOD?
An LLM can probably be trained to simplify meshes and create LOD variants with e.g. UnityMeshSimplifier?
Whinarn/UnityMeshSimplifier: https://github.com/Whinarn/UnityMeshSimplifier :
> Mesh simplification for Unity. The project is deeply based on the Fast Quadric Mesh Simplification algorithm, but rewritten entirely in C# and released under the MIT license.
Mesh.Optimize: https://docs.unity3d.com/ScriptReference/Mesh.Optimize.html
Unity-Technologies/AutoLOD: https://github.com/Unity-Technologies/AutoLOD
"Unity Labs: AutoLOD - Experimenting with automatic performance improvements" https://blog.unity.com/technology/unity-labs-autolod-experim...
InstaLOD: https://github.com/InstaLOD
"Simulated Mesh Simplifier": https://github.com/Unity-Technologies/AutoLOD/issues/4 :
> Yes, we had started work on a GPU-accelerated simplifier using QEM, but it was not robust enough to release.
"Any chance of getting official support now that Unreal has shown off it's AutoLOD?" https://github.com/Unity-Technologies/AutoLOD/issues/71#issu... :
> "UE4 has had automatic LOD generation since it first released - I was honestly baffled when I realized that Unity was missing what I had assumed to be a basic feature.*
> Note that Nanite (which I assume you're referring to) is not a LOD system, despite being similar in the basic goal of not rendering as many polygons for distant objects.
"Unity: Feature Request: Auto - LOD" (2023-05) https://forum.unity.com/threads/auto-lod.1440610/
"Discussion about Virtualized Geometry (as introduced by UE5)" https://github.com/godotengine/godot-proposals/issues/2793
UE5 Unreal Engine 5 docs > Rendering features > Nanite: https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Na...
Unity-GPU-Based-Occlusion-Culling: https://github.com/przemyslawzaworski/Unity-GPU-Based-Occlus...
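The LOD variants these tools generate are typically selected at runtime per object, by camera distance or screen-space error; a minimal distance-based sketch (the thresholds are hypothetical):

```python
def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Return an LOD index for a camera distance in meters:
    0 = full-detail mesh, higher = coarser, past the last threshold = coarsest."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)

print([select_lod(d) for d in (5, 30, 100, 500)])  # [0, 1, 2, 3]
```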
Tractor Beams Are Real, and Could Solve a Major Space Junk Problem
"With the commercial space industry booming, the number of satellites in Earth's orbit is forecast to rise sharply. This bonanza of new satellites will eventually wear out and turn the space around Earth into a giant junkyard of debris..."
All modern commercial satellites are required to have a safe deorbit plan. The FCC regulates that LEO satellites must deorbit within 5 years of mission completion.
Disclaimer: I work for a commercial satellite operator. Our satellites deorbit and burn up without intervention at the end of their lifecycle.
>Our satellites deorbit and burn up without intervention at the end of their lifecycle.
I'm curious how this works. Is there a certain amount of propellant reserved for the deorbit plan?
Typically yes... Alternative mechanisms include using solar panels to increase drag, and some companies are experimenting with devices that interact with the earths magnetic field to produce electromagnetic drag…
But reserving fuel for “decommissioning operations” is standard practice for satellite operators and the space industry in general.
Are there any unboosted or boosted orbital trajectories that deorbit by "ejection" rather than atmospheric friction and pollution?
Is the minimum perturbation necessary to "eject from earth orbit" lower in an Earth-Moon Lunar cycler orbit? (And, if decommissioned, why shouldn't the ISS be placed into a Lunar Cycler Earth-Moon orbit to test systems failure, lunar cycler orbits, and extra-Van-Allen radiation's impact on real systems failure?)
Couldn't you build those out of recyclable proton batteries and heat-shielded bioplastic?
"Falling metal space junk is changing Earth's upper atmosphere in ways we don't fully understand" (2023) and also solar microwave power beaming and fluids https://www.livescience.com/space/space-exploration/falling-...
"Metals from spacecraft reentry in stratospheric aerosol particles" (2023) https://www.pnas.org/doi/full/10.1073/pnas.2313374120
There sort of are some ejection disposals, but they are rarely used due to how specific the orbital parameters need to be in order for that to be the “cheaper” option.
You usually find it’s not exactly the sort of disposals you’re expecting though. Several space probes have used launch trajectories where their final booster stage will be placed into a heliocentric orbit, with the general trajectory placing it on an uncontrolled gravity assist and then the space probe will use a very small amount of fuel to precisely control the gravity assist it receives in order to get where it needs to be going… and the booster stage will have its already heliocentric orbit further changed by the gravity assist.
It’s all about the deltaV … it takes way less to deorbit LEO and even some of the MEO satellites than to perform a lunar orbit raising, and for GEO either would be too costly, which is why the GEO graveyard orbit belt is quite well defined… as for boosting the ISS up towards the moon to perform a Salyut 7 style long duration equipment survival experiment…. That’s a lot of fuel, and we already have that experience since we’ve been monitoring the entire ISS since we launched it…
Would it be easier to band together the currently-unrecyclable orbital debris and waste and boost bundles of usable material with e.g. solar until it's usable in orbit or on the ground of a planet?
Someone probably has the costs to lift n kg of payload into orbit, then and now, in today's dollars.
Matter persists in cycler orbits around attractors.
Is there a minimum escape velocity for each pending debris object, and then also for a moon gravity-assist solar destination orbit?
How much solar energy per kg of orbital mass is necessary to dispose by ejection in some manner?
I'm reminded of Wonka's cane and the Loompaland river. Can such orbits be simulated with Kerbal Space Program 2?
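A back-of-envelope answer to the energy-per-kg question above, using the vis-viva relation for an impulsive burn from a circular low Earth orbit straight to escape (this ignores gravity losses, drag, and lunar gravity assists, which can reduce the cost; the 400 km altitude is an assumption):

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
r = 6_371e3 + 400e3        # orbital radius for an ISS-like altitude, m

v_circ = math.sqrt(MU / r)       # circular orbital speed
v_esc = math.sqrt(2 * MU / r)    # escape speed at the same radius
dv = v_esc - v_circ              # impulsive delta-v to reach escape
e_per_kg = MU / (2 * r)          # specific energy deficit vs. escape, J/kg

print(f"delta-v ~ {dv / 1e3:.2f} km/s, energy ~ {e_per_kg / 3.6e6:.1f} kWh/kg")
```

So roughly 8 kWh of orbital energy per kilogram separates LEO from escape, before any inefficiency of the propulsion system is counted.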
There’s two fundamental issues with that sort of approach:
1st, is the deltaV difference between useful and desirable orbits that satellite operators want to be in and the selected “disposal orbit band”… with GEO it’s just a small deltaV Hohmann transfer “up” into the geo graveyard “band”… the graveyard starts just 300km further out from Earth and geostationary satellites typically use on the order of 11m/s… which is bugger all. The Manned Manoeuvring Unit “EVA jetpack” had approximately 25m/s.
2nd, is sometimes overlooked, but nonetheless quite important aspect. Any graveyard orbit intended for use by multiple objects should cross the orbits of as few active spacecraft as possible and have as low of a relative velocity between the objects in that graveyard orbit as possible… Geosynchronous satellites are basically above all but a small number of very specific satellites in places like lunar orbit or Lagrange points, and “in a ring” so, performing a gentle boost to the graveyard orbit gets them out of the way of almost everything and they are at least as far as satellites go, not moving particularly fast relative to each other, and they are so high up nothing else is going to regularly cross their obit, so it’s a relatively low risk environment for a satellite on satellite collision…
the majority of cycling orbits tend to have a LOT of relative velocity compared to the orbits they cross, tangential crossings leave high relative velocity, and the elongated orbits tend to end up crossing through a lot of space as the orbital precession moves along… on top of that the deltaV to go from most orbits to a cycling orbit tends to be relatively high, not as much as a full transfer to lunar orbit, but it’s basically in the same ballpark as using the moon to kick yourself into a solar orbit, which while potentially doable at considerable extra cost for a GEO satellite, is completely impossible for the sort of smaller LEO and MEO satellites without an entire kick stage, tug or other propulsion which would at least for LEO probably weigh more than the satellite does.
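The ~11 m/s figure quoted above checks out against a two-burn Hohmann transfer from GEO to a graveyard orbit ~300 km higher:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
r1 = 42_164e3          # GEO radius, m
r2 = r1 + 300e3        # graveyard orbit radius, m

a_t = (r1 + r2) / 2    # semi-major axis of the transfer ellipse
dv1 = math.sqrt(MU / r1) * (math.sqrt(r2 / a_t) - 1)   # burn 1: raise apogee
dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(r1 / a_t))   # burn 2: circularize
print(f"total delta-v ~ {dv1 + dv2:.1f} m/s")          # close to 11 m/s
```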
Such graveyarded debris and waste is potentially reusable in orbit, on the Moon, and on Mars.
~space tugs with electric propulsion: https://g.co/bard/share/df6189ac8113
Is there an effective fulcrum on a lever in space if you attach thrusters at one end; like a space baseball bat?
MEU: "Mission Extension Unit"
How many kWh of (solar) electricity would be necessary to transfer to and from lunar cycler orbit to earth orbit to stay within the belt? 8, orbit, 8; or 8,8,orbit,8,8 etc?
FWIU there are already-costed launch and refuel mission plans?
Launch rocket_2 with fuel for rocket_1 which is already in orbit, attach to object_1, and apply thrust towards an optimal or sufficient gravity-assisted solar trajectory or bundle of space recyclables.
Is there any data on space stations in Lunar Cycler orbits?
Are there Lunar Cycler orbits that remain within the van Allen radiation belt?
What would be the costs and benefits of long-term positioning of a space station or other vessel with international docking adapter(s) in a Lunar cycler orbit?
Nrsc5: Receive NRSC-5 digital radio stations using an RTL-SDR dongle
A GUI built on top of this: https://github.com/markjfine/nrsc5-dui
From https://github.com/markjfine/nrsc5-dui#maps :
> Maps: When listening to radio stations operated by iHeartMedia, you may view live traffic maps and weather radar. The images are typically sent every few minutes and will fill the tab area once received, processed, and loaded. Clicking the Map Viewer button on the toolbar will open a larger window to view the maps at full size. The weather radar information from the last 12 hours will be stored and can be played back by selecting the Animate Radar option. The delay between frames (in seconds) can be adjusted by changing the Animation Speed value. Other stations provide Navteq/HERE navigation information... it's on the TODO 'like to have' list.
Is this an easier way to get weather info without Internet than e.g. Raspberry-NOAA and a large antenna?
https://www.google.com/search?q=weather+satellite+antenna+ha... https://github.com/jekhokie/raspberry-noaa-v2#raspberry-noaa... :
> NOAA and Meteor-M 2 satellite imagery capture setup for the regular 64 bit Debian Bullseye computers and Raspberry Pi!
> Is this an easier way to get weather info without Internet than e.g. Raspberry-NOAA and a large antenna?
If you're OK with audio only, you can't beat NOAA weather radio: https://www.weather.gov/nwr/
You can listen with a SDR, or any number of cheap radios.
If you live within the coverage area of an FM radio station that's sending weather radar, it will probably be easier to receive than NOAA satellites.
I'm a co-maintainer of this project. If anyone has questions, I'd be happy to answer them.
Would it be feasible to do something similar with OpenWRT opkg packages to support capturing weather radar (and weather forecasts and alerts?) data from digital FM radio with a USB RTL-SDR radio?
Python apps require a bunch of disk space, which is at a premium on low-wattage always-on routers.
OpenWRT's luci-app-statistics application supports rrdtool and collectd for archived stats over time (optionally on a USB stick or an SSD instead of the flash ROM of the router, which has a max lifetime in terms of number of writes) https://github.com/openwrt/luci/tree/master/applications/luc...
From https://news.ycombinator.com/item?id=38138230 :
> LuCI is the OpenWRT web UI which is written in Lua; which is now implemented mostly as a JSON-RPC API instead of with server-side HTML templates for usability and performance on embedded devices. [...] Notes on how to write a LuCI app in Lua:
It might be possible, but I'm not sure whether a typical router would have enough CPU horsepower to do the processing required to demodulate the signal.
What models of Raspberry Pi are sufficient, or how many MHz and how much RAM are necessary to demodulate an HD radio stream?
(Pi Pico, Pi Zero, and Pi A+/B+/2/3/4 have 2x20 pin headers for HATs. Orange Pi 5 Plus has hardware H.265 encoding with hw-enc and gstreamer fwiu.)
I haven't investigated the CPU and RAM requirements in depth, but I have used nrsc5 on a Pi 3B without issue.
I suspect a Pi Pico would be too small.
This is actually extremely useful when hooked up to a laptop for traveling, because of the embedded traffic information and maps in the sideband data.
If you're willing to share more on this subject, color me interested.
Some stations send out traffic and weather images (as well as album art and station logos). The files can be dumped to disk using nrsc5's "--dump-aas-files" option. A few people have built GUIs that display the information in a more convenient way:
https://github.com/cmnybo/nrsc5-gui https://github.com/markjfine/nrsc5-dui https://github.com/KYDronePilot/hdfm
In addition to files, some stations also send out "stream" and "packet" data. There is ongoing work to reverse engineer the formats. See the discussion here for details: https://github.com/theori-io/nrsc5/pull/308
Are there yet Clock, Weather Forecast, or Emergency Alert text data channels in digital FM radio?
FWIU there are also DVB data streams?
Time information is broadcast, but in my experience it's often inaccurate.
There's also a special stream for emergency alerts, but I haven't seen it in use.
There are various data streams, but not DVB.
A lot of the details are described in the standard: https://www.nrscstandards.org/standards-and-guidelines/docum...
DVB-T could technically carry clock, weather forecasts, and alerts as text data feeds.
What needs to be done to link WEA Wireless Emergency Alerts with HD radio data streams? WX radio could possibly embed a data channel? If it doesn't already for e.g. accessible captioning?
DVB-T: https://en.wikipedia.org/wiki/DVB-T :
> This system transmits compressed digital audio, digital video and other data in an MPEG transport stream, using coded orthogonal frequency-division multiplexing (COFDM or OFDM) modulation.
From https://www.rtl-sdr.com/about-rtl-sdr/ :
> The origins of RTL-SDR stem from mass produced DVB-T TV tuner dongles that were based on the RTL2832U chipset. [...]
> Over the years since its discovery RTL-SDR has become extremely popular and has democratized access to the radio spectrum. Now anyone including hobbyists on a budget can access the radio spectrum. It's worth noting that this sort of SDR capability would have cost hundreds or even thousands of dollars just a few years ago. The RTL-SDR is also sometimes referred to as RTL2832U, DVB-T SDR, DVB-T dongle, RTL dongle, or the "cheap software defined radio"
From https://www.reddit.com/r/RTLSDR/comments/6nsnqy/comment/dkbv... :
> [You need an upconverter to receive the time from the WWV shortwave clock station on 2.5, 5, 10, 15, and 20 MHz] http://www.nooelec.com/store/ham-it-up.html
From https://news.ycombinator.com/item?id=37712506 :
> TIL there's a regular heartbeat in the quantum foam; [...] https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
FCC wants to bolster amateur radio
From "WebSDR – Internet-connected Software-Defined Radios" (2023) https://news.ycombinator.com/item?id=38034417 :
> pipewire-screenaudio: https://github.com/IceDBorn/pipewire-screenaudio :
>> Extension to passthrough pipewire audio to WebRTC Screenshare
> awesome-amateur-radio#sdr https://github.com/mcaserta/awesome-amateur-radio#sdr
> The OpenWRT wiki lists a few different weather station apps that can retrieve, record chart, and publish weather data from various weather sensors and also from GPIO or SDR; pywws, weewx
> weewx: https://github.com/weewx/weewx
> A WebSDR LuCI app would be cool.
What are some other interesting applications for [digital] terrestrial radio (in service of bolstering support for amateur radio)?
What could K12cs "Q12" STEM science classes do to encourage learning of this and adjacent EM skills?
"Listen to HD radio with a $30 RTL SDR dongle" https://news.ycombinator.com/item?id=38157466
New centralized pollination portal for better global bee data creates a buzz
Is it possible to create a lawn weed killer (a broadleaf herbicide) that doesn't kill white dutch clover; because bees eat clover (and dandelions) and bees are essential?
"Tire dust makes up the majority of ocean microplastics" (2023) https://news.ycombinator.com/item?id=37728005 :
> "Rubber Made From Dandelions is Making Tires More Sustainable – Truly a Wondrous Plant" (2021) https://www.goodnewsnetwork.org/dandelions-produce-more-sust...
Electrical switching of the edge current chirality in quantum Hall insulators
"Electrical switching of the edge current chirality in quantum anomalous Hall insulators" (2023) https://www.nature.com/articles/s41563-023-01694-y :
> A quantum anomalous Hall (QAH) insulator is a topological phase in which the interior is insulating but electrical current flows along the edges of the sample in either a clockwise or counterclockwise direction, as dictated by the spontaneous magnetization orientation. Such a chiral edge current eliminates any backscattering, giving rise to quantized Hall resistance and zero longitudinal resistance. Here we fabricate mesoscopic QAH sandwich Hall bar devices and succeed in switching the edge current chirality through thermally assisted spin–orbit torque (SOT). The well-quantized QAH states before and after SOT switching with opposite edge current chiralities are demonstrated through four- and three-terminal measurements. We show that the SOT responsible for magnetization switching can be generated by both surface and bulk carriers. Our results further our understanding of the interplay between magnetism and topological states and usher in an easy and instantaneous method to manipulate the QAH state.
"Researchers Simplify Switching for Quantum Electronics" (2023) https://spectrum.ieee.org/amp/hall-effect-2666062907 :
> “Achieving instantaneous electrical control over the edge current chirality [direction] in QAH materials, without the need for sweeping the external magnetic field, is indispensable for the advancement of QAH-based computation and information technologies,” he said.
> [...] Finding ways to to exploit these dissipation-less “chiral edge currents,” as they are known, could have far-ranging applications in quantum metrology, spintronics, and topological quantum computing. The idea was given a boost by the discovery that thin films of magnetic materials exhibit similar behavior without the need for a strong external magnetic field—something known as the quantum anomalous Hall effect (QAH)—which makes building electronic devices that harness the phenomenon much more practical.
> One stumbling block has been that switching the direction of these edge currents—a crucial step in many information-processing tasks—could be done only by passing an external magnetic field over the material. Now, researchers at Penn State University have demonstrated for the first time that they can switch the direction by simply applying a pulse of current.
Quantum anomalous Hall effect: https://en.wikipedia.org/wiki/Quantum_anomalous_Hall_effect
Show HN: MicroLua – Lua for the RP2040 Microcontroller
MicroLua allows programming the RP2040 microcontroller in Lua. It packages the latest Lua interpreter with bindings for the Pico SDK and a cooperative threading library.
MicroLua is licensed under the MIT license.
I wanted to learn about Lua and about the RP2040 microcontroller. This is the result :)
OpenWRT's LuCI WebUI, Torch ML, and embedded interpreters in game engines are also written in Lua.
Apache Arrow's C GLib implementation works with Lua. From https://news.ycombinator.com/item?id=38103326 :
> Apache Arrow already supports C, C++, Python, Rust, Go and has C GLib support Lua: https://github.com/apache/arrow/tree/main/c_glib/example/lua
LearnXinYminutes Lua: https://learnxinyminutes.com/docs/lua/
OpenWRT is a Make-based Linux distro for embedded devices with limited RAM and flash ROM; it also targets x86 and Docker. OpenWRT is built on `uci` (with procd and ubusd instead of systemd and dbus). UCI is an /etc/config/* dotted.key=value configuration system; the sys-v-style /etc/init.d/* scripts managed by procd read values from it to regenerate their configuration when $1 is e.g. 'start', 'restart', or 'reload'. LuCI is the OpenWRT web UI, written in Lua; it is now implemented mostly as a JSON-RPC API instead of server-side HTML templates, for usability and performance on embedded devices.
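For a sense of the dotted.key=value scheme, here is a sketch of a UCI network config (the interface values are illustrative, not from any particular device):

```
# /etc/config/network (excerpt)
config interface 'lan'
        option device 'br-lan'
        option proto 'static'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'
```

The same values are addressable from the `uci` CLI as e.g. `uci get network.lan.ipaddr`, and changed with `uci set network.lan.ipaddr='192.168.2.1' && uci commit network` before reloading the service.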
Notes on how to write a LuCI app in Lua: https://github.com/x-wrt/luci/commit/73cda4f4a0115bb05bbd3d1...
applications/luci-app-example: https://github.com/openwrt/luci/tree/master/applications/luc...
openwrt/luci//docs: https://github.com/openwrt/luci/tree/master/docs
https://openwrt.org/supported_devices
It's probably impossible to build OpenWRT (and opkg packages) for an RP2040W.
Yes, it's impossible (I'm a dev on OpenWRT), because the build compiles a toolchain first, then the Linux kernel, drivers, and userland apps.
> probably impossible
https://github.com/raspberrypi/pico-sdk/ links to a PDF about connecting to the interwebs with a pi pico: "Connecting to the Internet with Raspberry Pi Pico W" https://rptl.io/picow-connect
micropython/micropython//ports/rp2/boards/RPI_PICO_W: https://github.com/micropython/micropython/tree/master/ports...
micropython/micropython//lib: https://github.com/micropython/micropython/blob/master/lib
micropython/micropython//examples/network/http_server_simplistic.py: https://github.com/micropython/micropython/blob/master/examp...
micropython/micropython//examples/network/http_server_ssl.py: https://github.com/micropython/micropython/blob/master/examp...
raspberrypi/pico-sdk//lib: btstack, cyw43-driver, lwip, mbedtls, tinyusb https://github.com/raspberrypi/pico-sdk/tree/master/lib
raspberrypi/pico-examples//pico_w/wifi/access_point/picow_access_point.c: https://github.com/raspberrypi/pico-examples/blob/master/pic...
There's an iperf opkg pkg, or is it just netperf (which works with the flent CLI and GUI)?
raspberrypi/pico-examples//pico_w/wifi/iperf/picow_iperf.c: https://github.com/raspberrypi/pico-examples/blob/master/pic...
raspberrypi/pico-examples//pico_w/wifi/freertos/iperf/picow_iperf.c: https://github.com/raspberrypi/pico-examples/blob/master/pic...
FreeRTOS > Process management: https://en.wikipedia.org/wiki/FreeRTOS#Process_management
elf2uf2: https://github.com/raspberrypi/pico-sdk/tree/master/tools/el...
adafruit/circuitpython//tests/micropython: https://github.com/adafruit/circuitpython/tree/main/tests/mi...
adafruit/circuitpython//tools: https://github.com/adafruit/circuitpython/tree/main/tools
adafruit/circuitpython//tools/cortex-m-fault-gdb.py: https://github.com/adafruit/circuitpython/blob/main/tools/co...
RP2040 > Features: 2x ARM Cortex-M0 https://en.wikipedia.org/wiki/RP2040#Features
Pix2tex: Using a ViT to convert images of equations into LaTeX code
From "STEM formulas" https://news.ycombinator.com/item?id=36839748 :
> latex2sympy parses LaTeX and generates SymPy symbolic CAS Python code (w/ ANTLR) and is now merged into SymPy core, but you must install ANTLR first because it's an optional dependency. Then, sympy.lambdify will compile a symbolic expression for use with JAX, TensorFlow, or PyTorch.
mamba install -c conda-forge sympy antlr # pytorch tensorflow jax # jupyterlab jupyter_console
https://news.ycombinator.com/item?id=36159017 : sympy.utilities.lambdify.lambdify() , sympytorch, sympy2jax
But then add tests! Tests for LaTeX equations that had never been executable as code.
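A minimal sketch of the parse-then-compile step. Hedged: sympy.parsing.latex.parse_latex needs the optional ANTLR dependency, so this uses sympify on SymPy's plain syntax instead, and the expression itself is made up:

```python
import sympy

x = sympy.symbols("x")

# With ANTLR installed this could be sympy.parsing.latex.parse_latex(r"x^2 + 3x + 1");
# sympify parses the equivalent plain-SymPy syntax without the extra dependency.
expr = sympy.sympify("x**2 + 3*x + 1")

# lambdify compiles the symbolic expression into a plain numeric function
# (modules="numpy" or third-party wrappers target NumPy/JAX/PyTorch instead).
f = sympy.lambdify(x, expr, modules="math")

print(f(2))  # 2**2 + 3*2 + 1 = 11
```

The compiled function is then an ordinary callable, which is exactly what makes the "add tests" step possible.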
There are a number of ways to generate tests for functions and methods with and without parameter and return types.
Property-based testing is one way to auto-generate test cases.
Property testing: https://en.wikipedia.org/wiki/Property_testing
awesome-python-testing#property-based-testing: https://github.com/cleder/awesome-python-testing#property-ba...
https://github.com/HypothesisWorks/hypothesis :
> Hypothesis is a family of testing libraries which let you write tests parametrized by a source of examples. A Hypothesis implementation then generates simple and comprehensible examples that make your tests fail. This simplifies writing your tests and makes them more powerful at the same time, by letting software automate the boring bits and do them to a higher standard than a human would, freeing you to focus on the higher level test logic.
> This sort of testing is often called "property-based testing", and the most widely known implementation of the concept is the Haskell library QuickCheck, but Hypothesis differs significantly from QuickCheck and is designed to fit idiomatically and easily into existing styles of testing that you are used to, with absolutely no familiarity with Haskell or functional programming needed.
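The core idea can be hand-rolled with only the standard library (Hypothesis adds input strategies, shrinking of failing examples, and a database of past failures); the function under test here is a stand-in:

```python
import random

def dedupe_preserving_order(xs):
    """Function under test: drop duplicates, keeping first occurrences in order."""
    seen = set()
    return [x for x in xs if not (x in seen or seen.add(x))]

# Properties that must hold for *any* input, checked over many random examples
rng = random.Random(0)
for _ in range(500):
    xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 30))]
    out = dedupe_preserving_order(xs)
    assert len(out) == len(set(out))   # no duplicates remain
    assert set(out) == set(xs)         # no elements lost or invented
```

The point is that the test states invariants ("properties") rather than enumerating input/output pairs by hand.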
Fuzzing is another way to auto-generate tests and test cases; by testing combinations of function parameters as a traversal through a combinatorial graph.
Fuzzing: https://en.wikipedia.org/wiki/Fuzzing
Google/atheris is based on libFuzzer: https://github.com/google/atheris
ClusterFuzz supports libFuzzer and AFL: https://google.github.io/clusterfuzz/setting-up-fuzzing/libf...
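The "combinations of function parameters" idea can be sketched with itertools.product; parse_flag is a made-up function under test, and the sweep checks an invariant over every parameter combination:

```python
import itertools

def parse_flag(value, strict=False, default=None):
    """Toy function under test: parse "true"/"false"-style strings."""
    if isinstance(value, str):
        v = value.strip().lower()
        if v in ("true", "1"):
            return True
        if v in ("false", "0"):
            return False
    if strict:
        raise ValueError(value)
    return default

# Exhaustively sweep combinations of parameters (a tiny combinatorial fuzz)
values = ["true", "FALSE", " 1 ", "yes", "", None, 42]
for value, strict, default in itertools.product(values, [False, True], [None, False]):
    try:
        result = parse_flag(value, strict=strict, default=default)
        assert result in (True, False, None)
    except ValueError:
        assert strict  # errors may only occur in strict mode
```

Coverage-guided fuzzers like libFuzzer/AFL replace the exhaustive product with mutation guided by code-coverage feedback, which scales to much larger input spaces.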
"Show HN: BetterOCR combines and corrects multiple OCR engines with an LLM" https://news.ycombinator.com/context?id=38056243
Google Play rolls out an "Independent security review" badge for apps
https://www.bleepingcomputer.com/news/security/google-play-a... :
> Specifically, that standard is MASA (Mobile App Security Assessment), which was introduced last year as an initiative of the App Defense Alliance (ADA) to define a concrete set of requirements for mobile app security.
> The requirements concern data storage and data privacy practices, cryptography, authentication and session management, network communication, platform interaction, and code quality.
App Defense Alliance > Mobile Application Security Assessment: https://appdefensealliance.dev/masa
OWASP Mobile Application Security: https://mas.owasp.org/ :
> The OWASP Mobile Application Security (MAS) flagship project provides a security standard for mobile apps (OWASP MASVS) and a comprehensive testing guide (OWASP MASTG) that covers the processes, techniques, and tools used during a mobile app security test, as well as an exhaustive set of test cases
OWASP MAS Checklist .xlsx: https://github.com/OWASP/owasp-mastg/releases/latest/downloa...
OWASP/owasp-mastg: https://github.com/OWASP/owasp-mastg :
> The Mobile Application Security Testing Guide (MASTG) is a comprehensive manual for mobile app security testing and reverse engineering. It describes the technical processes for verifying the controls listed in the OWASP Mobile Application Security Verification Standard (MASVS).
OWASP/owasp-masvs: https://github.com/OWASP/owasp-masvs :
> The OWASP MASVS (Mobile Application Security Verification Standard) is the industry standard for mobile app security.
> MSTG-CODE-3
> Debugging symbols have been removed from native binaries
What's wrong with debugging symbols, from a security perspective? What if you desire maximum inspectability of the internals?
Other than that, these guidelines seem great.
It could be that it's easier to reverse (closed source) apps (with unreviewed code after each DevSecOps Pull Request with Changelog entry) with debugging symbols.
gdb on Fedora auto-installs signed debuginfo packages with debug symbols; Fedora hosts a debuginfod server for their packages (which are built by Koji) and sets `DEBUGINFOD_URLS=https://debuginfod.fedoraproject.org/ ` : https://fedoraproject.org/wiki/Debuginfod https://fedoraproject.org/wiki/Changes/DebuginfodByDefault#S...
Without debug symbols, a debugger has to read unlabeled ASM instructions (or VM opcodes (or an LL IR)).
From "Show HN: Tetris, but the blocks are ARM instructions that execute in the browser" https://news.ycombinator.com/item?id=37086102 :
> "Ask HN: How did you learn x86-64 assembly?" (2020) re: HLA, : https://news.ycombinator.com/item?id=23931373 re: the Diaphora bindiff tool and ghidra, which has GDB support now FWIU: https://news.ycombinator.com/item?id=36454485
"Show HN: Ghidra Plays Mario" the NES ROM. https://news.ycombinator.com/item?id=37475761
In the context of Linux distros like Fedora -- How does having separated debug symbols improve the security posture, though? Sure, it adds another step to exploitation, but not an insurmountable one.
(It seems like that DebuginfodByDefault FAQ is discussing the security implications of the debugger's parsing of debug symbols itself, rather than of the security of shipping debug symbols in the first place.)
Independently security tested apps on Google Play
From "Google Play rolls out an "Independent security review" badge for apps" https://news.ycombinator.com/item?id=38134841 , which covers the same MASA (Mobile App Security Assessment), App Defense Alliance, and OWASP MAS/MASVS/MASTG links quoted above.
Scientists Develop Micro Heat Engine That Challenges the Carnot Limit
"Overcoming power-efficiency tradeoff in a micro heat engine by engineered system-bath interactions" (2023) https://www.nature.com/articles/s41467-023-42350-y :
> Abstract: All real heat engines, be it conventional macro engines or colloidal and atomic micro engines, inevitably tradeoff efficiency in their pursuit to maximize power. This basic postulate of finite-time thermodynamics has been the bane of all engine design for over two centuries and all optimal protocols implemented hitherto could at best minimize only the loss in the efficiency. The absence of a protocol that allows engines to overcome this limitation has prompted theoretical studies to suggest universality of the postulate in both passive and active engines. Here, we experimentally overcome the power-efficiency tradeoff in a colloidal Stirling engine by selectively reducing relaxation times over only the isochoric processes using system bath interactions generated by electrophoretic noise. Our approach opens a window of cycle times where the tradeoff is reversed and enables the engine to surpass even their quasistatic efficiency. Our strategies finally cut loose engine design from fundamental restrictions and pave way for the development of more efficient and powerful engines and devices.
Electrophoresis > Theory: https://en.wikipedia.org/wiki/Electrophoresis#Theory
Quasistatic process: https://en.wikipedia.org/wiki/Quasistatic_process :
> In thermodynamics, a quasi-static process (also known as a quasi-equilibrium process. Latin quasi, meaning ‘as if’ [1]), is a thermodynamic process that happens slowly enough for the system to remain in internal physical (but not necessarily chemical) thermodynamic equilibrium. An example of this is quasi-static expansion of a mixture of hydrogen and oxygen gas, where the volume of the system changes so slowly that the pressure remains uniform throughout the system at each instant of time during the process.[2] Such an idealized process is a succession of physical equilibrium states, characterized by infinite slowness. [3]
> Only in a quasi-static thermodynamic process we can exactly define intensive quantities (such as pressure, temperature, specific volume, specific entropy) of the system at every instant during the whole process; otherwise, since no internal equilibrium is established, different parts of the system would have different values of these quantities, so a single value per quantity may not be sufficient to represent the whole system. In other words, when an equation for a change in a state function contains P or T, it implies a quasi-static process.
Show HN: GitInsights – a weekly summary email of your team's GitHub activity
Do BigQuery queries of the GH Archive GitHub activity data still work? https://www.gharchive.org/#resources https://github.com/igrigorik/gharchive.org
GitHub itself should provide a BigQuery-like query interface for API events.
/? "github archive" inurl:awesome site:github.com https://www.google.com/search?q=%22github+archive%22+inurl%3...
Podman Desktop v1.5 with Compose onboarding and enhanced Kubernetes pod data
Glad to see usability improvements here, but I won't be using it until there's a light theme (asking an application to respect the OS theme is apparently too much these days).
"Light mode button?" https://github.com/containers/podman-desktop/issues/576
Scientists discover new system to control the chaotic behavior of light
"Coherent control of chaotic optical microcavity with reflectionless scattering modes" (2023) https://www.nature.com/articles/s41567-023-02242-w :
> Abstract: Non-Hermitian wave engineering has attracted a surge of interest in photonics in recent years. Prominent non-Hermitian phenomena include coherent perfect absorption and its generalization, reflectionless scattering modes, in which electromagnetic scattering at the input ports is suppressed due to critical coupling with the power leaked to output ports, and interference phenomena. These concepts are ideally suited to enable real-time dynamic control over absorption, scattering and radiation. Nonetheless, reflectionless scattering modes have not been observed in complex photonic platforms involving open systems and multiple inputs. Here we demonstrate the emergence of reflectionless scattering modes in a chaotic photonic microcavity involving over a thousand optical modes. We model the optical fields in a silicon stadium microcavity within a quasi-normal mode expansion, which is able to capture a dense family of reflection zeros at the input ports, associated with reflectionless scattering modes. We observe non-Hermitian degeneracies of reflectionless scattering modes in the telecommunication wavelength band, enabling efficient dynamic control over light radiation from the cavity.
Researchers develop solid-state thermal transistor for better heat management
"Electrically gated molecular thermal switch" (2023) https://www.science.org/doi/10.1126/science.abo4297 :
> Abstract: Controlling heat flow is a key challenge for applications ranging from thermal management in electronics to energy systems, industrial processing, and thermal therapy. However, progress has generally been limited by slow response times and low tunability in thermal conductance. In this work, we demonstrate an electronically gated solid-state thermal switch using self-assembled molecular junctions to achieve excellent performance at room temperature. In this three-terminal device, heat flow is continuously and reversibly modulated by an electric field through carefully controlled chemical bonding and charge distributions within the molecular interface. The devices have ultrahigh switching speeds above 1 megahertz, have on/off ratios in thermal conductance greater than 1300%, and can be switched more than 1 million times. We anticipate that these advances will generate opportunities in molecular engineering for thermal management systems and thermal circuit design.
From "First Law of Thermodynamics Breakthrough Upends Equilibrium Theory in Physics" https://news.ycombinator.com/item?id=34967195 :
> - "Thermodynamics of Computation Wiki" https://news.ycombinator.com/item?id=18146854
What do we mean by "the foundations of mathematics"?
https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t... :
> Today, Zermelo–Fraenkel set theory [ZFC], with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics.
Foundation of mathematics: https://en.wikipedia.org/wiki/Foundations_of_mathematics
Implementation of mathematics in set theory: https://en.wikipedia.org/wiki/Implementation_of_mathematics_... :
> The implementation of a number of basic mathematical concepts is carried out in parallel in ZFC (the dominant set theory) and in NFU, the version of Quine's New Foundations shown to be consistent by R. B. Jensen in 1969 (here understood to include at least axioms of Infinity and Choice).
> What is said here applies also to two families of set theories: on the one hand, a range of theories including Zermelo set theory near the lower end of the scale and going up to ZFC extended with large cardinal hypotheses such as "there is a measurable cardinal"; and on the other hand a hierarchy of extensions of NFU which is surveyed in the New Foundations article. These correspond to different general views of what the set-theoretical universe is like
IEEE-754 specifies that float64s have ±infinity and defines division by zero to yield them; Python raises ZeroDivisionError instead. Symbolic CAS with MPFR needn't be limited to float64s.
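A quick illustration of the gap between the IEEE-754 rule and Python's behavior:

```python
import math

# IEEE-754 defines 1.0/0.0 as +infinity, but Python raises instead:
try:
    1.0 / 0.0
except ZeroDivisionError:
    print("Python raises ZeroDivisionError for float division by zero")

# The IEEE-754 infinities themselves are fully representable:
inf = float("inf")
assert inf == math.inf
assert inf > 1.7976931348623157e308   # larger than the largest finite float64
assert math.isinf(-inf) and -inf < 0
assert inf + 1 == inf                 # arithmetic saturates at infinity
```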
HoTT in CoQ: Coq-HoTT: https://github.com/HoTT/Coq-HoTT
What is the relation between Coq-HoTT & Homotopy Type Theory and Set Theory with e.g. ZFC?
Homotopy Type Theory: https://en.wikipedia.org/wiki/Homotopy_type_theory :
> "Cartesian Cubical Computational Type Theory: Constructive Reasoning with Paths and Equalities" (2018)
https://scholar.google.com/scholar?cites=9763697449338277760... and sorted by date: https://scholar.google.com/scholar?hl=en&as_sdt=5,43&sciodt=...
What does Mathlib have for SetTheory, ZFC, NFU, and HoTT?
leanprover-community/mathlib4// Mathlib/SetTheory: https://github.com/leanprover-community/mathlib4/tree/master...
Homotopy type theory is a constructive one. It drops the assumption that any given proposition is either true or false, on the grounds that lots of propositions are undecidable and that's fine. A consequence of dropping the law of the excluded middle (the p or ~p one) is no axiom of choice, so ZFC isn't going to be compatible with it, but ZF probably is.
Personally I'm convinced constructive maths is the right choice and will cheerfully abandon whatever wonders the axiom of choice provides to mathematics, because I'm not a mathematician and I care about computable functions.
From https://news.ycombinator.com/item?id=37430759#37441973 :
> Do any existing CAS systems have configurable axioms? OTOH: Conway's surreal infinities; do not eliminate terms next to infinity too early; each instance of infinity might need a unique identity; configurable order of operations;
Optional Axiom of Choice;
Complex wave functions: https://en.wikipedia.org/wiki/Wave_function ;
Quantum logic;
https://news.ycombinator.com/item?id=37367951#37379123 :
> [...] is it ever shown that Quantum Logic is indeed the correct and sufficient logic for propositional calculus and also for all physical systems?
( Quantum statistical mechanics: https://en.wikipedia.org/wiki/Quantum_statistical_mechanics )
Quantum logic > Relationship to other logics: https://en.wikipedia.org/wiki/Quantum_logic#Relationship_to_...
W3C Invites Implementations of RDF Dataset Canonicalization
Correct me if I'm wrong, but AIUI graph canonicalization is NP-hard. (Graph isomorphism has been found to be likely lower in complexity, but canonicalization is not the same problem as graph isomorphism.) Given blank nodes, the problem of finding a canonical form for RDF is at least as hard as graph canonicalization. So this is a non-starter for general RDF graphs; you'd have to severely restrict the use of blank nodes to make this practical, and the way you do this will likely be application-dependent.
"W3C RDF Dataset Canonicalization: A Standard RDF Dataset Canonicalization Algorithm" > "4.3 Blank Node Identifier Issuer State" https://w3c-ccg.github.io/rdf-dataset-canonicalization/spec/...
IIRC ld-signatures (-> ld-proofs -> Data Integrity) specified URDNA many years ago?
If you search for “isomorphism” in the doc, they state that most real-world cases will have unique IDs for nodes, which makes this efficient in practice, and that implementations can detect when a case is difficult (and likely abort).
https://www.w3.org/wiki/BnodeSkolemization
rdf-concepts/#section-blank-nodes: https://www.w3.org/TR/rdf-concepts/#section-blank-nodes
Rdflib docs on Skolemization: https://rdflib.readthedocs.io/en/stable/persisting_n3_terms.... :
> Skolemization is a syntactic transformation routinely used in automatic inference systems in which existential variables are replaced by ‘new’ functions - function names not used elsewhere - applied to any enclosing universal variables. In RDF, Skolemization amounts to replacing every blank node in a graph by a ‘new’ name, i.e. a URI reference which is guaranteed to not occur anywhere else. In effect, it gives ‘arbitrary’ names to the anonymous entities whose existence was asserted by the use of blank nodes: the arbitrariness of the names ensures that nothing can be inferred that would not follow from the bare assertion of existence represented by the blank node.
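A tiny sketch of Skolemization over blank-node labels. The base IRI follows RDF 1.1's suggested .well-known/genid scheme, with example.org as a placeholder authority:

```python
import uuid

def skolemize(label_to_iri, bnode_label,
              base="https://example.org/.well-known/genid/"):
    """Map a blank-node label to a fresh, globally unique IRI (Skolemization)."""
    if bnode_label not in label_to_iri:
        label_to_iri[bnode_label] = base + uuid.uuid4().hex
    return label_to_iri[bnode_label]

mapping = {}
iri1 = skolemize(mapping, "_:b0")
iri2 = skolemize(mapping, "_:b0")   # same label within a graph -> same IRI
iri3 = skolemize(mapping, "_:b1")   # different label -> fresh IRI
assert iri1 == iri2 and iri1 != iri3
```

Because the generated names occur nowhere else, nothing can be inferred from them beyond the bare existence the blank node already asserted.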
SPARQL has UUID() and BNODE() functions: https://www.w3.org/TR/sparql11-query/#func-bnode :
> The BNODE function constructs a blank node that is distinct from all blank nodes in the dataset being queried and distinct from all blank nodes created by calls to this constructor for other query solutions. If the no argument form is used, every call results in a distinct blank node. If the form with a simple literal is used, every call results in distinct blank nodes for different simple literals, and the same blank node for calls with the same simple literal within expressions for one solution mapping.
Blank node: https://en.wikipedia.org/wiki/Blank_node
JSON is incredibly slow: Here's What's Faster
https://news.ycombinator.com/item?id=33410198 :
> Arrow Flight RPC (and Arrow Flight SQL, a faster alternative to ODBC/JDBC) are based on gRPC and protobufs: https://arrow.apache.org/docs/format/Flight.html
Powered by Apache Arrow in JS: https://arrow.apache.org/docs/js/index.html#md:powered-by-ap...
Protobufs, Thrift, or FlatBuffers?
simdjson: https://news.ycombinator.com/item?id=37805810#37808036
Light can make water evaporate without heat
"Plausible photomolecular effect leading to water evaporation exceeding the thermal limit" (2023) https://www.pnas.org/doi/abs/10.1073/pnas.2312751120 :
> Abstract: We report in this work several unexpected experimental observations on evaporation from hydrogels under visible light illumination. 1) Partially wetted hydrogels become absorbing in the visible spectral range, where the absorption by both the water and the hydrogel materials is negligible. 2) Illumination of hydrogel under solar or visible-spectrum light-emitting diode leads to evaporation rates exceeding the thermal evaporation limit, even in hydrogels without additional absorbers. 3) The evaporation rates are wavelength dependent, peaking at 520 nm. 4) Temperature of the vapor phase becomes cooler under light illumination and shows a flat region due to breaking-up of the clusters that saturates air. And 5) vapor phase transmission spectra under light show new features and peak shifts. We interpret these observations by introducing the hypothesis that photons in the visible spectrum can cleave water clusters off surfaces due to large electrical field gradients and quadrupole force on molecular clusters. We call the light-induced evaporation process the photomolecular effect. The photomolecular evaporation might be happening widely in nature, potentially impacting climate and plants’ growth, and can be exploited for clean water and energy technologies.
Can low-cost integrated photonics help with e.g. water desalination and sterilization? #Goal6 #CleanWater
> Under certain conditions, at the interface where water meets air, light can directly bring about evaporation without the need for heat, and it actually does so even more efficiently than heat. In these experiments, the water was held in a hydrogel material, but the researchers suggest that the phenomenon may occur under other conditions as well.
Various methods of integrated photonics with various production costs: https://news.ycombinator.com/context?id=38056088
Cave: A downloadable game engine for windows
You really want a higher-performance language than Python for things like games. C# is a decent compromise, since you can do some tricks to avoid garbage collection, but Python? It'd have to be a situation like sklearn and numpy, where all the actual functionality is implemented in C/C++ and you're just using it as a glue language to coordinate those pieces; at that point it's basically C/C++ already, with Python acting like Unreal Blueprints or something.
Even if you managed a good framerate, would you get good uniform frame pacing?
It's not that it's impossible, I'm just skeptical
> It'd have to be a situation like sklearn and numpy where all the actual functionality is implemented in c/c++ and you're just using it as a glue language to coordinate those pieces
You've just described most game engines that use a "scripting language" - even those that use C# do it this way (barring maybe one exception I can think of)
https://www.reddit.com/r/O3DE/comments/rdvxhx/why_python/ :
> Python is used for scripting the editor only, not in-game behaviors.
> For implementing entity behaviors the only out of box ways are C++, ScriptCanvas (visual scripting) or Lua. Python is currently not available for implementing game logic.
C++, Lua, and Python all have C foreign function interfaces (FFI) for cross-language function and method calls.
"Using CFFI for embedding" https://cffi.readthedocs.io/en/latest/embedding.html :
> You can use CFFI to generate C code which exports the API of your choice to any C application that wants to link with this C code. This API, which you define yourself, ends up as the API of a .so/.dll/.dylib library—or you can statically link it within a larger application.
Apache Arrow already supports C, C++, Python, Rust, Go and has C GLib support Lua:
https://github.com/apache/arrow/tree/main/c_glib/example/lua :
> Arrow Lua example: All example codes use LGI to use Arrow GLib based bindings
pyarrow.from_numpy_dtype: https://arrow.apache.org/docs/python/generated/pyarrow.from_...
https://github.com/scikit-learn-contrib/sklearn-pandas :
> Sklearn-pandas: This module provides a bridge between Scikit-Learn's machine learning methods and pandas-style Data Frames. In particular, it provides a way to map DataFrame columns to transformations, which are later recombined into features.
Pandas docs > PyArrow > I/O https://pandas.pydata.org/docs/user_guide/pyarrow.html#i-o-r... :
> By default, these functions and all other IO reader functions return NumPy-backed data. These readers can return PyArrow-backed data by specifying the parameter dtype_backend="pyarrow"
> [...] Several non-IO reader functions can also use the dtype_backend argument to return PyArrow-backed data including: to_numeric() , DataFrame.convert_dtypes() , Series.convert_dtypes()
df = pandas.read_csv(".csv", dtype_backend="pyarrow")
sx = pandas.Series([1.0,2.0], dtype="float32[pyarrow]")
Vim Racer – VI Keyboard Skillz Game
BUG,UBY: Responsive CSS (for a mobile device with a keyboard)
vim/vim//runtime/tutor/tutor: https://github.com/vim/vim/blob/master/runtime/tutor/tutor
vimtutor
One tip I finally learned for learning keyboard shortcuts and vim mappings: write them down by hand
Engineers develop a process to make formate fuel from CO2
The headline number of 96% (in the original news release) is very misleading. What matters is not the fraction of gaseous carbon that ends up in the fuel (the yield), it's the energetic efficiency of the process. I.e., nobody will spend 100 Joules of energy to store, say, 1 Joule of energy. The paper says their process has low energy efficiency (low enough that they don't seem to want to say the number)[1]:
> ...rather than relying on the CO2(gas) feedstock, bipolar membrane (BPM) electrolyzers with aqueous bicarbonate HCO3−(aq) input were demonstrated (Figure S2b). In principle, the energy-intensive CO2 regeneration process could be circumvented. In practice, however, the BPM induces a large overpotential, causing *low energy efficiency*, and it still suffers from CO2 escape, low FE, low yield, and low operation lifetime. [** emphasis added]
[1] https://www.cell.com/cell-reports-physical-science/fulltext/...
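To make the carbon-vs-energy distinction concrete: a common first-order estimate multiplies faradaic efficiency by the ratio of equilibrium to operating cell voltage. The numbers below are illustrative only (the 96% figure is a carbon efficiency, used here as if it were a faradaic efficiency, and the ~1.4 V equilibrium full-cell voltage is an assumption); only the 3.1 V operating voltage comes from the abstract:

```python
def energy_efficiency(faradaic_eff, equilibrium_voltage, operating_voltage):
    """First-order electrolyzer energy efficiency:
    fraction of charge ending in product x voltage efficiency."""
    return faradaic_eff * equilibrium_voltage / operating_voltage

# Illustrative numbers: 0.96 (stand-in faradaic efficiency),
# ~1.4 V hypothetical equilibrium voltage, 3.1 V operating (from the abstract)
eff = energy_efficiency(0.96, 1.4, 3.1)
print(f"{eff:.0%}")   # roughly 43%: far below the 96% headline number
```

Even with generous assumptions, the energy efficiency lands well under half, which is why the 96% yield figure alone says little about the process as an energy store.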
I added the 96% to the title here on HN intentionally, because it says here in the abstract "We convert highly concentrated bicarbonate solution to solid formate fuel with a yield (carbon efficiency) of greater than 96%"
Neither MIT Press nor the ScholarlyArticle authors asked or paid me to modify the headline to cite the carbon efficiency.
This capability is arguably better than everything else we do for energy; it is 96% carbon efficient.
Other stats for comparison of such methods are proposed herein?
Exactly. You need a lot of energy to upgrade CO2 to something with higher enthalpy (fuel).
Not sure why so much money has been spent on these thermodynamic dead ends.
The use case for all of these CO2 reduction methods that seems particularly interesting is if you capture the CO2 inevitably produced as a byproduct of cement production and then convert it into useful plastics. You avoid air capture and you get something that's good for more than just burning.
We have so much plastic, I almost wonder if it would be easy enough to burn it, capture the CO2, and convert that into new plastic.
https://news.ycombinator.com/item?id=37886975 :
> "Synthesis of Clean Hydrogen Gas [and Graphene] from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763
"A carbon-efficient bicarbonate electrolyzer" (2023) https://www.cell.com/cell-reports-physical-science/fulltext/... :
> Carbon efficiency is one of the most pressing problems of carbon dioxide electroreduction today. While there have been studies on anion exchange membrane electrolyzers with carbon dioxide (gas) and bipolar membrane electrolyzers with bicarbonate (aqueous) feedstocks, both suffer from low carbon efficiency. In anion exchange membrane electrolyzers, this is due to carbonate anion crossover, whereas in bipolar membrane electrolyzers, the exsolution of carbon dioxide (gas) from the bicarbonate solution is the culprit. Here, we first elucidate the root cause of the low carbon efficiency of liquid bicarbonate electrolyzers with thermodynamic calculations and then achieve carbon-efficient carbon dioxide electroreduction by adopting a near-neutral-pH cation exchange membrane, a glass fiber intermediate layer, and carbon dioxide (gas) partial pressure management. We convert highly concentrated bicarbonate solution to solid formate fuel with a yield (carbon efficiency) of greater than 96%. A device test is demonstrated at 100 mA cm−2 with a full-cell voltage of 3.1 V for over 200 h.
"Aluminum formate Al(HCOO)3: Earth-abundant, scalable, & material for CO2 capture" (2022) https://news.ycombinator.com/item?id=33501182
Sigh. University PR departments are about selling IP, getting grant$ to fund their toys, and sometimes attracting PIs and students to do the work based on the university's reputation.
Show HN: Anchor – developer-friendly private CAs for internal TLS
Hi HN! I'm Ben, co-founder of Anchor (https://anchor.dev/). Anchor is a hosted service for ACME powered internal X.509 CAs. We recently launched our features & tooling for local development. The goal is to make it easy and toil-free to develop locally with HTTPS, and also provide dev/prod parity for TLS/HTTPS encryption.
You can add Anchor to your development workflow in minutes. Here's how:
- https://blog.anchor.dev/getting-started-with-anchor-for-loca...
- https://blog.anchor.dev/service-to-service-tls-in-developmen...
We started Anchor because private CAs were a constant source of frustration throughout our careers. Avoiding them makes it all the more painful when you're finally forced to use one. The release of ACME and Let's Encrypt was a big step forward in certificate provisioning, but the improvements have been almost entirely in the WebPKI and public CA space. Internal TLS is still as unpleasant & painful to use as it has been for the past 20 years. So we've built Anchor to be a developer-friendly way to set up internal TLS that fully leverages the benefits of ACME:
- no encryption experience or X.509 knowledge required
- automatically generated system and language packages to manage client trust stores
- ACME (RFC 8555) compliant API, broad language/tooling support for cert provisioning
- fully hosted, no services or infra requirements
- works the same in all deployment environments, including development
If you're interested in more specific details and strategy, our blog posts cover all this and more: https://blog.anchor.dev/
We are asking for feedback on our features for local development, and would like to hear your thoughts & questions. Many thanks!
What advantages over say, smallstep/certificates, letsencrypt/boulder, django-ca, square/certstrap, or hashicorp/vault (and e.g. OpenWRT's luci-app-acme ACMEv2 GUI) does Anchor offer?
https://github.com/topics/acme
applications/luci-app-acme/htdocs/luci-static/resources/view/acme.js: https://github.com/openwrt/luci/blob/master/applications/luc...
https://openwrt.org/docs/guide-user/services/tls/acmesh
https://developer.hashicorp.com/vault/tutorials/secrets-mana... https://github.com/hashicorp/vault :
> Refer to Build Certificate Authority (CA) in Vault with an offline Root for an example of using a root CA external to Vault.
This is a managed SaaS solution, not self-hosted software like the ones listed. We're more akin to one of the certificate management products in cloud providers, but our target users are not security experts with prior PKI/X.509 deployment experience. We're building Anchor for developers who want or need TLS/HTTPS, but don't want the headache & toil of manually setting up & running an internal CA and the extra infra that goes with it.
DIY IP-KVM Based on Raspberry Pi
All the use cases I have for one of these would need VGA sadly.
<$10 adapter
Server VGA output to HDMI capture input? Needs an active converter in that direction, not a passive pinout-changing one.
/? db9 to USB https://www.google.com/search?q=db9+to+USB
(RS-232, Serial port, UART, USB-TTL, USB-to-serial adapter: https://news.ycombinator.com/item?id=35120757 )
/? VGA capture card https://www.google.com/search?q=vga+capture+card
/? HDMI capture card USB-A Linux https://www.google.com/search?q=hdmi+capture+card+USB-A+linu...
--
Intel ME Management Engine, Intel AMT, Intel vPro (ME+AMT), AMD Pro; VNC to boot-time BIOS configuration
Intel AMT: https://en.wikipedia.org/wiki/Intel_Active_Management_Techno... :
> AMT is designed into a service processor located on the motherboard, and uses TLS-secured communication and strong encryption to provide additional security.[6] AMT is built into PCs with Intel vPro technology and is based on the Intel Management Engine (ME).[6] AMT has moved towards increasing support for DMTF Desktop and mobile Architecture for System Hardware (DASH) standards and AMT Release 5.1 and later releases are an implementation of DASH version 1.0/1.1 standards for out-of-band management.[7] AMT provides similar functionality to IPMI, although AMT is designed for client computing systems as compared with the typically server-based IPMI.
https://www.reddit.com/r/linux/comments/s41a17/in_2017_amd_p... :
> To understand what PSP and ME can do you need to understand CPU security rings. https://en.wikipedia.org/wiki/Protection_ring . These are different levels of privilege software has on the system.
> Originally the Linux kernel occupies ring 0, which is the level with highest privilege. Then user space would run in some other ring.
> However, with things like PSP and ME they have created rings lower than ring 0, like ring -1 and -2. Intel's Minix-based ME runs in ring -3.
vPro supports VNC over TLS.
coreboot and Libreboot don't support PQ SSH, TLS, VNC, or DASH; https://en.wikipedia.org/wiki/Libreboot, https://en.wikipedia.org/wiki/Coreboot
DASH: Desktop and Mobile Architecture for System Hardware https://www.dmtf.org/standards/dash :
> DASH provides support for the redirection of KVM (Keyboard, Video and Mouse) and text consoles, as well as USB and media, and supports the management of software updates, BIOS (Basic Input Output System), batteries, NIC (Network Interface Card), MAC and IP addresses, as well as DNS and DHCP configuration.
Light and gravitational waves don't arrive simultaneously
My guess...
In the mathematics of the physics, gravity is a scalar field. We don't really know what gravity is, we just have various descriptions that seem to be useful at various levels of detail. So when folks talk about gravity waves, in the maths it is a pulse traversing a scalar field, and we don't really know what is happening to the space or whatever it is that makes the pulse possible. Is space actually stretchy like elastic? Unknown. But it means that if there is no other potential field that can "bend" a moving gravitational wave's trajectory then gravitational waves will travel in absolutely straight lines.
Light, on the other hand, moves through the potentials generated by the gravity field and is governed by its energy, sort of, to follow geodesics of the gravity field. So in GR light takes the curvy path around objects with gravitational potentials around them. The article doesn't mention if there were any influential masses along the journey.
1.7 secs is about 320,000 miles. Maybe the curves?
> In the mathematics of the physics, gravity is a scalar field.
No, it's a tensor field.
Or only for the things that experience gravity as a curved surface?
Gravity, arguably, does not experience gravity as a curved surface.
How would you go about arguing that?
"Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
TL;DR: In SQS (Superfluid Quantum Space), quantum gravity has fluid vortices governed by the Gross-Pitaevskii and Bernoulli equations, and IIUC thus also Navier-Stokes; so quantum CFD (Computational Fluid Dynamics).
That article describes a hypothetical microphysical model of gravity. It's an old idea which has been done better by others (see e.g. "The Universe in a Helium Droplet" [1]). Whether right or wrong, it has no bearing on your claim that
> Gravity, arguably, does not experience gravity as a curved surface.
Any valid microphysical model of gravity must be able to reproduce the successes of general relativity in the classical limit, including the ability to match the shape of gravitational waves produced by black hole mergers. So if you want to argue that gravity "does not experience gravity as a curved surface", you have two options:
1) show that the non-linear (i.e. self-interaction) terms of Einstein's equations do not involve curvature or
2) come up with an alternative theory of gravity which does not reduce to general relativity in the classical limit and yet manages to reproduce all its successful predictions.
Which one is it?
[1] (2009) Does not disprove hydrodynamic SQS theories of quantum gravity.
A table of predictive error per experiment,{parameters},model might help us understand.
SQS purports to describe black hole internal topology where others do not. GR does not describe the internal topology of merging black hole vortices. Various theories of Quantum Gravity (QG) attempt to reconcile Dirac's pre-"Dirac sea" antimatter claims.
A unified model must: differ from classical mechanics where observational results don't match classical predictions, describe superfluid helium-3 in a beaker, describe gravity in Bose-Einstein condensate superfluids, describe conductivity in superconductors and dielectrics, not introduce unobserved "annihilation", explain how helicopters have lift, describe quantum locking, describe paths through fluids and gravity, predict n-body gravity experiments on Earth in fluids with Bernoulli's and in space, [...]
What else must a unified model of gravity and other forces predict with low error?
Why do I like Fedi's (2015/2016)? IDK. Maybe it's the abstract, maybe it's that nothing else even tries to do fluids and Bernoulli's. N-body gravity solutions with fluid vortices should predict all existing numerical n-body outcomes?
And so many things in space look fluidic: how many spiral arms are there on a nebula? All existing visual representations of black holes look like fluids, and merging neutron stars look like emergent patterns from curl, too.
Somehow I doubt anyone has yet left an evolutionary algorithm running even overnight to mutate and cross over the expression tree(s) to minimize predictive error against existing experimental observations.
Rydberg Field Measurement System
This sounds like something from vxjunkies.
It's hard to understand what this is actually for. Does anyone have an ELI5 for this?
Info on a Rydberg atom is on wikipedia
It's basically a fancy receive-only radio antenna that can be tuned to any frequency.
Normal antennas are great, but it's very difficult to make them work over more than one "octave", e.g., 1-2 GHz or 2-4 GHz. A single Rydberg sensor can tune from 1-40 GHz no problem.
The Rydberg sensing element is a glass tube filled with gas. You use one laser to excite the gas into a state that's sensitive to radio waves. You query the sensor with a second laser, whose wavelength sets the radio carrier frequency of interest.
You can set it up as a spectrum analyzer or even coherently demodulate an IF or I/Q signal. But the fundamental advantage is tunability.
[Why] is it receive-only?
What is the difference between a Rydberg atom, an Ion trap, and a Quantum Dot?
From "Distorted crystals use 'pseudogravity' to bend light like black holes do" (2023) https://news.ycombinator.com/item?id=38009426 :
> "Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision" (2023) https://www.science.org/doi/10.1126/sciadv.adh8508
From https://news.ycombinator.com/item?id=38032257#38051252 re: Rydberg antennae :
> How many 4 or 2 MHz antenna/transceivers are necessary to sufficiently cover a 0 to 1 GHz (1000 MHz) band? What about to cover a 1 THz (terahertz; 1,000,000 MHz) band?
> How should they be placed in spacetime to minimize signal loss e.g due to crosstalk and cost?
A lot to unpack here...
* Why is it receive-only?
Like an antenna, a Rydberg sensor can detect incoming radio signals. Otherwise, the underlying physics are completely different. If there's a way to make it transmit, I've never heard of it.
* What is the difference between a Rydberg atom, an Ion trap, and a Quantum Dot?
These are all unrelated devices with different underlying principles.
* How many 4 or 2 Mhz antenna/transceivers are necessary to sufficiently cover a 0 to 1 Ghz band? What about to cover 1 Thz: (terahertz) band(s)?
It depends what you mean by "cover".
Rydberg sensors are tunable over a huge range, typically quoted up to 20-40 GHz. So one sensor would "cover" the whole range if you're willing to wait.
(I don't know what currently sets that upper limit, but what are you doing that needs more than 40 GHz? Electronics capable of handling such frequencies are rare, specialized, and expensive.)
That said, the instantaneous bandwidth of a Rydberg sensor is low, typically around 1 MHz. So if you want to "cover" the whole 0-1 GHz range all at once, I'd start from a different approach.
* How should they be placed in spacetime to minimize signal loss e.g due to crosstalk and cost?
No idea. Each Rydberg sensor requires a lot of expensive support equipment, including two or more precisely tuned lasers.
What are you trying to do?
> A lot to unpack here...
> * Why is it receive-only?
> Like an antenna, a Rydberg sensor can detect incoming radio signals. Otherwise, the underlying physics are completely different. If there's a way to make it transmit, I've never heard of it.
> * What is the difference between a Rydberg atom, an Ion trap, and a Quantum Dot?
> These are all unrelated devices with different underlying principles.
Ion traps as used in Trapped ion quantum computing involve particles whose wavefunctions are read and written.
Trapped-ion quantum computer: https://en.wikipedia.org/wiki/Trapped-ion_quantum_computer
Typically trapped ion QC systems are designed to minimize loss and interference. Visible photonic (and other) spectral emissions are certainly caused by changing the wave functions of particles with waveguided EM with and without an additional beam as a waveguide. Is a third beam necessary or useful for modulating particle wave state?
Quantum Dots have spectral emissions due to applied fields.
Quantum Dots are useful as sensors for photonic and other bands.
Rydberg atoms' outer electrons are measurable like Hydrogen electrons.
Single-photon_source#History : https://en.wikipedia.org/wiki/Single-photon_source#History :
>> Within the 21st century defect centres in various solid state materials have emerged,[9] most notably diamond, silicon carbide[10][11] and boron nitride.[12] the most studied defect is the nitrogen vacancy (NV) centers in diamond that was utilised as a source of single photons.[13] These sources along with molecules can use the strong confinement of light (mirrors, microresonators, optical fibres, waveguides, etc.) to enhance the emission of the NV centres. As well as NV centres and molecules, quantum dots (QDs),[14] quantum dots trapped in optical antenna,[15] functionalized carbon nanotubes,[16][17] and two-dimensional materials[18][19][20][21][22][23][24] can also emit single photons and can be constructed from the same semiconductor materials as the light-confining structures. It is noted that the single photon sources at telecom wavelength of 1,550 nm are very important in fiber-optic communication and they are mostly indium arsenide QDs.[25] [26] However, by creating downconversion quantum interface from visible single photon sources, one still can create single photon at 1,550 nm with preserved antibunching. [27]
The above linked article describes DNA-based quantum dots.
Glass 5x harder than steel for applying atop DNA: https://www.popularmechanics.com/science/a44725449/new-mater...
> *How many 4 or 2 Mhz antenna/transceivers are necessary to sufficiently cover a 0 to 1 Ghz band? What about to cover 1 Thz: (terahertz) band(s)?*
> It depends what you mean by "cover".
> Rydberg sensors are tunable over a huge range, typically quoted up to 20-40 GHz. So one sensor would "cover" the whole range of you're willing to wait.
> (I don't know what currently sets that upper limit, but what are you doing that needs more than 40 GHz? Electronics capable of handling such frequencies are rare, specialized, and expensive.)
DSS Digital Spread Spectrum and OFDM Orthogonal Frequency Division Multiplexing use multiple frequencies for transmission (of ~"wave packets") concurrently.
IDK, particle collider sensors that need to monitor for shift across wavelength, phase, and amplitude.
("This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity." https://news.ycombinator.com/item?id=37226121#37226160 )
Don't e.g. radiotelescopes already cover wider ranges of frequencies than 1 Mhz? The visible colors span more than 1 Mhz.
Terahertz medical imaging is real. https://www.google.com/search?q=terahertz+medical+imaging
Terahertz tomography: https://en.wikipedia.org/wiki/Terahertz_tomography :
>> Terahertz radiation is electromagnetic radiation with a frequency between 0.1 and 10 THz; it falls between radio waves and light waves on the spectrum; it encompasses portions of the millimeter waves and infrared wavelengths. Because of its high frequency and short wavelength, terahertz wave has a high signal-to-noise ratio in the time domain spectrum.[1] Tomography using terahertz radiation can image samples that are opaque in the visible and near-infrared regions of the spectrum.
> That said, the instantaneous bandwidth of a Rydberg sensor is low, typically around 1 MHz. So if you want to "cover" the whole 0-1 GHz range all at once, I'd start from a different approach.
Is there a bus that can record e.g. 10 Tbit/s of random bits? So, the task is to do EM ADC/DAC to/from optical fiber?
> *How should they be placed in spacetime to minimize signal loss e.g due to crosstalk and cost?*
> No idea. Each Rydberg sensor requires a lot of expensive support equipment, including two or more precisely tuned lasers.
The linked article above describes a low cost integrated laser that would probably work for trapped ion quantum computing and also Rydberg sensors:
"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33490730#33493885 re: Surface Plasmon Polaritons and photonic emission
https://news.ycombinator.com/item?id=37335751#37342016 : "Newer waveguide approaches;"
> What are you trying to do?
Get them all! (And filter out bands per legal regulations)
Is it that there need to be 1000x 1 Mhz antennae to monitor for a whole band (linear scaling), or are there potential efficiencies in addition to reducing the cost and size of the electron trap and the laser(s)?
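For the linear-scaling case, the back-of-envelope count is just a ceiling division (a sketch; `channels_needed` is a made-up helper, and it ignores channel overlap, guard bands, and any multiplexing efficiencies):

```python
def channels_needed(band_hz, instantaneous_bw_hz):
    """Naive count of fixed-bandwidth receivers needed to tile a band."""
    return -(-band_hz // instantaneous_bw_hz)  # ceiling division

print(channels_needed(1_000_000_000, 1_000_000))      # 1000 receivers for 0-1 GHz at 1 MHz each
print(channels_needed(1_000_000_000_000, 1_000_000))  # 1000000 for a 1 THz band
```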
> What are you trying to do
Use Case: Wave field / light field recording
Light field / plenoptic function: https://en.wikipedia.org/wiki/Light_field
Light field camera: https://en.wikipedia.org/wiki/Light_field_camera#Metalens_ar...
IIUC integrated laser + this [1] is almost a QC platform and accidentally a transmitter if nonunitary in an open system.
[1] "Photonic chip transforms lightbeam into multiple beams with different properties" "Universal visible emitters in nanoscale integrated photonics" (2023) https://news.ycombinator.com/item?id=36580151
A Semantics for Counterfactuals in Quantum Causal Models (2023)
"A Semantics for Counterfactuals in Quantum Causal Models" (2023) https://arxiv.org/abs/2302.11783 :
> We introduce a formalism for the evaluation of counterfactual queries in the framework of quantum causal models, by generalising the three-step procedure of abduction, action, and prediction in Pearl's classical formalism of counterfactuals. To this end, we define a suitable extension of Pearl's notion of a "classical structural causal model", which we denote analogously by "quantum structural causal model". We show that every classical (probabilistic) structural causal model can be extended to a quantum structural causal model, and prove that counterfactual queries that can be formulated within a classical structural causal model agree with their corresponding queries in the quantum extension - but the latter is more expressive. Counterfactuals in quantum causal models come in different forms: we distinguish between active and passive counterfactual queries, depending on whether or not an intervention is to be performed in the action step. This is in contrast to the classical case, where counterfactuals are always interpreted in the active sense. As a consequence of this distinction, we observe that quantum causal models break the connection between causal and counterfactual dependence that exists in the classical case: (passive) quantum counterfactuals allow counterfactual dependence without causal dependence. This illuminates an important distinction between classical and quantum causal models, which underlies the fact that the latter can reproduce quantum correlations that violate Bell inequalities while being faithful to the relativistic causal structure.
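Pearl's classical three-step procedure (abduction, action, prediction) that the paper generalises can be illustrated on a toy structural causal model (a made-up linear SCM, purely for illustration; not the paper's quantum formalism):

```python
# Toy classical SCM: X := U_x ; Y := 2*X + U_y (made-up mechanism).
def counterfactual_y(x_obs, y_obs, x_cf):
    """Counterfactual 'what would Y have been, had X been x_cf?'"""
    # Abduction: infer the exogenous noise consistent with the observation.
    u_y = y_obs - 2 * x_obs
    # Action: intervene, setting X to the counterfactual value x_cf.
    # Prediction: recompute Y under the intervention, with the inferred noise.
    return 2 * x_cf + u_y

# Observed X=1, Y=3 implies U_y=1; "had X been 4, Y would have been 9".
print(counterfactual_y(1, 3, 4))  # 9
```

All counterfactuals here are "active" in the paper's terminology, since the action step performs an intervention; the paper's point is that quantum causal models also admit passive counterfactual queries with no classical analogue.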
Ask HN: How do you calculate Pi to N digits?
How do you actually calculate pi to, say, 1000 digits? Or 10,000? Most info I found on Google takes you to pre-solved lists of digits OR code that is optimised beyond recognition, so I'm unsure how you actually do the calculation to begin with.
To approximate an arbitrary number of digits of pi with sympy:
#! pip install sympy gmpy2
import sympy
sympy.pi.evalf(1000)     # 1000 significant digits
sympy.N(sympy.pi, 1000)  # equivalent
https://docs.sympy.org/latest/modules/evalf.html
RosettaCode > [Python] lists quite a few more: https://rosettacode.org/wiki/Pi#Python
I want to try writing the underlying code myself, but thanks!
gmpy2.const_pi(precision=0) (0 ⇒ the default 53 bits) probably isn't too helpful then either: https://gmpy2.readthedocs.io/en/latest/mpfr.html#gmpy2.const... https://github.com/aleaxit/gmpy/issues/253#issue-499509692
RosettaCode has only one solution to the "Approximate Pi with Rationals (not float64s)" in Python problem.
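For writing the underlying code yourself, one classic route is Machin's formula, pi/4 = 4·arctan(1/5) − arctan(1/239), evaluated with plain Python integers as fixed-point arithmetic (a sketch with guard digits against truncation error; the function names are mine):

```python
def arctan_inv(x, scale):
    """arctan(1/x) * scale, via the Taylor series, using integers only."""
    total = term = scale // x
    n, sign = 3, -1
    while term:
        term //= x * x                 # next power of 1/x^2
        total += sign * (term // n)    # alternating series terms
        n, sign = n + 2, -sign
    return total

def machin_pi(digits):
    """pi * 10**digits as an exact integer, via Machin's formula."""
    scale = 10 ** (digits + 10)        # 10 guard digits
    pi_scaled = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi_scaled // 10 ** 10       # drop the guard digits

print(machin_pi(30))  # 3141592653589793238462643383279
```

The number of Taylor terms grows linearly with the digit count, so this is O(digits^2)-ish overall; serious computations (y-cruncher etc.) use the Chudnovsky series with binary splitting instead, but the structure is the same idea.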
I remember having seen a few statistical manifestation of Pi demos on YouTube
"Calculating Pi with Darts by Physics Girl [and Veritasium Derek] (PBS LearningMedia)" https://youtube.com/watch?v=M34TO71SKGk
Monte Carlo method: https://en.wikipedia.org/wiki/Monte_Carlo_method
"Inferred value of Pi is never 3.1415" (with pymc) https://discourse.pymc.io/t/inferred-value-of-pi-is-never-3-... https://github.com/pymc-devs/pymc
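The darts demo is the Monte Carlo method in a few lines (a minimal sketch; seeded for reproducibility, and it converges slowly, roughly as 1/sqrt(n), so it's a demonstration rather than a way to get digits):

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi from the fraction of random points landing inside
    the unit quarter circle (area pi/4 of the unit square)."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4 * inside / n

print(monte_carlo_pi(100_000))  # roughly 3.14, not digit-accurate
```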
WebSDR – Internet-connected Software-Defined Radios
Years ago I made a couple of SoftRock receivers for the 20m and 40m CW bands and put them on a couple of Raspberry Pis. I got WebSDR to run on it and it's kinda become a thing now.
From the WebSDR FAQ:
> Q: My SDR can be tuned from 0 to 30 MHz (or from 25 to 1900 MHz, or whatever). Can I offer all of that tuning range to the users?
> A: No. Such an SDR does not feed the entire 0-30 or 25-1900 MHz spectrum to your computer: that would be way too much data. Instead, a small part (at most a few MHz) is filtered out in external hardware, centered around some frequency that you can tune. With the WebSDR software, users can only tune around within that small part of the spectrum. You (as the operator of the site) choose the center frequency.
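Schematically, that front-end step is complex mixing: multiply the incoming samples by a complex exponential at the chosen center frequency so the slice of interest lands at baseband, then low-pass filter and decimate. A toy sketch of just the mixing step (function and variable names are mine, not WebSDR's):

```python
import cmath

def mix_to_baseband(samples, f_c, sample_rate):
    """Shift a real sample stream down by f_c Hz via complex mixing.
    A real SDR front end would low-pass filter and decimate afterwards."""
    return [s * cmath.exp(-2j * cmath.pi * f_c * n / sample_rate)
            for n, s in enumerate(samples)]
```

As a sanity check, a pure tone at f_c mixes down to a DC component of magnitude ~0.5 (plus an image at -2·f_c that the low-pass stage would remove).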
Rydberg antenna range: 0-20Ghz, https://news.ycombinator.com/item?id=28402527 https://en.wikipedia.org/wiki/Rydberg_atom
Bus width:
W3C WebUSB enables USB device access from JS, TypeScript, WebAssembly, and anything that emscripten emcc can compile to WASM (for use with the in-browser WASM runtimes, or the newish docker WASM runtime CLI arg). Emscripten-forge is one way to build WASM WebAssembly from e.g. C, Python.
W3C WebRTC specifies audio/video/message relaying and NAT traversal: https://webrtc.org/getting-started/media-devices
OpenWRT has an rtl-sdr package but not yet a WebSDR package; and an nginx-ssl package, but there's not yet an nginx-rtmp-module package fwics: https://www.google.com/search?q=nginx-rtmp-module+openwrt
OpenWRT wiki > SDR, LoRA / LoRAWAN , Mesh https://openwrt.org/docs/guide-user/advanced/sdr , https://www.google.com/search?q=openwrt+lora
PipeWire may now arguably be the best way to handle audio and video streams on Linux. PipeWire does not appear to be supported on OpenWRT yet, but is probably already supported by DragonOS (Lubuntu 22.10+)?
Pipewire: https://en.wikipedia.org/wiki/PipeWire https://wiki.archlinux.org/title/PipeWire
https://news.ycombinator.com/item?id=30613125 ... Pipewire to WebRTC:
pipewire-screenaudio: https://github.com/IceDBorn/pipewire-screenaudio:
> Extension to passthrough pipewire audio to WebRTC Screenshare
awesome-amateur-radio#sdr https://github.com/mcaserta/awesome-amateur-radio#sdr
The OpenWRT wiki lists a few different weather station apps that can retrieve, record, chart, and publish weather data from various weather sensors and also from GPIO or SDR: pywws, weewx
weewx: https://github.com/weewx/weewx
A WebSDR LuCI app would be cool.
LuCI docs > Modules: https://github.com/openwrt/luci/blob/master/docs/Modules.md
SETI + Unistellar => all cool things: https://www.seti.org/unistellar-and-seti-institute-partnersh... https://www.seti.org/unistellar-seti-institute-education
https://www.unistellar.com/citizen-science/ :
> Citizen Astronomer Network: The Unistellar Network is a worldwide community of Citizen Astronomers working in partnership with professional astronomers at the SETI Institute. Members use their Unistellar telescope to collect astronomical data, which is supplied to SETI Institute astronomers who then use it to develop predictions and models
"Synthetic aperture": https://en.wikipedia.org/wiki/Synthetic-aperture_radar
"WebUSB Support for RP2040" that has 2x20 pin (*) GPIO and MicroUSB: https://news.ycombinator.com/item?id=38007967
A radiotelescope attached to a 30 pin Model B could probably also feed WebSDR?
> Rydberg antenna range: 0-20Ghz, https://news.ycombinator.com/item?id=28402527 https://en.wikipedia.org/wiki/Rydberg_atom
Actually, the instantaneous bandwidth of a Rydberg-antenna-based receiver is only about ~4 MHz in implementation papers I've read. A good fit for a 2 MHz instantaneous bandwidth $20 rtl-sdr dongle. The 0-20 GHz is the tuning range (again).
Scientists simulate backward time travel using quantum entanglement
I suppose the actual paper is describing retrocausality and not time travel but maybe I'm perceiving that wrong?
"Scientists spot the telltale tick of a time crystal in a kid's toy" (2018) https://newatlas.com/time-crystal-found-crystal-growing-kit/...
Time Crystal: https://en.wikipedia.org/wiki/Time_crystal
/q discrete time crystal "retrocausality" https://www.google.com/search?q=discrete+time+crystal+%22ret...
Delayed choice quantum eraser: https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser
From https://news.ycombinator.com/item?id=28402527 :
> one must perform electrodynamic engineering in the time domain, not in the 3-space EM energy density domain.
And from https://news.ycombinator.com/item?id=35877402#35886041 :
> Physical observation (via the transverse photon interaction) is the process given by applying the operator ∂/∂t to (L^3)t, yielding an L3 output
... > FWIU electrons are most appropriately modeled with Minkowski 4-space in the time-domain; (L^3)t
Retrocausality > Physics: https://en.wikipedia.org/wiki/Retrocausality#Physics
Ten Tbit/s physical random bit generation with chaotic microcomb
"Massive and parallel 10 Tbit/s physical random bit generation with chaotic microcomb" (2023) https://link.springer.com/article/10.1007/s12200-023-00081-4
"Is it possible for a random bit generator to reach a rate of petabits per second?" (2023) https://phys.org/news/2023-10-random-bit-generator-petabits....
WebUSB Support for RP2040
This makes an RP2040 Raspberry Pi Pico usable as a WebUSB 2x20 pin GPIO from JS or e.g. Python in WASM.
/q rp2040 "WebBluetooth"
awesome-web-serial lists a few RP2040-compatible things: https://github.com/louisfoster/awesome-web-serial
web-bluetooth-repl is a REPL over WebBluetooth with MicroPython devices that support Bluetooth: https://github.com/siliconwitchery/web-bluetooth-repl
Pybricks does MicroPython over WebBluetooth with a NextJS app.
Also, the Ryanteck RTk.GPIO (PC GPIO Interface) has USB and 2x20 pin GPIO: https://uk.pi-supply.com/collections/all-raspberry-pi-hats-a... :
> The RTk.GPIO is a Plug & Play USB Device which adds 28 x Raspberry Pi style GPIO pins to your computer
The Pi Pico RP2040 has one onboard LED on pin 25. The Pi Pico W (RP2040W) has an onboard LED but it's no longer connected to the 2x20 GPIO, so it's `machine.Pin("LED")` instead of `machine.Pin(25)`.
Wokwi has a WASM Pi Pico simulator.
Pulpatronics tackles single-use electronics with paper RFID tags
> By contrast, PulpaTronics' alternative RFID design requires no other material than paper. The company simply uses a laser to mark a circuit onto its surface, with the laser settings tuned so as not to cut or burn the paper but to change its chemical composition to make it conductive.
Can RFID tags be lased into food and clothing, too?
"Laser-Induced Graphene by Multiple Lasing: Toward Electronics on Cloth, Paper, and Food" (2018) https://pubs.acs.org/doi/10.1021/acsnano.7b08539
Nicely spotted. I only managed to find more recent papers searching for "conductive carbon-based material".
PulpaTronics will be so cool if it works.
Even better would be to come up with some kind of combo RFID and barcode. And laser print that tag on the outside.
Straddling (or bridging) the RFID and barcode domains is a huge opportunity.
FWIU RFID cost has been a major limiting factor in supply chain traceability; pallet-level RFID is almost affordable?
> PulpaTronics estimates its tags will reduce carbon dioxide emissions by 70 per cent compared to standard RFID tags while halving the associated price for businesses.
From https://news.ycombinator.com/item?id=28691422#28701355 :
> There's probably already a good way to bridge between sub-SKU GS1 schema.org/identifier on barcodes and QR codes and with DIDs. For GS1, you must register a ~namespace prefix and then you can use the rest of the available address space within the barcode or QR code IIUC
https://schema.org/identifier rdfs:subPropertyOf^-1 duns, globalLocationNumber, gtin, isbn, orderNumber, productID, serialNumber, sku, taxID
GS1: https://en.wikipedia.org/wiki/GS1
List of GS1 country codes: https://en.wikipedia.org/wiki/List_of_GS1_country_codes
W3C DID: Decentralized Identifier: https://en.wikipedia.org/wiki/Decentralized_identifier
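As a concrete sketch of bridging barcode-level and item-level identifiers, here is a schema.org Product record carrying a GS1 GTIN, a per-item serial number, and a W3C DID (all identifier values below are made up for illustration):

```python
import json

# Hypothetical JSON-LD: a GTIN-13 from the barcode (SKU level), plus a
# per-item serialNumber (e.g. from an RFID tag) and a DID; none of these
# identifier values are real.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "gtin13": "4012345678901",
    "serialNumber": "SN-000123",
    "identifier": "did:example:123456789abcdefghi",
}
print(json.dumps(product, indent=2))
```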
> FWIU RFID cost has been a major limiting factor in supply chain traceability; pallet-level RFID is almost affordable?
But I guess that depends on the type of RFID tag, doesn't it?
For example, Decathlon already includes tags with UUIDs on every single product they sell (even the cheapest socks), and they use them for inventory tracking and to scan your cart when you buy in store.
https://www.cisper.nl/en/case-studies/rfid-case-study-decath...
Of course, although business-wise practical, that can pose a security issue... You can track people's clothes in the wild!
Is it the link between the Invoice and the ~serialNumber of the scannable-at-a-distance Product?
Most barcodes today only encode a ~sku (and not a serial number).
Minimally, to look up additional product information from just an SKU (similar to a short-URL app), consumers would need an API to retrieve and/or look up from a conjunction of per-Organization named graph documents linking schema:Things' ~schema:identifiers with the RDF subject URIs of NutritionInformation records.
- P: schema:nutrition
- d: MenuItem,Recipe
- r: NutritionInformation
- C: NutritionInformation
https://schema.org/NutritionInformation
C: https://schema.org/Offer supersedes the GoodRelations vocabulary for Products and Services:
> For GTIN-related fields, see Check Digit calculator and validation guide from GS1.
I just checked one of the electronic invoices I had in my email, and for every line (be it a protein bar or a jacket) it includes an "RFID" field with what I guess is the serial number contained in the tag. I don't have a reader for that frequency so I can't really confirm.
Edit:
An example for some chocolate cereal bars:
    Barrita cereales Chocolate x10
    Talla única
    Ref: 2610768
    RFID: 003869592404
That'd be for this product: https://www.decathlon.es/es/p/barrita-de-cereales-clak-con-c...
My dream: to have every supermarket item embedded with an RFID tag so I can have a constantly up-to-date pantry inventory, with notifications of out-of-stock or near-expired products.
Yes, I am too lazy.
Would also be interesting for household garbage recycling at the recycling plant. If you can ask your garbage "what are you" and it responds "I'm a polystyrene yogurt cup with an aluminum lid", separating it gets really easy.
There should be sufficient incentive for trash and recycling service providers to implement sensing and sorting now that "Making hydrogen from waste plastic could pay for itself" (2023) https://news.ycombinator.com/item?id=37886955
Biophysicists Uncover Powerful Symmetries in Living Tissue
> In a study published in Nature Physics, they conclude that sheets of epithelial tissue, which make up skin and sheathe internal organs, act like liquid crystals — materials that are ordered like solids but flow like liquids. To make that connection, the team demonstrated that two distinct symmetries coexist in epithelial tissue. These different symmetries, which determine how liquid crystals respond to physical forces, simply appear at different scales.
"Epithelia are multiscale active liquid crystals" (2023) https://www.nature.com/articles/s41567-023-02179-0
Show HN: OpenAPI DevTools – Chrome extension that generates an API spec
Effortlessly discover API behaviour with a Chrome extension that automatically generates OpenAPI specifications in real time for any app or website.
My most common use case here is to then want to hit the API from python, and adjust the params / url etc.
Would love a "copy to python requests" button that:
- grabs the headers
- generates a boilerplate python snippet including the headers and the URL:
    import requests
    import json

    url = '<endpoint>'
    headers = {
        'User-Agent': 'Mozilla/5.0 ...',
        # ...
    }
    data = {
        "page": 5,
        "size": 28,
        # ...
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        print(response.json())
    else:
        print(f"Error {response.status_code}: {response.text}")
SeleniumIDE can record and save browser test cases to Python: https://github.com/SeleniumHQ/selenium-ide
awesome-test-automation/python-test-automation.md lists a number of ways to wrap selenium/webdriver and also playwright: https://github.com/atinfo/awesome-test-automation/blob/maste...
vcr.py, playback, and rr do [HTTP,] test recording and playback. httprunner can record and replay HAR. DevTools can save http requests and responses to HAR files.
awesome-web-archiving lists a number of tools that work with WARC; but only har2warc: https://github.com/iipc/awesome-web-archiving/blob/main/READ...
Kidney stone procedure "has the potential to be game changing"
> Still being run through clinical trials at UW Medicine, the procedure, called _burst wave lithotripsy_, uses an ultrasound wand and soundwaves to break apart the kidney stone
> Ultrasonic propulsion is then used to move the stone fragments out, potentially giving patients relief in 10 minutes or less
(Edit)
"Fragmentation of Stones by Burst Wave Lithotripsy in the First 19 Humans" (2022) https://www.auajournals.org/doi/abs/10.1097/JU.0000000000002... gscholar citations: https://scholar.google.com/scholar?cites=4452238185664707620...
Lithotripsy: https://en.wikipedia.org/wiki/Lithotripsy
EWST: Extracorporeal shockwave therapy: https://en.wikipedia.org/wiki/Extracorporeal_shockwave_thera...
Could NIRS help with this targeted ultrasound procedure, too?
"Ultrasound-activated chemotherapy advances therapeutic potential in deep tumours" (2023) https://news.ycombinator.com/item?id=37885774#37885798 :
> NIRS Near-Infrared Spectroscopy does not require contrast FWIU?
On reading brains with light, https://news.ycombinator.com/item?id=28399099#28400060 :
> What about with realtime NIRS with an (inverse?) scattering matrix?
FWIU Openwater's technology portfolio includes targeted ultrasound with e.g. NIRS and LSI/LSCI: https://www.openwater.health/technology
One of their demo videos explains how inverting the scattering caused by the occluding body yields the mass-density at least (?). [Radio]Spectroscopy and quantum crystallography may have additional insight for tissue identification with low-cost NIRS sensor data?
Open fNIRS: https://openfnirs.org/
"Quantum light sees quantum sound: phonon/photon correlations" (2023) https://news.ycombinator.com/item?id=37793765 ; the photonic channel actually embeds the phononic field
Animated AI
Nicely designed. Here is another visualizer for CNNs, from research at Georgia Tech:
https://poloclub.github.io/cnn-explainer/
Another link to various visualization tools: https://github.com/ashishpatel26/Tools-to-Design-or-Visualiz...
Another one: https://playground.tensorflow.org/
ManimML: https://github.com/helblazer811/ManimML
"But what is a convolution?" https://youtu.be/KuXjwB4LzSA?si=qwnZMQYJhDxraGc8 https://github.com/3b1b/videos/tree/master/_2022/convolution... https://github.com/3b1b/videos/tree/master/_2023/convolution...
"Convolution Is Fancy Multiplication" https://news.ycombinator.com/item?id=25190770#25194658
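The "fancy multiplication" view is literal for a toy 1D discrete convolution: convolving the digit lists of two numbers produces the digit-column sums of their product (e.g. [1,2,3] * [1,1] mirrors 123 × 11 = 1353). A minimal sketch:

```python
def conv1d(signal, kernel):
    """Full discrete convolution: output length is
    len(signal) + len(kernel) - 1."""
    n, m = len(signal), len(kernel)
    out = [0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

# Matches the digits of 123 * 11 = 1353 (no carries needed here).
print(conv1d([1, 2, 3], [1, 1]))  # [1, 3, 5, 3]
```

CNN layers (as in the visualizers above) apply the same sliding-window idea in 2D, usually as cross-correlation with learned kernels.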
https://news.ycombinator.com/item?id=37953886 :
> Manim, Blender, ipyblender, PhysX, o3de, [FEM, CFD, [thermal, fluidic,] engineering]: https://github.com/ManimCommunity/manim/issues/3362
Manim + O3DE would be neat and probably useful for learning
A Manim Rubik's cube algorithm video: https://github.com/polylog-cs/rubiks-cube-video/blob/main/co...
Manim API docs: https://docs.manim.community/en/stable/reference.html
Common fungus candida albicans might fuel Alzheimer's onset
As I understand it, causes of Alzheimer's:
- Traumatic head injury
- Potentially increases risk: getting more viral infections (getting the common cold very often).
- Depression
- Cardiovascular and cerebrovascular disease
- Born to older parents
- Smoking.
- Family history of dementia
- Increased homocysteine levels (B vitamins, hypothyroidism, smoking, excess alcohol, aging)
- Having the APOE e4 allele (which can be mitigated through diet).
Then, more recent discoveries point to the roles of:
- Sleep: detoxifies our brain.
- Exercise: detoxifies the brain and stimulates neuronal growth.
- Cribriform plate ossification through aging: somehow prevents detoxification of substances in the brain.
- The bacteria that cause gingivitis: found to reach the brain and promote inflammation; their increased presence is found in biopsied brains of those who had Alzheimer's.
- Candida albicans, as this paper describes.
Supplements being studied to somehow benefit Alzheimer's:
- Curcumin: supplements like Longvida potentially help, but at doses much higher than those listed on supplement bottles. Other forms are also being investigated.
- Magnesium threonate: gets more magnesium into the brain. Some MDs testify to what it did for their Alzheimer's patients. Still being studied.
Things that probably don't help:
- Donepezil: still being given to patients, but data now suggests it's not very useful.
That's all I can recall from memory. Very interesting disease.
Causes:
- VZV & HSV1 Alzheimer's; and so Gardasil9+ for all?
[deleted]
Gardasil does not help against HSV of any types. Only HPV.
https://www.medicalnewstoday.com/articles/why-is-there-no-cu... :
> However, according to a review of research published in 2022, no HSV vaccine has received FDA approval yet. This is despite eight decades worth of effort to develop a vaccine.
https://scholar.google.com/scholar?cites=5235726399437456270...
> The researchers expressed hope that current developments of the mRNA vaccines due to COVID-19 may help in finding a solution sooner
Are Language Models Capable of Physical Reasoning?
This is the modern version of stone age savages worshipping a rock.
Why do you say that?
Some people are having a hard time understanding that the knowledge embedded in language and culture includes much of the skills that we understand as reasoning and inference.
The line of thought is that statistical predictions of language normality cannot possibly equal reasoning, but that ignores the capacity of a metascale n-dimensional matrix to encode much, much more than merely words and their statistical frequency of appearance.
Much like a table of sines and cosines, computation itself can be encoded in data. LLMs allow us to extract this data.
[deleted]
Distorted crystals use 'pseudogravity' to bend light like black holes do
"Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) https://journals.aps.org/pra/abstract/10.1103/PhysRevA.108.0... :
> We demonstrate electromagnetic waves following a gravitational field using a photonic crystal. We introduce spatially distorted photonic crystals (DPCs) capable of deflecting light waves owing to their pseudogravity caused by lattice distortion. We experimentally verify the phenomenon in the terahertz range using a silicon DPC. Pseudogravity caused by lattice distortion reveals alternative approaches to achieve on-chip trajectory control of light propagation in PCs.
Re: Quantum gravitational models observed superfluidity, and reasoning about sufficiency: https://news.ycombinator.com/item?id=37967527
Can distorted photonic crystals help confirm or reject current theories of superfluid quantum gravity?
Amplituhedrons are also "quantum-geometric" "manifolds" (?); and so no Feynman diagrams.
Could these be a layer in a PV/TPV cell? Isn't that almost already a rectenna?
Is there Hawking radiation in all things?
Can quantum wave functions be written to and read from holographically-encoded photonic crystals; thereby overcoming current qubit storage coherence limits of less than a second? IDK that holographic storage folks are yet aware of this either
https://westurner.github.io/hnlog/ /q photon, black hole, antimatter / Dirac sea / condensate / superfluid / quantum foam, Planck relic / microscopic black hole, dielectric, FTL
Potentially useful for such experiments and applications:
"Universal visible emitters in nanoscale integrated photonics" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-7-8... .. "Photonic chip transforms lightbeam into multiple beams with different properties" https://news.ycombinator.com/item?id=36580151
"Electrons turn piece of wire into laser-like light source" (2023) https://news.ycombinator.com/item?id=33493885 ; "Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2022) https://doi.org/10.21203/rs.3.rs-1572967/v1
"Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision" (2023) https://www.science.org/doi/10.1126/sciadv.adh8508
https://scitechdaily.com/scientists-create-new-material-five... :
> The team is currently working with the same DNA structure but substituting even stronger carbide ceramics for glass. They have plans to experiment with different DNA structures to see which makes the material strongest.
"AI system can generate novel proteins that meet structural design targets" (2023) https://news.mit.edu/2023/ai-system-can-generate-novel-prote...
FPGA N64
If anyone is interested in new projects for N64 hardware, Kaze Emanuar has an amazing series about optimizing Super Mario 64 to run incredibly complex romhacks on original hardware.
Interestingly, SM64 is usually throttled by memory speed. This means that using 'inline' is detrimental to performance, since making the program larger can cause another read from memory.
Here's an overview video, he has many more specific videos about specific topics:
Romhacks typically modify the compiled binary ROM image. Kaze's work is based on the painstakingly disassembled code from the n64decomp project[1]. He's working in C, modifying the game and compiling it again for the original hardware. Not sure I'd call that a "romhack".
Great videos though!
"Show HN: Ghidra Plays Mario" (2023) https://news.ycombinator.com/item?id=37475761 :
[RL, ..., MuZero unplugged w/ PyTorch ]
> Farama-Foundation/Gymnasium is a fork of OpenAI/gym and it has support for additional Environments like MuJoCo: https://github.com/Farama-Foundation/Gymnasium#environments
The Mario64 game can be won with a minimal number of stars;
> Farama-Foundation/MO-Gymnasiun: "Multi-objective Gymnasium environments for reinforcement learning": https://github.com/Farama-Foundation/MO-Gymnasium
Ghidra may or may not be useful for e.g. gadgets with mario64
Return-oriented programming > On the x86-architecture: https://en.wikipedia.org/wiki/Return-oriented_programming#On...
SBEMU – Run your retro games with on board audio via Sound Blaster emulation
DOSBox-X links to a 2019 emscripten WASM port: https://github.com/joncampbell123/dosbox-x
em-dosbox: https://github.com/dreamlayers/em-dosbox
Does dosbox[-X] [in WASM] support SBEMU?
There's a note about fpu=false and 80 bit precision in the dosbox-x and em-dosbox docs; how does that affect SBEMU and sound output?
docker-dosbox: https://github.com/schneidexe/docker-dosbox/blob/main/Docker...
Two level homomorphic encryption for node by web assembly
Could this be used to grade students' notebooks in WASM on their own machines without giving away the answers?
From https://news.ycombinator.com/item?id=37786968 :
> Can the students grade their own work with containers in Crostini/Crouton/ARC without homomorphic encryption (like Confidential Computing), running potentially signed but unknown code on their local workstations?
This with Ottergrader or nbgrader with JupyterLite would help teachers and students.
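A much simpler (non-homomorphic) sketch of "grading without shipping the answer" is to distribute only a salted hash of each expected answer, so the grader can check a response without revealing it. The salt and question IDs here are hypothetical:

```python
import hashlib

SALT = "demo-assignment-1"  # hypothetical per-assignment salt

def answer_digest(answer):
    """Salted digest of an answer's repr; only this ships with the notebook."""
    return hashlib.sha256((SALT + repr(answer)).encode()).hexdigest()

# Precomputed by the instructor; the plaintext answer (42) never ships.
EXPECTED = {"q1": answer_digest(42)}

def grade(question, response):
    return answer_digest(response) == EXPECTED[question]

print(grade("q1", 42))  # True
print(grade("q1", 41))  # False
```

The catch: for small answer spaces a student can brute-force the hash, which is part of why homomorphic encryption or Confidential Computing is interesting here.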
From https://news.ycombinator.com/item?id=37840416 :
> What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?
How can they be sure what's using their CPU?
Golden ratio base is a non-integer positional numeral system
What about radix e*pi*i, or just e?
That doesn't do integers well.
https://news.ycombinator.com/item?id=37860238
Have you seen ternary tau / variations of phinary?
1. https://neuraloutlet.wordpress.com/tag/ternary-tau-system/
2. https://en.wikipedia.org/wiki/Balanced_ternary
3. https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.8...
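A greedy base-phi (phinary) expansion can be sketched in a few lines; this float-based version is only a sketch suitable for small values:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def to_base_phi(x, tol=1e-9, min_power=-20):
    """Greedy expansion of a non-negative number in golden ratio base.
    Returns {power: 1} such that sum(PHI**p for p in result) ~= x."""
    digits = {}
    k = 0
    while PHI ** (k + 1) <= x + tol:
        k += 1  # find the largest power of phi that fits
    while x > tol and k >= min_power:
        if PHI ** k <= x + tol:
            digits[k] = 1
            x -= PHI ** k
        k -= 1
    return digits

# 2 = phi^1 + phi^-2, i.e. "10.01" in base phi
print(to_base_phi(2))  # {1: 1, -2: 1}
```

The greedy choice never produces two consecutive 1s, which is the standard (canonical) form in golden ratio base.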
Fish skin can heal other animals' eye injuries
"Cell therapy that repairs cornea damage with patient's own stem cells achieves positive phase I trial results" (2023) https://news.ycombinator.com/item?id=37204123 :
> In this study, they found that only one brand (Bausch and Lomb IIRC) of contact lens worked well as a scaffold for the SC
From https://news.ycombinator.com/item?id=36912925 :
"Genetic and epigenetic regulators of retinal Müller glial cell reprogramming" (2023) https://www.sciencedirect.com/science/article/pii/S266737622...
Mathematicians Found 12,000 Solutions to the Notoriously Hard Three-Body Problem
The clarification of what is meant by a solution is in the last sentence of the article:
> The “solutions” were instances when these three bodies found a way to maintain an orbit around one another.
This is my question. Does this mean that the solution orbits are stable equilibria, i.e. small errors or perturbations return to a stable orbit?
Considering that thousands were found, I'm guessing that they were unstable periodic orbits. These can still be used to characterize the system but they occur naturally with probability zero.
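For context, these orbits are found by numerically integrating plain Newtonian point-mass gravity. A minimal sketch of one integration step (velocity Verlet, G = 1, unit masses, planar; not the paper's actual method):

```python
def accelerations(pos):
    """Pairwise inverse-square accelerations (G = 1, unit masses)."""
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i != j:
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += dx / r3
                ay += dy / r3
        acc.append((ax, ay))
    return acc

def step(pos, vel, dt):
    """One velocity-Verlet step; symplectic, so energy drift stays bounded."""
    a0 = accelerations(pos)
    pos = [(x + vx * dt + 0.5 * ax * dt * dt,
            y + vy * dt + 0.5 * ay * dt * dt)
           for (x, y), (vx, vy), (ax, ay) in zip(pos, vel, a0)]
    a1 = accelerations(pos)
    vel = [(vx + 0.5 * (ax + bx) * dt, vy + 0.5 * (ay + by) * dt)
           for (vx, vy), (ax, ay), (bx, by) in zip(vel, a0, a1)]
    return pos, vel

# Two unit masses released from rest fall toward each other.
p, v = step([(-1.0, 0.0), (1.0, 0.0)], [(0.0, 0.0), (0.0, 0.0)], 0.1)
```

For an unstable periodic orbit, tiny floating-point errors in such a loop grow exponentially, which is why these solutions occur naturally with probability zero.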
Are all of the identified solutions (also) fluid attractor systems; and - if correct - shouldn't a theory of superfluid quantum gravity predict all of the n-body gravity solutions already discovered with non-fluidic numerical solutions?
I don't see a reason to think any of them have relevance to superfluid physics. The three-body problem is always cast in terms of Newtonian physics.
As far as I know, no one has ever analyzed these systems even in settings of special or general relativity, so it's mostly a toy problem divorced from our current understanding of reality anyway. If such a solution is unstable, it probably couldn't be constructed in reality due to relativistic effects because it requires overly simplistic physics.
We can reason about whether things are necessary or sufficient.
To be sufficient as a theory of (n-body) gravity, a theory of gravity must also describe superfluidic gravity in order to describe gravitational dynamics within e.g. Bose-Einstein condensate superfluids.
Newtonian mechanics is insufficient to describe gravity in superfluids like Bose-Einstein condensates: Newtonian numerical methods do not, and probably cannot, predict superfluidic gravity effects with low error.
The three-body problem as commonly depicted is an abstract math problem wherein other physical fields are not modeled: electrostatic forces are not modeled in the three-body gravity problem as a classical mechanics problem (a Newtonian numerical methods problem). Were there to be, say, solar wind pushing a three-body system in the vacuum of very cold space containing superfluidic Helium at very low pressure, we would then recognize the need to model solar wind and thus fluids in order to predict the relative positions of n masses with gravity in real physical space after time t.
Is that moving the goalposts? When do ideal 3-point attractor systems exist in isolation outside of abstract mathematics using expensive HPC time?
We observe steady flows in attractor systems which all eventually decay to states with less relativistic motion per Newtonian inertia and the second law of thermodynamics.
But entropy always increases, so could there be actual 3-body (or n-body) perpetual motion machines in space? Well, there's e.g. solar pressure and superfluidity in space, and we model those with fluids: with curl and viscosity, and vortices.
And it is unknown which fluidic solutions the posted numerical n-body solutions must correspond to (if a theory of gravity is sufficient, and any of such solutions are ever experimentally confirmed)
One application of n-body gravity problems: "Gravitational Machines" (2023) https://news.ycombinator.com/item?id=36266570
If Newtonian mechanics is not sufficient to model gravity in superfluid systems like Bose-Einstein condensates and Helium in the cold of space, then Newtonian mechanics cannot predict all n-body gravity solutions.
Other intersections between fluids and gravity? Planes, rockets, gliders; for these we must model fluids in order to predict "lift" and "thrust" counter to gravity (which is a weak force).
We often model fluids with Bernoulli's and Navier-Stokes.
Which axioms of relative motion model fluids and gravity in order to actually predict an experimental outcome given initial parameters?
Gravity is observed to be downward at 9.8 m/s² at sea level at many points on Earth.
The relation between gravitational force and distance is an inverse-square relation: gravitational force decreases with the square of the distance from the greatest local mass centroid. The relative gravitational force between objects at twice the distance is 1/(2*2) = 1/4 the strength.
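The 1/(2*2) = 1/4 arithmetic above is just the inverse-square law; as a one-liner sketch:

```python
def relative_force(distance_ratio):
    """Force at k times the distance, relative to the original force
    (inverse-square law)."""
    return 1 / distance_ratio ** 2

print(relative_force(2))  # 0.25
print(relative_force(3))  # ~0.111: triple the distance, one ninth the force
```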
Electromagnetic signal power also decreases with the square of the distance. We're familiar with cross-sections of EM field lines from e.g. experiments with metal shavings over (electro)magnets: while the force potential between two points is just one scalar, there's an apparently deterministic, unchanging field between two magnetic poles given metal shavings and magnets. But in real experiments, shortwave radio signals fade in and out, and we say it's due to atmospheric disturbance; atmospheres are also fluidic.
Models of relativistic effects of gravity demonstrate the degree to which mass warps space. We like to start with quantized space (a regular 3d grid) and then add objects with mass and velocity; with tabula rasa as a closed system in isolation.
Typically we fail to model other fields due to specialization and lack of time for unified model search (because our solutions are internally consistent with the axioms chosen for simulation). But as with all of physics, real problems occur in real space and "there is yet no known way to subtract the effects of other fields that aren't modeled".
What the chosen predictive axioms fail to model with sufficiently low predictive error is what we should be concerned with, over enumerating additional solutions given a known-insufficient model of gravity and other fields. There are various theories of Quantum Gravity (QG), Quantum Field Theory (QFT), and alternative theories of non-quantum gravity. A sufficient theory of quantum gravity must describe n-body gravity within Bose-Einstein Condensates and also quantum levitation.
Newtonian mechanics (classical mechanics) does not explain quantum levitation or quantum locking; which is observably demonstrated in this video of Quantum Levitation of a (nitrogen-chilled) disc on a track formed into a mobius strip: https://www.youtube.com/watch?v=Vxror-fnOL4 and this video https://www.youtube.com/watch?v=f2Z8HyojgLQ. Note that the disc does not level around its mass centroid like maglev trains; the disc retains its locked position independent of gravity until the disc approaches thermal equilibrium with the track as it absorbs thermal entropy.
Newtonian numerical methods do not predict quantum levitation n-body solutions.
A sufficient theory of superfluid quantum gravity must predict for example quantum levitation n-body problems, Bose-Einstein Condensate n-body problems, what we call the Bernoulli effect, and might need to be compatible with GR: General Relativity.
Is downward gravity relevant to modeling the relative motions of objects in free-fall without wind resistance (in a zero-g plane, for example)? What about in microgravity? How does the gravitational mass centroid of the n-body system initially rooted at a zero-gravity Lagrangian point change the centroid of the Lagrangian point? Such dynamics are not modeled with closed-system numerical methods.
Versioning data in Postgres? Testing a Git like approach
In case it's useful to anyone, I've been running the cheapest possible implementation of Git-based revision history for my blog's PostgreSQL database for a few years now.
I run a GitHub Actions workflow every two hours which grabs the latest snapshot of the database, writes the key tables out as newline-delimited JSON and commits them to a Git repository.
https://github.com/simonw/simonwillisonblog-backup
This gives me a full revision history (1,500+ commits at this point) for all of my content and I didn't have to do anything extra in my PostgreSQL or Django app to get it.
If you need version tracking for audit purposes or to give you the ability to manually revert a mistake, and you're dealing with tens-of-thousands of rows, I think this is actually a pretty solid simple way to get that.
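The snapshot approach depends on the serialization being deterministic so that successive commits diff cleanly; a minimal sketch of the newline-delimited-JSON step (the rows here are illustrative only):

```python
import json

def rows_to_ndjson(rows):
    """Serialize rows deterministically: sorted keys, one JSON object
    per line, so Git diffs between snapshots stay minimal."""
    return "\n".join(json.dumps(row, sort_keys=True, ensure_ascii=False)
                     for row in rows)

rows = [{"id": 1, "title": "Hello"}, {"id": 2, "title": "World"}]
print(rows_to_ndjson(rows))
```

Without sorted keys (and a stable row order, e.g. ORDER BY id), unrelated serialization churn would drown out the real edits in each commit.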
pgreplay parses not the WAL Write Ahead Log but the log file, " extracts the SQL statements and executes them in the same order and relative time against a PostgreSQL database cluster": https://github.com/laurenz/pgreplay
"A PostgreSQL Docker container that automatically upgrades your database" (2023) https://news.ycombinator.com/item?id=36748041 :
pgkit wraps Postgres PITR backup and recovery: https://github.com/SadeghHayeri/pgkit#pitr :
    $ sudo pgkit pitr backup <name> <delay>
    $ sudo pgkit pitr recover <name> <time>
    $ sudo pgkit pitr recover <name> latest
"[31M"? ANSI Terminal security in 2023 and finding 10 CVEs
Every single time you use inband signalling you will introduce security issues, compatibility issues and various other nasty bits. This is because you are now multiplexing two levels of 'privilege' on the same channel, and I'm not aware of any way in which you could do this with perfect security. The phone system of old suffered from this (which is why Phreaking was a thing in the first place), various radio based signalling systems had it, HTML has it and terminal streams have them too. It's unavoidable, so you may as well assume that these issues are there if you decide to use a protocol that doesn't cleanly separate data and control.
nbformat .ipynb notebooks are JSON documents with distinctions between input and output cells, and with output MIME types. https://nbformat.readthedocs.io/en/latest/format_description...
Jupyter-book docs > Code outputs > ANSI outputs: https://jupyterbook.org/en/stable/content/code-outputs.html#...
`less -R` is not the default.
FWIW, textual (and urwid) does ANSI escape codes well: https://github.com/Textualize/textual
    touch file$'\n'name
    find . -print0 | xargs -0
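One common mitigation for untrusted data on the in-band channel is to strip (or whitelist) escape sequences before display; a rough sketch with a deliberately conservative regex (it does not cover every OSC/DCS form):

```python
import re

# ESC followed by either a CSI sequence ("[" params intermediates final)
# or a single C1-introducer byte.
ANSI_ESCAPE = re.compile(r"\x1b(\[[0-9;?]*[ -/]*[@-~]|[@-Z\\-_])")

def strip_ansi(text):
    """Remove (most) ANSI escape sequences from untrusted terminal output."""
    return ANSI_ESCAPE.sub("", text)

print(strip_ansi("\x1b[31mred\x1b[0m text"))  # red text
```

This is lossy by design: for a pager or log viewer, dropping styling from untrusted input is usually safer than trying to pass "only the harmless" sequences through.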
Rip – Rust crate to resolve and install Python packages
How well does this work for projects that include dependencies that are still reliant on the old setup.py and don't have wheels published?
pypa/cibuildwheel: https://github.com/pypa/cibuildwheel :
> Example setup: To build manylinux, musllinux, macOS, and Windows wheels on GitHub Actions, you could use this .github/workflows/wheels.yml
That's great and all for packages I control, but I publish wheels already.
https://pythonwheels.com/ now lists only three packages that are not yet published to PyPI as wheels.
But do you or others build them with SLSA 3? https://slsa.dev/get-started#slsa-3
There's thousands more packages than listed on that website.
Also now you are moving the goalposts. Python wheels exist but not all Python packages are distributed as wheels.
If I am going through the effort to build wheels myself, then I don't need a resolver since I can just make sure to pick all the versions I need. Then I don't need rip to install anything.
https://github.com/mamba-org/rattler :
> Rattler: Rust crates for fast handling of conda packages
Yeah, rattler are our low-level crates to handle conda packages (similar to how rip is a low-level crate to handle PyPI packages). Both build on top of resolvo (our SAT solver, based on MiniSAT and libsolv) and both are used (or going to be used) in pixi, which is our package manager.
[deleted]
Text-to-CAD: Risks and Opportunities
The world doesn't need Text-to-CAD. The world needs a fully capable open source parametric 3D geometric CAD kernel.
Solidworks, Creo, AutoCAD, Fusion, etc., can all take their bug ridden unoptimized single threaded rent-seeking monstrosities and stick em where the sun don't shine.
Seriously - if anyone wants to create an absolutely world-changing piece of software, start working on a new CAD kernel that takes the last 50 years of computer science advances into account, because none of the entrenched industry standards have done so. Don't worry about having to provide customer service, because none of the entrenched industry standards worry about that either.
And no - while openCascade and solvespace are impressive, they aren't fully capable, nor do they start from a modern foundation.
Have you worked with Open Cascade recently? As someone who works with it every day for developing a CAD application, it would be interesting to know what people see as its limitations. It's the only geometry kernel I've had access to, and it seems like an absolute gift
cadquery wraps OCC (Open Cascade) but used to wrap FreeCAD.
Here's a LEGO interlocking block brick in cadquery: https://cadquery.readthedocs.io/en/latest/examples.html#lego... .
awesome-cadquery: https://github.com/CadQuery/awesome-cadquery
cadquery and thus also jupyter-cadquery now have support for build123d.
gumyr/build123d https://github.com/gumyr/build123d :
> Build123d is a python-based, parametric, boundary representation (BREP) modeling framework for 2D and 3D CAD. It's built on the Open Cascade geometric kernel and allows for the creation of complex models using a simple and intuitive python syntax. Build123d can be used to create models for 3D printing, CNC machining, laser cutting, and other manufacturing processes. Models can be exported to a wide variety of popular CAD tools such as FreeCAD and SolidWorks.
> Build123d could be considered as an evolution of CadQuery where the somewhat restrictive Fluent API (method chaining) is replaced with *stateful context managers* (e.g. with blocks), thus enabling the full python toolbox: for loops, references to objects, object sorting and filtering, etc.
"Build123d: A Python CAD programming library" (2023) https://news.ycombinator.com/item?id=37576296
build123d docs > Tips & Best Practices: https://build123d.readthedocs.io/en/latest/tips.html
BREP: Boundary representation: https://en.wikipedia.org/wiki/Boundary_representation
Manim, Blender, ipyblender, PhysX, o3de, [FEM, CFD, [thermal, fluidic,] engineering]: https://github.com/ManimCommunity/manim/issues/3362
NURBS: Non-Uniform Rational B-Splines: https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline
NURBS for COMPAS: test_curve.py, test_surface.py: https://github.com/gramaziokohler/compas_nurbs :
> This package is inspired by the NURBS-Python package, however uses a NumPy-based backend for better performance.
> Curve, and Surface are non-uniform non-rational B-Spline geometries (NUBS), RationalCurve, and RationalSurface are non-uniform rational B-Spline Geometries (NURBS). They all built upon the class BSpline. Coordinates have to be in 3D space (x, y, z)
compas_rhino, compas_blender,
- [ ] compas_o3de
Blender docs > Modeling Surfaces; NURBs implementation, limits, challenges: https://docs.blender.org/manual/en/latest/modeling/surfaces/...
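As a toy illustration of the spline machinery these kernels build on: de Casteljau evaluation of a Bézier curve (the clamped, single-span case of a B-spline) is just repeated linear interpolation of control points:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    interpolating between adjacent control points."""
    pts = [tuple(float(c) for c in p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Midpoint of a quadratic curve with control points (0,0), (1,2), (2,0)
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

NURBS generalize this with knot vectors and per-point weights (de Boor's algorithm); the OCCT Geom_BSplineCurve/Surface classes linked below expose that full machinery.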
/? "NURBS" opencascade https://www.google.com/search?q=%22nurbs%22+%22opencascade%2...
OCCT (OCC) Open Cascade Technology: https://en.wikipedia.org/wiki/Open_Cascade_Technology
OCCT (OCC) > Standard_Transient > MMgt_TShared > Geom_Geometry > Geom_Curve > Geom_BoundedCurve > Geom_BSplineCurve: https://dev.opencascade.org/doc/occt-6.9.1/refman/html/class...
OCC > Standard_Transient > MMgt_TShared > Geom_Geometry > Geom_Surface > Geom_BoundedSurface > Geom_BSplineSurface: https://dev.opencascade.org/doc/occt-6.9.1/refman/html/class...
Cadquery.Shape.toSplines(degree: int = 3, tolerance: float = 0.001, nurbs: bool = False) → T: https://cadquery.readthedocs.io/en/latest/classreference.htm...
X starts experimenting with a $1 per year fee for new users
It seems like it would cost X more than $1 in fees to process such a small amount.
From https://webmonetization.org/ :
> Web Monetization provides an open, native, efficient, and automatic way to compensate creators, pay for content, and support crucial web infrastructure.
> Why Now?: Until recently, there hasn't been an open, neutral and cost-efficient protocol for transferring money. Interledger provides a simple, interoperable, and currency-agnostic method for the transfer of small amounts of money.
> Web Monetization is being proposed as a W3C standard at the Web Platform Incubator Community Group.
W3C Interledger Protocol works with any type of ledger.
From "Anatomy of an ACH transaction" (2023) https://news.ycombinator.com/item?id=36503888 :
> The WebMonetization spec [4] and docs [5] specifies the `monetization` <meta> tag for indicating where supporting browsers can send payments and micropayments:
    <meta
      name="monetization"
      content="$wallet.example.com/alice" />
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">wallet.example.com/alice">
Raspberry Pi 5 vs. Orange Pi 5 Plus vs. Rock 5 Model B
They're all shit. Get a Lenovo mini PC off eBay and stick Debian on it. I replaced my shitty Pi with a Lenovo M600.
If you want a dependable computer, stay away from pseudo-embedded hardware.
I like this idea, but what do you do for GPIO (and SPI, I2C etc.)?
USB to 2x20 pin GPIO (like Raspberry Pi have) should be easy.
Found this:
"Ryanteck RTk.GPIO (PC GPIO Interface)" https://uk.pi-supply.com/products/ryanteck-rtk-gpio-pc-gpio-...
gpiozero > Remote GPIO: https://gpiozero.readthedocs.io/en/stable/remote_gpio.html :
> GPIO Zero supports a number of different pin implementations (low-level pin libraries which deal with the GPIO pins directly). By default, the RPi.GPIO library is used (assuming it is installed on your system), but you can optionally specify one to use. For more information, see the API - Pins documentation page.
> One of the pin libraries supported, pigpio, provides the ability to control GPIO pins remotely over the network, which means you can use GPIO Zero to control devices connected to a Raspberry Pi on the network. You can do this from another Raspberry Pi, or even from a PC.
Presumably, securing that control channel (e.g. with TLS) is your responsibility.
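A minimal sketch of the remote-GPIO pattern described above, using gpiozero's documented pigpio pin factory. The Pi's address here is hypothetical, and the remote Pi must be running the pigpio daemon (`sudo pigpiod`):

```python
# Control a GPIO pin on a networked Raspberry Pi from a PC,
# using gpiozero's pigpio pin factory (see "Remote GPIO" docs).
from gpiozero import LED
from gpiozero.pins.pigpio import PiGPIOFactory

PI_HOST = "192.168.1.42"  # hypothetical address of the remote Pi

# All gpiozero devices created with this factory talk to the remote Pi.
factory = PiGPIOFactory(host=PI_HOST)
led = LED(17, pin_factory=factory)  # BCM pin 17 on the remote Pi
led.on()
```

Note that pigpiod's network protocol is unauthenticated and unencrypted by default, which is why the control channel needs separate protection (firewalling, a VPN, or a TLS tunnel).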
Parallel ports > Pinouts: https://en.wikipedia.org/wiki/Parallel_port#Pinouts
"The parallel port" (2023) https://news.ycombinator.com/item?id=34585216
Serial port: https://en.wikipedia.org/wiki/Serial_port
UART: Universal asynchronous receiver-transmitter
Serial TTL: https://en.wikipedia.org/wiki/Transistor%E2%80%93transistor_... :
> TTL serial refers to single-ended serial communication using raw transistor voltage levels: "low" for 0 and "high" for 1. [31] UART over TTL serial is a common debug interface for embedded devices.
"Possible to use a 9 Pin Serial port as "GPIO" using ioctl()?" https://stackoverflow.com/questions/27789099/possible-to-use...
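Along the lines of that Stack Overflow thread, the modem-control lines of a PC serial port (DTR/RTS as outputs, CTS/DSR as inputs) can serve as a couple of crude GPIO pins via `ioctl()`. A Linux-only sketch with Python's standard library; `/dev/ttyS0` is a hypothetical device path:

```python
# Use a serial port's modem-control lines as makeshift GPIO via ioctl().
import fcntl
import struct
import termios


def set_modem_line(fd: int, line: int, high: bool) -> None:
    """Raise or lower one modem-control output (e.g. termios.TIOCM_DTR)."""
    request = termios.TIOCMBIS if high else termios.TIOCMBIC
    fcntl.ioctl(fd, request, struct.pack("I", line))


def read_modem_lines(fd: int) -> int:
    """Return the current modem-line bitmask (includes inputs like CTS/DSR)."""
    buf = struct.pack("I", 0)
    return struct.unpack("I", fcntl.ioctl(fd, termios.TIOCMGET, buf))[0]


# Usage (requires real hardware):
#   fd = os.open("/dev/ttyS0", os.O_RDWR | os.O_NOCTTY)
#   set_modem_line(fd, termios.TIOCM_DTR, True)   # drive DTR high
#   if read_modem_lines(fd) & termios.TIOCM_CTS:  # read the CTS input
#       ...
```

RS-232 levels are roughly ±3 to ±15 V, not 3.3 V logic, so a level shifter is needed before wiring these lines to TTL-level devices.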
D-subminiature > Typical applications > Communications ports: https://en.wikipedia.org/wiki/D-subminiature#Communications_...
Crosstalk: https://en.wikipedia.org/wiki/Crosstalk
"Can you hotwire this computer to transmit a tone through the radio?" https://www.getyarn.io/yarn-clip/aac07d77-c6d6-4da7-bdd3-b93... https://en.wikipedia.org/wiki/Air-gap_malware
Making hydrogen from waste plastic could pay for itself
"Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763 :
> Abstract: Hydrogen gas (H2) is the primary storable fuel for pollution-free energy production, with over 90 million tonnes used globally per year. More than 95% of H2 is synthesized through metal-catalyzed steam methane reforming that produces 11 tonnes of CO2 per tonne H2. “Green H2” from water electrolysis using renewable energy evolves no CO2, but costs 2–3x more, making it presently economically unviable. Here we report catalyst-free conversion of waste plastic into clean H2 along with high purity graphene. The scalable procedure evolves no CO2 when deconstructing polyolefins and produces H2 in purities up to 94% at high mass yields. Sale of graphene byproduct at just 5% of its current value yields H2 production at negative cost. Life-cycle assessment demonstrates a 39–84% reduction in emissions compared to other H2 production methods, suggesting the flash H2 process to be an economically viable, clean H2 production route.
"A Baking Soda Solution for Clean Hydrogen Storage" (2023) https://www.pnnl.gov/news-media/baking-soda-solution-clean-h... :
"Using earth abundant materials for long duration energy storage: electro-chemical and thermo-chemical cycling of bicarbonate/formate" (2023) https://pubs.rsc.org/en/content/articlelanding/2023/GC/D3GC0...
Show HN: OpenBLE, Swagger for Bluetooth
OpenBLE is an API specification language and client generator for Bluetooth services built on the generic attribute (GATT) profile.
Bluetooth development is a mess: too many datasheets, too little documentation, and SDK fragmentation across platforms. I built this tool to improve documentation, version control, and development speed for BLE programs.
Even though shunned by Apple and Mozilla, Web Bluetooth enjoys wide support and just works. It makes sense to build the frontend for OpenBLE using Web Bluetooth. No SDK hell, no installations, wide support albeit experimental. I could ship the spec, SDK, code generator, and testing framework in pure JavaScript.
Is there a HID over GATT YAML?
From https://news.ycombinator.com/item?id=37522059 :
> https://github.com/DJm00n/ControllersInfo: HID profiles for Xbox 360, Xbox One, PS4, PS5, Stadia, and Switch
> "HID over GATT Profile (HOGP) 1.0" https://www.bluetooth.org/docman/handlers/downloaddoc.ashx?d...
Not yet, but cool idea. Lots of profiles are reused, so it makes sense to write a YAML repository for common profiles.
> Features
> 1. Define your GATT services in YAML.
Uh, nope, thank you.
I'll bite. What other format would you have chosen that's widely supported, easy to read and type by hand, and supports deeply nested data structures?
JSON5
YAML5: https://github.com/quasilyte/yaml5
YAML-LD w/ convenience.jsonld or dollar-convenience.jsonld: https://json-ld.github.io/yaml-ld/spec/
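Whichever serialization wins, the underlying service description is the same nested tree. A minimal sketch in plain Python (field names invented for illustration, not OpenBLE's actual schema; the UUIDs are the real SIG-assigned Battery Service/Battery Level short UUIDs):

```python
import json
import re

# Hypothetical OpenBLE-style service description, as the same nested
# structure that YAML, JSON5, or YAML-LD would all serialize.
service = {
    "service": {
        "name": "Battery Service",
        "uuid": "180F",  # 16-bit SIG-assigned UUID
        "characteristics": [
            {"name": "Battery Level", "uuid": "2A19",
             "properties": ["read", "notify"]},
        ],
    }
}

UUID16 = re.compile(r"^[0-9A-Fa-f]{4}$")

def validate(spec):
    """Minimal sanity check: every UUID is a 4-hex-digit short UUID."""
    svc = spec["service"]
    assert UUID16.match(svc["uuid"])
    for ch in svc["characteristics"]:
        assert UUID16.match(ch["uuid"])
    return True

print(validate(service))  # True
print(json.dumps(service)[:40])  # the same tree round-trips through any format
```

The format debate is then only about surface syntax and hand-editability over this tree.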
Australian student invents affordable hybrid electric car conversion kit
From "Australian student invents affordable electric car conversion kit" (2023) https://www.dezeen.com/2023/10/09/alexander-burton-revr-elec... :
> Australian design student Alexander Burton has developed a prototype kit for cheaply converting petrol or diesel cars to hybrid electric, winning the country's national James Dyson Award in the process.
> Titled REVR (Rapid Electric Vehicle Retrofits), the kit is meant to provide a cheaper, easier alternative to current electric car conversion services, which Burton estimates cost AU$50,000 (£26,400) on average and so are often reserved for valuable, classic vehicles.
> [...] With REVR, those components are left untouched. Instead, a flat, compact, power-dense axial flux motor would be mounted between the car's rear wheels and disc brakes, and a battery and controller system placed in the spare wheel well or boot.
> Some additional off-the-shelf systems – brake and steering boosters, as well as e-heating and air conditioning – would also be added under the hood.
> By taking this approach, Burton believes he'll be able to offer the product for around AU$5,000 (£2,640) and make it compatible with virtually any car.
AU$5,000 is $3,146.90 USD.
And then for the other side of the wheel,
From "JPL Open Source Rover Project" (2023) https://news.ycombinator.com/item?id=37619564 :
> TIL about the "Bush Winch" which mounts to a tire for off-road vehicle recovery. Note the winch line blankets, snatch blocks, and tree protectors in this video about off-road vehicle recovery: https://youtu.be/OXxLh8shMu8?si=bv59t8T1or07-K7l
Sessionic: A cross-browser extension to save, manage, restore tabs and sessions
So basically OneTab [1] and/or xBrowserSync [2] but across devices?
1 → https://microsoftedge.microsoft.com/addons/detail/onetab/hoi...
2 → https://chrome.google.com/webstore/detail/xbrowsersync/lcbjd...
https://github.com/cnwangjie/better-onetab :
> A better OneTab for Chrome. Temporarily removed from Firefox. V2 is WIP
2 of 5 Bay Area refineries to stop making gasoline
Sounds nice. I grew up in El Paso Texas and there is a huge refinery in the lower valley and it completely ruined the quality of life. It’s located in a poor immigrant neighborhood right in the city core.
You can see the flares burning all day and all night. The smell in the surrounding neighborhood is unbearable. Every day tankers line up outside causing traffic jams so much that the refinery has to hire police traffic controllers. More than once I recall a truck crashing and spilling its contents while growing up.
Really sucks all around.
Refineries are usually built at the peripheries of towns. But land is cheap there, and people who want cheap houses buy there after the refineries are built. Kinda like airports: built out in the outskirts where land is cheap, then people build around the cheap land and subsequently complain about noise pollution.
Coastlines and waterways are typically highly-valued even without oil refineries on the coastline next to the homes and the port and the harbor.
FWIU there are newer AGR Acidic Gas Reduction capabilities for reducing emissions from refineries?
The Copenhill facility in Copenhagen appears to be really good at capturing flue waste; maybe the best in the world? https://en.wikipedia.org/wiki/Amager_Bakke
Can e.g. graphene filters be made onsite from e.g. flue gas?
Other oil things:
Fossil fuel phase out: https://en.m.wikipedia.org/wiki/Fossil_fuel_phase-out
Carbon-neutral fuel: https://en.wikipedia.org/wiki/Carbon-neutral_fuel
Electrofuel (eFuel); Porsche, https://en.wikipedia.org/wiki/Electrofuel
Decarbonization of shipping: https://en.wikipedia.org/wiki/Decarbonization_of_shipping
Civilian Drone port control; https://freetakteam.github.io/FreeTAKServer-User-Docs/tools/...
FAA UAS RemoteID: https://www.faa.gov/uas/getting_started/remote_id
Open Drone ID: https://github.com/opendroneid/
Standby drone spill containment could be recommended?
Could a (partially-submerged) prop pull oil spill containment booms of foam and/or aerogel, in order to automatically haul up oil spills for pressing into extant modular recapture vessels, with drift and drone sensor fusion for e.g. human-in-the-loop route planning?
The Ocean Cleanup has experience with similar trawling, though plastic recycling is looking good.
SeaBin organization has dock-mounted trash-capture fluid vortices with low pressure.
FWIU, it looks like hemp aerogel just bested treated polyurethane foam, which had just bested hair, for soaking up oil spills: "Self-cleaning superhydrophobic aerogels from waste hemp noil for ultrafast oil absorption and highly efficient PM removal" https://www.sciencedirect.com/science/article/abs/pii/S13835...
"NASA finds super-emitters of methane" https://news.ycombinator.com/item?id=33427157#33431427
Dandelion rubber tires solve the "synthetic rubber is most of the microplastic in the ocean" problem; and dandelions can probably be planted next to refineries? https://news.ycombinator.com/item?id=37728005
"Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763 https://news.ycombinator.com/item?id=37886955 from https://news.ycombinator.com/from?site=rice.edu
Ultrasound-activated chemotherapy advances therapeutic potential in deep tumours
"An Ultrasound-Activatable Platinum Prodrug for Sono-Sensitized Chemotherapy" (2023) https://www.science.org/doi/10.1126/sciadv.adg5964 :
> Abstract: Despite the great success achieved by photoactivated chemotherapy, eradicating deep tumors using external sources with high tissue penetration depth remains a challenge. Here, we present cyaninplatin, a paradigm of Pt(IV) anticancer prodrug that can be activated by ultrasound in a precise and spatiotemporally controllable manner. Upon sono-activation, mitochondria-accumulated cyaninplatin exhibits strengthened mitochondrial DNA damage and cell killing efficiency, and the prodrug overcomes drug resistance as a consequence of combined effects from released Pt(II) chemotherapeutics, the depletion of intracellular reductants, and the burst of reactive oxygen species, which gives rise to a therapeutic approach, namely sono-sensitized chemotherapy (SSCT). Guided by high-resolution ultrasound, optical, and photoacoustic imaging modalities, cyaninplatin realizes the overall theranostics of tumors in vivo with superior efficacy and biosafety. This work highlights the practical utility of ultrasound to precisely activate Pt(IV) anticancer prodrugs for the eradication of deep tumor lesions and broadens the biomedical uses of Pt coordination complexes.
> In addition, the prodrug’s fluorescence property enables it to act as a multi-imaging contrast agent, allowing the visualising and reconstructing of a tumour in a semi-3D fashion. This provides accurate guidance for applying FUS at a tumour site and monitoring the drug-accumulation time.
[OpenWater.cc's,] NIRS Near-Infrared Spectroscopy does not require contrast FWIU?
From "Great Potential – Traditional Medicine Plant Discovered To Emit Ethereal Blue Hue" https://scitechdaily.com/great-potential-traditional-medicin... :
> These natural coumarins possess unique fluorescent qualities, and one of the substances holds the potential of being utilized for medical imaging in the future.
> Fluorescent substances take in UV light that’s directed at them and release vibrantly colored visible light. And some glow even more brightly when they are close together, a phenomenon seen in compounds called aggregation-induced emission luminogens (AIEgens).
Widely accepted mathematical results that were later shown to be wrong? (2010)
List of common misconceptions > Science, technology, and mathematics > Mathematics: https://en.wikipedia.org/wiki/List_of_common_misconceptions#...
Is the limit of 1/x equal to the limit of 2/x; is ±infinity_1 equal to ±infinity_2?
I'm not sure what you're implying about the limits, but the limit is exactly zero isn't it?
Not sure what you're implying the misconception and the correct answer are in this case.
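The limits in question can be checked directly with SymPy: both 1/x and 2/x tend to exactly 0 as x → ∞, and SymPy models ±∞ with single `oo`/`-oo` objects rather than distinct "infinity_1" and "infinity_2":

```python
import sympy as sy

x = sy.symbols('x', real=True)

# Both limits at infinity are exactly zero (not "infinitesimally small"):
assert sy.limit(1/x, x, sy.oo) == 0
assert sy.limit(2/x, x, sy.oo) == 0

# At x = 0 the one-sided limits diverge, and both diverge to the
# same oo object: there is no distinct infinity per expression.
assert sy.limit(1/x, x, 0, '+') == sy.oo
assert sy.limit(2/x, x, 0, '+') == sy.oo
```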
Ultra-efficient machine learning transistor cuts AI energy use by 99%
"Reconfigurable mixed-kernel heterojunction transistors for personalized support vector machine classification" (2023) https://www.nature.com/articles/s41928-023-01042-7 :
> Advances in algorithms and low-power computing hardware imply that machine learning is of potential use in off-grid medical data classification and diagnosis applications such as electrocardiogram interpretation. However, although support vector machine algorithms for electrocardiogram classification show high classification accuracy, hardware implementations for edge applications are impractical due to the complexity and substantial power consumption needed for kernel optimization when using conventional complementary metal–oxide–semiconductor circuits. Here we report reconfigurable mixed-kernel transistors based on dual-gated van der Waals heterojunctions that can generate fully tunable individual and mixed Gaussian and sigmoid functions for analogue support vector machine kernel applications. We show that the heterojunction-generated kernels can be used for arrhythmia detection from electrocardiogram signals with high classification accuracy compared with standard radial basis function kernels. The reconfigurable nature of mixed-kernel heterojunction transistors also allows for personalized detection using Bayesian optimization. A single mixed-kernel heterojunction device can generate the equivalent transfer function of a complementary metal–oxide–semiconductor circuit comprising dozens of transistors and thus provides a low-power approach for support vector machine classification applications.
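The "fully tunable individual and mixed Gaussian and sigmoid functions" can be sketched in software as a convex blend of the two classic SVM kernels. This is only an illustration of the math; the hyperparameter names and values below are assumptions, and the paper's device realizes the blend in analogue hardware:

```python
import numpy as np

def mixed_kernel(x, y, alpha=0.5, gamma=1.0, a=1.0, b=0.0):
    """Convex blend of a Gaussian (RBF) kernel and a sigmoid kernel.

    alpha in [0, 1] weights the mix; gamma, a, b are illustrative
    hyperparameters, not values from the paper.
    """
    gaussian = np.exp(-gamma * np.sum((x - y) ** 2))
    sigmoid = np.tanh(a * np.dot(x, y) + b)
    return alpha * gaussian + (1 - alpha) * sigmoid

x = np.array([0.1, 0.2])
# For identical inputs the Gaussian term is exactly 1:
print(mixed_kernel(x, x, alpha=1.0))  # 1.0
```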
Heterojunction: https://en.wikipedia.org/wiki/Heterojunction
dual-gated van der Waals heterojunctions:
Anyways, the Heterojunctions article mentions quantum dots, which can be fabricated with [synthetic] DNA: https://news.ycombinator.com/item?id=37698081 :
> "Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision" (2023) https://www.science.org/doi/10.1126/sciadv.adh8508
On separable classes, separable states, SVMs without random initialization, and quantum annealing SVMs on QC: https://news.ycombinator.com/item?id=37367951#37378940
Show HN: SimSIMD vs SciPy: How AVX-512 and SVE make SIMD nicer and ML 10x faster
Over the years, the merits of Intel’s 512-bit Advanced Vector eXtensions (AVX-512) have been extensively debated. Introduced in 2014, CPUs took time to offer robust support. In parallel, Arm Scalable Vector Extensions (SVE), targeting Arm servers, has only gained traction in recent times. Today, the landscape has shifted significantly with Intel’s Sapphire Rapids CPUs on one flank and the AWS Graviton 3 and Ampere Altra chips on the other. Here are compelling reasons to opt for these over the traditional AVX2 and NEON extensions:
1. Masked Loads: Efficient data processing by selectively loading data.
2. Half-Precision Floating Point Math: Accelerated computations with reduced memory footprint.
These features proved invaluable for the latest SimSIMD release. The software now processes vector similarities up to 300x faster using NEON, SVE, AVX2, and AVX-512 extensions across Inner Product, Euclidean, Angular, Hamming, and Jaccard distances. It outstrips commonly used libraries like NumPy and SciPy, famously built on BLAS and LAPACK.
Check out the post for some cool tricks, clarifications on AVX-512 and SVE advantages, and benchmark numbers :)
Here is the repo: https://github.com/ashvardanian/simsimd
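For reference, the distance kernels being accelerated are simple to state in plain NumPy; a sketch of a few of them (this is the naive scalar definition, not SimSIMD's SIMD code):

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cos(angle between a and b); the 'angular' family."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def hamming_distance(a, b):
    """Count of differing positions, for binary vectors."""
    return int(np.count_nonzero(a != b))

def jaccard_distance(a, b):
    """1 - |intersection| / |union|, for boolean vectors."""
    a, b = a.astype(bool), b.astype(bool)
    return 1.0 - np.count_nonzero(a & b) / np.count_nonzero(a | b)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(cosine_distance(a, b))  # 1.0 for orthogonal vectors
```

The wins from AVX-512/SVE come from masked loads (no scalar tail loop) and half-precision accumulation, not from changing these definitions.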
I encourage one to merge into e.g. {NumPy, SciPy, }; are there PRs?
Though SymPy.physics so far only supports X, Y, Z vectors and doesn't mention e.g. "jaccard"?, FWIW: https://docs.sympy.org/latest/modules/physics/vector/vectors... https://docs.sympy.org/latest/modules/physics/vector/fields.... #cfd
include/simsimd/simsimd.h: https://github.com/ashvardanian/SimSIMD/blob/main/include/si...
conda-forge maintainer docs > Switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base.html#... :
conda install "libblas=*=*mkl"
conda install "libblas=*=*openblas"
conda install "libblas=*=*blis"
conda install "libblas=*=*accelerate"
conda install "libblas=*=*netlib"
numpy-feedstock: https://github.com/conda-forge/numpy-feedstock/blob/main/rec...
scipy-feedstock: https://github.com/conda-forge/scipy-feedstock/blob/main/rec...
pysimdjson-feedstock: https://github.com/conda-forge/pysimdjson-feedstock/blob/mai...
simdjson-feedstock: https://github.com/conda-forge/simdjson-feedstock/blob/main/...
mkl_random-feedstock: https://github.com/conda-forge/mkl_random-feedstock https://github.com/google/paranoid_crypto/tree/main/paranoid... :
> NumPy-based implementation of random number generation sampling using Intel (R) Math Kernel Library, mirroring numpy.random, but exposing all choices of sampling algorithms available in MKL
blas: https://github.com/conda-forge/blas-feedstock/blob/main/reci...
xtensor-blas-feedstock: https://github.com/conda-forge/xtensor-blas-feedstock
xtensor-fftw (FFT with xtensor (c++)) could probably be AVX-512 and SVE -optimized as well? https://github.com/xtensor-stack/xtensor-fftw
ggml_cpu_has_avx512() https://github.com/search?q=repo%3Aggerganov%2Fggml%20AVX&ty... https://github.com/search?q=repo%3Aggerganov%2Fllama.cpp%20a...
CuPy would also be an impactful place to merge and defend these optimizations; though no GPUs have AVX-512 or SVE? cupyx.scipy.spatial.distance: https://docs.cupy.dev/en/stable/reference/scipy_spatial_dist... https://docs.cupy.dev/en/stable/reference/comparison.html
Hey, thanks for recommendations! Yes, I’m definitely open to contributing there.
Np. Thanks for the optimizations.
From "PostgresML is 8-40x faster than Python HTTP microservices" (2023) https://news.ycombinator.com/item?id=33270638 :
> Apache Ballista and Polars do Apache Arrow and SIMD.
> The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin,} but not yet PostgresML? https://h2oai.github.io/db-benchmark/ *
LLM -> Vector database: https://en.wikipedia.org/wiki/Vector_database
/? inurl:awesome site:github.com "vector database" https://www.google.com/search?q=inurl%253Aawesome+site%253Ag... : https://github.com/dangkhoasdc/awesome-vector-database , https://github.com/mileszim/awesome-vector-database , https://github.com/currentslab/awesome-vector-search
/? "vector database" "duckdb" https://www.google.com/search?q=+%22vector+database%22+%22du... ... pgvector
pgvector/pgvector/src/vector.c: vector_spherical_distance https://github.com/pgvector/pgvector/blob/master/src/vector....
postgresml/postgresml: /? distance https://github.com/search?q=repo%3Apostgresml%2Fpostgresml%2...
"Implementing a GPU's programming model on a CPU" (2023-10) https://news.ycombinator.com/item?id=37874423
SIMD: https://en.wikipedia.org/wiki/Single_instruction,_multiple_d...
github.com/topics/simd: https://github.com/topics/simd
AVX-512 > CPUs with AVX-512: https://en.wikipedia.org/wiki/AVX-512#CPUs_with_AVX-512
AArch64 (ARM) > Scalable Vector Extension (SVE) https://en.wikipedia.org/wiki/AArch64#Scalable_Vector_Extens...
Implementing a GPU's programming model on a CPU
Hey, AVX-512 again!
"Show HN: SimSIMD vs SciPy: How AVX-512 and SVE make SIMD nicer and ML 10x faster" (2023-10) https://news.ycombinator.com/item?id=37805810
Which is to be expected, given that AVX is what survived from Larrabee.
TimeGPT-1
As someone who's worked in time series forecasting for a while, I haven't yet found a use case for these "time series" focused deep learning models.
On extremely high dimensional data (I worked at a credit card processor company doing fraud modeling), deep learning dominates, but there's simply no advantage in using a designated "time series" model that treats time differently than any other feature. We've tried most time series deep learning models that claim to be SoTA - N-BEATS, N-HiTS, every RNN variant that was popular pre-transformers, and they don't beat an MLP that just uses lagged values as features. I've talked to several others in the forecasting space and they've found the same result.
On mid-dimensional data, LightGBM/Xgboost is by far the best and generally performs at or better than any deep learning model, while requiring much less finetuning and a tiny fraction of the computation time.
And on low-dimensional data, (V)ARIMA/ETS/Factor models are still king, since without adequate data, the model needs to be structured with human intuition.
As a result I'm extremely skeptical of any of these claims about a generally high performing "time series" model. Training on time series gives a model very limited understanding of the fundamental structure of how the world works, unlike a language model, so the amount of generalization ability a model will gain is very limited.
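The "MLP that just uses lagged values as features" setup is ordinary sliding-window feature construction; a minimal sketch (illustrative, any tabular regressor can consume the output):

```python
import numpy as np

def lagged_features(series, n_lags):
    """Sliding-window design matrix: row t holds the n_lags values that
    precede series[t + n_lags], so any regressor (MLP, GBM, linear)
    can treat time like any other tabular feature."""
    series = np.asarray(series)
    X = np.stack([series[i:len(series) - n_lags + i] for i in range(n_lags)],
                 axis=1)
    y = series[n_lags:]
    return X, y

X, y = lagged_features(np.arange(10.0), n_lags=3)
print(X.shape, y.shape)  # (7, 3) (7,)
print(X[0], y[0])        # [0. 1. 2.] 3.0
```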
Reminds me a bit how in psychology you have ANOVA, MANOVA, ANCOVA, MANCOVA etc etc but really in the end we are just running regressions—variables are just variables.
Re: Causal inference [with observational data]: "The world needs computational social science" (2023) https://news.ycombinator.com/item?id=37746921
Comcast squeezing 2Gbps internet speeds through decades-old coaxial cables
Surprised it’s actually symmetrical. Though I’ve never understood why, it’s always perplexed and irritated me that they typically offer upload bandwidth at a tenth of download. In the day and age of remote work, you’d think this would be illegal.
You need about 4 Mbps up for a high-quality stream to converse live in HD. A reasonable asymmetrical connection on plain Jane DOCSIS 3.1 already offers 20-50 Mbps. It being asymmetrical isn't preventing many from working remotely.
A bigger issue is how many areas are so poorly served that in the present time there is no point calling it high speed.
That said, ISPs tend to scale up with down, so even if you'd be reasonably served by 100 Mbps down, if you want a reasonable up you will have to upgrade to a higher down. If you intend to work from home, this doesn't seem altogether unreasonable. When working for an ISP, I was always surprised by how many complainers whose remote work was "so" important paid for bad speed, delivered via bad hardware, connected to bad wifi, on another floor.
From https://www.boxcast.com/blog/internet-speeds-for-4k-live-str... :
> Even with good compression, video content still requires a significant amount of bandwidth. For example, YouTube recommends sending 34 Mbps for 4Kp30, and 50 Mbps for 4Kp60.
> In addition to needing high internet speed for 4k streaming, you need to factor in more bandwidth for your audio (typically in the 128–320 Kbps range) and a small amount more for overhead. Then double it — you should plan to always stay at 50% or less of available bandwidth in case of disruptions
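The quoted rule of thumb reduces to simple arithmetic (stream figures taken from the quote above; the 5% protocol-overhead factor is an assumption):

```python
# BoxCast-style uplink sizing: add audio and overhead to the video
# bitrate, then double it to stay at <= 50% of available bandwidth.
video_mbps = 34.0   # YouTube's recommendation for 4Kp30
audio_mbps = 0.32   # 320 kbps audio, upper end of the quoted range
overhead = 1.05     # assumed ~5% protocol overhead (illustrative)

needed = (video_mbps + audio_mbps) * overhead
recommended_uplink = needed * 2  # the "then double it" step
print(round(needed, 1), round(recommended_uplink, 1))  # 36.0 72.1
```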
This isn't terribly relevant because remote work gets done via tools like zoom at 2-4Mbps up not by live streaming 4K on youtube.
Show HN: A TaskWarrior client for mobile, in flutter
For the terminal junkies like myself that use taskwarrior a lot but that need to also manage tasks on the go.
Free, open source, in flutter. You can find it in the Play Store already for Android. Eventually on iPhone.
Looks like a self-hosted taskserver is the way now: https://github.com/coddingtonbear/inthe.am :
> Note: Inthe.AM will be shutting down on January 1st, 2024. See #427 for more information.
Looks like there are a number of ways to sync over just git now?
https://taskwarrior.org/tools/
- [ ] DOC: taskwarrior-flutter: add the 'taskwarrior' topic label https://github.com/topics/taskwarrior
Desmos 3D graphing calculator
Well it passes the sin(x)sin(y)sin(z)>0.1 test
Edit: also sin(x)sin(y)sin(z)+0.1sin(10x)sin(10y)sin(10z)>0.1
ENH: desmos [3d]: Support complex exponents; with i and/or a complex() function
Test equations for geogebra:
equation -- what I think it looks like
xi^2 -- Integer coordinate grid
e^xπi -- Unit circle with another little circle also about the origin (0,0)
e^(πi^x) -- crash / not responding: a(x)=e^(πi^(x))
though it seems to work with x in Z+
e**(x*pi*I)
e^(x π i^π) -- somewhat scale-invariant interposed spirals around a single point attractor. (Zoom in/out)
Of these, only SageMath preprocesses Python to replace XOR (^) with exponentiation (**), so: f(x) = x^2
g(x) = x**2 # Python
h(x) = exp(x, 2)
x**2 # SymPy Gamma, Beta
x**math.pi # Python: 3.141592653589793
x**pi # SymPy: π
x**1j # Python
x**I # SymPy
x**(1+I) # BUG/ENH: Plot complex expressions with SymPy
from IPython.display import display  # display() is a builtin in Jupyter; import needed elsewhere
import sympy as sy
display(sy.E, sy.I, sy.pi)
from sympy import E, pi, I
x,y = sy.symbols('x,y', real=True); display(x,y)
eq01 = sy.Eq(y, E**(x*pi*I)); display(eq01)
eq02 = sy.Rel(y, E**(x*pi*I), '=='); display(eq02)
func01 = sy.Function('f')(E**(x*pi*I)); display(func01)
func02 = sy.Function('f')(eq02.rhs); display(func02)
assert eq01 == eq02
assert func01 == func02
import unittest
test = unittest.TestCase()
test.assertEqual(eq01, eq02)
test.assertEqual(func01, func02)
Sympy Gamma: https://gamma.sympy.org/
Sympy Beta is SymPy Gamma compiled to WASM: https://github.com/eagleoflqj/sympy_beta
What methods for visualizing complex coordinate(s) are helpful? You can map the complex coordinate into e.g. the z-axis; or is complex phase - as is necessary to model [qubit] wave functions psi - just another high-dimensional dimension to also visualize?
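One common approach, sketched with NumPy: evaluate z = e^(iπx) numerically, plot Re(z) against Im(z), and keep the phase (np.angle) as an extra channel to map to e.g. hue rather than forcing it onto a spatial axis:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 201)   # one full turn of e^(i*pi*x)
z = np.exp(1j * np.pi * x)

# Every point lies on the unit circle:
assert np.allclose(np.abs(z), 1.0)

# Phase is the natural extra dimension to visualize (e.g. as color):
phase = np.angle(z)              # in (-pi, pi]
points = np.column_stack([z.real, z.imag, phase])
print(points.shape)  # (201, 3)
```

Feeding `points[:, :2]` to any 2D plotter recovers the unit circle; coloring by `points[:, 2]` is the usual domain-coloring trick for complex phase.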
Use of weather derivatives surges as extreme climate events rock the globe
Obtainium – Get Android App Updates Directly from the Source
hmmm. mediated access. how do I know to trust the hash checksum checks? Feels like it requires an independent tool to verify what I should get and what I do get because .. trust isn't transitive through a proxy.
There could be asset hashes in sigstore: https://sigstore.dev/
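Whatever the transport (sigstore or a plain published checksum), the independent local check is hashing the artifact yourself and comparing. A stdlib sketch (file path and published digest are placeholders):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Stream a file through SHA-256 so large APKs needn't fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Usage: compare against the hash published by the upstream source,
# fetched over a separate channel from the APK itself:
# assert sha256_of("app.apk") == published_sha256
```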
Is there a good way to run native mobile app GUI tests with GitHub Actions?
A VM/container emulator like anbox, waydroid, (or all of ChromeOS Flex in KVM) in a GitHub Action is probably enough to run GUI tests?
A SLSA builder for Android apps would be good: "Build your own SLSA 3+ provenance builder on GitHub Actions" https://slsa.dev/blog/2023/08/bring-your-own-builder-github
FWIU e.g. Fdroid does not do SafetyNet-like SAST scans of APKs.
Yeah, I can see how you could know. The problem is that a tool like this holds all the power and can say "oh yes, I checked all those; this random APK is definitely the one you wanted."
That's what F-Droid and the Google store (and the Apple store) do: they stake their reputations, as in "if we lied, you know where to find us", regarding the provenance of what they pass. They do, of course, also routinely (OK, mostly not Apple) pass apps which do heinous bad things, because it turns out there's only so much automated tests can find.
As you observe, sometimes the promise is hollow. (F-Droid)
We depend upon package repositories to maintain the list of packages for a given namespace, and to always serve the most recent signed list of packages for the requested, some, or all namespaces in the repository's package catalog. The SLSA and TUF specs summarize the vulnerabilities, risks, and components of software supply chain security.
Fdroid does not claim to scan all uploaded APKs AFAIU. Fdroid > Security Model: https://f-droid.org/en/docs/Security_Model/ :
> There is a big emphasis on operating in the public and making everything publicly available. We include source tarballs and build logs when we publish binaries
What's a ballpark figure for what the monthly cost to Fdroid would be to scan all uploaded APKs for security vulnerabilities?
Practically, it should be easy to add an upload_scan_and_post_back_to_the_pull_request task to each project's e.g. GitHub Actions YAML build definition; but then how does or how can SLSA help prove that the scan results were actually requested and merging and releasing were prevented if positive?
Microsoft says VBScript will be ripped from Windows in a future release
I'm a bit nostalgic about this. Not because of VBScript or even JScript. But because of ActiveState, which provided Perl, Tcl, and Python interpreters that worked with Windows Script Host. I could create and call COM interfaces, plus the CPAN library to pick from. I built Tkinter GUIs for my scripts.
It was part of the OLE dream that users would have lots of small scripts automating their component-oriented workflows. What we got instead was cargo-cult coding and innumerable vulnerabilities because, to borrow a phrase from the IoT folks, the 'S' in COM/OLE stands for security.
With process-sandboxed WASM in modern browsers, you can call into whatever you can compile into WASM from JS. emscripten-forge hosts WASM packages built from conda-forge -like recipes installable at runtime with micropip in pyodide in e.g. JupyterLite and vscode.
There's no scripting MS Agents like Peedy and BonziBuddy from VBScript, but there are now a bunch of W3C-standardized APIs with varying levels of browser implementations catalogued at MDN and caniuse; WebSockets, WebRTC, WebUSB, WebBluetooth, WebGPU, WebNN, File System Access API: https://developer.mozilla.org/en-US/docs/Web/API/File_System...
From "Manifest V3, webRequest, and ad blockers" (2022) https://news.ycombinator.com/item?id=32953286 :
> What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?
> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU, RAM, Disk, [GPU, TPU, QPU] (Linux: cgroups,)
Engineered material can reconnect severed nerves
I'm not a medical person, can anyone who knows more than me explain exactly how cool this is? Fixing spinal cords, or just repairing simple nerve severance?
From the article, the device is more like a bridge rather than something that helps the body heal.
Yeah, as far as I understand, it can act like a bridge between two ends of a severed nerve.
Give me a pair of bridges and a conductor to make the nerve signals travel at electric current speeds and I’ll give you almost instantaneous reflexes.
I wonder if you could multiplex the signals from a bunch of nerves and send them over an Ethernet link.
A radio link would simplify this enormously. You could link nerves "wirelessly"from a leg to near the brain without any surgery in between.
I don't see why it's fundamentally impossible?
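A back-of-envelope calculation shows why an electrical bridge would feel "almost instantaneous"; the conduction velocities below are rough textbook figures, i.e. assumptions:

```python
# Rough latency comparison over a 1 m path (e.g. foot to spine).
distance_m = 1.0
nerve_speed = 60.0    # m/s, fast myelinated motor fibres (~60-120 m/s)
signal_speed = 2.0e8  # m/s, roughly 2/3 c for a signal in copper or a radio link

nerve_latency_ms = distance_m / nerve_speed * 1e3
wire_latency_ms = distance_m / signal_speed * 1e3
print(round(nerve_latency_ms, 2), wire_latency_ms)  # ~16.67 ms vs ~5e-06 ms
```

So the transmission itself is over a million times faster; in practice the electrode interfaces and any signal processing would dominate the latency.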
"Walking naturally after spinal cord injury using a brain–spine interface" (2023) https://www.nature.com/articles/s41586-023-06094-5 :
> To establish this digital bridge, we integrated two fully implanted systems that enable recording of cortical activity and stimulation of the lumbosacral spinal cord wirelessly and in real time (Fig. 1a).
Metals Fuse Together in Space
Cold welding: https://en.wikipedia.org/wiki/Cold_welding
"Autonomous healing of fatigue cracks via cold welding" (2023) https://www.nature.com/articles/s41586-023-06223-0 :
> [...] However, unexpectedly, cracks were also observed to heal by a process that can be described as crack flank cold welding induced by a combination of local stress state and grain boundary migration. The premise that fatigue cracks can autonomously heal in metals through local interaction with microstructural features challenges the most fundamental theories on how engineers design and evaluate fatigue life in structural materials. We discuss the implications for fatigue in a variety of service environments.
..
https://www.agriculture.com/machinery/maintenance--repair/im... :
> Cold weld is a common – although incorrect – term used to describe the process of using filled epoxy glue to repair broken metal machinery components. The product is a combination of a plastic resin compound and finely powdered steel. A catalyst, or hardener, is added to this mixture to cause a chemical reaction, which hardens the putty into a solid mass
..
Cold spraying: https://en.wikipedia.org/wiki/Cold_spraying
Pg_bm25: Elastic-Quality Full Text Search Inside Postgres
I checked the benchmarks and was surprised to see that native search is (a) so slow (seconds), and (b) demonstrates O(N) behavior – with indexing, that should not happen at all.
Indeed, looking at the benchmark source code (thanks for providing it!), it completely lacks an index for the native case, leading to the false statement that the native full-text search indexes Postgres provides (usually GIN indexes on tsvector columns) are slow.
https://github.com/paradedb/paradedb/blob/bb4f2890942b85be3e... – here the tsvector is being built. But this is not an index. You need CREATE INDEX ... USING gin(search_vector);
This mistake could be avoided if benchmarks included query plans collected with EXPLAIN (ANALYZE, BUFFERS). It would quickly become clear that for the "native" case, we're dealing with a SeqScan, not an IndexScan.
GINs are very fast. They are designed to be very fast for search – but they have a problem with slower UPDATEs in some cases.
Another point: fuzzy search also exists, via pg_trgm. Of course, dealing with these things requires understanding, tuning, and usually a "lego game" to be played – building products out of the existing (or new) "bricks" totally makes sense to me.
This sort of thing is more common with postgres than you'd think. I interviewed a candidate once whose company completely replaced querying in their postgres with elasticsearch because they could not figure out how to speed up certain text search queries. Nothing they tried would use the index.
"Preferred Index Types for Text Search" https://www.postgresql.org/docs/current/textsearch-indexes.h... :
> There are two kinds of indexes that can be used to speed up full text searches: GIN and GiST. Note that indexes are not mandatory for full text searching, but in cases where a column is searched on a regular basis, an index is usually desirable.
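The fix described upthread is a single statement, plus the plan check that would have caught the omission (table and column names are illustrative, matching the benchmark's `search_vector` column):

```sql
-- Index the tsvector column so the planner can use a Bitmap Index Scan
-- instead of a SeqScan:
CREATE INDEX idx_search_vector ON documents USING gin (search_vector);

-- Verify with the plan; look for an index scan on idx_search_vector
-- rather than a Seq Scan:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM documents
WHERE search_vector @@ to_tsquery('english', 'example & query');
```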
Bisphenol-A and phthalate metabolism in children w neurodevelopmental disorders
/? glucuronidation: https://www.google.com/search?q=glucuronidation
/? "glucuronidation" "bpa": https://www.google.com/search?q="glucuronidation"+"bpa"
https://en.wikipedia.org/wiki/Glucuronidation does not list BPA. What is the strength of evidence?
Autodesk Tinkercad – free web app for 3D design, circuits and coding
From https://news.ycombinator.com/item?id=35120757 :
> TIL electronics.stackexchange has CircuitLab built-in: https://electronics.stackexchange.com/questions/241537/3-3v-... . Autodesk TinkerCAD is a neat, free, web-based circuit simulator, too; though it supports Arduino and bbc:micro but not (yet?) Pi Pico like Wokwi. [... awesome-micropython, awesome-circuitpython ]
A NICER approach to genome editing with less mutations than CRISPR (2023)
"AI combined with CRISPR precisely controls gene expression" (2023) https://phys.org/news/2023-06-ai-combined-crispr-precisely-g... :
> Sanjana's lab teamed up with the lab of machine learning expert David Knowles to engineer a deep learning model they named TIGER (Targeted Inhibition of Gene Expression via guide RNA design) that was trained on the data from the CRISPR screens. Comparing the predictions generated by the deep learning model and laboratory tests in human cells, TIGER was able to predict both on-target and off-target activity, outperforming previous models developed for Cas13 on-target guide design and providing the first tool for predicting off-target activity of RNA-targeting CRISPRs.
Would TIGER also work with NICER?
You don't need the year tag when the article is recent.
Kentucky made child care free for child care workers
Setting aside the philosophical free-market questions, additional government subsidy of childcare seems fine. There’s a positive externality to readily available childcare: one adult to many children is more efficient than home care, and parents being able to work generates economic production, taxes, etc.
But couldn’t we come up with a subsidy with less market distortion? A specific subsidy to pay for childcare for childcare workers will just cause the market-clearing wage for childcare workers to drop to the point that only people with high childcare costs will work in the field. This labor pool is much smaller, and people will tend to cycle in and out as their children grow to school age. At the end of the day, society will end up paying a lot more, due to lower liquidity, than with a plain flat subsidy.
It’s like loan forgiveness for federal workers. Sounds lovely, but it just ends up breaking the market – for example, by further subsidizing already-wasteful higher education spending.
> couldn’t we come up with a subsidy with less market distortion?
Yes. Universal childcare. Alternatively, payment for children (as a refundable tax credit or e.g. social security payment for one's first N years).
How is universal childcare less market distorting? It seems to me that the larger the subsidy (or tariff for that matter), the more the distortion.
Universal childcare seems to me significantly less distorting than free childcare only for childcare workers. It’s a larger subsidy, yes, but it corrects for an existing distortion (parents pay full price for childcare to raise children who then end up paying taxes to the state), and it does so in a way that doesn’t break one specific labor market.
We do not do parents who aren’t already childcare workers a favor by skewing the market for their labor. For some parents, working in childcare is the right choice, but for many it won’t be, and when you artificially inflate short term wages only if they go into childcare, everyone loses.
But the entire childcare market is run on such tight margins that they all require a model of having a waiting list for children, meaning the market is massively underserved by design.
It would seem to me that this is only likely to result in more childcare workers, making it more accessible to all, no?
Ask HN: What tools are you currently utilizing for public static documentation?
Sphinx, JupyterBook (MyST Markdown), Quarto + nbdev
executablebooks project: https://github.com/executablebooks
quarto: https://github.com/quarto-dev
nbdev: https://nbdev.fast.ai/
Photos can hear you. AI and ML can extract audio from images, silent videos
"Side Eye: Characterizing the Limits of POV Acoustic Eavesdropping from Smartphone Cameras with Rolling Shutters and Movable Lenses" (2023) https://arxiv.org/abs/2301.10056
Is this a result of photonic/phononic correlation? What type of quantum entropy is that, per Quantum Discord?
"Quantum light sees quantum sound: phonon/photon correlations" (2023) https://news.ycombinator.com/item?id=37793765
Quantum light sees quantum sound: phonon/photon correlations
From TA:
> Ben Humphries, Ph.D. student in theoretical chemistry, at UEA said, "Our research shows that when a molecule exchanges phonons—quantum-mechanical particles of sound—with its environment, this produces a recognizable signal in the photon correlations."
> While photons are routinely created and measured in laboratories all over the world, individual quanta of vibrations, which are the corresponding particles of sound, phonons, cannot in general be similarly measured.
This is also the goal with phononic quantum computation iiuc
> The new findings provide a toolbox for investigating the world of quantum sound in molecules."
> Lead researcher Dr. Garth Jones, from UEA's School of Chemistry, said, "We have also computed correlations between photon and phonons.
> "It would be very exciting if our paper could inspire the development of new experimental techniques to detect individual phonons directly," he added.
Phonon: https://en.wikipedia.org/wiki/Phonon
It may be inexpensive to detect phonons due to their interaction with photons, which we have inexpensive [quantum] sensors for
From https://news.ycombinator.com/item?id=37793957 :
> "Side Eye: Characterizing the Limits of POV Acoustic Eavesdropping from Smartphone Cameras with Rolling Shutters and Movable Lenses" (2023) https://arxiv.org/abs/2301.10056
https://phys.org/news/2023-10-discovery-enable-network-inter... :
> made a device capable of the conversion of quantum information between microwave and optical photons
Acoustic waves (phonons) affect electromagnetic waves (EM waves), which are also described as quantized quanta of light (photons)
Particle-Wave Duality: https://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality
Because the medium is fluidic, at least, I believe fluids are probably necessary to predict real-world phononic transmission with high accuracy; why does e.g. shortwave radio fade in and out given Earth's fluidic atmosphere?
Algae-based blocks could make for a more sustainable building
"Sargablock: Bricks from Seaweed" https://news.ycombinator.com/item?id=37175721
There are also interlocking blocks that don't require mortar.
Lime for hempcrete can be made from algae.
"Eco-Friendly Construction Breakthrough: Lego-like Hempcrete Blocks That Don't Require Framing" [with Hempwood or spec pine] https://news.ycombinator.com/item?id=37643102
There are Structural and Non-structural {brick, concrete, hempcrete, sargablock, CEB Compressed Earth block} construction products and methods with varying levels of support in varying building codes.
"IRC Commentary Submitted to International Code Council" (2023) https://www.hempbuildmag.com/home/irc-commentary :
> The Hemp Building Institute, in partnership with the US Hemp Building Association, completed the final steps to include hemp-lime in the US residential building codes this summer.
> The new International Residential Code will be updated starting in January, 2024. States and building departments can adopt the code or parts of the code.
The US Farm Bills were good choices.
"Zero energy ready homes are coming" https://news.ycombinator.com/item?id=35060298
"Ask HN: Why don't datacenters have passive rooflines like Net Zero homes?" https://news.ycombinator.com/item?id=37579505
https://www.dezeen.com/2022/06/07/prometheus-biocomposite-ce... :
> This bio-cement was made from biomineralizing cyanobacteria that are grown using sunlight, seawater, and CO2.
> The blocks were created by mixing this bio-cement with aggregate to create a low-carbon building material with mechanical, physical and thermal properties comparable to portland cement-based concrete
ChromeOS is Linux with Google’s desktop environment
Google put in a ton of work to make all the Linux and Android sandboxing while still being able to do file sharing into the sandboxes, Wayland GUI app bridging, usb pass-through, and a bunch of other stuff.
https://chromium.googlesource.com/chromiumos/docs/+/HEAD/con...
ChromeOS got good and is wildly underrated for software development. ¯\_(ツ)_/¯
Do google developers develop on Chromebooks ?
All new devs are given Chromebooks unless they need a specific device (like a MacBook for iOS devs)
But aren't devs also provided with: SSH, CI build minutes, and GCP budgets for containers and VMs?
To actually dogfood Chromebooks internally like Chromebooks for Education and for Families, Google would need to deny its devs containers behind a greyed out "Turn on Linux" button and deny them access to Colab notebooks and AI platform etc (while only allowing Play Store apps); nothing to SSH to, no notebooks, no devpod, no local git repos (!), no local terminal to run e.g. pytest tests with, no devtools JS console, etc.
Not exactly but the equivalent yes. Workstation and VMs. All tests and builds are basically run in borg by default.
For most Google developer developing on a Chromebook basically means running ssh and chrome remote desktop.
For students, unless there are allocated server resources with network access, it SHOULD/MUST scale down to one local offline ARM64 node (because school districts haven't afforded containers on a managed k8s cloud for students at scale fwiu, though universities do with e.g. JupyterHub and BinderHub [4] and Colab).
For Chromebook sysadmins, Instructors, and Students learning about how {Linux*, ChromiumOS, Android, Git, Bash, ZSH, Python, and e.g. PyData Tools supported by NumFOCUS} are developed, for example;
When you git commit to a git branch, and then `git push` that branch to GitHub, and create a Pull Request, GitHub Actions runs the (container,command) tasks defined in the YAML files in the .github/workflows/ directory of the repo; so `git push` to a PR branch runs the CI job and the results are written back as cards in the Pull Request thread on the GitHub Project; saving to the server runs the (container,command) Actions with that revision of the git repo.
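As a hedged sketch, a minimal workflow file for the flow described above might look like this (job and step names are illustrative; pin action versions as appropriate):

```yaml
# .github/workflows/ci.yml (illustrative sketch)
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest
```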
Somewhat-equivalent GitOps CI (Continuous Integration) workflows (without Bazel or Blaze or gtest or gn, or GitHub Enterprise or GitHub Free, given the kids' interests) that might be supported at least in analogue by Education and Chromebooks: k8s with podman-desktop in a VM, Gitea Actions (nektos/act; like GitHub Actions), devpod
devpod: https://github.com/loft-sh/devpod :
> Codespaces but open-source, client-only and unopinionated: Works with any IDE and lets you use any cloud, kubernetes or just localhost docker. (with devcontainer.json, like Github Codespaces)
devcontainer.json is supported by a number of tools; e.g. VScode, IntelliJ,: https://containers.dev/supporting
repo2docker has buildpacks (like Heroku and Google AppEngine).
repo2docker buildpacks should probably work with devcontainer.json too?
repo2docker docs > Usage > "REES: Reproducible Execution Environment" describes what all repo2docker will build a container from: https://repo2docker.readthedocs.io/en/latest/specification.h...
jupyterhub/repo2docker builds a Dockerfile (Containerfile) from git repo (or a Figshare/Zenodo DOI) that minimally has at least an /environment.yml and /example.py (and probably also at least a /README.md to start with), and installs a current, updated version of jupyter notebook along with whatever's in e.g. /environment.yml per the REES spec. [1][2][3]
[1] repo2docker/buildpacks/base.py: https://github.com/jupyterhub/repo2docker/blob/main/repo2doc...
[2] "Make base_image configurable" https://github.com/jupyterhub/repo2docker/commit/20b08152578...
[3] repo2docker/buildpacks/conda/environment.py-3.11.yml: https://github.com/jupyterhub/repo2docker/blob/main/repo2doc...
[4] "When to use [TLJH or Z2JH]" https://tljh.jupyter.org/en/latest/topic/whentouse.html
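For example, a minimal REES-style /environment.yml that repo2docker would pick up; the package pins here are illustrative:

```yaml
# environment.yml (illustrative; repo2docker installs this with conda/mamba)
channels:
  - conda-forge
dependencies:
  - python=3.11
  - numpy
  - pip
  - pip:
      - pytest
```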
SLSA; Supply-chain Levels for Software Artifacts:
https://security.googleblog.com/2022/04/how-to-slsa-part-2-d...
https://slsa.dev/blog/2023/06/slsa-github-worfklows-containe...
How do ContainerSec, DevOpsSec, and UI/UX apply to getting a (perhaps necessarily restricted) set of containers provisioned to support STEM lab module workflows for learning?
1. Instructor: Specify e.g. docker.io/busybox:latest, and docker.io/buildpack-deps:container_tag as the necessary containers for the course/module/day
2. Student: Login to that context (as allowed by domain policy if necessary), have the container images pulled according to idk like a syllabus.yml before the schema.org/CourseInstance :eventSchedule :Event begins or at the start of class per a distributed URL/QRcode.
3. Student: Review and manage which containers are running with something like the podman-desktop indication bar applet.
4. Student: git push a repo of notebooks to be graded with ottergrader, okpy, nbgrader (and get feedback on a Pull Request)
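Step 2 above could be sketched like this; the syllabus.yml format, the image names, and the helper functions are all hypothetical, and a real implementation would use a YAML parser and actually run `podman pull`:

```python
# Sketch of pre-pulling a course's container images from a hypothetical
# syllabus.yml. File format, image names, and helpers are illustrative.
import shlex

SYLLABUS = """\
images:
  - docker.io/busybox:latest
  - docker.io/buildpack-deps:bookworm
"""

def parse_images(text: str) -> list:
    """Pull the '- image' entries out of the sketch format above."""
    return [line.strip()[2:].strip()
            for line in text.splitlines()
            if line.strip().startswith("- ")]

def pull_commands(images: list) -> list:
    """One `podman pull` argv per image."""
    return [["podman", "pull", image] for image in images]

for cmd in pull_commands(parse_images(SYLLABUS)):
    print(shlex.join(cmd))  # a real client would subprocess.run(cmd)
```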
Gaps/Challenges/Opportunities:
Instructors don't have CI workflows implemented in curricula, but easily could create a git repo that works with repo2docker and/or devcontainers.json.
Education organizations don't have control over which containers are running on students' managed workstations, though they do have such monitoring and control for the apps that are provisioned; and so students don't have containers, but do have apps.
Android Apps are installed as APKs, which are a ZIP file with a manifest. Binaries on Android devices will not run without SELinux labels. SELinux labels for executables on Android can only be granted by (root or) the process that installs the APKs.
This means that other installers like pip and apt (in Termux or any app off the Android Google Play Store with the new SDK) cannot work in an Android App in a VM on a Chromebook. This means that e.g. Termux could work from the Play Store only if they repack every installable package as an APK on the Play Store. Python is installable with Termux on Android with Fdroid; but the Fdroid Android App repository ("Allow third party sources" (with a different cert trust root)) doesn't work on Chromebook Android, so FWICS you can't install Python with Termux apt on a managed Chromebook, and you can't keep sideloaded APKs from Fdroid updated with an unmanaged Chromebook: a [Google] dev would just use containers in a VM (or CI) instead.
JupyterLite and VScode.dev work in a browser tab because everything is compiled to WASM (WebAssembly), but the browser traps shortcuts like Ctrl-P.
So then you might say, "wrap JupyterLite in Electron or similar so it works offline as an app with all the keyboard shortcuts that you need". Electron apps run a web app locally in a browser without the chrome (the address bar and back buttons); but if 5 apps each depend upon unique copies of Electron that they must repack and that users must keep updated, app sizes are suboptimal due to the lack of a proper package dependency model for Android APKs.
Flatpak is neat, supports package dependency edges, and supports having multiple versions of a package installed; but most flatpaks have similar boundary violations where the guest container is insufficiently isolated from the host machine. The VScode flatpak, for example, can call commands on the host with `flatpak-spawn` or `host-spawn`, or with the additional package for the vscode flatpak's copy of the `podman-remote` go binary.
distrobox and fedora/toolbox make it easy to mount your entire $HOME into the container with the correct UID and file permissions: `distrobox create arithmetic; distrobox enter arithmetic` provisions a container and opens an interactive shell within it. ChromiumOS's zygote messages also cross container/vm boundaries IIRC.
gvisor is considered good enough to contain containerized workflows for shared multitenant workloads at Google; so it should also probably be useful for getting devcontainer.json and at least vscode working on Chromebooks for the kids.
> ChromiumOS's [sommelier also crosses] container/vm boundaries IIRC.
chromiumos/docs/+/HEAD/containers_and_vms.md > Can I run Wayland programs? with Sommelier: https://chromium.googlesource.com/chromiumos/docs/+/HEAD/con...
But that doesn't solve because containers and vms aren't on and aren't supported for their accounts.
A school Chromebook's access to containers could be controlled by pointing its container registries configuration (e.g. registries.conf) at a Container Image Registry controlled by the school. GitHub, Gitea, and GitLab all support storing (OCI) container images.
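A sketch of such a restriction using containers-registries.conf(5) syntax; the registry hostname is hypothetical:

```toml
# /etc/containers/registries.conf (sketch; hostname is hypothetical)
unqualified-search-registries = ["registry.school.example"]

# Block pulls from the default public registry...
[[registry]]
prefix = "docker.io"
blocked = true

# ...and allow only the school-controlled registry.
[[registry]]
prefix = "registry.school.example"
location = "registry.school.example"
```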
An instructor would import a container image with an associated Pull Request that causes an Action to run to (1) scan the container and its SBOM (Software Bill of Materials) before (2) hosting the container image for the students, and (3) regularly re-scan it (along with e.g. Dependabot, which for security regularly checks for references to outdated versions of software in GitHub repos).
It looks like GitHub supports 3rd party code scanning tools, too; so Instructors and Students could auto-scan for security vulnerabilities and get reports back in the Pull Request https://docs.github.com/en/code-security/code-scanning/intro...
GitHub Project Templates are designed to be forked; e.g. like an assignment handout to be filled out (that already has an /.github/workflows/actions.yml and README.md headings). Cookiecutter is another way to create a project/assignment/handout/directory/git_repo skeleton; with jinja2 templates to generate file names like `/{{name}}/README.md` and file contents like {% if name %}<h1>Hello World, {{name}}</h1>{% endif %}. jinja2 is a useful skill also for ansible [collections of roles of] playbooks of tasks.
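A minimal sketch of the {{name}} substitution that cookiecutter delegates to jinja2, for both file paths and file contents. This stdlib re-implementation handles only `{{var}}` (not `{% if %}` blocks) and is for illustration, not a jinja2 replacement:

```python
# Stdlib sketch of jinja2-style {{var}} substitution as used by cookiecutter.
# Handles only {{ var }} lookups; real cookiecutter uses jinja2 proper.
import re

def render(template: str, context: dict) -> str:
    """Replace each {{ var }} with str(context[var]) (empty if missing)."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), "")),
        template,
    )

print(render("{{name}}/README.md", {"name": "arithmetic"}))  # arithmetic/README.md
print(render("<h1>Hello World, {{ name }}</h1>", {"name": "Ada"}))
```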
chromebook-ansible installs a number of apps by default (including docker and vscode (instead of podman and vscodium or similar)), but because there are variables in the playbook, you can change which parts of the playbook runs by specifying different parameters with ansible inventory: https://github.com/seangreathouse/chromebook-ansible#include... https://github.com/seangreathouse/chromebook-ansible/blob/c8...
It would be helpful to be able to provision [Android and Chromebook] devices with [Ansible] like it is possible with Mac, Windows, and Linux devices (without a domain controller; decentralizedly and for bootstrapping). It appears that there happens to be no way to `adb install play-store://url` with Ansible, but there is news about Ansible support for Enterprise Chromebooks.
There are [vscode] IDE mentions in the chromebook git repos IIRC. The [vscode] [docker/podman extension] could work with aforementioned functionality to limit which containers can be pulled or are running at a given time.
USE CASE (for a "STEM workstations for learning" spec): Create a minimal git repo project from a project template with cookiecutter-pypackage or similar.
A minimal project [template] would have at least:
/README.md # h1, badges, {{schema:description}}, #install, #usage, #license, #citation
/LICENSE || /COPYING # software/content license terms
/devcontainer.json # for vscode and other IDEs
/environment.yml # which packages to install (conda, pip, repo2podman)
[/src]/example/__init__.py
[/src]/example/main.py
/tests/
/tests/__init__.py
/tests/test_main.py
/example/tests/ # this dir would ship with the package, so that tests could run in production
/example/data/ # will be installed in site-packages
/data/ # will not be installed in site-packages without a special setup.py
/docs/
/docs/conf.py # sphinx
/docs/index.rst # .. include('../README.md') .. toctree:: # sphinx
/docs/_toc.yml # jupyter-book
/docs/_quarto.yml # quarto
/docs/_metadata.yml # quarto
# When we need to vendor _other_ projects in,
# we then need src/ (and/or lib/) for _those_ (and maybe ours, too)
[/src][/example]/tests/
[/src][/example]/tests/__init__.py
[/src][/example]/tests/test_main.py
[/src][/libexample2]/COPYING
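As a sketch, the layout above (trimmed to the unconditional entries) can be scaffolded with a few lines of Python; the file contents here are placeholders:

```python
# Scaffold the minimal project layout sketched above into a temp directory.
# File set is trimmed to the unconditional entries; contents are placeholders.
import pathlib
import tempfile

FILES = [
    "README.md",
    "LICENSE",
    "devcontainer.json",
    "environment.yml",
    "src/example/__init__.py",
    "src/example/main.py",
    "tests/__init__.py",
    "tests/test_main.py",
    "docs/conf.py",
    "docs/index.rst",
]

def scaffold(root: pathlib.Path) -> list:
    """Create each file (and parent dirs) under root; return created paths."""
    created = []
    for rel in FILES:
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(f"placeholder for {rel}\n")
        created.append(path)
    return created

root = pathlib.Path(tempfile.mkdtemp())
created = scaffold(root)
print(f"{len(created)} files created under {root}")
```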
This is a very common workflow for STEM (PyData) software; how is it done with Win/Mac/Lin (and bash and git), and how do we do this with our Chromebook with no Terminal or Containers?
I don't follow? Software developers... aren't students or children.
It's a different market with different product segmentation and feature sets, it just happens to run on the same hardware and OS. And it's true that schools and parents often demand that their kids not hack stuff that the "admins" don't understand. And maybe that's bad or misguided or whatever. But it's not a statement about using a Chromebook as a developer client, which IMHO works very well.
The kids can't code locally on a git repo on Chromebooks and run make; they don't even have a terminal.
Googlers only code remotely on Chromebooks.
The kids MUST be able to run `pytest arithmetic.ipynb` on whichever platform.
(It was not appropriate for Google 2016-2020 (?) to decide that we should all accept WASM runtime vulnerabilities in sandboxed browser processes running as the same user instead of per-app SELinux contexts like Android requires since 4.4+, or instead of containers and VMs like almost all of Google's hosted apps and internal systems)
Cannot believe there's not even a JS console on these for the kids (because "Inspect Element" and "Turn on Linux" are blocked for them all)
I'm still not understanding the criticism. You're saying it's bad that kids can't hack on their school-supplied/parent-managed computers. And... I think I agree, as far as it goes.
But you're responding to a thread saying "Chromebooks are good for Developers", where it's just not responsive. There's absolutely no reason schools or parents can't hand kids unmanaged/unrestricted Chromebooks; they just choose not to (for some good and some bad reasons). Take that up with schools and parents, I guess?
>There’s no reason schools or parents can’t hand kids unmanaged/unrestricted Chromebooks
Holy cow. Are you serious? Kids break stuff for fun.
This is basically a “You’re holding it wrong” argument. Schools are gonna have heavily managed chromebooks, because it’s the reality on the ground that they can hardly afford anything, much less insane IT support for every student. Like, kids are going to actively sabotage the software on their school equipment if they can as a way to avoid doing schoolwork. This was common when I was in high school over a decade ago, they solved it with Deep Freeze software and extremely restrictive Microsoft policies.
Google is the one with a wealth of IT and developer talent, not school systems. They are making the product, not the schools. Devs have to come from somewhere, and going out on a limb here, I don’t think smartphones (the other computer that [low income] students are likely to have access to) provide an on-ramp for developers, so it would be good if these Chromebooks did.
Virtually any Linux distro can be turned into a managed OS.
Something could manage access to and the run state of containers (just like APKs and other apps running without `ps -Z`) so that the state of the machine can be reset.
And then an instructor could specify policy that would affect which apps and containers run on the machines attached to the management domain (e.g. at test-taking time and also all of the times)
Comparing etckeeper and rpm-ostree's diffs of /etc lately
(where e.g. LVM or btrfs CoW snapshots would need eventual compaction)
The relevant question is whether students are empowered to do actual STEM work (in git, with python, and notebooks and/or an actual IDE) in application of the K12CS Q12 STEM curricula.
There should be a specification of things that we need the computers we buy for the students to support; a rubric to consider in acquisitions and discussions with vendors attempting to solve for the needs of education and learning.
Seriously, compare the results of canned flash games (with metrics) vs locally coding math to do a scientific experiment or solve immediately-graded exercises with code.
I've reviewed the curated app catalogs here and TBH the tragic gap is perhaps at "how to computer math" [in notebooks in a version controlled (e.g. git) repo] [in Python [with JupyterLite or vscode.dev w/o devpod/codespaces [due to Chromebooks]]]; just a video of Arithmetic in notebooks instead of a calculator.
GeoGebra and Desmos are neat. Geogebra has a CAS (Computer Algebra System) built-in, but it's not Python with test assertions. And when the canned math app e.g. fails with weird complex exponents of e, it's a good idea to reach for Python (or Julia, or R, or) instead of only the apps in the Play Store.
Exercise: Install Git and Python (maybe in Termux from Fdroid) on a Chromebook, then run repo2podman {with a FamilyLink account,}
Exercise 2: Create a Jupyter Notebook on a Chromebook and save it to work on from home {with Gapps Edu and Family Link} when Colab isn't allowable and JupyterLite doesn't have a gdrive plugin yet.
Containers in a local devpod (~codespaces) VM for the students might solve.
This should also work on computers to support real-world STEM workflows:
<alt-tab> make test
make clean
<alt-tab> <up arrow> <enter>
OK, you're simply having a different argument, and I frankly don't even disagree. "Schools should let kids run stuff in Crostini" is something I think we can both get behind.
But nonetheless schools don't (and never have, FWIW), and it's not the OS's fault that they use important and desirable manageability features to do it.
If schools could afford it, they would probably get value from GitHub/GitLab/Gitea for all students. And then would they get value by provisioning containers to grade students' code with Kubernetes (k8s) and GitHub/GitLab/Gitea?
Internally, Google has a huge git monorepo and gn and IIRC gerrit for code review (?) and externally many repos in various orgs on GitHub (some now contributed to e.g. the Linux Foundation's CNCF).
A Google Colab JupyterLite-edition could optionally integrate with Google Classroom.
There's an issue in a repo re: a JupyterLite gdrive extension and fs abstraction so that it would work with other non-git cloud storage providers.
vscode.dev supports git commit and git push from the browser tab to GitHub, which can then run the code with GitHub Actions. vscode.dev can also connect to GitHub Codespaces (or devpod). GitHub Codespaces spawns a container in GitHub's k8s cloud for interactive use with {VScode, Jupyter, and IIRC SSH} instead of batch headless use with logs like GitHub Actions.
You can write a simple git post-receive hook script in .git/hooks/ that runs on every push to the repo, and then later realize why each execution of a post-receive hook needs to run the job in an isolated container to prevent state leakage between invocations of the script; there should be process and data storage isolation for safe and secure GitOps.
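A minimal sketch of such a hook: git invokes `.git/hooks/post-receive` with one `<oldrev> <newrev> <refname>` line per pushed ref on stdin. The demo below parses a sample line instead of reading stdin, and a real hook would hand each push to an isolated container rather than just printing:

```python
#!/usr/bin/env python3
# Sketch of a post-receive hook (installed as .git/hooks/post-receive).
# git writes "<oldrev> <newrev> <refname>" per pushed ref to the hook's stdin.
import sys

def parse_push(lines):
    """Parse post-receive stdin lines into {old, new, ref} dicts."""
    pushes = []
    for line in lines:
        oldrev, newrev, refname = line.split()
        pushes.append({"old": oldrev, "new": newrev, "ref": refname})
    return pushes

if __name__ == "__main__":
    # Demo with a sample line; a real hook would iterate sys.stdin and hand
    # each push to an isolated container (e.g. `podman run ... pytest`).
    sample = ["0" * 40 + " " + "1" * 40 + " refs/heads/main"]
    for push in parse_push(sample):
        print(f"would test {push['ref']} at {push['new'][:7]} in a fresh container")
```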
Research institutions like e.g. CERN and ORNL have e.g. GitLab with k8s; there, pushing to a Pull Requests branch can cause a build to run the tests in a container auto-provisioned somewhere in the private cloud and report the results back to the Pull Request thread. UCBerkeley developed ottergrader (to succeed nbgrader and okpy) to grade Jupyter notebooks in containers (optionally with k8s, or locally) and post the scores to e.g. Canvas LMS.
Can the students grade their own work with containers in Crostini/Crouton/ARC without homomorphic encryption (like Confidential Computing), running potentially signed but unknown code on their local workstations?
JupyterLite and vscode.dev+jupyter+pyodide work in Chrome on Chromebooks today, but it's really suboptimal from a mainframe-era perspective.
How does student loan forgiveness affect our national security?
> In particular, the Public Service Loan Forgiveness (PSLF) program grants loan forgiveness to individuals working within low-paying public careers, including teachers, first responders, government employees and nonprofit or community workers. Currently the program only grants forgiveness after an individual makes 10 consecutive years of repayments while working in public service; I argue the timeframe should be reduced to seven or even five years of public service. The policy change can be enacted quickly and would satisfy even the fiercest critics who insist that any loan forgiveness must be reciprocated in some manner. Above all, doing so will incentivize more Americans to pursue public service as a career, which simultaneously benefits society as a whole.
> "This reduction in the PSLF timeframe should also be extended to those in vocational or trade schools. Individuals wishing to acquire skills as mechanics, truck drivers or construction workers should also have access to affordable training and, more critically, be able to contribute to the national security of the United States in the form of public service.
> [...] Reducing the timeframe of the Public Service Loan Forgiveness program and extending the program to those in vocation schools will therefore help advance the national security of the United States.
Public Service Loan Forgiveness: https://en.wikipedia.org/wiki/Public_Service_Loan_Forgivenes...
This (PSLF terms updates), or just raise wages with no incentives to skill up in an accredited program?
What are the term-length terms of such a benefit like to work with?
What happens if you're doing PSLF and you miss a student loan payment due to a government shutdown resulting in no paycheck for that month?
The world needs computational social science
What courses of study and applied methods could further computational social science? What are some good resources for foundations in the requisite methods and their impacts?
"Applied causal inference with observational data (for benevolent humans) and classical counterfactuals"
And then maybe something like,
"Quantum statistical foundations for causal analysis of emergent patterns in adaptive, nonlinear, possibly complex systems"
Though already suggested is:
"Validated Evidence Based Computational Thinking and Cybernetics" https://twitter.com/westurner/status/1118822798217101313
Causal inference > Approaches in social sciences https://en.wikipedia.org/wiki/Causal_inference#Approaches_in...
"Answering causal questions using observational data" (2021) [PDF] https://www.nobelprize.org/uploads/2021/10/advanced-economic... https://www.nobelprize.org/prizes/economic-sciences/2021/pop...
/? causal inference site:github.com awesome https://www.google.com/search?q=awesome%2Bcausal%2Binference...
/? causal inference from: https://westurner.github.io/hnlog/ :
https://news.ycombinator.com/item?id=20178068 ; Python packages for causal inference
"The limits of graphical causal discovery" (2021) https://towardsdatascience.com/the-limits-of-graphical-causa... :
> Structural causal models support three key operations: observations, interventions and counterfactuals.
What is the difference between counterfactuals in structural causal models and counterfactuals in (quantum) Constructor Theory?
How can causal inference identify causal linkages between nonlinear quantum complex adaptive systems (that are affected by other [super-]fluidic fields), in computational social science and data science?
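The difference between the first two SCM operations (observing vs intervening) can be sketched numerically; the model here, a linear SCM with a confounder Z, is illustrative:

```python
# Linear SCM with a confounder: Z -> X, Z -> Y, X -> Y.
#   X = Z + Ux
#   Y = 2*X + Z + Uy
# Observation: conditioning on X = 1 is biased by Z (E[Y|X=1] = 2.5).
# Intervention: do(X=1) severs Z -> X, so E[Y|do(X=1)] = 2.0.
import random

random.seed(0)
N = 100_000

obs = []
for _ in range(N):
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 1)
    y = 2 * x + z + random.gauss(0, 1)
    obs.append((x, y))

# Observational estimate of E[Y | X ~= 1] via a narrow bin around X = 1
near = [y for x, y in obs if abs(x - 1) < 0.1]
obs_mean = sum(near) / len(near)

# Interventional estimate of E[Y | do(X=1)]: set X = 1, resample Z and Uy
do_samples = [2 * 1 + random.gauss(0, 1) + random.gauss(0, 1) for _ in range(N)]
do_mean = sum(do_samples) / len(do_samples)

print(f"E[Y|X=1]     ~= {obs_mean:.2f}")  # roughly 2.5 (confounded)
print(f"E[Y|do(X=1)] ~= {do_mean:.2f}")   # roughly 2.0 (causal effect only)
```

The gap between the two estimates is exactly the confounding bias that causal inference methods (adjustment, instrumental variables, etc.) try to remove from observational data.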
Tire dust makes up the majority of ocean microplastics
I worked on a PhD for 4 years looking into this problem indirectly, and for a decade Ohio State has had working solutions to the tire dust issue, but it is IMPOSSIBLE to get funding.
A Billion dollar industry and NO ONE cares about cleaning it up if it means increasing costs by 5% or more.
It has been WELL KNOWN for 50 years! We are basically aerosolizing carbon in MASSIVE amounts right where we live and work. Almost like we are purposefully manufacturing microplastics and dumping them in the air as fast as we can. Imagine taking every new tire and just grinding it down into a fine dust, then blowing it into the air and dumping it into the rivers. That is what we are doing, AS FAST AS WE CAN.
(For anyone that cares, the solution is natural rubber, which costs slightly more than synthetic rubber but lasts longer. So it's better for consumers, cheaper all around, and 1,000x better for the environment, but Goodyear, Firestone and Michelin flat out refuse to fund research and even block innovation in natural rubber.)
Would it still be cheaper if all tires were made from natural rubber? We shouldn't have old forests destroyed for rubber plantations.
There are two main working solutions that have ALREADY been used and proven to work since WWII.
First: a fungal disease has wiped out ALL the rubber trees in South America; that's why we can't grow it in the Western hemisphere, a fungus. If we could grow it here we would, and it would drop the price by A LOT. But we already have a solution, a transgenic species that is resistant; nonsense Government regulation and moronic "public opinion" are the only things stopping this from fixing the rubber problem overnight.
Second: sounds funny, but ever break a dandelion stem in half and see the white stuff come out? That's latex, PURE high quality latex. Let that latex air dry and you have rubber! No refinement necessary. During WWII they supplied most of the war effort with rubber from dandelions! Yes, it works; it's not efficient, but progress has been made, and with ANY funding at all it could easily produce enough higher quality natural rubber for ALL our needs and enough to export.
The ONLY problem is that companies make too much money producing low quality "disposable" tires; they will NEVER switch.
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6073688/ [2] https://hcs.osu.edu/our-people/dr-katrina-cornish
"Rubber Made From Dandelions is Making Tires More Sustainable – Truly a Wondrous Plant" (2021) https://www.goodnewsnetwork.org/dandelions-produce-more-sust... :
[I'll leave in the hashtags from when I wrote this up to remind myself at the time:]
#DandelionRubber Tires; #Taraxagum
> Aiding the bees and our environment
> Now, Continental Tires is producing #dandelion rubber tires called #Taraxagum (which is the genus name of the species). The bicycle version of their tires even won the German #Sustainability Award 2021 for sustainable design.
> “The fact that we came out on top among 54 finalists shows that our Urban Taraxagum bicycle tire is a unique product that contributes to the development of a new, alternative and sustainable supply of raw materials,” stated Dr. Carla Recker, head of development for the Taraxagum project.
> The report from DW added that the performance of dandelion tires was better in some cases than natural rubber—which is typically blended with synthetic rubber.
> Capable of growing, as we all know, practically anywhere, dandelion needs very little accommodation in a country or business’s agriculture profile. The #Taraxagum research team at Continental hypothesizes they could even be grown in the polluted land on or around old industrial parks.
> Furthermore, the only additive needed during the rubber extraction process is hot #water, unlike Hevea which requires the use of organic solvents that pose a pollution risk if they’re not disposed of properly.
> Representing a critical early-season food supply for dwindling #bees and a valuable source of super-nutritious food for humans, dandelions can also be turned into coffee, give any child a good time blowing apart their seeds—and, now, as a new source for rubber in the world; truly a wondrous plant.
Taraxacum: https://en.wikipedia.org/wiki/Taraxacum
A less hashtag version: https://www.omafra.gov.on.ca/CropOp/en/indus_misc/chemical/r...
Other Common Names Include: Dandelion, Kazak dandelion, TKS, rubber root. Latin Name: Taraxacum kok-saghyz
Dandelion roots also contain substantial amounts of the starch inulin, which can be fermented to produce fuel ethanol.
USDA Report from 1947 entitled "Russian Dandelion, an Emergency Source of Natural Rubber" https://archive.org/details/russiandandelion618whal/page/n1/...
Years of creating artificial reefs from old tires are catching up with us. I wonder how sharp those people were? Next we should find out about those giant ships that blow their exhaust underwater and kill all life.
> Years of creating artificial reefs from old tires are catching up.
People probably don't even realize that most tires are synthetic rubber and thus are also bad for the ocean.
Is there a program to map the locations of old synthetic tire reefs, and what is a suitable replacement for reef reconstruction and regrowth?
Physicists coax superconductivity and more from quasicrystals
"Superconductivity and strong interactions in a tunable moiré quasicrystal" (2023) https://dx.doi.org/10.1038/s41586-023-06294-z
Can quantum states be written into and read from [quasicrystal] lattices; and for how long could such information be recoverable?
50 years later, is two-phase locking the best we can do?
This is a great paper that provides a framework for comparing two-phase and Paxos.
https://lamport.azurewebsites.net/video/consensus-on-transac...
Two-phase locking is different from two-phase commit, in spite of an overlap in their naming. Two-phase commit is relevant to be compared against Paxos - both of which fall under the category of consensus protocols.
Two-phase locking is a concurrency control mechanism.
A little summary:
2PC: An algorithm used in the context of distributed transactions where each machine handles a different part of the transaction. This means that nothing is redundant - the success of each and every participant is required for the transaction to be committed.
Paxos/Raft/consensus: An algorithm usually used in the context of distributed replication. Since every participant is doing the same thing, it's tolerable if a few fail or give outputs that diverge from the majority.
2PL: A method of acquiring multiple locks such that first you acquire all the required locks (first phase), then you do what you need to do, and then you release all the locks (second phase). This is in contrast to a locking scheme where lock acquisitions and releases are interspersed. This isn't strictly limited to distributed systems, although it's common to see 2PC with 2PL.
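The acquire-all-then-release-all discipline in that last item can be sketched in a few lines of Python (a toy sketch with a hypothetical helper name; real databases interleave this with a lock manager and deadlock detection rather than sorting locks by hand):

```python
import threading

def run_with_2pl(locks, critical_section):
    """Strict two-phase locking sketch: the growing phase acquires
    every lock (in a fixed global order, which avoids deadlock), the
    work runs, then the shrinking phase releases them all."""
    ordered = sorted(locks, key=id)     # fixed global acquisition order
    for lock in ordered:                # growing phase
        lock.acquire()
    try:
        return critical_section()
    finally:
        for lock in reversed(ordered):  # shrinking phase
            lock.release()

a, b = threading.Lock(), threading.Lock()
balance = {"x": 10, "y": 0}

def transfer():
    # Both accounts are locked for the whole transfer, so no other
    # 2PL transaction can observe the intermediate state.
    balance["x"] -= 5
    balance["y"] += 5
    return balance

print(run_with_2pl([a, b], transfer))
```

The point of the two phases is serializability: because no lock is released before all are held, any two conflicting transactions order themselves as if one ran entirely before the other.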
If this piques your interest, read the Spanner paper! Spanner uses all three - 2PC with 2PL for distributed read-write transactions, and Paxos for replication.
PS: "Distributed" just means there's more than one machine involved, any of which may fail independently, and communication among these machines happens over unreliable wire.
Concurrency control > Concurrency control in databases > Why is concurrency control needed?: https://en.m.wikipedia.org/wiki/Concurrency_control#Why_is_c...?
Locks (computer science) > Disadvantages: https://en.wikipedia.org/wiki/Lock_(computer_science)#Disadv...
Two-phase locking (2PL) https://en.wikipedia.org/wiki/Two-phase_locking
Two-phase commit protocol (2PC) https://en.wikipedia.org/wiki/Two-phase_commit_protocol
Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
Raft: https://en.wikipedia.org/wiki/Raft_(algorithm)
Consensus (computer science) https://en.wikipedia.org/wiki/Consensus_(computer_science)
Spanner: https://en.wikipedia.org/wiki/Spanner_(database)
Non-blocking algorithm; "lock-free concurrency", "wait-free" https://en.wikipedia.org/wiki/Non-blocking_algorithm
"Ask HN: Why don't PCs have better entropy sources?" [for generating txids/uuids] https://news.ycombinator.com/item?id=30877296
"100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
Re: tests of randomness: https://mail.python.org/archives/list/python-ideas@python.or...
TIL there's a regular heartbeat in the quantum foam: a regular monotonic heartbeat in quantum Rydberg wave packet interference, which should be useful for distributed applications with and without vector clocks and an initial time synchronization service (White Rabbit > PTP > NTP Network Time Protocol) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> The [quantum time-keeping application of this research] relies on the unique fingerprint that is created by the time-dependent photoionization of these complex wave packets. These fingerprints determine how much time has passed since the wave packet was formed and provide an assurance that the measured time is correct. Unlike any other clock, this quantum watch does not utilize a counter and is fully quantum mechanical in its nature. The quantum watch has the potential to become an invaluable tool in pump-probe spectroscopy due to its simplicity, assurance of accuracy, and ability to provide an absolute timestamp, i.e., there is no need to find time zero.
IIUC a Rydberg antenna can read and/or write such noise?
"Patterns of Distributed Systems (2022)" https://news.ycombinator.com/item?id=36504073
Recommended Architectures for PostgreSQL in Kubernetes
From TA:
> My advice for running Postgres in Kubernetes is then to:
> - Rely on PostgreSQL replication to synchronize the state within and across Kubernetes clusters – in Kubernetes lingo: choose application level replication (Postgres), instead of storage level replication
> - Fully exploit availability zones in Kubernetes, instead of “siloing” data centers in separate Kubernetes clusters, in order to automatically achieve zero data loss with very low RTO high availability within a single region, out-of-the-box [deploy across multiple availability zones within at least one region]
FWIW, this will create a podman pod with a raw disk mapped through to the whole pod, which systemd can log and respawn: `podman pod create --device=/dev/sdc:/dev/sdc:rwm` https://docs.podman.io/en/stable/markdown/podman-pod-create....
If you need encryption of data at rest for industry compliance for example, you would need to provision said storage device(s) and then manage key rotation and unattended upgrade and reboot for the hosts and/or containers that hold the drive keys.
Maybe this is where they explain that part of the iSCSI SAN;
> If you are reading this blog article, and you are thinking about using Postgres in a Cloud Native environment, without any Kubernetes background, my advice is to seek immediate professional assistance of a certified service provider from day 0 – just because everyone is going on Kubernetes doesn’t mean that’s necessarily a good idea for your organization!
USDA APHIS approves sale of genetically modified glowing Petunia
There's a wait-list sign up on their site; and /news: https://light.bio/news https://x.com/light_bio
Calling your computer a Turing Machine is controversial
Turing was familiar with the Jacquard loom, which weaves patterns from punch cards, and with Babbage's work, and was then tasked with brute-forcing a classical cipher.
Quantum wave theory existed, but wasn't considered necessary for classical brute-forcing; Shor's algorithm for integer factorization, for example, wasn't published until 1994.
Turing's paper does not specify concurrency, local or distributed. (Though is non-distributed multi-core local concurrency just also a Turing Machine?)
Is anything sufficient?
From "Church–Turing–Deutsch principle" https://en.m.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%... :
> The principle was stated by Deutsch in 1985 with respect to finitary machines and processes. He observed that classical physics, which makes use of the concept of real numbers, cannot be simulated by a Turing machine, which can only represent computable reals. Deutsch proposed that quantum computers may actually obey the CTD principle, assuming that the laws of quantum physics can completely describe every physical process.
Do modern QCs allow a continuum of computable reals either? Not really, not yet; with qubits, qudits, or qutrits there is no complete real (0,1) interval. There are mean Expectation Values, but on actual QCs (as opposed to quantum circuit simulators) it's either 0 or 1 for a given run of the circuit.
TASK: Represent e.g. π+1e-10 on a classical and then a modern quantum computer.
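On the classical side, the task runs into finite precision immediately. A quick stdlib sketch of where IEEE-754 doubles stop distinguishing π from its neighbors (π+1e-10 survives rounding; π+1e-16 does not):

```python
import math

# 64-bit floats carry ~15-16 significant decimal digits, so a
# perturbation of 1e-10 on pi (~3.14) survives rounding...
print(math.pi + 1e-10 != math.pi)   # True
# ...but a perturbation smaller than half an ulp of pi vanishes.
print(math.pi + 1e-16 == math.pi)   # True
# The gap between pi and the next representable double:
print(math.ulp(math.pi))            # ~4.4e-16
```

So even classically we only ever have a finite, computable subset of the reals; exact-precision types (`decimal`, `fractions`, symbolic π in SymPy) push the boundary out but never reach a continuum.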
How many quantizations of theta π can there be?
And then with Bloch sphere rotations we have every possible unitary quantum operator?
And still there can be side channels in formally-verified classical and classical+quantum computers.
DNA-based programmable gate arrays for general-purpose DNA computing
From the article:
> Abstract: [...] We further show that integration of a DPGA with an analog-to-digital converter can classify disease-related microRNAs. The ability to integrate large-scale DPGA networks without apparent signal attenuation marks a key step towards general-purpose DNA computing
Quantum dots can do TVs and also QC
"Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision" (2023) https://www.science.org/doi/10.1126/sciadv.adh8508 https://news.mit.edu/2023/arrays-quantum-rods-could-enhance-...
"AI system can generate novel proteins that meet structural design targets" (2023) https://news.mit.edu/2023/ai-system-can-generate-novel-prote... :
> These tunable proteins could be used to create new materials with specific mechanical properties, like toughness or flexibility
Lego axes plan to make bricks from recycled bottles
They've undoubtedly discovered the hard way just how difficult and expensive it is to achieve any sort of consistent product with recycled plastic.
You can produce "something" --- but it will usually be a blackish/gray mess that is hard to re-color, with variable viscosity, melting point and other properties that make it hard to mold in mass production --- and the result probably won't look like something that should end up in your kid's mouth.
There are some valid use cases for recycled plastics but kid's toys may not be one of them.
What LEGO brick colors can be made with unbleached biocomposites ("bioplastics") from e.g. soy, hemp, and algae?
/? lego hemp plastic: https://www.google.com/search?q=lego+hemp+plastic
/? stronger and lighter than steel and aluminum [hemp] https://www.google.com/search?q=Stronger+and+lighter+than+st...
And also TIL about /? 2DPA-1 melamine waterproof coating: https://www.google.com/search?q=2DPA-1
Though TBH layers of recycled paper pulp and phenolic resin might work; https://news.ycombinator.com/item?id=36857929 :
> RichLite, Paperlite, and Paperstone make recycled paper outdoor waterproof vert ramps, countertops, siding, and flooring; but not yet roofing FWIU?
... and they're sandable and buffable.
Also FWIU, Hemp Wood is made with a sustainable (organic?) binder (in currently still one factory in KY, USA), which is a different process than High Pressure Injection Molding or 3d printing with organic filament.
Interlocking hempcrete blocks by: JustBiofiber https://www.youtube.com/results?search_query=just+biofiber
"Eco-Friendly Construction Breakthrough: Lego-like Hempcrete Blocks That Don't Require Framing" https://www.core77.com/posts/91260/Eco-Friendly-Construction... :
> The benefits of the hempcrete bricks is that they're lightweight (easier to transport), fireproof, serve as natural insulators that also regulate moisture, and they're eco-friendly--hemp plants lock up CO2 as they grow. [Carbon sequestration, nitrogen fixation]
> The downside of hempcrete bricks is that they were not load-bearing, and could only be installed with a framed structure, as shown below. [...]
> However, Canadian company JustBioFiber has managed to eliminate that downside, while also adding an innovation that makes the bricks even easier to assemble into a structure. What they've done is to integrate a proprietary structure hidden within each brick itself, so that they can actually function as load-bearing. "We can currently build 3-4 story buildings, with each project having 3rd party structural engineering signoff," the company claims.
Rounded radial block dimensions would be helpful; round homes (and silos) minimize wind shearing force.
LEGO Bricktales does 3D CAD with Design Verification with Simulation; build a bridge and test it out, journey to other worlds, etc. https://www.google.com/search?q=lego+bricktales
Verification and validation > Categories, Aspects: https://en.wikipedia.org/wiki/Verification_and_validation#Ca...
"Algae-based blocks could make for a more sustainable building" (2023) but what about LEGOs? https://news.ycombinator.com/item?id=37693225
It's sad, but to be fair it's "reduce reuse recycle" and there's no toy more reusable than Lego.
I wonder where all the bricks go. I still have the Lego bricks my parents got for my older sister, which I inherited. It’s a huge box; most of it was passed down to our family from other families.
My daughter has already a massive brick collection just because of the sets she was presented with.
By now every child should have no need anymore to buy Lego bricks, so many should be in circulation. I just fear they are being thrown away.
Nobody throws away Lego bricks; there is a huge market, and you can sell them in various ways (by kilograms, by pieces).
Interactive GCC (igcc) is a read-eval-print loop (REPL) for C/C++
Similar to Cling[1] from ROOT.
Xeus-cling is a Jupyter Kernel for C/C++: https://github.com/jupyter-xeus/xeus-cling#a-c-notebook
With the xeus-cling Jupyter Kernel for C/C++, variable redefinitions in subsequent notebook input cells do not raise a compiler warning or error.
There's JsRoot, which may already work with JupyterLite in WASM in a browser tab?
There's a ROOT kernel for Jupyter, too: https://github.com/root-project/root/tree/master/bindings/ju...
Instead of the ROOT Jupyter Kernel, you can just call into ROOT from Python with PyRoot (from a notebook that specifies e.g. ipykernel, xeus-python, or pyodide Jupyter kernels).
"ROOT has its Jupyter Kernel!" (2015) https://root.cern/blog/root-has-its-jupyter-kernel/
IDK if there are Apache Arrow bindings for ROOT?; though there certainly are for C/C++, Python, and other languages
You must install jupyter_console to use Jupyter kernels from the CLI like IPython with ipykernel.
In addition to IPython/Jupyter notebook, jupyterlab, vscode, and vscode.dev+devpod;
awesome-cpp#debug: https://github.com/fffaraz/awesome-cpp#debug
"Debugging a Mixed Python and C Language Stack" (2023) https://news.ycombinator.com/item?id=35710350
ROOT: https://en.wikipedia.org/wiki/ROOT
"Root: CERN's scientific data analysis framework for C++" (2019), because of PyROOT to C++: https://news.ycombinator.com/item?id=20691614 :
`conda install -c conda-forge -y root jupyterlab jupyter_console xeus-cling jupyterlite`
SymPy's lambdify() doesn't support ROOT but does support many other ML and NN frameworks; From "Stem formulas" (2023) https://news.ycombinator.com/item?id=36839748 :
> sympy.utilities.lambdify.lambdify() https://github.com/sympy/sympy/blob/a76b02fcd3a8b7f79b3a88df... :
>> """Convert a SymPy expression into a function that allows for fast numeric evaluation [e.g. the CPython math module, mpmath, NumPy, SciPy, CuPy, JAX, TensorFlow, SymPy, numexpr,]
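A small example of the quoted API, using the "math" backend named in the docstring (one of the documented numeric targets; `numpy`, `mpmath`, etc. work the same way):

```python
import math
import sympy as sp

x = sp.symbols("x")

# lambdify compiles a SymPy expression into a fast numeric function;
# "math" targets the CPython math module.
f = sp.lambdify(x, sp.sin(x) * sp.exp(-x), "math")

print(f(1.0))  # same value as math.sin(1.0) * math.exp(-1.0)
```

A ROOT backend would presumably follow the same pattern, translating `sp.sin` etc. to the corresponding ROOT math calls.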
> JsRoot in JupyterLab
"jsroot and JupyterLab" https://github.com/root-project/jsroot/issues/166
JupyterLite docs > Create a custom kernel: https://jupyterlite.readthedocs.io/en/stable/howto/extension...
`jupyter lite` builds a set of packages into WASM WebAssembly with emscripten empack.
JupyterLite docs > Configuring the pyodide kernel > Adding wheels https://jupyterlite.readthedocs.io/en/stable/howto/index.htm...
Vscode.dev also supports the pyodide kernel.
Does `pip install root` work in JupyterLite (in the pyodide Python kernel)? Probably not, because root is not a plain Python package.
- [ ] Create emscripten-forge recipes for JsRoot and root, so that root is usable with the pyodide kernel supported by JupyterLite and pyodide
emscripten-forge recipes are compiled, packaged, and hosted.
emscripten-forge/recipes//recipes/recipes_emscripten/picomamba/recipe.yaml: https://github.com/emscripten-forge/recipes/blob/main/recipe...
Antimatter Reacts to Gravity in the Same Way as Ordinary Matter, Physicists Find
Damnit. There goes my last holdout hope for exotic matter capable of powering the Alcubierre drive.
Guess we're stuck here now.
A-drives require negative mass, not Antimatter. Antimatter has positive mass, so there was always good reason to believe it would react to gravity the same as normal matter.
Though, you should probably just assume it won't work, given that the fuel requirements for the exotic matter, even after optimization, are insane. And it relies on a quirk in general relativity that may very well be plugged when we discover a unifying theory.
https://news.ycombinator.com/item?id=37227511 :
> SQG Superfluid Quantum Gravity says there needn't be antimatter; there is pressure and there are phases of matter in a varyingly Superfluidic cosmos. The Dirac wikipedia article also mentions SQG.
"Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017) https://hal.science/hal-01248015/
Fluid equations:
Bernoulli's:
Biot-Savart:
Navier-Stokes:
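For reference, the standard textbook forms of the three equations named above (the Navier-Stokes form shown assumes incompressible Newtonian flow):

```latex
% Bernoulli (steady, inviscid, incompressible, along a streamline)
\frac{p}{\rho} + \frac{v^2}{2} + g z = \text{const}

% Biot-Savart (velocity induced by a vortex filament of circulation \Gamma)
\mathbf{v}(\mathbf{r}) = \frac{\Gamma}{4\pi}
  \int \frac{d\boldsymbol{\ell} \times (\mathbf{r} - \mathbf{r}')}
            {\lvert \mathbf{r} - \mathbf{r}' \rvert^{3}}

% Navier-Stokes (incompressible), with the continuity constraint
\rho \left( \frac{\partial \mathbf{v}}{\partial t}
  + (\mathbf{v} \cdot \nabla)\mathbf{v} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{v} + \mathbf{f},
  \qquad \nabla \cdot \mathbf{v} = 0
```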
Anosov flows and CFD ((Quantum) Computational Fluid Dynamics) fluid predictability: https://github.com/dynamicslab/pysindy/issues/383#issuecomme...
Desalination system could produce fresh water that is cheaper than tap water
How destructive is the hot dense brine (the waste) from desalination systems? Seems like it would sink to the floor and create a dead zone.
It's not good; desalination plants have to mitigate this with long diffusers that regulate the rate and concentration of the brine being put back into the ocean.
Here is one for sale: https://www.jains.com/Pipefittings/JainPEPipes/spacial%20fit...
Desalination: https://en.wikipedia.org/wiki/Desalination
Brine > Uses: https://en.wikipedia.org/wiki/Brine :
> Culinary, Chlorine generation, Refrigerating fluid, Water softening and purification, De-icing, Quenching
Uses for Brine / NaCl not listed on Wikipedia:
Hypochlorite generation. Hypochlorite is the primary sanitizing component of household bleach. Hypochlorite can be made with a 5V USB hypochlorite generator, salt, water, and watts of electricity.
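As a sanity check on the electricity requirement, Faraday's law gives an upper bound on hypochlorite yield per amp-hour (this sketch assumes the two-electron anodic pathway and 100% current efficiency; real small electrolysis cells yield less):

```python
def naocl_grams_per_amp_hour(current_efficiency=1.0):
    """Faraday's-law upper bound for NaOCl output from brine
    electrolysis, assuming a 2-electron process per hypochlorite."""
    FARADAY = 96485.0   # C per mol of electrons
    M_NAOCL = 74.44     # g/mol, sodium hypochlorite
    coulombs = 3600.0   # 1 A sustained for 1 hour
    moles = coulombs / (2 * FARADAY)
    return moles * M_NAOCL * current_efficiency

print(naocl_grams_per_amp_hour())  # ~1.39 g NaOCl per A*h, at best
```

So a 5V USB generator drawing ~1 A can at most make a gram or so of hypochlorite per hour, which is consistent with these being small-batch sanitizer makers rather than bleach factories.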
Salt-based cleaning products; "Non-Toxic Cleaners and EPA Disinfectants" https://saltbased.com/
Energy storage; thermal battery (as heated by concentrated solar, for example)
Energy storage; /? brine NaCl batteries:
Sodium-ion Battery: https://en.wikipedia.org/wiki/Sodium-ion_battery
/? Proton battery brine / sodium
Not brine, but if you're already processing seawater:
Diesel can be made by processing lots of seawater
Hydrolysis and Electrolysis; [Green] Hydrogen production
Nuclear Fusion; to extract D, T, He3, and He4 from (Helion,)
What can be made with Brine and/or NaCl with modern sustainable production processes involving e.g. lasers and fusion heat?
Salt belt: https://en.wikipedia.org/wiki/Salt_Belt :
> The Salt Belt is the U.S. region in which road salt is used in winter to control snow and ice.
Nebraska roadways are pre-treated and de-iced with brine (instead of rock salt, which corrodes many metals).
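The reason brine pre-treatment works is freezing-point depression; a quick ideal-solution estimate (the van 't Hoff factor of 2 and the ideality assumption are simplifications that break down near saturation, where real NaCl brine bottoms out around -21 °C):

```python
def freezing_point_c(molality_nacl):
    """Ideal-solution freezing point of NaCl brine in deg C:
    dTf = Kf * i * m, with Kf(water) = 1.86 K*kg/mol and i ~ 2
    (Na+ and Cl- each depress the freezing point)."""
    KF_WATER = 1.86
    VANT_HOFF_NACL = 2
    return -KF_WATER * VANT_HOFF_NACL * molality_nacl

print(freezing_point_c(1.0))  # ~-3.7 C for a dilute 1 mol/kg road brine
```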
Though listed as a DIY weed killer ingredient, sodium is a desiccant which dries out soil and prevents plant growth, so salt on the lawn will kill weeds but then leave a dead patch.
Does discharge of fresh water into the ocean by desalination plants, for example, affect the thermal content of the water due to formation of haloclines and other thermochemical effects?
Solar pond: https://en.wikipedia.org/wiki/Solar_pond :
> A solar pond is a pool of saltwater which collects and stores solar thermal energy. The saltwater naturally forms a vertical salinity gradient also known as a "halocline", in which low-salinity water floats on top of high-salinity water.
The SR-71 Blackbird Astro-Nav System worked by tracking the stars
You can see one of these up close - both the device, and the airplane! - at the Evergreen Aerospace Museum in McMinnville, OR.
They also have on display another Blackbird payload labeled DEF-H. It’s a nondescript white box which you are allowed to look at, but not allowed to know what it does. XD
There are a lot of these on display, it seems. The one we saw was at the SAC museum outside of Omaha.
The SAC Museum Wikipedia page links to the website page on this craft: https://en.wikipedia.org/wiki/Strategic_Air_Command_%26_Aero... https://www.sacmuseum.org/what-to-see/aircraft/sr-71a-blackb...
Quantum navigation systems could also determine position from quasi-stable astral signal sources, FWIU. https://news.ycombinator.com/item?id=36222625
Sunstone: https://en.wikipedia.org/wiki/Sunstone_(medieval)
Amplituhedra and cost of calculation: https://en.wikipedia.org/wiki/Amplituhedron
Biden Launches the American Climate Corps and AmeriCorps NCCC Forest Corps
What are some good resources for a new #ActOnClimate and #CleanEnergy jobs & training & service program administered through AmeriCorps?
"Drones that can plant 100k trees a day" https://news.ycombinator.com/item?id=16260892
FWIU, to grow a forest to prevent and fight desertification, you have to plant more than trees: trees and all of the other little plants.
Forest ecology > Matter and energy flows: https://en.wikipedia.org/wiki/Forest_ecology
Ecological succession > Succession by habitat type > Forest, Wetland, Grassland: https://en.wikipedia.org/wiki/Ecological_succession#Successi...
/? desertification: https://www.youtube.com/results?sp=mAEA&search_query=Deserti...
TIL about residue mulching and no-till farming. You can just mulch leaves around the trees with a lawnmower if you don't cut into the roots; that fertilizes the tree, prevents mud, and keeps the lawn from getting marshy (removing the leaves is counterproductive).
YouTube channels / people who might have some ideas for a new Climate Corps program:
Andrew Millison: https://youtube.com/@amillison
Practical Engineering: https://youtube.com/@PracticalEngineeringChannel
Dr Elaine Ingham: https://youtube.com/@soilfoodwebschool
Wranglerstar: "How the US Forest Service splits firewood" https://youtube.com/shorts/LgiIZBlRL1U?si=nEP3hYxBd9sxFwlk USFS RED bag https://youtube.com/watch?v=JBVX-KWsJ6I&si=BQSPctpVTN6Ynn-Z
"xPrize Wildfire – $11M Prize Competition": https://news.ycombinator.com/item?id=35657181 https://westurner.github.io/hnlog/#story-35657181
For information system Services:
USDS US Digital Services Playbook: https://playbook.cio.gov/
https://techfarhub.usds.gov/get-started/ :
- What is a digital service?
- > Welcome to the newly refreshed TechFAR Hub, updated in January 2023, a resource to help government acquisition and program professionals buy, build, and deliver modern digital services while staying on the correct side of compliance! We have reorganized TechFAR Hub since its original release [...] Pre-Solicitation, Solicitation, Evaluation, Contract Administration
USDS: https://en.wikipedia.org/wiki/United_States_Digital_Service
18F: https://en.wikipedia.org/wiki/18F
USWDS US Web Design Standards > Implementations https://designsystem.digital.gov/documentation/implementatio...
(PBS) "Leave it to Beavers" [to irrigate the world; they are drawn to a walkman cassette of running water; they have been airdropped] https://news.ycombinator.com/item?id=30738322
"Zero energy ready homes are coming" (2023-03) https://news.ycombinator.com/item?id=35064493
JPL Open Source Rover Project
Dumb question, kind of unrelated: In the process of building a remote controlled cart to haul small heavy loads offroad. Have no experience in engineering or robotics. Where could one go to learn what's needed, say, starting at the motor/drivetrain? (ex: what kind of motor to use, exterior or interior gearing, how to calculate motor size/voltage/gearing needed to haul X weight up Y gradient, steering geometry, actuators for remote steering, etc)
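For the "calculate motor size to haul X weight up Y gradient" part of the question, a back-of-the-envelope sketch (all numbers illustrative; it ignores rolling resistance, acceleration, and real motor torque curves, so add generous margin):

```python
import math

def required_motor_torque(mass_kg, slope_deg, wheel_radius_m,
                          gear_ratio, efficiency=0.8):
    """Rough motor-torque estimate (N*m) for hauling a load up a grade.

    gear_ratio is motor turns per wheel turn; efficiency is an assumed
    80% drivetrain loss factor. Ignores rolling resistance and
    acceleration, so this is a lower bound on the motor you need.
    """
    g = 9.81
    # Force to hold the load against gravity along the slope.
    force_n = mass_kg * g * math.sin(math.radians(slope_deg))
    wheel_torque = force_n * wheel_radius_m
    return wheel_torque / (gear_ratio * efficiency)

# e.g. a 50 kg cart, 15 degree grade, 0.15 m wheels, 20:1 gearbox
print(required_motor_torque(50, 15, 0.15, 20))  # ~1.2 N*m at the motor
```

The same skeleton extends naturally: add a rolling-resistance term (coefficient times normal force) and multiply by a safety factor before picking a motor off a torque/speed datasheet.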
Robot ethics: https://en.wikipedia.org/wiki/Robot_ethics
Workplace robotic safety > Hazards, Hazards controls: https://en.wikipedia.org/wiki/Workplace_robotics_safety :
> There are four types of accidents that can occur with robots: impact or collision accidents, crushing and trapping accidents, mechanical part accidents, and other accidents [...]
> There are seven sources of hazards that are associated with human interaction with robots and machines: human errors, control errors, unauthorized access, mechanical failures, environmental sources, power systems, and improper installation.
LEGO Boost Creative Toolbox has a short mandatory safety lesson in the app.
What's a good robotics safety program?
--
Pybricks documentation: https://docs.pybricks.com/en/latest/
"I Put 44 Mechanisms In 1 LEGO® Machine..." https://youtube.com/watch?v=ykIYP4z-eMk&si=1hh8KvoRxCVamH4y
"Lego Technic Gearing Ratio Calculator Tool" https://youtube.com/watch?v=yUcNnj5NClY&si=hKB-b1pB3HuVM6ed
--
ROS2 docs > Tutorial https://docs.ros.org/en/iron/Tutorials.html
ROS Industrial Training > Session 1 - ROS Concepts and Fundamentals (ROS2) https://industrial-training-master.readthedocs.io/en/latest/...
--
MoveIt2: https://moveit.picknik.ai/main/index.html https://github.com/azalutsky/MoveIt2Docker
--
RoboStack > Jupyterlab-ROS, Jupyter ROS https://robostack.github.io/JupyterRos.html
I’m thinking someone could make this run in 3d graphics saving me a few $$
Sure, just use gazebo
Gazebosim > ROS with Gazebo: https://github.com/gazebosim#using-ros-with-gazebo
"Show HN: Ghidra Plays Mario" (2023) https://news.ycombinator.com/item?id=37475761 :
[RL, ..., MuZero unplugged w/ PyTorch ]
> Farama-Foundation/Gymnasium is a fork of OpenAI/gym and it has support for additional Environments like MuJoCo: https://github.com/Farama-Foundation/Gymnasium#environments
> Farama-Foundatiom/MO-Gymnasiun: "Multi-objective Gymnasium environments for reinforcement learning": https://github.com/Farama-Foundation/MO-Gymnasium
Idea: Generate code like BlenderGPT to generate drone rover sim scenarios and environments like the Moon and Mars
TIL about the "Bush Winch", which mounts to a tire for off-road vehicle recovery. Note the winch line blankets, snatch blocks, and tree protectors in this video about off-road vehicle recovery: https://youtu.be/OXxLh8shMu8?si=bv59t8T1or07-K7l
Also RIL about o3de, which does PhysX: https://github.com/bernhard-42/jupyter-cadquery/issues/99#is...
O3de has an ROS2 module: https://www.docs.o3de.org/docs/api/gems/ros2/
NASA/openmct: https://github.com/nasa/openmct
Demand for Green Skills Outpacing Supply – But There's Hope
[deleted]
From the article:
> Bycer and I connected recently to discuss LinkedIn’s 2023 Global Green Skills Report and how workers in traditional careers can develop the kind of skills needed for a greener future. Here’s what we covered.
> Green skills in demand: LinkedIn’s research found that skills in carbon accounting, carbon credits, emissions trading, impact assessment and sustainability reporting are currently among the fastest-growing green skills in the U.S.
LinkedIn > "Global Green Skills Report 2023" https://economicgraph.linkedin.com/research/global-green-ski... PDF: https://economicgraph.linkedin.com/content/dam/me/economicgr...
Luiz André Barroso has died
"The Datacenter as a Computer: Designing Warehouse-Scale Machines" (3rd Edition) (2018) https://www.google.com/search?q=The+Datacenter+as+a+Computer... https://books.google.com/books?id=b951DwAAQBAJ
Also the co-author of The Tail at Scale, one of the more interesting papers on web-scale architecture that I've read: https://research.google/pubs/pub40801/
[deleted]
Ask HN: Why don't datacenters have passive rooflines like Net Zero homes?
E.g. the "shed roof" is a passive roofline?
And other questions: "Ask HN: Does mounting servers parallel with the temperature gradient trap heat?" (2020) https://news.ycombinator.com/item?id=23033210
Do you have any examples of the kind of datacenter roofs you think are problematic?
OTOH from memory, datacenter roofs are typically flat with HVAC gear.
They don't have windcatchers or motionless rooftop wind energy turbines.
And they aren't angled to optimize thermal flow through the facility.
What are some examples of passive rooflines?
The shed roof is an example of a passive roofline: it angles up toward one side and drives passive thermal exchange. FWIU you can just open and close vents at the peak of the roof according to time of day, internal and external temperature and humidity, and the weather forecast.
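A rough sketch of how much air buoyancy alone can move through such a ridge vent, using the standard natural-ventilation ("stack effect") formula (the vent area, stack height, temperatures, and discharge coefficient below are illustrative assumptions, not datacenter measurements):

```python
import math

def stack_effect_flow_m3s(area_m2, height_m, t_inside_k, t_outside_k,
                          discharge_coeff=0.65):
    """Buoyancy-driven volumetric flow through a high vent:
    Q = Cd * A * sqrt(2 * g * h * dT / T_inside)."""
    g = 9.81
    dt = t_inside_k - t_outside_k
    return discharge_coeff * area_m2 * math.sqrt(
        2 * g * height_m * dt / t_inside_k)

# 2 m^2 ridge vent, 5 m stack height, 32 C inside vs 22 C outside
print(stack_effect_flow_m3s(2.0, 5.0, 305.0, 295.0))  # ~2.3 m^3/s
```

A couple of cubic meters per second is meaningful for a house, but a rack row rejecting tens of kW needs far more airflow, which is one plausible reason datacenters default to forced-air HVAC on flat roofs.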
Passive cooling > Modulation and heat dissipation techniques: https://en.wikipedia.org/wiki/Passive_cooling#Modulation_and...
/? Passive roofline: https://www.youtube.com/results?sp=mAEA&search_query=Passive...
There's probably some reason that there aren't data centers with angled roofs other than that there's typically HVAC equipment up there?
More surface area at the top of the structure should make for greater passive thermal exchange; though a more focused extreme thermal gradient is probably most useful for thermal energy reclamation with arrays of solid-state thermoelectric heat engines?
Ok, so "flat with HVAC gear" is what you consider "not passive"? Better question for you: why aren't datacenter roofs covered in solar, especially in places like Sparks and Clark, NV?
What open source tools could we use to model the impact on thermal exchange efficiency of: roof surface area given prevailing wind, the impact of rooftop renewables on laminar flow, and features that cause passive airflow through the building?
FWIU wind turbines and solar panels have a local heating effect at ground level; but does that help exchange wasted heat with the air above the datacenter? (The heat is wasted inefficiently instead of thermoelectric recapture)
Really white paint reflects thermal radiation, but is probably very bright for techs maintaining units on the moon. Some PV (and maybe TPV? [Agrivoltaic]) panels have an underside with an additional layer of inverted panels.
awesome-fluid-dynamics: https://github.com/lento234/awesome-fluid-dynamics
TIL about Anosov flows? https://github.com/dynamicslab/pysindy/issues/383#event-1002...
awesome-machine-learning-fluid-mechanics > Opensource CFD codes lists openfoam and a few others https://github.com/ikespand/awesome-machine-learning-fluid-m...
Open Sustainable Technology: https://github.com/protontypes/open-sustainable-technology#o...
SunPy: https://sunpy.org/
The urgent need for memory safety in software products
Genuine question: do advisories like this ever directly result in action being taken? Do they carry weight with stakeholders?
(looking for real-world anecdotes, not preexisting assumptions)
Fedora 40 Eyes Dropping Gnome X11 Session Support
Well, if they can make CUDA and Wayland work simultaneously...
(Baseline Nvidia drivers without CUDA already work fine with Wayland).
From "Wayland does not support screen savers" (2023) https://news.ycombinator.com/item?id=37385627 :
> the NVIDIA proprietary Linux module for NVIDIA GPUs hardware video decode doesn't work on Wayland; along with a number of other things: "NVIDIA Accelerated Linux Graphics Driver README and Installation Guide > Appendix L. Wayland Known Issues" https://download.nvidia.com/XFree86/Linux-x86_64/535.54.03/R...
What is NVIDIA's annual developer salary commitment to non-HPC Linux compared to AMD with ROCm and Intel?
(EDIT)
Most nvidia driver Linux kernel module re-packaging projects are not on GitHub, which supports a FUNDING.yml for specifying how to donate.
Trusted builds of the GPU modules are necessary: running with SecureBoot on and the Silverblue Fedora ostree distribution requires locally signing the modules after every upgrade and adding an additional local key to the SecureBoot BIOS keystore.
I don't think this is a https://SLSA.dev -compliant GPU module software supply chain:
NVIDIA Src -> RPMfusion pkg & builder -> rpm-ostree toolbox dnf install && Local_module_signing_with_local_secureboot_key
I don’t know, but ROCm is a joke and unusable. 1200 lines of some very complicated C++ for a simple FFT… That’s ridiculous. And nobody seems to be doing anything about it — and then they wonder, why they have 1% or so market share in AI.
It’s not a hardware problem, it is an API design problem.
It has nearly the same interface design as cuFFT, admittedly with less documentation. I’m not sure I understand this complaint versus the competition. https://rocm.docs.amd.com/projects/hipFFT/en/latest/api.html
If you’re complaining about cuFFTs design, how BLAS like interfaces are outdated, and a lack of a proper hypothetical heterogenous array programming language?, sure. But it’s not much better in Nvidia land.
https://www.amd.com/en/graphics/servers-solutions-rocm-hpc :
> ROCm™-optimized libraries currently include BLAS, FFT, RNG, Sparse, NCCL (RCCL) and Eigen
... and Numba, which you can compile a symbolic expression for with sympy.utilities.lambdify. https://docs.sympy.org/latest/modules/numeric-computation.ht...
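As a sketch of that lambdify workflow (plain SymPy with the stdlib math backend; targeting 'numpy' or 'numba' is the same call when those packages are installed):

```python
import sympy as sp

# Build a symbolic expression, then compile it to a numeric callable.
x = sp.symbols('x')
expr = x**2 + sp.sin(x)

# modules='math' uses the stdlib; 'numpy' or 'numba' work the same way
# when those packages are installed.
f = sp.lambdify(x, expr, modules='math')

print(f(0.0))  # 0.0**2 + sin(0.0) == 0.0
```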
AMD ROCm does do HPC. Which desktop/workstation AMD GPUs are now supported with optimization?
There are libraries, yes. I was talking about using it directly.
No, I mean for writing new code. I know that FFT is already in the libraries, I was using it as an example. About 200 LoC in CUDA, and 1200 lines of code in ROCm. ROCm is boilerplate-ridden and hardly usable for new code.
That's a contrived example, then. Also because there's already an optimized version of FFT in their libraries.
At least with open-source AMD code, it can be fixed with Pull Requests.
FWIU, OpenCL is insufficient, CUDA is the closed-source fanboy favorite that the industry can't move away from, and Intel OneAPI may be the most portable but not the most performant.
Impact-wise, contributing to the ROCm and OneAPI tools to help them be more competitive is in consumers' interest.
The industry can't move from CUDA because you can easily write anything in CUDA, as opposed to ROCm. I once needed to write a lattice Boltzmann CFD simulation; it is so much easier in CUDA compared to ROCm that I wouldn't even start the latter unless I were forced to. Everything takes 5x the amount of code and 5x the amount of time.
ROCm is bad API design, and no amount of gradual tinkering will save it. I wonder why AMD can't design something better. HIP exists, but it is a "lesser CUDA" (CUDA is actually a mediocre API, we can design things much better than that now).
What was the difference in runtime performance, and did you try CuPy?
https://github.com/cupy/cupy :
> CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.
Projects using CuPy: https://github.com/cupy/cupy/wiki/Projects-using-CuPy
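The drop-in pattern is usually just an import swap; a minimal sketch that falls back to NumPy when CuPy or a GPU isn't present:

```python
try:
    import cupy as xp   # GPU path, if CuPy and a CUDA/ROCm device exist
except ImportError:
    import numpy as xp  # CPU fallback; identical array API for this snippet

a = xp.arange(6, dtype=xp.float64)
result = xp.sqrt(a * a).sum()  # runs on GPU or CPU depending on the import
print(float(result))  # 0+1+2+3+4+5 == 15.0
```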
Show HN: ElectricSQL, Postgres to SQLite active-active sync for local-first apps
Hi HN, James, Valter, Sam and the team from ElectricSQL here.
We're really excited to be sharing ElectricSQL with you today. It's an open source, local-first sync layer that can be used to build reactive, realtime, offline-capable apps directly on Postgres with two way active-active sync to SQLite (including with WASM in the browser).
Electric comprises a sync layer (built with Elixir) placed in front of your Postgres database and a type safe client that allows you to bidirectionally sync data from your Postgres to local SQLite databases. This sync is CRDT-based, resilient to conflicting edits from multiple nodes at the same time, and works after being offline for extended periods.
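Not ElectricSQL's actual algorithm (their sync builds on richer CRDTs from Shapiro et al.), but a minimal last-writer-wins register sketch illustrates the conflict-free merge idea:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LWWRegister:
    """Last-writer-wins register: merge keeps the later (timestamp, node) write."""
    value: str
    timestamp: int
    node: str  # tie-breaker so merge is deterministic on equal timestamps

    def merge(self, other: "LWWRegister") -> "LWWRegister":
        # Commutative, associative, idempotent -- replicas converge
        # regardless of the order in which updates arrive.
        return max(self, other, key=lambda r: (r.timestamp, r.node))

a = LWWRegister("draft", 1, "laptop")
b = LWWRegister("final", 2, "phone")
assert a.merge(b) == b.merge(a)  # order-independent
print(a.merge(b).value)  # final
```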
Some good links to get started:
- website: https://electric-sql.com
- docs: https://electric-sql.com/docs
- code: https://github.com/electric-sql/electric
- introducing post: https://electric-sql.com/blog/2023/09/20/introducing-electri...
You can also see some demo applications:
- Linear clone: https://linear-lite.electric-sql.com
- Realtime demo: https://electric-sql.com/docs/intro/multi-user
- Conflict-free offline: https://electric-sql.com/docs/intro/offline
The Electric team actually includes two of the inventors of CRDTs, Marc Shapiro and Nuno Preguiça, and a number of their collaborators who've pioneered a lot of tech underpinning local-first software. We are privileged to be building on their research and delighted to be surfacing so much work in a product you can now try out.
I saw this recently and I am really excited for, hopefully, a renaissance in local-first apps; it's been quite a long time coming. That said, it seems like there's still a lot of problems to work on in this space and it'll take a while for different approaches and implementations to approach their local maxima.
I am curious about encryption. Assuming that ElectricSQL is handling essentially all of the syncing, is it possible to build applications that use end-to-end encryption for some of their state? I am wondering specifically because CRDTs seem to simplify E2EE a lot since well, any client that can write can resolve conflicts any time. I haven't looked very deeply yet as I've been very busy, but to me that's the main thing I've been really curious about. I know you can at least do this with Y.js which helpfully provides a way to use symmetric encryption for WebRTC synchronization, but this obviously is a whole lot more than that.
You can always encrypt local data even if the metadata is not encrypted. But that restricts it to the device itself by default, not even your other devices on the same account can read it.
If you want encrypted and shared with other devices or users you have to surface key management to users which is very difficult to do. Signal and a few others do this but they’re very bespoke to the specific business logic. To provide a general purpose abstraction layer is still an open problem, afaik.
I'm not asking for the problem of key derivation to be solved by ElectricSQL per se, just wondering if it can handle E2E encrypted data in any way. That said, I disagree with how difficult it is: there are a lot of approaches that are not too ridiculous. For example, a simple approach to E2EE is to use a PAKE like SRP to do password authentication, then since the password is kept secret, you can use a normal KDF to derive a symmetric key from the password. From here, you can e.g. store symmetric and public key matter on a server or synced across your clients, encrypted using the password/KDF. If this sounds familiar, it's exactly how password managers work and the main downsides are that it requires password authentication (can't use any auth mechanism that doesn't somehow discretely convey a secret to the client) and the forgot password situation is more complicated (if you have no backups of the key, you can't recover any data. However, that's not an unreasonable compromise in exchange for E2EE.)
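The KDF step of that approach can be sketched with the stdlib (the salt and iteration count here are illustrative; a real system pairs this with a PAKE so the server never sees the password):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # PBKDF2-HMAC-SHA256 from the stdlib; scrypt or Argon2 are stronger choices.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)  # stored alongside the encrypted key material
key = derive_key("correct horse battery staple", salt)

# Same password + salt -> same key on every device; the key never leaves the client.
assert derive_key("correct horse battery staple", salt) == key
assert len(key) == 32
```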
Just like TLS, it'd probably be bad if most people implemented SRP from scratch. That said, I did write my own implementation of SRP-6a (a refined variant of SRP built on discrete-log arithmetic, like Diffie-Hellman) in TypeScript and I found it fairly simple to do. There are also PAKEs that provide even better security properties than SRP-6a, but good implementations of them are still lacking for now.
What Matrix does seems pretty simple, it just has a lot of moving parts and nuance; the underlying keys are essentially sent directly between clients after a key exchange is performed and verified out of band to ensure there is no eavesdropping and that the two clients are connected to who they think they are.
As far as I know, Signal sidesteps key management almost entirely and only allows one logged in device, and everything else must proxy through it. That's pretty lame, though I understand the stakes are very high to get it 'right' and keeping the moving parts to a minimum was likely high priority.
[IETF] "RFC 9420 a.k.a. Messaging Layer Security" does key revocation and rebroadcast fwiu https://news.ycombinator.com/item?id=36815705
W3C DID pubkeys can be stored in a W3C SOLID LDPC: https://solid.github.io/did-method-solid/
Re: W3C DIDs and PKI systems; CT Certificate Transparency w/ Merkle hashes in google/trillian and edns, keybase pgp -h, blockchain-certificates/cert-verifier-js,: https://news.ycombinator.com/item?id=36146424 https://github.com/blockchain-certificates
Oh, thanks for this, I hadn't heard of RFC 9420 until now and it looks very interesting.
Towards a new SymPy
Part II made the front page yesterday: https://news.ycombinator.com/item?id=37426080
A comment there makes what I think is a very good point about "the lack of consolidation of computer algebra efforts": https://news.ycombinator.com/item?id=37430437
I don't know what might drive or foster such consolidation. Maybe Category Theory? Bridging syntax?
How about Lean? [0] There's a whole library of mathematics written down in Lean called Mathlib, which spans most of the undergraduate maths curriculum upto some cutting edge research-level maths. I've commented under the Part II post you linked to as well, describing how I think Mathlib could help the CAS ecosystem.
That's an entirely different thing though. Good luck getting Lean to help you do any symbolic computation whatever. You can use it to prove that a given manipulation is correct. You cannot use it to find a result (there may be a symbolic math library for Lean eventually but currently there isn't).
In Coq, the tactics system supports writing solvers than can both find a result and give you the proof that it's correct. If Lean also has a similar tactics system, then solvers could be written in that.
Lean does have a tactics system (very similar to Coq). However, there is no general tactic that will do, e.g., "sin^2 + cos^2 = 1" for you. (Writing a tactic specifically for that simplification is easy, of course, but that's not what a CAS is). (I doubt there is one in Coq that will do this, although I'm not sure). A symbolic algebra system is expected to do this simplification (and much more advanced ones) automatically. All in all, proof assistants aren't computer algebra systems. You could make computer algebra systems in those frameworks but I'm not aware of any.
Typically, there are 'group', 'ring' and 'field' tactics that will perform simplification in the relevant algebraic structures. Going from that to solving sin^2 x + cos^2 x is simply a matter of writing more such tactics, operating on e.g. real fields or whatever is applicable in any given case.
This is a lot less ad-hoc than whatever CAS's do, but it's also less error prone.
Do any existing CAS systems have configurable axioms? OTOH: Conway's surreal infinities; don't eliminate terms next to infinity too early; each instance of infinity might need a unique identity; a configurable order of operations
All of the axiomatic transformations applied by a CAS like SymPy should/must be in Lean Mathlib somewhere? If nothing else, a lookup_lean_mathlib_definition(expr, 'path/to/mathlib-v0.0.2') or find_similar(expr, AxiomDB) would be useful.
How do CAS differ from Production Rule Systems? https://en.wikipedia.org/wiki/Production_system_(computer_sc...
CAS > Simplification: https://en.wikipedia.org/wiki/Computer_algebra#Simplificatio...
Rewriting: https://en.wikipedia.org/wiki/Rewriting
Because the rulesets are expected to change, rules engines have functionality to compile rules into a tree (e.g. a Rete network) or something better for performance.
eBPF is not a rules engine, but it does optimize filter sets IIRC?
Julia's Symbolics.jl allows users to easily add custom rewrite rules. However, it's not a state-of-the-art CAS. I don't have that much experience with others. Mathematica doesn't allow it, I think.
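For comparison, SymPy also lets users apply custom rewrites via wildcard patterns (an illustrative sin^2 -> 1 - cos^2 rule, not a general simplifier):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.Wild('a')  # wildcard: matches any subexpression

expr = sp.sin(x)**2 + sp.cos(x)**2 + x
# Custom rewrite rule: sin(a)**2 -> 1 - cos(a)**2
rewritten = expr.replace(sp.sin(a)**2, 1 - sp.cos(a)**2)
# Automatic Add evaluation then cancels the cos**2 terms.
print(rewritten)  # x + 1
```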
FWIU Wolfram's searching for a unified model with the Wolfram Physics Project, too; e.g. "The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics" (2022) https://www.wolframscience.com/metamathematics/ https://www.wolframphysics.org/bulletins/
Are fundamental constants other-valued in any Many Worlds interpretations, or are e, i, and Pi always e, i, and pi with the same relations?
Countability and continua (in a Hilbert space of degree n, where n is or is not constant, like the many forms of [quantum discord] entropy and the energy that represents them)
TIL the separable states problem is considered NP-hard, and many models specify independence of observation as necessary.
I don't see how that relates to the question of "does Mathematica allow users to define simplification rules?"
https://www.ma.imperial.ac.uk/~buzzard/xena/natural_number_g... :
> In this game, you get own version of the natural numbers, called `mynat`, in an interactive theorem prover called Lean. Your version of the natural numbers satisfies something called the principle of mathematical induction, and a couple of other things too (Peano's axioms).
Such axioms are hard-coded in SymPy, SageMath, and Mathematica but not in Lean Mathlib (which is not optimized for performance)
If there are an infinite number of unitary transformations (as rotations on a Bloch sphere for example), I find it unlikely that we've yet discovered all of the requisite operators and axioms and coded them into any human CAS that exists in present day.
To build ships that break ice, U.S. must relearn to cut steel
"This new idea could reduce steel’s carbon emissions by 90%" https://www.freethink.com/energy/decarbonizing-steel
Why is steel necessary for ships?
From "Plant-based epoxy enables recyclable carbon fiber" (2022) https://news.ycombinator.com/item?id=30138954 :
> And once it’s completed you have an insanely strong material that makes steel look weak and brittle. It’s reported to be as much as ten times stronger than steel, yet is lighter than fiberglass
Make it out of hemp aerogel and you could carry the boat.
Carbon fiber materials have excellent tensile strengths, especially under steady-ish loads.
The front of an icebreaker's hull mostly faces compressive loads, with lots of shocks from hitting ice.
Try searching for "OceanGate Titan" if you want to read more about the performance of carbon fiber composites under large compressive loads in a marine environment.
Reading further from that thread about hemp on an article about carbon fiber: https://news.ycombinator.com/item?id=30147414
> Have you here contested claims that hemp - biocomposite with resin - has greater Tensile Strength and Compressive Strength than steel and aluminum (and carbon fiber with sustainable binder)?
You need mass to break ice.
> You need mass to break ice.
this is one of the narrow cases where pounds comes off the bench to say, "actually, you need weight"
(icebreakers slide/ride up on top of the ice and the weight of the ship breaks the ice downward)
On Earth's surface, it's not like gravity is optional.
To get real technical about it, gravity is inconstant at constant altitudes when varying latitude and longitude.
9.8m/s^2 at sea level, and it decreases with altitude all the way out to Lagrangian points where the gravitational attraction of the sun (and other local masses) is equal to that of earth; but solar pressure presumably displaces objects at zero-gravity rest.
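That altitude falloff follows the inverse-square law; a quick numerical sketch (spherical-Earth approximation, standard constants):

```python
G_SEA_LEVEL = 9.80665   # m/s^2, standard gravity
R_EARTH = 6.371e6       # m, mean Earth radius

def g_at_altitude(h_m: float) -> float:
    """Inverse-square falloff of Earth's surface gravity with altitude."""
    return G_SEA_LEVEL * (R_EARTH / (R_EARTH + h_m)) ** 2

print(round(g_at_altitude(0), 2))        # 9.81 at sea level
print(round(g_at_altitude(400_000), 2))  # ~8.68 at ISS altitude
```

Even at ISS altitude, gravity is still about 89% of its sea-level value; orbital "weightlessness" is free fall, not the absence of gravity.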
Gravity is 'optional' with quantum locking. Gravity is a weak force.
Gravity is maybe the least lossy form of potential energy storage?
A key feature of ice-breakers is that they operate at sea level, and in fact rely on gravity.
A boat you could carry couldn’t break ice.
78% MNIST accuracy using GZIP in under 10 lines of code
For a comparison with others techniques:
Linear SVC (best performance): 92 %
SVC rbf (best performance): 96.4 %
SVC poly (best performance): 94.5 %
Logistic regression (prev assignment): 89 %
Naive Bayes (prev assignment): 81 %
From this blog page: https://dmkothari.github.io/Machine-Learning-Projects/SVM_wi...
Also it seems from reading online articles that people are able to obtain much better results just by using K-NN, so I imagine that the author just made his job harder by using gzip but I could be wrong about this.
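The gzip trick in these approaches is built around the normalized compression distance (NCD), used as the metric for k-NN; a minimal sketch:

```python
import gzip

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: small when a and b share structure."""
    ca = len(gzip.compress(a))
    cb = len(gzip.compress(b))
    cab = len(gzip.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

x = b"0101" * 64
y = b"9f3a7c2e" * 32
# Similar inputs compress well together, so their distance is lower.
assert ncd(x, x) < ncd(x, y)
```

A test digit is then classified by the labels of its k nearest training digits under this distance.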
Ask HN: Which school produces the best programmers or software engineers?
This may well be entirely anecdotal because I don't think there is any official data on this somewhat ambiguous question.
But based on your experience, if you have worked with quite a number of programmers and engineers from a few schools, which school tends to produce the most well-rounded programmers/engineers (as much as a school can train someone out of the box)?
Which programs offer Formal Methods and TLA+?
Define a relative metric for comparison
Programmer's Competency Matrix: https://cuamckuu.github.io/index.html
Coding Interview University: https://github.com/jwasham/coding-interview-university
Coding Guidelines: https://awesome-safety-critical.readthedocs.io/en/latest/#co...
Predict software quality and/or career success
Predict applied ML/AI failure due to insufficient Data Science fundamentals
By well-rounded do you mean the ACM Computer Science Curriculum; or a strong liberal arts program which emphasizes critical thinking and effective communication; or Emotional Intelligence, Servant Leadership, and Project Management?
InfoSec; Computer Security > Careers: https://en.wikipedia.org/wiki/Computer_security#Careers
The NIST NICE Framework describes Categories (7), Specialty Areas (33), Work Roles (52) and Knowledge, Skills, and Abilities which are in demand in cybersecurity: https://niccs.cisa.gov/workforce-development/nice-framework
You and/or a good program can help you find projects and jobs where you can learn and demonstrate application of KSA's (before you present a certificate for your first job and finally begin your career in lifelong learning). A good program teaches you how to learn; study skills, personal people skills, computer skills.
Is managing a team of engineers engineering, and will they still let me do engineering if I do management (which none of us have taken a course in)?
Which programs have QIS Quantum Information Science in their curricula as more than a paragraph and a quiz question?
DevSecOps > DevSecOps, shifting security left: https://en.wikipedia.org/wiki/DevOps
Build123d: A Python CAD programming library
> Build123d is a python-based, parametric, boundary representation (BREP) modeling framework for 2D and 3D CAD. It's built on the Open Cascade geometric kernel and allows for the creation of complex models using a simple and intuitive python syntax. Build123d can be used to create models for 3D printing, CNC machining, laser cutting, and other manufacturing processes. Models can be exported to a wide variety of popular CAD tools such as FreeCAD and SolidWorks.
> Build123d could be considered as an evolution of CadQuery where the somewhat restrictive Fluent API (method chaining) is replaced with stateful context managers - e.g. with blocks - thus enabling the full python toolbox: for loops, references to objects, object sorting and filtering, etc
Is there a jupyter-cadquery -like tool for build123d?
"Render with Blender and/or o3de" https://github.com/bernhard-42/jupyter-cadquery/issues/99
Is there a blenderGPT-like tool trained on build123d Python models?
examples/heat_exchanger.py: https://github.com/gumyr/build123d/blob/dev/examples/heat_ex...
Seeking comments on the Data Catalog (DCAT) US standard v3.0
- [ ] DOC: The JSON-LD context file links 404: https://doi-do.github.io/dcat-us/#json-ld-context
Also, it says the schema for physical units are specified by this spec?
FWIU, QUDT Quantities, Units, Dimensions, and Types URIs MAY be used with https://Schema.org/QuantitativeValue; and neither CSVW nor Model for Tabular Data and Metadata on the Web specify how to indicate physical quantities and units with a controlled vocabulary with URIs?
Ask HN: How to do literal web searches after Google destroyed the “ ” feature?
I used this quite frequently but since Google """"improved"""" it last year (there was a popular HN post complaining about this) it doesn't work anymore. Search for a domain name with quotation marks for example just recombines the contents of the domain and returns a bunch of unrelated content completely cluttering what I am looking for. Until last year it used to return no search results if there weren't any exact matches, which is the whole point.
Does someone have a work around for this phenomenal Google decision?
Can you give an example? Searching for
"news.ycombinator.com"
works just fine for me. No unrelated content.
How do the search results for a site: query compare to just quoting what appears to be a DNS domain containing punctuation tokens?
inurl:news.ycombinator.com
site:news.ycombinator.com
"news.ycombinator.com"
news.ycombinator.com
Linear code is more readable
It’s a matter of style, and like cooking, either too much or too little salt will ruin a dish.
In this case I hope nobody is proposing a single 1000-line god function. Nor is a maximum of 5 lines per function going to read well. So where do we split things?
This requires judgment, and yes, good taste. Also iteration. Just because the first place you tried to carve an abstraction didn’t work well, doesn’t mean you give up on abstractions; after refactoring a few times you’ll get an API that makes sense, hopefully with classes that match the business domain clearly.
But at the same time, don’t be over-eager to abstract, or mortally offended by a few lines of duplication. Premature abstraction often ends up coupling code that should not have to evolve together.
As a stylistic device, extracting a function which will only be called in one place to abstract away a unit of work can really clean up an algorithm; especially if you can hide boilerplate or prevent mixing of infra and domain concerns like business logic and DB connection handling. But again I’d recommend using this judiciously, and avoiding breaking up steps that should really be at the same level of abstraction.
> So where do we split things?
Cyclomatic complexity: https://en.wikipedia.org/wiki/Cyclomatic_complexity
Overhead: https://en.wikipedia.org/wiki/Overhead_(computing)
Some programming language implementations and operating systems have more overhead for function calls, green threads, threads, and processes.
If each function call creates a new scope, and it's not a stackless language implementation, there's probably a frame and a hashmap/dict/object allocated per call unless TCO Tail-Call Optimization has occurred.
Though, function call overhead may be less important than Readability and Maintainability
The compiler or interpreter can in some cases minimize e.g. function call overhead with a second pass or "peephole optimization".
Peephole optimization: https://en.wikipedia.org/wiki/Peephole_optimization
Code linting tools measure [McCabe,] Cyclomatic Complexity but not Algorithmic Complexity (or the overhead of O(1) lookup after data structure initialization).
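On the call-overhead point above: a hedged CPython micro-benchmark sketch (absolute numbers vary by interpreter and machine; the helper names are illustrative):

```python
import timeit

def inline_sum(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i      # work done inline
    return total

def square(i: int) -> int:
    return i * i            # same work behind a function call

def factored_sum(n: int) -> int:
    total = 0
    for i in range(n):
        total += square(i)  # one extra frame per iteration in CPython
    return total

assert inline_sum(1000) == factored_sum(1000)
t_inline = timeit.timeit(lambda: inline_sum(1000), number=200)
t_factored = timeit.timeit(lambda: factored_sum(1000), number=200)
# t_factored is typically noticeably slower in CPython; readability and
# maintainability usually still win unless this is a measured hot path.
```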
MaplePad – RP2040 Dreamcast controller, VMU, and Purupuru (rumble pack) emulator
Hoping to get controller support for MicroPython w/ LEGO Boost, RP2040js etc, I researched a bunch of Bluetooth BLE + C/Python links here: https://github.com/pybricks/support/issues/262#issuecomment-... :
> https://github.com/DJm00n/ControllersInfo: HID profiles for Xbox 360, Xbox One, PS4, PS5, Stadia, and Switch
> "HID over GATT Profile (HOGP) 1.0" https://www.bluetooth.org/docman/handlers/downloaddoc.ashx?d...
- [ ] MicroPython: wrap the Bluetooth HID over GATT Profile events and buttons
Perhaps your Dreamcast controller could do BLE and VMU wireless.
Really interested in putting together some kind of USB host to HID over GATT profile or Bluetooth HID to i2c/UART bridge.
We’re (Pimoroni) closing in on shipping a dual RP2040 HDMI board (PicoVision) that’s great for bedroom coder style games, but the lack of full fat Bluetooth support or USB HID host in MicroPython - and the relative pain of making and supporting a build that includes these features - makes adding controllers… tricky.
Now I wish I’d kept the Dreamcast controller I modded with a custom cable to poke Maple bus. Hindsight!
Show HN: Deploying subdomain-based routing like github.io
Neat. One big nitpick: I think this is solved better by other solutions. One small nitpick: why serve under /websites instead of under /srv?
/srv is a standard [0] directory.
[0]: https://en.wikipedia.org/wiki/Filesystem_hierarchy_standard
I spent 15 years in webhosting and I've literally never seen /srv used. MAYBE for sftp and I'm misremembering but I rarely had to deal with that. It's always /var/www/$url.
> It's always /var/www/$url.
Or in a shared hosting environment /home/<user>/public_html/ or /home/<user>/www/
One reason to place static files and cgi-bin executables in /var/www instead of a /home/user homedir is that /var/www is already labeled httpd_sys_content_t or httpd_sys_script_exec_t instead of user_home_t (so you don't need to `setsebool httpd_enable_homedirs on` for all users and `chcon httpd_user_content_t` if it's not necessary).
Electric cooling could shrink quantum computers
How does electric cooling compare to say optoelectronic laser cooling in terms of cost and efficiency?
Laser cooling: https://en.wikipedia.org/wiki/Laser_cooling
They have very different applications. Laser cooling has extremely low cooling power but can reach low temperatures. It’s good for cooling a gas of atoms or even individually trapped atoms.
A dilution refrigerator, as used in superconducting quantum computing and other low-temperature condensed matter experiments, couldn't cool a sandwich. We’re talking maybe a milliwatt of cooling power. Electronic cooling, I don’t know.
Dilution refrigerator: https://en.wikipedia.org/wiki/Dilution_refrigerator
Timeline of low-temperature technology: https://en.wikipedia.org/wiki/Timeline_of_low-temperature_te...
Thanks, I've been meaning to delve deeper into this topic for a while now and that timeline is gold
Is any of this used in practice? I don't see a "(Practical) Applications" section.
QC and Quantum Simulation:
/? laser cooling quantum simulation https://www.google.com/search?q=Laser+cooling+quantum+simula...
From https://phys.org/news/2016-04-laser-cool-quantum-liquid.amp :
> In the experiments, the team created a superfluid helium film on a silicon chip.
> They then used a bright laser beam to draw energy out of waves on the surface of the superfluid, cooling them.
> In addition to laser cooling, the research team showed that combining superfluid with microphotonics allows extremely precise measurements of superfluid waves
Additionally, FWIU there are now inexpensive integrated lasers from which a laser cooling array could be built to enclose a QC sim
How do Quantum Algorithms affect global security?
New solutions to computationally hard problems.
Quantum supremacy: https://en.wikipedia.org/wiki/Quantum_supremacy
Quantum algorithm: https://en.wikipedia.org/wiki/Quantum_algorithm
Quantum information > Applications https://en.wikipedia.org/wiki/Quantum_information#Applicatio...
Quantum machine learning > Implementations and experiments: https://en.wikipedia.org/wiki/Quantum_machine_learning
Applications of quantum mechanics: https://en.wikipedia.org/wiki/Applications_of_quantum_mechan...
Hopefully Quantum Algorithms and QC will be helpful for developing more efficient solar and energy harvesting, for example.
How can Quantum Algorithms help solve for existence needs like food, water, and shelter; how do we quantum the #GlobalGoals?
Physicists create elusive particles that remember their pasts
Do photons have "memory of their pasts" as well?
From _ and EM Wave Polarization Transductions (1999):
> Time As Energy and Why It Is Very Dense Energy: In addition to the three spatial polarizations of photons and EM waves, there is a very, very useful t-polarization along the time axis. In this polarization, the 3-spatial energy is not oscillating at all. Instead, the time or time- energy is oscillating. Time can be taken to be energy compressed by at least c2, so it has at least the same energy density as mass. In other words, one second is 9x10^16 joules of time- energy (energy compressed into time). The t- polarized photon or EM wave is called the scalar photon or scalar EM wave, respectively. [... MKS units ... 1999 ... probably real]
Circularly-polarized light: https://youtu.be/QCX62YJCmGk?si=9Vqx7RB6v74WG5bv
"Physicists use a 350-year-old theorem to reveal new properties of light waves" (2023) https://news.ycombinator.com/item?id=37226121
Effect of breathwork on stress and mental health: A meta-analysis of RCTs
4-7-8 breathing has origins in Pranayama FWIU?
"Effects of sleep deprivation and 4-7-8 breathing control on heart rate variability, blood pressure, blood glucose, and endothelial function in healthy young adults" (2022) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9277512/ https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=Eff...
Pranayama: https://en.wikipedia.org/wiki/Pranayama
Yeah it's somewhat of a more broad term though? Like, the freakiest pranayama gets is rechaka pranayama, where you breathe out the air from your lungs and try to hold it out, depleting your blood-oxygen levels (hypoxia mimesis). Has also been practiced in qigong[1], thankfully you can't actually induce self-unconsciousness this way (at least not without, say, breathing into a bag[2]). But all to say that my understanding is that pranayama is just kind of yoga slang for "breathwork", there's not just one kind of it as far as I know?
1. https://www.pacificcollege.edu/news/blog/2020/03/26/the-link... 2. https://pubmed.ncbi.nlm.nih.gov/17324641/
Visualizing the CPython release process
The Python Devguide should explain all of this, too.
CPython Devguide > Developer Workflow > Development Cycle: https://devguide.python.org/developer-workflow/development-c...
sphinxcontrib-seqdiag works with Sphinx: http://blockdiag.com/en/seqdiag/sphinxcontrib.html#:~:text=s....
Mermaid supports Sequence Diagrams and works with GitHub: https://mermaid.js.org/syntax/sequenceDiagram.html
Digital supply chain security: https://en.wikipedia.org/wiki/Digital_supply_chain_security
SLSA Framework: Supply-chain Levels for Software Artifacts: https://github.com/slsa-framework
New world record with an electric racing car: From 0 to 100 in 0.956 seconds
The impressive thing to me is the level of traction needed to do something like this. It seems they're taking old tricks and building a "sucker car".
https://www.roadandtrack.com/motorsports/a32350/jim-hall-cha...
Still neat stuff.
Race sanctioning bodies will need to update their rules or these will just be one off bragging rights.
In Micromouse (https://en.wikipedia.org/wiki/Micromouse) - a competition where autonomous devices solve a mouse maze - the fastest competitors are all using fans to increase traction (and thus speed).
radicalbyte, you're absolutely right that the use of fans in Micromouse increases the traction, and therefore the speed at which the maze is solved. Suction allows for impressive performances.
However, a small caveat: while the suction indeed increases speed, it might be more accurate to say that it primarily improves acceleration rather than top speed. The challenge often lies in achieving rapid acceleration rather than in maintaining high speed: even systems with relatively low traction can reach high speeds given enough time and distance, but the ability to accelerate quickly is what's crucial in competitions like Micromouse.
Speed = ∫(acceleration) dt
Derivatives of relative displacement as defined by a distance metric in a [e.g. metric tensor] space:
Length = Point2 - Point1
Length * Time^-1 = Velocity or Speed
Length * Time^-2 = Acceleration
Length * Time^-3 = Jerk
Length * Time^-4 = Snap or Jounce
Length * Time^-5 = Crackle
Length * Time^-6 = Pop
Displacement (geometry) > Derivatives: https://en.wikipedia.org/wiki/Displacement_(geometry)#Deriva...
Fourth, fifth, and sixth derivatives of position: https://en.wikipedia.org/wiki/Fourth,_fifth,_and_sixth_deriv...
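The derivative chain above can be sanity-checked numerically; a minimal sketch with central finite differences (the sample trajectory x(t) = t^3 is made up for illustration):

```python
# Numerically recovering velocity, acceleration, and jerk from sampled
# position data using central finite differences (pure Python).
def derivative(samples, dt):
    """Central finite difference: (f[i+1] - f[i-1]) / (2*dt)."""
    return [(samples[i + 1] - samples[i - 1]) / (2 * dt)
            for i in range(1, len(samples) - 1)]

dt = 0.001
ts = [i * dt for i in range(1000)]
position = [t ** 3 for t in ts]        # Length
velocity = derivative(position, dt)    # ~3t^2  (Length * Time^-1)
accel = derivative(velocity, dt)       # ~6t    (Length * Time^-2)
jerk = derivative(accel, dt)           # ~6     (Length * Time^-3, constant)
```

Each pass loses one sample at each end, which is why the lists shrink by two per derivative.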
Something weird about this is that humans very often ascribe the sensation of d^n s/dt^n to feeling d^(n-1) s/dt^(n-1).
So people will say that something which accelerates quickly is 'fast'. Or they will say, when they feel themselves initially being pressed back into their seat as a plane starts its takeoff roll, that they are experiencing 'acceleration' when what they are experiencing is actually jerk.
The thing is, you can't actually feel motion at a constant speed - so the only thing that tells you you are acquiring speed is your body's experience of acceleration - so when you feel yourself accelerating, you associate that with speed. Likewise, your body also can't really tell the difference between constant acceleration and just... being at a different angle, and maybe a bit heavier than normal. So it's when you experience changes in the apparent direction of 'down' and the overall 'weight' you're feeling that you think 'oh, we're accelerating'.
My favorite way to get a sense of what acceleration, jerk and snap feel like is to focus on what happens when you're in a car that's braking hard. You're decelerating at a relatively constant rate while the brakes are applied - it feels as if 'down' is pointing slightly forward, meaning you'd be sliding off the seat if it weren't for your seatbelt holding you back. When the car finally stops though, there's a very abrupt change in acceleration - a 'jerk'. 'Down' switches to pointing straight down again, very quickly. You're pulled back into your seat. That's jerk. And specifically the sudden onset of that swing in what direction 'down' is pointing, and then its rapid disappearance is snap. Your body feels like it's being 'jerked' around when the car stops precisely because that motion has high snap - you experience a sudden high amount of jerk, then the jerk ends.
>The thing is, you can't actually feel motion at a constant speed
In a vacuum. If you roll the window down, or if the atmosphere starts ablating you into a hot plasma then you may realize you're going very fast.
Oh cmon, who rolls down the windows in an airplane? ;)
Or the air's going very fast past you. Who can tell?
Well, if you're on a 2D plain, it's pretty easy to tell. If the solid plain is stationary to your motion, then the air is fast. If the solid plain is moving, then it's a little trickier to figure out.
Unless you’re in a wind tunnel or a hurricane, you can be pretty confident in the reason for the fast air.
Then you're experiencing acceleration, just in the opposite direction.
Not if you’re moving at constant speed. The air molecules are experiencing acceleration.
Please define "drag" in a way that somehow contradicts what I said...
The setup for our thought experiment here is someone traveling at constant speed.
By definition they are not experiencing any acceleration.
If they are experiencing a drag force they are also experiencing a thrust force that is equal to it because they are traveling at a constant speed.
‘Feeling the effect of a force’ is not the same thing as ‘experiencing an acceleration’.
Hmmm, from a Newtonian perspective I would have argued (or at least my personal impression is) the only thing we actually perceive is force (i.e. acceleration). All the situations you described are just (higher-order) changes of acceleration and even if you give them a name: At the end of the day, the only thing that matters to our bodies is the force.
Sure, but my argument is that you only really take note of changes in the acceleration you experience. Yes, your internal sense of ‘which way am I accelerating’ is the sensor you’re using (also ‘which way and how hard are my limbs being pulled’ and ‘how much force am I feeling in my joints and on the parts of my body that are touching the objects around me’) - but when those all indicate a generally constant vector with a magnitude close to 10m/s^2 your body just intuits that that direction is ‘up’. If that vector is changing your body figures you must be moving because ‘up’ normally doesn’t move.
So it’s changes in acceleration that create movement sensations, not acceleration itself.
Interesting thought but I'm not entirely convinced yet: 10m/s² down (along your body axis) is not the same as 10m/s² in any other direction. I would assume the body is perfectly able to distinguish between those.
Yes, you know if you are standing up or lying down. But your body doesn’t interpret either of those sensations as movement.
If you flip from one to the other suddenly, your brain figures out ‘we’re rotating’ pretty quickly though.
Painting wind turbine blades black help birds avoid deadly collisions (2020)
Serious question: what is the threshold at which the number of these turbines sufficiently disturbs normal wind to induce climate change?
To induce? Wind power is effectively solar power in another form. They remove energy from the weather system, not add.
The wind patterns are already substantially affected by the collapsing of the polar vortex as the reflective ice cover in the Arctic decreases, reducing the (previously consistent) cycling of air around the globe. This is one of the major contributors to more severe weather.
All this to say: it's an interesting question, but I think it's already too late to wonder about any effect on "normal" wind. "Normal" wind is a thing of the past.
Infrared thermal energy is conserved with wind turbines, too. Though, there is apparently local ground-level higher heat around both wind turbines and solar panels.
Some of the air movement due to high and low pressure and humidity and turbulence becomes infrared radiation (heat) due to mechanical friction and presumably also due to turbulence and convection causing friction in the air.
What about zebra stripes? I remember that they keep the flies away.
What about painting eyes on wind turbines to avert birds?
/? eyes on things at the airport birds https://www.google.com/search?q=eyes+on+things+at+the+airpor...
"Wide-eyed glare scares raptors: From laboratory evidence to applied management" (2018) https://journals.plos.org/plosone/article?id=10.1371/journal...
Raptors (birds) > Common names: https://en.wikipedia.org/wiki/Bird_of_prey
Perhaps drones can save birds?
IIRC Festo has some of the longer flight times with their animal-inspired drones?
Would a covered charging dock/cradle for an autonomous or semi-autonomous bird-saving system save [wind turbine] owners money?
"Never landing drone: Autonomous soaring of a unmanned aerial vehicle in front of a moving obstacle" (2023) https://journals.sagepub.com/doi/full/10.1177/17568293211060... https://scholar.google.com/scholar?cites=1139887687601566150...
"Orographic"; Orographic: https://en.wikipedia.org/wiki/Orography
Show HN: Ghidra Plays Mario
I've been exploring new ways of testing Ghidra processor modules. In this repo, I was able to emulate NES ROMs in Ghidra to test its 6502 specification, which resulted in finding and fixing some bugs.
Context: Ghidra is used for reverse engineering binary executables, complementing the usual disassembly view with function decompilation. Each supported architecture has a SLEIGH specification, which provides semantics for parsing and emulating instructions, not unlike the dispatch handlers you would find in interpreters written for console emulators.
Emulator devs have long had extensive test ROMs for popular consoles, but Ghidra only provides CPU emulation, so it can't run them without additional setup. What I did here is bridge the gap: by modifying a console emulator to instead delegate CPU execution to Ghidra, we can now use these same ROMs to validate Ghidra processor modules.
Previously [1], I went with a trace log diffing approach, where any hardware specific behaviour that affected CPU execution was also encoded in trace logs. However, it required writing hardware specific logic, and is still not complete. With the delegation approach, most of this effort is avoided, since it's easier to hook and delegate memory accesses.
I plan on continuing research in this space and generalizing my approaches, since it shows potential for complementing existing test coverage provided by pcodetest. If a simple architecture like 6502 had a few bugs, who knows how many are in more complex architectures! I wasn't able to find similar attempts (outside of diffing and coverage analysis from trace logs), please let me know if I missed something, and any suggestions for improvements.
[1]: https://github.com/nevesnunes/ghidra-tlcs900h#emulation
Very old ML project from a master's student's thesis which is what originally got me into CS. He taught it to play NES games other than mario and had a good breakdown of his results.
The latest on winning Atari with RL+ FWIU:
https://github.com/openai/retro:
> Gym Retro lets you turn classic video games into Gym environments for reinforcement learning and comes with integrations for ~1000 games. It uses various emulators that support the Libretro API, making it fairly easy to add new emulators.
.nes is listed in the supported ROM types: https://retro.readthedocs.io/en/latest/integration.html#supp...
> Integrating a Game: To integrate a game you need to define a done condition and a reward function. The done condition lets Gym Retro know when to end a game session, while the reward function provides a simple numeric goal for machine learning agents to maximize.
> To define these, you find variables from the game’s memory, such as the player’s current score and lives remaining, and use those to create the done condition and reward function. An example done condition is when the `lives` variable is equal to 0, an example reward function is the change in the `score` variable.
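The done/reward example quoted above can be sketched in plain Python (the `info` dicts here are hypothetical stand-ins for the game-memory variables Gym Retro exposes):

```python
# Sketch of a Gym Retro-style done condition and reward function,
# built from variables read out of the game's memory each frame.
def done_condition(info):
    """End the episode when the `lives` variable hits 0."""
    return info["lives"] == 0

def reward_function(prev_info, info):
    """Reward = the change in the `score` variable between frames."""
    return info["score"] - prev_info["score"]

# Hypothetical frame-by-frame readouts:
prev = {"score": 100, "lives": 2}
curr = {"score": 250, "lives": 2}
step_reward = reward_function(prev, curr)   # 150
episode_over = done_condition(curr)         # False
```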
PPO Proximal Policy Optimization and OpenAI/baselines: https://retro.readthedocs.io/en/latest/getting_started.html#...
MuZero: https://en.wikipedia.org/wiki/MuZero
MuZero-unplugged with PyTorch: https://github.com/DHDev0/Muzero-unplugged
Farama-Foundation/Gymnasium is a fork of OpenAI/gym and it has support for additional Environments like MuJoCo: https://github.com/Farama-Foundation/Gymnasium#environments
Farama-Foundation/MO-Gymnasium: "Multi-objective Gymnasium environments for reinforcement learning": https://github.com/Farama-Foundation/MO-Gymnasium
RestGPT
"Gorilla: Large Language Model Connected with Massive APIs" (2023) https://gorilla.cs.berkeley.edu/ :
> Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla comes up with the semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them!
eval/: https://github.com/ShishirPatil/gorilla/tree/main/eval
- "Gorilla: Large Language Model connected with massive APIs" (2023-05) https://news.ycombinator.com/item?id=36073241
- "Gorilla: Large Language Model Connected with APIs" (2023-06) https://news.ycombinator.com/item?id=36333290
- "Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500 APIs (github.com/gorilla-llm)" (2023-06) https://news.ycombinator.com/item?id=36524078
Training and aligning LLMs with RLHF and RLHF alternatives
I read here that Yann LeCun claimed that even with RLHF, LLMs will still hallucinate - that it's an unavoidable consequence of their autoregressive nature
https://www.hopsworks.ai/dictionary/rlhf-reinforcement-learn...
Likely yes. But "solving" hallucinations is not really important as long as mitigating it to some sufficiently low level is possible.
When the next token is a URL, and the URL does not match the preceding anchor text.
Additional layers of these 'LLMs' could read the responses and determine whether their premises are valid and their logic is sound as necessary to support the presented conclusion(s), and then just suggest a different citation URL for the preceding text.
"#StructuredPremises"
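As a toy illustration of the URL/anchor-text mismatch check mentioned above (a made-up heuristic, not a real hallucination detector):

```python
import re

def check_anchor_url_consistency(markdown_text):
    """Flag markdown links whose anchor text looks unrelated to the URL.

    Crude heuristic (an assumption for illustration): if no substantive
    word from the anchor text appears anywhere in the URL, flag the link
    for review by a downstream checker.
    """
    flagged = []
    for anchor, url in re.findall(r'\[([^\]]+)\]\((https?://[^)]+)\)',
                                  markdown_text):
        words = [w.lower() for w in re.findall(r'\w{4,}', anchor)]
        if words and not any(w in url.lower() for w in words):
            flagged.append((anchor, url))
    return flagged
```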
Code-gov: A collection point for all Code.gov repositories
Shared this why?
Seems a more accurate source, tho harder to search through: https://code.gov/agencies
There are code.json generators and a scraper written in Python:
GSA/code-gov//docs/code_json_generators.md > "Code.gov Metadata Schema 2.0.0 Requirements" https://github.com/GSA/code-gov/blob/master/docs/code_json_g...
code.gov/agency-compliance/compliance/procurement: https://code.gov/agency-compliance/compliance/procurement
Looks like there's a JSON-LD @context for /data.json now: https://project-open-data.cio.gov/v1.1/metadata-resources/
Prometheus-like data pull system might've been better for COVID reporting w/ e.g. the hastily-added CDCPMDRecord and SpecialAnnouncement.
Prometheus: https://en.wikipedia.org/wiki/Prometheus_(software)
CDCPMDRecord: https://schema.org/CDCPMDRecord
"git-scraping" https://www.google.com/search?q=git+scraping
gh topic: git-scraping: https://github.com/topics/git-scraping
That page also hasn’t been updated in years.
GSA used to maintain a combined catalog that was refreshed a few times per year and searchable.
I’m also not sure if there’s still a requirement for agencies to keep their own code.json up to date. It’s hard to tell when each department refreshes, but it seems like HHS hasn’t updated theirs since March of 2022.
Do people like incentives or penalties?
What incentive could there be to keep reusable [federal] open source software inventoried?
I think incentives work better.
In this case, GSA used to scrape and combine and then stopped. Also GSA used to ask agencies to update their inventory and then stopped.
I think if GSA just asked, it would increase the recency and completeness of the code.jsons.
Also, what’s kind of funny is that since open source projects, by their nature, are publicly visible, GSA could probably just scrape and combine together and not rely on lots of different agencies to have their own processes.
"Git scraping: track changes over time by scraping to a Git repository" (2020) https://simonwillison.net/2020/Oct/9/git-scraping/ :
> Every 20 minutes it grabs the latest copy of that JSON endpoint, pretty-prints it (for diff readability) using jq and commits it back to the repo if it has changed.
> This means I now have a commit log of changes to that information
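A sketch of the change-detection core of that loop; the fetch (curl) and `git commit` steps would live in the surrounding cron job or GitHub Action:

```python
import json

def pretty_print(data):
    """Pretty-print JSON with stable key order so git diffs stay readable."""
    return json.dumps(data, indent=2, sort_keys=True) + "\n"

def has_changed(previous_text, data):
    """True when the freshly fetched payload differs from the committed copy,
    i.e. a new commit is warranted."""
    return pretty_print(data) != previous_text
```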
A static site builder can rebuild just the pages of the site that need to be changed once in a Github Action that updates the site when a Pull Request is merged to main.
Though, if the data quality is insufficient because the data sources are not updated, then downstream apps and static sites that depend upon the data are also insufficient.
There are ways to do this, but GSA just doesn’t do it.
Years ago they used to have a system that would combine all the code.jsons into a single db and provide a query interface. They stopped funding that system and redesigned this static site. But they could have used GitHub Actions or something to fetch and combine the code.jsons and do everything client-side. That still wouldn't have incurred much maintenance cost.
Here's a way to scrape URLs to JSON/YAML and then build static HTML with Hugo in a GitHub Action: https://github.com/jackyzha0/hugo-obsidian
datasette is a webapp and CLI built on SQLite and Python. datasette-lite is the pyodide + WebAssembly build of datasette which can be served as static HTML, JS, and WASM SQlite.
datasette: https://github.com/simonw/datasette :
> Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API.
> Datasette is aimed at data journalists, museum curators, archivists, local governments, scientists, researchers and anyone else who has data that they wish to share with the world.
From "Deploying a live Datasette demo when the tests pass" (2022) https://til.simonwillison.net/github-actions/deploy-live-dem... :
datasette publish vercel fixtures.db [...]
The `datasette publish` command supports Google Cloud Run, Heroku, Vercel, and Fly [Full Container or Serverless]: https://docs.datasette.io/en/stable/publish.html
datasette-lite: https://github.com/simonw/datasette-lite :
> You can use this tool to open any SQLite database file that is hosted online and served with a `access-control-allow-origin: *` CORS header. Files served by GitHub Pages automatically include this header, as do database files that have been published online using `datasette publish`.
> [...] You can paste in the "raw" URL to a file, but Datasette Lite also has a shortcut: if you paste in the URL to a page on GitHub or a Gist it will automatically convert it to the "raw" URL for you
> To load a Parquet file, pass a URL to `?parquet=`
> [...] https://lite.datasette.io/?parquet=https://github.com/Terada...
There are various *-to-sqlite utilities that load data into a SQLite database for use with e.g. datasette. E.g. Pandas with `dtype_backend='arrow'` saves to Parquet.
datasette plugins are written in Python and/or JS w/ pluggy: https://docs.datasette.io/en/stable/plugins.html https://datasette.io/plugins
datasette-scraper scrapes sitemaps.xml and crawls, though it could surely be repurposed to instead scrape a list of code.json URLs within the datasette process, which is powered by asyncio and the asynchronous uvicorn ASGI HTTP web server.
datasette-scraper/#architecture: https://github.com/cldellow/datasette-scraper/#architecture
(TIL datasette-scraper parses HTML with selectolax; and Selectolax with Modest or Lexbor is ~25x faster at HTML parsing than BeautifulSoup in the selectolax benchmark: https://github.com/rushter/selectolax#simple-benchmark )
(Apache Nutch is a Java-based web crawler which supports e.g. CommonCrawl (which backs various foundational LLMs)) https://en.wikipedia.org/wiki/Apache_Nutch#Search_engines_bu... . But extruct extracts more types of metadata and data than Nutch AFAIU: https://github.com/scrapinghub/extruct )
datasette-graphql adds a GraphQL HTTP API to a SQLite database: https://datasette.io/plugins/datasette-graphql
plugins?q=sqlite: https://datasette.io/plugins?q=sqlite
datasette-sqlite-fts4: https://datasette.io/plugins/datasette-sqlite-fts4 ; Full-Text Search with SQLite
datasette-ripgrep: "deploy a regular expression search engine for your source code": https://github.com/simonw/datasette-ripgrep
Seeing as there's already a JSONLD @context (schema) for code.json, CSVW as JSONLD and/or YAMLLD would be an easy way merge Linked Data graphs of tabular data: https://github.com/semantalytics/awesome-semantic-web#csvw
A GitHub Action would run regularly, fetch each code.json, save each to a git repo, and then upsert each into a SQLite database to be published with e.g. datasette or datasette-lite.
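A minimal sketch of that upsert step, assuming a made-up subset of the code.json schema (stdlib `sqlite3` only; the field names `name`/`repositoryURL` mirror the real schema, but the table layout here is invented):

```python
import sqlite3

def upsert_projects(conn, agency, projects):
    """Upsert each project from one agency's code.json into SQLite,
    keyed on (agency, name) so re-runs update rather than duplicate."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS projects (
            agency TEXT, name TEXT, repository_url TEXT,
            PRIMARY KEY (agency, name)
        )""")
    conn.executemany("""
        INSERT INTO projects (agency, name, repository_url)
        VALUES (?, ?, ?)
        ON CONFLICT(agency, name) DO UPDATE SET
            repository_url = excluded.repository_url
        """, [(agency, p["name"], p.get("repositoryURL", ""))
              for p in projects])
    conn.commit()

conn = sqlite3.connect(":memory:")
upsert_projects(conn, "GSA",
                [{"name": "code-gov",
                  "repositoryURL": "https://github.com/GSA/code-gov"}])
```

The database file produced this way could then be committed and served via datasette-lite as described above.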
Asking 60 LLMs a set of 20 questions
In case anyone's interested in running their own benchmark across many LLMs, I've built a generic harness for this at https://github.com/promptfoo/promptfoo.
I encourage people considering LLM applications to test the models on their _own data and examples_ rather than extrapolating general benchmarks.
This library supports OpenAI, Anthropic, Google, Llama and Codellama, any model on Replicate, and any model on Ollama, etc. out of the box. As an example, I wrote up an example benchmark comparing GPT model censorship with Llama models here: https://promptfoo.dev/docs/guides/llama2-uncensored-benchmar.... Hope this helps someone.
ChainForge has similar functionality for comparing responses across models: https://github.com/ianarawjo/ChainForge
LocalAI creates a GPT-compatible HTTP API for local LLMs: https://github.com/go-skynet/LocalAI
Is it necessary to have an HTTP API for each model in a comparative study?
Thanks for sharing this, this is awesome!
I noticed on the evaluations, you're looking at the structure of the responses (and I agree this is important.) But how do I check the factual content of the responses automatically? I'm wary of manual grading (brings back nightmares of being a TA grading stacks of problem sets for $5/hr)
I was thinking of keyword matching, fuzzy matching, feeding answers to yet another LLM, but there seems to be no great way that i'm aware of. Any suggestions on tooling here?
The library supports the model-graded factuality prompt used by OpenAI in their own evals. So, you can do automatic grading if you wish (using GPT 4 by default, or your preferred LLM).
Example here: https://promptfoo.dev/docs/guides/factuality-eval
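For the keyword/fuzzy-matching idea, a crude non-model baseline using stdlib `difflib` (the threshold is arbitrary; the model-graded factuality approach above is more robust):

```python
from difflib import SequenceMatcher

def fuzzy_grade(expected, answer, threshold=0.6):
    """Return (passed, score): normalized string similarity between the
    expected answer and the model's response. Purely lexical, so it will
    miss paraphrases that a model-graded eval would catch."""
    score = SequenceMatcher(None, expected.lower(), answer.lower()).ratio()
    return score >= threshold, score
```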
OpenAI/evals > Building an eval: https://github.com/openai/evals/blob/main/docs/build-eval.md
"Robustness of Model-Graded Evaluations and Automated Interpretability" (2023) https://www.lesswrong.com/posts/ZbjyCuqpwCMMND4fv/robustness... :
> The results inspire future work and should caution against unqualified trust in evaluations and automated interpretability.
From https://news.ycombinator.com/item?id=37451534 : add'l benchmarks: TheoremQA, Legalbench
Additional benchmarks:
- "TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/wenhuchen/TheoremQA#leaderboard
- from https://news.ycombinator.com/item?id=36038440: > Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law https://github.com/maastrichtlawtech/awesome-legal-nlp#bench...
Ask HN: Looking for a resource on Linux kernel module development
I'd like to break into this area in my next role and I'm looking for resources to learn about this over the next few weeks so I can try to get a role in the area. My context is embedded linux.
EDIT: I don’t know what I don’t know so I’m looking for a “point in the right direction” and anything the fine people of HN think is a good resource
- "The Linux Kernel Module Programming Guide" (2023) https://news.ycombinator.com/item?id=35782630
- Still trying to find the How To Build a Formally Verified Kernel Module from Zero tut
Edit:
- /? Kernel module https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
- "Linux Kernel Module written in Scratch (a visual programming language for kids)" (2022) https://news.ycombinator.com/item?id=31921996 (EduBlocks does Scratch with Python blocks and/or text code)
- "How to Discover and Prevent Linux Kernel Zero-day Exploit using Formal Verification" (2021) [w/ Coq] http://digamma.ai/blog/discover-prevent-linux-kernel-zero-da... https://news.ycombinator.com/item?id=31617335
Using LD_PRELOAD to cheat, inject features and investigate programs
Fun fact: proxychains uses LD_PRELOAD [0] to hook the necessary syscalls [1] for setting up a "proxy environment" for the wrapped program, e.g. `connect`, `gethostbyname`, `gethostbyaddr`, etc. (Note this also implies that it could be leaky in some cases when applied to a program that uses alternative syscalls to make an external connection, or a program that is not dynamically linked. I would not recommend depending on proxychains for any sort of opsec, even though it's often recommended as such a tool.)
[0] https://github.com/haad/proxychains/blob/master/src/proxycha...
[1] https://github.com/haad/proxychains/blob/master/src/libproxy...
> proxychains uses LD_PRELOAD [0] to hook the necessary syscalls [1]
Technically, it uses LD_PRELOAD to hook the necessary libc functions. As, at least on x86-64, a syscall is just a CPU instruction like any other, you can't hook into it through LD_PRELOAD or any other tricks that don't involve the kernel (apart from rewriting the program before you execute it). That's also why it doesn't work on e.g. Go programs, as they don't use libc.
> As, at least on x86-64, a syscall is just a CPU instruction like any other, you can't hook into it through LD_PRELOAD or any other tricks that don't involve the kernel (apart from rewriting the program before you execute it).
On most Unix systems, you can use ptrace to intercept system calls from another process. This is how tools like rr or strace work.
Ptrace is one of a number of ways to hook syscalls without LD_PRELOAD.
The gVisor docs list 3 ways: KVM, systrap, ptrace https://gvisor.dev/docs/architecture_guide/platforms/ :
> systrap: The systrap platform relies on seccomp's SECCOMP_RET_TRAP feature in order to intercept system calls. This makes the kernel send SIGSYS to the triggering thread, which hands over control to gVisor to handle the system call. For more details, please see the systrap README file.
> systrap replaced ptrace as the default gVisor platform in mid-2023. If you depend on ptrace, and systrap doesn’t fulfill your needs, please voice your feedback.
> ptrace: The ptrace platform uses PTRACE_SYSEMU to execute user code without allowing it to execute host system calls. This platform can run anywhere that ptrace works (even VMs without nested virtualization), which is ubiquitous.
> Unfortunately, the ptrace platform has high context switch overhead, so system call-heavy applications may pay a performance penalty. For this reason, systrap is almost always the better choice.
The Falco docs list 3 syscall event drivers: Kernel module, Classic eBPF probe, and Modern eBPF probe: https://falco.org/docs/event-sources/kernel/
Dynamic linker > Systems using ELF: https://en.wikipedia.org/wiki/Dynamic_linker#Systems_using_E...
Specialized astrocytes mediate glutamatergic gliotransmission in the CNS
From "MSG is the most misunderstood ingredient of the century. That’s finally changing" (2023) https://news.ycombinator.com/item?id=35885204 :
- The G in MSG is for Glutamate.
- > High glutamine-to-glutamate ratio predicts the ability to sustain motivation: The researchers found that individuals with a higher glutamine-to-glutamate ratio had a higher success rate and a lower perception of effort. https://neuroscienceschool.com/2020/10/11/how-to-sustain-mot...
Low glutamate = depression.
If you want to turn your glutamate into glutamine take some manganese.
I have the problem of high glutamate and my mother and I would freak out after eating at asian restaurants in the 70's. No one believed us.
MSG: https://en.wikipedia.org/wiki/Monosodium_glutamate
Manganese: https://en.wikipedia.org/wiki/Manganese :
> For manganese labeling purposes 100% of the Daily Value was 2.0 mg, but as of 27 May 2016 it was revised to 2.3 mg to bring it into agreement with the RDA (in the US, for males >= 19).
https://www.hsph.harvard.edu/nutritionsource/manganese/ :
> RDA: The Recommended Dietary Allowance (RDA) for adults 19+ years is 2.3 mg a day for men and 1.8 mg for women. For women who are pregnant or lactating, the RDA is 2.0 mg and 2.6 mg, respectively.
> UL: The Tolerable Upper Intake Level (UL) for manganese for all adults 19+ years of age and pregnant and lactating women is 11 mg daily; a UL is the maximum daily intake unlikely to cause harmful effects on health.
Looks like EU proposed 3mg/day for Mn in 2013.
https://pubmed.ncbi.nlm.nih.gov/27385039/ :
> The mean intake of Mn was 6·07 (sd 2·94) mg/d for men (n 998) and 5·13 (sd 2·65) mg/d for women (n 1113). Rice (>42 %) was the main food source of Mn. The prevalence of the MetS was 28·0 % (590/2111). Higher Mn intake was associated with a decreased risk of the MetS in men (Q4 v. Q1 OR 0·62; 95 % CI 0·42, 0·92; P trend=0·043) but an increased risk in women (Q4 v. Q1 OR 1·56; 95 % CI 1·02, 2·45; P trend=0·078)
That says dietary intake of Manganese in e.g. China is more than double average intake here in the US and also in the EU, where "No MSG" also results in much lower MSG dietary intake.
Manganese deficiency: https://en.wikipedia.org/wiki/Manganese_deficiency_(medicine... :
> Manganese is found in leafy green vegetables, fruits, nuts, cinnamon and whole grains. The nutritious kernel, called wheat germ, which contains the most minerals and vitamins of the grain, has been removed from most processed grains (such as white bread). The wheat germ is often sold as livestock feed. Many common vitamin and mineral supplement products fail to include manganese in their compositions. Relatively high dietary intake of other minerals such as iron, magnesium, and calcium may inhibit the proper intake of manganese.
MSG > Stigma in Western countries: https://en.wikipedia.org/wiki/Monosodium_glutamate#Stigma_in...
FDA says that MSG is GRAS: Generally Recognized as Safe.
I grew up in the US with a fear of MSG.
At times I take 20 to 30mg of Manganese a day, but not for long. Do not try this at home, I know what I am doing, and I know my genetics need it.
The manganese, by reducing glutamate, lowers my blood pressure from 160/90 to 123/79. My doctors are amazed but don't care since I have had unresponsive hypertension for 10 years.
NYPD spent millions to contract with firm banned by Meta for fake profiles
Police should not be able to lie without a warrant.
If they do not disclose credentials, and they are defrauding and/or identity-thefting and thereby sabotaging one, is one fairly regarded as hostile and what is a fair use of force in self defense?
The force they use to kill you when you try this will be considered valid in court for sure.
Scenario: A harasses/assails/defrauds/identity_thefts/sabotages B. (A does not disclose any credential of legal authority to B, who has the right to check the validity of A's credentials if claimed as material prior to such altercation.) Given B's right of self-defense due to the initial hostile action of A, B is not legally obligated to request that the (or a) state pursue Due Process against A, either immediately or before the Statute of Limitations expires.
So, if someone is defrauding you, you have the right to self defense (and also Equal Protection of your Equal Rights).
Maybe Rust isn’t a good tool for massively concurrent, userspace software
I find myself in this weird corner when it comes to async rust.
The guy's got a point in that doing a bunch of Arc, RwLock, and general sharing of state is going to get messy. Especially once you are sprinkling 'static all over the place, it infects everything, much like colored functions. I did this whole thing once back when I was starting off where I would Arc<RwLock> stuff, and try to be smart about borrow lifetimes. Total mess.
But then rust also has channels. When you read about it, it talks about "messages", which to me means little objects. Like a few bytes little. This is the solution, pretty much everything I write now is just a few tasks that service some channels. They look at what's arrived and if there's something to output, they will put a message on the appropriate channel for another task to deal with. No sharing objects or anything. If there's a large object that more than one task needs, either you put it in a task that sends messages containing the relevant query result, or you let each task construct its own copy from the stream of messages.
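The tasks-and-channels shape described above is language-agnostic; a minimal Python illustration (threads and `queue.Queue` standing in for Rust tasks and channels, with a `None` sentinel for channel close):

```python
import queue
import threading

def producer(out_q):
    # Emit small messages instead of sharing objects.
    for i in range(5):
        out_q.put(i)
    out_q.put(None)  # sentinel: channel closed

def transformer(in_q, out_q):
    # Service the input channel; forward derived messages downstream.
    while (msg := in_q.get()) is not None:
        out_q.put(msg * msg)
    out_q.put(None)

def run_pipeline():
    a, b = queue.Queue(), queue.Queue()
    threading.Thread(target=producer, args=(a,)).start()
    threading.Thread(target=transformer, args=(a, b)).start()
    results = []
    while (msg := b.get()) is not None:
        results.append(msg)
    return results
```

No task ever holds a reference into another task's state; everything crosses a channel as a small owned message, which is what makes the pattern compose without locks.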
And yet I see a heck of a lot of articles about how to Arc or what to do about lifetimes. They seem to be things that the language needs, especially if you are implementing the async runtime, but I don't understand why the average library user needs to focus so much on this.
The author does mention that you should probably stop at using threads and passing data around via channels... but then mentions the C10K problem and says that sometimes you need more... but does not answer the question that I think is begging to be asked: does using Rust async, with all the complications (Arc, cloning, Mutex, whatever), actually outperform threads/channels? Even if it does, by how much? It would be really interesting to know the answer. I have a feeling that threads/channels may be more performant in practice, despite the imagined overhead.
There's not a good distributed concurrent benchmark in the Techempower Web Framework benchmarks, because the Multiple Queries and Fortunes test programs don't use any parallelism or concurrency primitives to win at fast SQL queries. https://www.techempower.com/benchmarks/#section=data-r21&tes...
From https://news.ycombinator.com/item?id=37289579 :
> I haven't checked, but by the end of the day, I doubt eBPF is much slower than select() on a pipe()?
Channels have a per-platform implementation.
- "Patterns of Distributed Systems (2022)" (2023) https://news.ycombinator.com/item?id=36504073
Lean 4.0
There is no excuse anymore. I have to try it out.
I tried Lean twice and what caused me to stop both times was a severe lack of documentation. That's a shame, because it's an awesome language.
A https://learnxinyminutes.com/ for Lean and Lean Mathlib would be a helpful resource
On the contrary, I think the basics are covered pretty well by the official documentation, but it's lacking the more advanced stuff.
I agree; for simple and "beginning intermediate" tasks, Functional Programming in Lean has it all; but the more advanced features are only documented in the source code and doc-strings. There is also a lack of general listings: for data structures, for file access API, for algorithms etc.
Is there an eBNF+PEG or similar grammar for the language parser that a more complete language reference can be generated from?
Does Lean have docstrings with embedded markup like Python?
The Python Language Reference: https://docs.python.org/3/reference/
The Python Language Reference > 10. Full Grammar specification: https://docs.python.org/3/reference/grammar.html
LearnXinYminutes > Where X=Python: https://learnxinyminutes.com/docs/python/
In the language and then the docs, Python's collections.abc Abstract Base Classes did not initially exist. There's now a table of ABCs in the docs: https://docs.python.org/3/library/collections.abc.html#colle...
In Python, they're not interfaces, they're ABCs.
Transformers as Support Vector Machines
SVMs are randomly initialized (with arbitrary priors) and then are deterministic.
From "What Is the Random Seed on SVM Sklearn, and Why Does It Produce Different Results?" https://saturncloud.io/blog/what-is-the-random-seed-on-svm-s... :
> When you train an SVM model in sklearn, the algorithm uses a random initialization of the model parameters. This is necessary to avoid getting stuck in a local minimum during the optimization process.
> The random initialization is controlled by a parameter called the random seed. The random seed is a number that is used to initialize the random number generator. This ensures that the random initialization of the model parameters is consistent across different runs of the code
From "Random Initialization For Neural Networks : A Thing Of The Past" (2018) https://towardsdatascience.com/random-initialization-for-neu... :
> Lets look at three ways to initialize the weights between the layers before we start the forward, backward propagation to find the optimum weights.
> 1: zero initialization
> 2: random initialization
> 3: he-et-al initialization
Deep learning: https://en.wikipedia.org/wiki/Deep_learning
SVM: https://en.wikipedia.org/wiki/Support_vector_machine
Is it guaranteed that SVMs converge upon a solution regardless of random seed?
An SVM is a quadratic program, which is convex. This means that they should always converge and they should always converge to the same global optimum, regardless of initialization, as long as they are feasible, I.e. as long as the two classes can be separated by an SVM.
> as long as the two classes can be separated by an SVM.
Are the classes separable with e.g. the intertwined spiral dataset in the TensorFlow demo? Maybe only with a radial basis function kernel?
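As a rough empirical check (using scikit-learn's two-moons dataset rather than the TensorFlow spiral, which isn't bundled): a linear kernel cannot separate interleaved classes, but an RBF kernel can, because the implicit feature map makes them separable.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaved half-circles: not linearly separable in the input
# space, but separable after the RBF kernel's implicit feature map.
X, y = make_moons(n_samples=400, noise=0.05, random_state=0)

linear_score = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_score = SVC(kernel="rbf").fit(X, y).score(X, y)
```

The same qualitative gap should hold for an intertwined-spiral dataset, only with more pronounced failure of the linear kernel.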
Separable state https://en.wikipedia.org/wiki/Separable_state :
> In quantum mechanics, separable states are quantum states belonging to a composite space that can be factored into individual states belonging to separate subspaces. A state is said to be entangled if it is not separable. In general, determining if a state is separable is not straightforward and the problem is classed as NP-hard.
An algorithm may converge upon the same wrong - or 'high error' - answer; regardless of a random seed parameter.
It looks like there is randomization for SVMs for e.g. Platt scaling [1], though I had confused Simulated Annealing with SVMs. And then re-read Quantum Annealing; what is the ground state of the Hamiltonian, and why would I use a hyperplane instead?
The article you’ve linked is incorrect. As Dr_Birdbrain said, fitting an SVM is a convex problem with unique global optimum. sklearn.SVC relies on libsvm which initializes the weights to 0 [0]. The random state is only used to shuffle the data to make probability estimates with Platt scaling [1]. Of the random_state parameter, the sklearn documentation for SVC [2] says
> Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False. Pass an int for reproducible output across multiple function calls. See Glossary.
[0] https://github.com/scikit-learn/scikit-learn/blob/2a2772a87b...
[1] https://en.wikipedia.org/wiki/Platt_scaling
[2] https://scikit-learn.org/stable/modules/generated/sklearn.sv...
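A quick way to see this empirically: with `probability=False` (the default), two `SVC` fits with different `random_state` values produce identical solutions, consistent with the unique global optimum of the convex QP.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# With probability=False (the default), random_state is ignored:
# libsvm starts from zero weights, and the convex problem has a
# single global optimum, so the two fits are identical.
a = SVC(kernel="linear", random_state=1).fit(X, y)
b = SVC(kernel="linear", random_state=2).fit(X, y)

same = bool(np.allclose(a.coef_, b.coef_) and np.allclose(a.intercept_, b.intercept_))
```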
Which article is incorrect? Indeed it looks like there is no random initialization in libsvm or thereby sklearn.svm.SVC or in sklearn.svm.*. I seem to have confused random initialization in Simulated Annealing with SVMs; though now TIL that there are annealing SVMs, and SVMs do work with wave functions (though it's optional to map the wave functions into feature space with quantum state tomography), and that there are SVMs for the D-Wave Quantum annealer QC.
From "Support vector machines on the D-Wave quantum annealer" (2020) https://www.sciencedirect.com/science/article/pii/S001046551... :
> Kernel-based support vector machines (SVMs) are supervised machine learning algorithms for classification and regression problems. We introduce a method to train SVMs on a D-Wave 2000Q quantum annealer and study its performance in comparison to SVMs trained on conventional computers. The method is applied to both synthetic data and real data obtained from biology experiments. We find that the quantum annealer produces an ensemble of different solutions that often generalizes better to unseen data than the single global minimum of an SVM trained on a conventional computer, especially in cases where only limited training data is available. For cases with more training data than currently fits on the quantum annealer, we show that a combination of classifiers for subsets of the data almost always produces stronger joint classifiers than the conventional SVM for the same parameters.
My apologies for the ambiguity; I assumed it would be clear from context. The article at the link, https://saturncloud.io/blog/what-is-the-random-seed-on-svm-s..., is incorrect. Whoever wrote it seems to have confused support vector machines with neural networks.
For the D-Wave paper, I'm not sure it's fair that they are comparing an ensemble with a single classifier. I think it would be more fair if they compared their ensemble with a bagging ensemble of linear SVMs which each use the Nystroem kernel approximation [0] and which are each trained using stochastic sub-gradient descent [1].
[0] https://scikit-learn.org/stable/modules/generated/sklearn.ke...
[1] https://scikit-learn.org/stable/modules/sgd.html#classificat...
Nystroem method: https://en.wikipedia.org/wiki/Nystr%C3%B6m_method
6.7 Kernel Approximation > 6.7.1. Nystroem Method for Kernel Approximation https://scikit-learn.org/stable/modules/kernel_approximation...
Nystroem defaults to an rbf radial basis function and - from quantum logic - Bloch spheres are also radial. Perhaps that's nothing.
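A minimal sketch of one member of the suggested comparison ensemble: a Nystroem RBF approximation feeding a linear SVM trained by stochastic sub-gradient descent (`SGDClassifier(loss="hinge")`). The dataset and parameters here are illustrative, not from the paper.

```python
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=500, noise=0.05, random_state=0)

# Approximate an RBF kernel with 100 Nystroem landmark features, then
# train a linear SVM (hinge loss) by stochastic sub-gradient descent.
clf = make_pipeline(
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    SGDClassifier(loss="hinge", random_state=0),
)
score = clf.fit(X, y).score(X, y)
```

Bagging several of these (each with a different `random_state`) would give the classical ensemble baseline proposed above.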
FWIU SVMs w/ kernel trick are graphical models, and NNs are too.
How much more resource-cost expensive is it to train an ensemble of SVMs than one graphical model with typed relations? What about compared to deep learning for feature synthesis and selection and gradient boosting with xgboost to find the coefficients/exponents of the identified terms of the expression which are not prematurely excluded by feature selection?
There are algorithmic complexity and algorithmic efficiency metrics that should be relevant to AutoML solution ranking. Opcode cost may loosely correspond to algorithmic complexity.
[Dask] + Scikeras + Auto-sklearn 2.0 may or may not modify NN topology metaparameters like number of layers and nodes therein at runtime? https://twitter.com/westurner/status/1697270946506170638
Universal function approximator == universal function approximator
Universal approximation theorem: https://en.wikipedia.org/wiki/Universal_approximation_theore...
NAND is a universal logic gate; from which all classical functions can be approximated.
CCNOT and Hadamard are universal logic gates with which all (?) quantum functions/transforms can be approximated.
Cubic splines are universal function approximators. As is piecewise linear interpolation. The universal function approximator thing is way overblown.
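The piecewise-linear claim is easy to demonstrate numerically: `np.interp` approximates a smooth function with maximum error shrinking as knots are added (roughly O(h²) for a twice-differentiable function).

```python
import numpy as np

# Piecewise-linear interpolation approximates any continuous function
# on an interval; the max error shrinks as knots are added.
f = np.sin
xs = np.linspace(0, np.pi, 1000)

def max_error(n_knots):
    knots = np.linspace(0, np.pi, n_knots)
    return np.max(np.abs(np.interp(xs, knots, f(knots)) - f(xs)))

errors = [max_error(n) for n in (5, 50, 500)]
```

Universality says nothing about how many pieces (or neurons) are needed, which is why the property alone is weak evidence for any particular architecture.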
FFT describes everything with sinusoids by default. For QFT, it's wave functions. Wavelets are more like NN neurons that match (scale-invariant?) quantized sections of waveforms.
Fluids are decomposed into things with curl.
A classical universal function approximator is probably not sufficient to approximate quantum systems [...] https://news.ycombinator.com/item?id=37379123
> CCNOT and Hadamard are universal logic gates with which all (?) quantum functions/transforms can be approximated.
CNOT, H, S, and T are universal for approximating any quantum operation.
A classical universal function approximator is probably not sufficient to approximate quantum systems (unless there is IDK a geometric breakthrough in classical-quantum correspondence similar to the Amplituhedron).
IIUC Church-Turing and Church-Turing-Deutsch say that Turing complete is enough for classical computing, and that a qubit computer can simulate the same quantum logic circuits as any qudit or qutrit computer; but is it ever shown that Quantum Logic is indeed the correct and sufficient logic for propositional calculus and also for all physical systems?
From "Quantum logic gate > Universal quantum gates": https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_q... :
> Some universal quantum gate sets include:
> - The rotation operators Rx(θ), Ry(θ), Rz(θ), the phase shift gate P(φ)[c] and CNOT are commonly used to form a universal quantum gate set.
> - The Clifford set {CNOT, H, S} + T gate. The Clifford set alone is not a universal quantum gate set, as it can be efficiently simulated classically according to the Gottesman–Knill theorem.
> - The Toffoli gate + Hadamard gate.[17] The Toffoli gate alone forms a set of universal gates for reversible boolean algebraic logic circuits which encompasses all classical computation.
[...]
> - The parametrized three-qubit Deutsch gate D(θ)
> A universal logic gate for reversible classical computing, the Toffoli gate, is reducible to the Deutsch gate, D(π/2), thus showing that all reversible classical logic operations can be performed on a universal quantum computer.
CCNOT: https://en.wikipedia.org/wiki/Toffoli_gate https://en.wikipedia.org/wiki/Quantum_logic_gate#Toffoli_(CC...
CNOT: https://en.wikipedia.org/wiki/Controlled_NOT_gate
H: https://en.wikipedia.org/wiki/Quantum_logic_gate#Hadamard_ga...
S: https://en.wikipedia.org/wiki/Quantum_logic_gate#Phase_shift...
T: https://en.wikipedia.org/wiki/Quantum_logic_gate#Phase_shift...
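A quick numpy sanity check of how these single-qubit gates relate (matrices as given on the linked pages): T squared is S, S squared is Pauli-Z, and Hadamard is its own inverse.

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.diag([1, 1j])                            # phase gate
T = np.diag([1, np.exp(1j * np.pi / 4)])        # pi/8 gate

# T is a "square root" of S, and S a square root of Pauli-Z;
# H is its own inverse. These identities tie the gate sets together.
checks = (
    np.allclose(T @ T, S),
    np.allclose(S @ S, np.diag([1, -1])),  # Pauli-Z
    np.allclose(H @ H, I),
)
```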
Implicit to a quantum approximator would be at least Quantum statistical mechanics and maybe also Quantum logic:
Quantum statistical mechanics: https://en.wikipedia.org/wiki/Quantum_statistical_mechanics
Quantum logic: https://en.wikipedia.org/wiki/Quantum_logic
The Federal Helium reserve is for sale
Why is it for sale?
10 years ago, Congress decided we needed to transition off of it. Large amounts of the stockpile were sold off from 2013-2018 and this is the final part of the plan.
https://www.blm.gov/programs/energy-and-minerals/helium/fede...
Transition off of it? Dont we need it for superconducting coolant and some other things?
FWIU Nuclear Fusion can use (and produce) 4He ('He4') as fuel.
I think it makes sense to retain our nation's helium reserves.
You are thinking of helium-3, of which there is none stored here.
It is very much harder to fuse helium-4.
I think it's both Helium-3 and Helium-4?
(Tritium (3H) decays into 3He with a 12.3-year half-life. Before 12.3*n years, tritium is radioactive and denser than water, so it will pool and concentrate in ocean cavities, for example at the seafloor. Tritium also probably has affinity for certain types of trash floating in the ocean, which Seabin and The Ocean Cleanup are addressing.)
Is laser-based nuclear transmutation of e.g. He4 into He3 easier with the high heat of a nuclear fusion reaction?
Helion's fusion process involves both He3 and He4: https://news.ycombinator.com/item?id=37403522
According to the GSA themselves, the value of the helium there is ~$100,000,000 [0] though that is 2019 prices, I believe it has gone up since then.
Apparently for diving purposes it's as high as $2.0 per cubic foot. [1]
[0]: https://pubs.usgs.gov/periodicals/mcs2020/mcs2020-helium.pdf
Depending on who you ask, the value of the crude helium there is around $300M to $360M. The $2-$3/cf for refined helium isn't really that relevant, since what is included is unrefined, if it were refined it'd be worth many billions.
What is it worth as isotopes He3 and He4; e.g. for Nuclear Fusion, superfluidity and superconductivity experiments, and medical imaging that probably can be done without Helium?
Not much, because the total market cap isn't high for isotopes.
Helion Energy https://en.wikipedia.org/wiki/Helion_Energy :
> They are developing a magneto-inertial fusion technology to produce helium-3 and fusion power via aneutronic fusion, [2][3] which could produce low-cost clean electric energy using a fuel that is derived exclusively from water. [4]
Here's the part of the Real Engineering video where it is explained how 3He and 4He Helium isotopes are fusion byproducts and fuel used to generate electricity with nuclear fusion: https://www.youtube.com/watch?v=_bDXXWQxK38&t=12m40s
(From "When Will Fusion Energy Light Our Homes?" (2023) https://news.ycombinator.com/item?id=34582798)
Doesn't this new nuclear fusion process also change the valuation of the Tritium in [Fukushima Daiichi,] heavy water? Why would you dump money into the ocean?
> Doesn't this new nuclear fusion process also change the valuation of the Tritium in [Fukushima Daiichi,] heavy water? Why would you dump money into the ocean?
This video provides background to explain some of that, but essentially: 1) It's hard to extract the heavy water from regular water. If you could do that efficiently there wouldn't have to be dumping of tritium into the ocean. 2) There's so little tritium in the released heavy water that using it for nuclear fusion isn't commercially viable yet. You'd be holding onto large quantities of water for a long time, as they already were.
FWIU most synthesized tritium is from TPBAR rods (and also separated from drained reactor fluid); so it is possible, there just aren't many research institutions or indeed any production operations that do isotope separation from water?
FWIU evaporation doesn't work because Tritium/He3 crawls up the walls of the container it was in, in spite of gravity.
Presumably nuclear research scientists have already considered centrifugation, titration, pressure / heat / boiling and other phase-state transitions, laser nuclear transmutation (*), reuse in a reactor with TPBAR rods designed to collect tritium for later processing, and use as fuel for peaceful civilian nuclear fusion electricity generation, e.g. D-T + (He3, He4).
https://en.wikipedia.org/wiki/Watts_Bar_Nuclear_Plant > Tritium production (w/ TPBAR rods and waste casks that aren't yet repurposed for fusion research)
Fusion that takes heavy water as an input e.g. at a first stage facility that processes radioactive material and yields nonradioactives for a 'second stage' (?) facility would be great.
FWIU, that is what Helion does; though there aren't yet separate stages.
Do old casks of heavy water (dangerous nuclear waste from an old-gen nuclear reactor) contain significant amounts of recoverable Helium-3 due to the 12.3 year typical (*) half-life of Tritium?
Again, Helium-3 is a viable nonradioactive input to nuclear fusion reactions.
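Back-of-the-envelope (assuming the 12.3-year half-life and no losses from the cask): the fraction of an initial tritium inventory that has become He-3 after t years is 1 − 2^(−t/12.3).

```python
def he3_fraction(t_years, half_life=12.3):
    """Fraction of an initial tritium inventory that has decayed to He-3."""
    return 1 - 2 ** (-t_years / half_life)

# After one half-life (12.3 y), half the tritium is He-3;
# after ~40 years the inventory is roughly 90% He-3.
```

So a cask sealed ~40 years ago would, by this estimate, hold about 90% of its original tritium content as He-3, if the helium is actually retained.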
[deleted]
Multiple Notepad++ Flaws Let Attackers Execute Arbitrary Code
I would certainly accept this vulnerability in order to have Notepad++ on Linux. Nothing like it.
What does nppp give you that vscode, sublime, or the gnome editor doesn't?
VsCode, or Codium, which is what I use, does not have a macro facility. That for me is the killer feature of Notepad++. I have no experience with the other editors, but I suspect they also have no macro capability, or if they do it doesn't measure up.
- Microsoft/vscode#4490: "Macro recording" (2016-) https://github.com/microsoft/vscode/issues/4490
It looks like there are a number of vscode extensions for recording macros:
- https://www.google.com/search?q=vscode+macro+recorder
- https://marketplace.visualstudio.com/search?term=Macro&targe...
- the macro-commander README explains its JSON-based macro language. YAML might be easier to maintain than JSON. https://github.com/jeff-hykin/macro-commander#what-are-some-...
For teams with multiple editors, you can specify workflow automation scripts with shell scripts, or e.g. hubflow, or CI container/cmd YAML, and/or a pre-commit.yml instead of with an IDE-specific tool.
Isn't there native real-time collaboration functionality in vscode/vscodium that would be useful for a native macro recording feature? (Edit) Live Share can't be installed in vscodium. https://github.com/VSCodium/vscodium/issues/128
Support for jupyter-collaboration Y.js CRDT RTC could be added to vscode-jupyter and/or a more generic extension: "Support for real-time collaboration in the extension?" https://github.com/microsoft/vscode-jupyter/discussions/1293...
jupyterlab/jupyter-collaboration: https://github.com/jupyterlab/jupyter-collaboration
And then vscode sessions and thus macros could be recorded from ide events and appended to a jupyter_ydoc: https://github.com/jupyter-server/jupyter_ydoc
Wayland does not support screen savers
And X doesn't support multi-finger gestures though Wayland does without an app like touchegg: https://github.com/JoseExposito/touchegg
And the NVIDIA proprietary Linux module for NVIDIA GPUs hardware video decode doesn't work on Wayland; along with a number of other things: "NVIDIA Accelerated Linux Graphics Driver README and Installation Guide > Appendix L. Wayland Known Issues" https://download.nvidia.com/XFree86/Linux-x86_64/535.54.03/R...
We usually use our computer with a keyboard and a mouse.
Recursively summarizing enables long-term dialogue memory in LLMs
To my intuition, all of these ways of building memory in “text space” seem super hacky.
It seems intuitive to me that memory would be best stored in dense embedding space that can preserve full semantic meaning for the model rather than as some hacked on process of continually regenerating summaries.
And similarly, the model needs to be trained in a setting where it is aware of the memory and how to use it. Preferably that would be from the very beginning (ie. the train on text).
It does seem hacky, but then again the whole concept of conversational LLMs is. You're just asking it to add an extra word to a given conversation and after a bit, it spits out an end token that tells your application to hand control back to the user.
I think latent space and text space aren't as far apart as you think. LLMs are pretty stupid, but very good at speech. They are good at writing code because that's very similar, but fall apart in things that need some actual abstract thinking, like math.
Those text space hacks do tend to work and stuff like "think step by step" has become common because of that.
LoRAs are closer to what you mean and they're great at packing a lot of understanding into very little data. But adjusting weights for a single conversation just isn't feasible yet, so we're exploring text space for that purpose. Maybe someone will transfer the methods we discover in text space to embedding space to make them more efficient, but that's for the future.
Space travel via tether between asteroids
Space tether missions: https://en.wikipedia.org/wiki/Space_tether_missions :
> A number of space tethers have been deployed in space missions.[1] Tether satellites can be used for various purposes including research into tether propulsion, tidal stabilisation and orbital plasma dynamics.
Tether propulsion -> Space tether: https://en.wikipedia.org/wiki/Tether_propulsion
How many RPMs/Hz are necessary for DC (or AC) motors?
How different are the physics for a [WebGL] simulator for body kinematics and Poi/Glowstringing and a simulator for space tethers in n-body gravity?
Poi (performance art) / Glowstringing: https://en.wikipedia.org/wiki/Poi_(performance_art)
Poi spinning > Types of poi https://simple.wikipedia.org/wiki/Poi_spinning
Q: Would there be advantages to a space tether lunar cycler? (Aldrin 1985)
Lunar cycler: https://en.wikipedia.org/wiki/Lunar_cycler
(Edit)
Gravity; IDK, maybe cooling of solar panels painted white on the underside. But then there's the complexity of an ungainly rendezvous.
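For the motor-RPM question above: for a rigid spinning tether the basic numbers follow from a = ω²r. A rough calculator (the 1 km tether and 1 g tip load are hypothetical figures, not from any mission):

```python
import math

def tether_spin(radius_m, accel_g):
    """Angular rate and tip speed for a spinning tether whose tip feels accel_g gravities."""
    omega = math.sqrt(accel_g * 9.81 / radius_m)  # rad/s, from a = omega^2 * r
    rpm = omega * 60 / (2 * math.pi)
    tip_speed = omega * radius_m                   # m/s
    return rpm, tip_speed

# A 1 km tether spun to 1 g at the tip: just under 1 RPM, ~100 m/s tip speed.
rpm, v = tether_spin(1000, 1)
```

So sub-1 RPM rates suffice at kilometer scales; an n-body tether simulator would add gravity-gradient and libration terms on top of this rigid-rotor baseline.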
Police Seized Innocent People's Property, Kept It for Years. What Will SCOTUS Do?
Severely limiting the scope of civil forfeiture seems like one of those political issues that everyone across the political spectrum can agree on.
I'm constantly hearing horror stories like the ones in this article and I've never heard any reasonable defense of our current system. In fact, I've never even heard an unreasonable defense.
> In fact, I've never even heard an unreasonable defense.
I suspect like many things, there was some logic to the introduction. But it's just gotten to really batshit levels that there is probably nothing to do but stop the practice.
As I understand it, some of the origins were that for debts owed to the government, with the person not present in the country, it was much more practical to target assets that may exist within the country than try and get the individual.
Even in the introduction in the criminal context, I sort of understand the logic. In criminal organizations, there are likely people profiting a lot who have engineered enough doubt that proving their connection beyond a reasonable doubt is very difficult. So I can understand the argument for meeting a preponderance of the evidence standard in the civil context to claim those assets.
So I think there are some bad and weak arguments, but they certainly don't outweigh what's happening now.
I think one of the worst arguments I heard for not returning the money was that the police department wasn't able to follow the judge's order, because they deposited the money into a bank account and transferred it to another agency. And they'll never be able to find the exact same bills again that they gave to the bank, so they are unable to follow the court order to return the money they took to its rightful owner.
Civil forfeiture in the United States > History > Holder / Obama: https://en.wikipedia.org/wiki/Civil_forfeiture_in_the_United...
Account for the seized assets.
Interledger Protocol works with any type of (digital) asset ledger; W3C ILP.
The SEC's CAT (Consolidated Audit Trail) system does not uniquely identify individual bills / [ledger] dollars either, FWIU. But are photons uniquely identifiable, either?
Due process: https://en.wikipedia.org/wiki/Due_process :
> Due process developed from clause 39 of Magna Carta in England. Reference to due process first appeared in a statutory rendition of clause 39 in 1354 thus: "No man of what state or condition he be, shall be put out of his lands or tenements nor taken, nor disinherited, nor put to death, without he be brought to answer by due process of law." [3] When English and American law gradually diverged, due process was not upheld in England but became incorporated in the US Constitution.
Whether it's possible to disarm and incapacitate without larceny.
Criminal Justice Reform > Arguments on criminal justice reform > Support for reform https://en.wikipedia.org/wiki/Criminal_justice_reform_in_the...
Save the children.
School-to-prison pipeline > Alternative approaches, Mental Health: https://en.wikipedia.org/wiki/School-to-prison_pipeline
Articles 11, 17, 6 and 7 of the Universal Declaration of Human Rights specify rights to Due Process of law, Property, and Equal Protection of such rights.
Universal Declaration of Human Rights: https://en.wikipedia.org/wiki/Universal_Declaration_of_Human...
https://www.un.org/en/about-us/universal-declaration-of-huma...
There are 549 translations of the UDHR. https://www.ohchr.org/en/search?f%5B0%5D=event_type_taxonomy...
> Article 11: Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defence.
> No one shall be held guilty of any penal offence on account of any act or omission which did not constitute a penal offence, under national or international law, at the time when it was committed. Nor shall a heavier penalty be imposed than the one that was applicable at the time the penal offence was committed.
> Article 17: Everyone has the right to own property alone as well as in association with others.
> No one shall be arbitrarily deprived of his property.
> Article 7: All are equal before the law and are entitled without any discrimination to equal protection of the law. All are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination.
NASA officials sound alarm over future of the Deep Space Network
Ok so where do we donate money?
AMZN! Because they have AWS Ground Station https://aws.amazon.com/ground-station/ :
> AWS Ground Station: Easily control satellites and ingest data with fully managed Ground Station as a Service
But that doesn't solve for limited availability of regulated spectra or spectra regulation.
Can ionizing radiation affect trapped ions in crystal lattice quantum sensors, for fiber optics?
FWIU degree of collinearity is the degree of quantum entanglement for photons and probably also phonons in a vacuum?
DSN: Deep Space Network: https://en.wikipedia.org/wiki/NASA_Deep_Space_Network
Do the good people at the Flat Earth Society have any response to 'splain this away? =D
The Holographic Principle says that spacetime is 2D. https://en.wikipedia.org/wiki/Holographic_principle
(Indeed, though, all of the other rotating bodies in n-body gravity fluid spacetime which are visible from here appear to have assumed the shape of a sphere probably like ~fixed-point attraction; and there's no way to swim to the edge)
Not so much that it is 2D, as that there’s a mathematical transformation you can do that lets you calculate its evolution in a 2D space. Which can be useful.
The “is” question is harder to evaluate, but to the degree it’s a meaningful question I’d say “probably not”.
Is there a transform between Minkowski 4-space rotations and 2D Holographic transformation(s)?
Mustn't they be reversible and locally unitary?
(Edit)
PROMPT/QUERY: Generate SymPy with pytest.mark.parametrize tests to _ teach the transform between Minkowski 4-space rotations and 2D Holographic transformation(s)
- https://g.co/bard/share/f69e27dd9acd
- https://chat.openai.com/share/7bbda216-f232-4080-99ab-814bf6...
Do these converge upon a solution when you hit jumble?
(Edit)
From "Minkowski space" https://en.wikipedia.org/wiki/Minkowski_space :
> In 3-dimensional Euclidean space, the isometry group (the maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity where motion causes time dilation changing the scale applied to the frame in motion and shifts the phase of light.
> Spacetime is equipped with an indefinite non-degenerate bilinear form, variously called the Minkowski metric,[2] the Minkowski norm squared or Minkowski inner product depending on the context.[nb 2] The Minkowski inner product is defined so as to yield the spacetime interval between two events when given their coordinate difference vector as argument.[3] Equipped with this inner product, the mathematical model of spacetime is called Minkowski space. The group of transformations for Minkowski space that preserve the spacetime interval (as opposed to the spatial Euclidean distance) is the Poincaré group (as opposed to the isometry group).
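Not the generated code from the links above; just a minimal SymPy check of the quoted fact that Lorentz boosts preserve the Minkowski interval, i.e. Λᵀ η Λ = η (signature −+++):

```python
import sympy as sp

# Minkowski metric (signature -+++) and a Lorentz boost along x.
beta = sp.symbols("beta", real=True)
gamma = 1 / sp.sqrt(1 - beta**2)
eta = sp.diag(-1, 1, 1, 1)
boost = sp.Matrix([
    [gamma, -gamma * beta, 0, 0],
    [-gamma * beta, gamma, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])

# A boost preserves the spacetime interval: Lambda^T * eta * Lambda == eta.
preserves_interval = sp.simplify(boost.T * eta * boost - eta) == sp.zeros(4, 4)
```

Spatial rotations and translations pass the same check; it is the indefinite metric η (rather than the Euclidean identity) that distinguishes the Poincaré group from the isometry group mentioned in the quote.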
But then how does Minkowski space help understand signals in spacetime with nonlocality and superfluid phases in deep space?
(Edit)
Q: Can a thing causally affect things outside of its light cone?
A: Yes because Nonlocal entanglement
Q: is Minkowski space wrong or inappropriate then? And, Are causal counterfactuals the same as constructor theory counterfactuals?
The DSN is just part of the conspiracy. Those antennas don't actually exist. The images are CGI, and the people who claim to work there have been brainwashed.
But how did they write the simulation for the game on all of their screens?
AI technology provided by aliens who are being held captive in area 51.
Honestly, don't they teach you kids anything in school these days?
Loophole-free solutions that do not violate Bell's inequality; entanglement communication: entangled satellites and QKD repeaters do exist.
Gödel had a few interesting spacetime solutions that may be helpful for Deep Space Communications.
Evolved Antenna: https://en.wikipedia.org/wiki/Evolved_antenna
Rogue wave: https://en.wikipedia.org/wiki/Rogue_wave
Can DSN be scaled? Or would it be best to use quantum radio?
Hawking radiation: https://en.wikipedia.org/wiki/Hawking_radiation
Perhaps if Hawking radiation is in all the things, Hawking radiation could be used for DSN-like communications.
When phenomena in the quantum foam "dissolve", is there an ~ejection fraction? Couldn't there be ±t per minimally perturbable effect in the quantum foam, though? Maybe internet/p2p-like routing algorithms, or, which field/wave/fluid perturbations are omnidirectional?
Rydberg antenna / Rydberg sensor: https://en.wikipedia.org/wiki/Rydberg_atom
> The Rydberg sensor can reliably detect signals over the entire spectrum and compare favourably with other established electric field sensor technologies, such as electro-optic crystals and dipole antenna-coupled passive electronics.[59][60]
Additional methods that could potentially be useful if Godel-like wormholes and massless particles are still considered infeasible:
Ambient backscatter: https://en.wikipedia.org/wiki/Ambient_backscatter
Backscatter: https://en.wikipedia.org/wiki/Backscatter
Passive Wi-Fi: https://en.m.wikipedia.org/wiki/Passive_Wi-Fi
LiFi: https://en.wikipedia.org/wiki/Li-Fi
With ambient backscatter, passive WiFi is already implemented; and maybe someday backscatter LiFi could achieve very high signal efficiency at low-energy in deep space, too?
.
Could there be deviations from stable patterns in the CMB: Cosmic Microwave Background?
Quantum navigation maps such signal sources so that inexpensive sensors can achieve something like inertial navigation, FWIU.
.
"Smaller, more versatile antenna could be a communications game-changer" (2022) https://news.ycombinator.com/item?id=37337628 :
> LightSlingers use volume-distributed polarization currents, animated within a dielectric to faster-than-light [FTL] speeds, to emit electromagnetic waves. (By contrast, traditional antennas employ surface currents of subluminally moving massive particles on localized metallic elements such as dipoles.) Owing to the superluminal motion of the radiation source, LightSlingers are capable of “slinging” tightly focused wave packets with high precision toward a location of choice. This gives them potential advantages over phased arrays in secure communications such as 4G and 5G local networks
.
/? Curved photon beams: https://www.google.com/search?q=curved+photon+beam
.
Newer waveguide approaches:
- "Experiment demonstrates continuously operating optical fiber made of thin air" (2023) https://news.ycombinator.com/item?id=35812168
- https://news.ycombinator.com/item?id=36408561
- https://phys.org/news/2023-06-trillionths-photon-pairs-compr... :
> The physicists chose the incidence angles and frequencies so that the co-propagating electrons, which fly through vacuum at half the speed of light, overlap with optical wave crests and troughs of exactly the same speed
.
PROMPT: Read/write nonlocal spacetime with minimal perturbations at safe energy levels for high-throughput data transmission over astronomical-scale distances
Teaching with AI
My sister (who is a middle school teacher) and I developed a real training program for teachers, and this "guide" from OpenAI is quite underwhelming. It doesn't address 90% of the problems teachers actually face with AI...this is mostly a brochure on how to use ChatGPT to get info.
If you are a teacher or know a teacher who is struggling to adapt this school year, I'd be honored to speak with them and see if we can help.
TIL about "CoderMindz Game for AI Learners! NBC Featured: First Ever Board Game for Boys and Girls Age 6+. Teaches Artificial Intelligence and Computer Programming Through Fun Robot and Neural Adventure!" https://www.codermindz.com/ https://www.amazon.com/gp/aw/d/B07FTG78C3/
Codermindz AI Curriculum: https://www.codermindz.com/stem-school/
https://K12CS.org K12 CS Curriculum (and code.org, and Khanmigo,) SHOULD/MUST incorporate AI SAFETY and Ethics curricula.
A Jupyter-book of autogradeable notebooks (for AI SAFETY first, ML, AutoML, AGI,) would be a great resource.
jupyter-edx-grader-xblock https://github.com/ibleducation/jupyter-edx-grader-xblock , Otter-Grader https://otter-grader.readthedocs.io/en/latest/ , JupyterLite because Chromebooks
What are some additional K12 CS/AI and QIS Curricula resources?
Smaller, more versatile antenna could be a communications game-changer
From TFA https://discover.lanl.gov/publications/connections/2022-june... :
> LightSlingers use volume-distributed polarization currents, animated within a dielectric to faster-than-light [FTL] speeds, to emit electromagnetic waves. (By contrast, traditional antennas employ surface currents of subluminally moving massive particles on localized metallic elements such as dipoles.) Owing to the superluminal motion of the radiation source, LightSlingers are capable of “slinging” tightly focused wave packets with high precision toward a location of choice. This gives them potential advantages over phased arrays in secure communications such as 4G and 5G local networks
> Several prototypes of Lightslinger have been tested in lab environments and in the field over distances of up to 76 km. Also, three of them were independently validated by a U.S. telecommunications company. Los Alamos is now looking to transition the antennas to commercial prototypes that can be field tested and mass-produced by additive manufacturing and robotic processing.
Phononic quantum computing probably needs LightSlinger sensors, too?: https://www.google.com/search?q=phononic+quantum+computing
Secure Boot on ESP32 Platforms
Informative, but if you’re building ordinary consumer hardware please don’t enable secure boot on your ESP32 device. There’s a thriving ecosystem of open source software (for example ESPHome) that gives flexibility to how a device is used and long term support long after your project or company has failed. We don’t need more electronic landfill rubbish when motivated individuals could tinker with them instead.
The future of computing is that all code running on a device is one of the two S's: Signed or Sandboxed.
To do otherwise presents unnecessary risk.
That’s fine, assuming we pass laws mandating that signing keys can be controlled by end users. Otherwise we end up where no-one really owns their devices any more, every device would be merely temporarily rented.
How could that work? A consumer would be required to build the firmware and flash it to the product themselves?
Of course not. Having the ability to add or modify root CAs in browsers doesn’t imply a requirement to sign every webpage yourself either.
NVIDIA GPU Linux kernel modules must be self-signed to work with SecureBoot enabled; they must be self-signed every time they're updated by an akmod package upgrade.
So, it is necessary to remove the MS SecureBoot ~CA pubkey and add the OS and local ~CA pubkeys to the SecureBoot cert list in the BIOS, and re-sign every module on install and/or build in order to work with NVIDIA (and probably also AMD?) in containers.
It's necessary and a fair expectation that users will continue to be able to remove and add x86-64 SecureBoot bootloader signing keys.
SELinux in Linux 6.6 Removes References to Its Origins at the US NSA
https://news.ycombinator.com/item?id=29995566 :
> Which distro has the best out-of-the-box output for:?
systemd-analyze security
> Is there a tool like `audit2allow` for systemd units?
selinux/python/audit2allow/audit2allow: https://github.com/SELinuxProject/selinux/blob/master/python...
How to share a NumPy array between processes
Though deprecated probably in favor of more of a database/DBMS like DuckDB, the arrow plasma store holds handles to objects as a separate process:
$ plasma_store -m 1000000000 -s /tmp/plasma
Arrow arrays are like NumPy arrays, but they're made for zero-copy use, e.g. IPC (Interprocess Communication). There's a dtype_backend kwarg to the pandas read_ methods and convert_dtypes: df = pandas.read_csv(path, dtype_backend="pyarrow")
The Plasma In-Memory Object Store > Using Arrow and Pandas with Plasma > Storing Arrow Objects in Plasma https://arrow.apache.org/docs/dev/python/plasma.html#storing...
Streaming, Serialization, and IPC > https://arrow.apache.org/docs/python/ipc.html
"DuckDB quacks Arrow: A zero-copy data integration between Apache Arrow and DuckDB" (2021) https://duckdb.org/2021/12/03/duck-arrow.html
What's the latency delta of reading from a local database instance versus reading from memory - is there much additional overhead to slow things down?
With most databases, a `SELECT * FROM tblname` has to serialize a copy of the query result and let the ODBC or similar database driver unserialize, so the query result unavoidably gets copied at least once at query time.
Plasma and e.g. DuckDB do zero copy so that there is no: unmarshal of the complete database file into RAM, table scan in order to unindexed query, and then marshal/unmarshal (serialize/deserialize) of the query result for the database driver (which could allow access to a DB/DBMS over a local pipe(), a TCP socket, HTTP(S), protobufs over HTTPS)
If the memory is held in RAM by the plasma store, I'm not sure which queries are possible to service by handing an object reference to where the full non-virtual table is allocated in RAM (on only one node)? Presumably if there's no filtering or transformation in the query, and the query does not access "virtual" or "materialized" tables
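Setting Plasma/DuckDB aside, the stdlib alone can demonstrate the thread's core idea, sharing one buffer between processes without copying. A minimal sketch using multiprocessing.shared_memory (a second handle stands in for another process here; with NumPy available, np.ndarray(shape, dtype, buffer=shm.buf) wraps the same block zero-copy):

```python
from multiprocessing import shared_memory

# Create a named shared block; another process can attach to it with
# SharedMemory(name=shm.name) instead of receiving a copy of the data.
shm = shared_memory.SharedMemory(create=True, size=32)
try:
    shm.buf[:4] = bytes([1, 2, 3, 4])

    # Second handle to the same physical memory (no copy of the store):
    other = shared_memory.SharedMemory(name=shm.name)
    view = bytes(other.buf[:4])
    other.close()
finally:
    shm.close()
    shm.unlink()  # release the block once all handles are done

assert view == bytes([1, 2, 3, 4])
```

The zero-copy property holds only while readers use the buffer in place; materializing it (as `bytes(...)` does above for the assertion) is itself a copy.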
Ubus (OpenWrt micro bus architecture)
As someone having to debug issues on OpenWRT derivatives, I wish I had a time machine to tell the inventors of Unix to never add other forms of IPC than pipes.
It's all a big pile of stateful daemons notifying other daemons with a billion race conditions and zero debugging capabilities, like a parody of how not to create reliable systems. When you have shell scripts parsing JSON messages, you know it's over.
But then how to add datatypes - schema - to pipes? How do we add a column to the messages on the pipe?
Newline-delimited != SOTA
https://en.wikipedia.org/wiki/Apache_Arrow :
> Arrow allows for zero-copy reads and fast data access and interchange without serialization overhead between these languages and systems
JSON lines formatted messages can be UTF-8 JSON-LD: https://jsonlines.org/
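As a sketch of how "datatypes on a pipe" can look in practice: newline-delimited JSON messages over an ordinary pipe, with an ad-hoc schema check on the receiving side (the SCHEMA dict and validate helper are illustrative, not any Ubus or JSON-LD convention):

```python
import json
import os

# Hypothetical message schema: field name -> expected Python type.
SCHEMA = {"event": str, "value": int}

def validate(msg):
    """Return True iff every schema field is present with the right type."""
    return all(isinstance(msg.get(k), t) for k, t in SCHEMA.items())

r, w = os.pipe()
with os.fdopen(w, "w") as tx:
    # One JSON document per line: the "JSON lines" framing.
    tx.write(json.dumps({"event": "load", "value": 42}) + "\n")

with os.fdopen(r) as rx:
    msg = json.loads(rx.readline())

assert validate(msg)
```

Adding a column is then just adding a key to new messages; consumers that validate against an older schema keep working as long as required fields stay present.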
Linux networking is now all built on eBPF.
The same way you add it to any other SOCK_STREAM? How do we add types and schemas to TCP connections?
But then when I need to scale to more than one node with multiple cores, do pipes scale?
Yes, with some work: https://mazzo.li/posts/fast-pipes.html
https://mazzo.li/posts/fast-pipes.html#splicing :
> In general when we write to a socket, or a file, or in our case a pipe, we’re first writing to a buffer somewhere in the kernel, and then let the kernel do its work. In the case of pipes, the pipe is a series of buffers in the kernel. All this copying is undesirable if we’re in the business of performance.
> Luckily, Linux includes system calls to speed things up when we want to move data to and from pipes, without copying. Specifically:
> - splice moves data from a pipe to a file descriptor, and vice-versa.
> - vmsplice moves data from user memory into a pipe.
> Crucially, both operations work without copying anything.
But then to scale to more than one node, everything has to be copied from the pipe buffer to the network socket buffer; unless splice()'ing from a pipe to a socket is Zero-copy like sendfile() (which doesn't work with pipes IIRC).
"SOCKMAP - TCP splicing of the future" (2019) https://blog.cloudflare.com/sockmap-tcp-splicing-of-the-futu...
sendfile is implemented with splice, but the scenario in which splice is actually zero-copy is not well defined (man pages just say "it's generally avoided"), so you'd have to dig into your kernel source to know for sure. That being said, chances are the cost of copying the data would be far outpaced by the cost of serializing over the network anyways, so you would want to architect your application in a way that you're sending as little data over the network as possible if performance is of maximum concern.
Show HN: RISC-V Linux Terminal emulated via WASM
Weekend creation: A Linux terminal on top of a RISC-V emulator running in the browser via WebAssembly, powered by the Cartesi Machine. Check cool commands to experiment in the project page https://github.com/edubart/cartesi-wasm-term
webvm has [Tailscale] sockets-over-WebSockets for networking: https://github.com/leaningtech/webvm and:
> runs an unmodified Debian distribution including many native development toolchains.
> WebVM is powered by the CheerpX virtualization engine, and enables safe, sandboxed client-side execution of x86 binaries on any browser. CheerpX includes an x86-to-WebAssembly JIT compiler, a virtual block-based file system, and a Linux syscall emulator.
How do CheerpX and Cartesi Machine compare?
Main differences in emulation context:
- The Cartesi Machine can run any Linux distribution with RISC-V support, it emulates RISC-V ISA, while CheerpX emulates x86. For instance the Cartesi Machine can run Alpine, Ubuntu, Debian, etc.
- The Cartesi Machine performs a full Linux emulation, while CheerpX emulates only Linux syscalls.
- The Cartesi Machine has no JIT; it only interprets RISC-V instructions, so it is expected to be slower than CheerpX.
- The Cartesi Machine is isolated from the external world and this is intentional, so there is no networking support.
- The Cartesi Machine is a deterministic machine, with the possibility to take a snapshot of the whole machine state so it can be resumed later, CheerpX was not designed for this use case.
Perhaps if it's possible to generate a RISC-V CPU in 5 hours, it's also possible to generate a JIT for RISC-V with a similar approach? https://news.ycombinator.com/item?id=36566578
"Show HN: Tetris, but the blocks are ARM instructions that execute in the browser" (2023) https://news.ycombinator.com/item?id=37086102 ; emu86 supports WASM, MIPS, RISC-V but not yet ARM64/Aarch64
Is there a "Record and replay" or a "Time-travel debugger" at the emulator level or does e.g. rr just work because Cartesi Machine is a complete syscall emulator?
MOOC: Reducing Internet Latency: Why and How
Bufferbloat > Solutions and mitigations https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_miti...
AQM Active Queue Management: https://en.wikipedia.org/wiki/Active_queue_management
CoDel > Derived algorithms > CAKE: https://en.wikipedia.org/wiki/CoDel#Derived_algorithms :
> Common Applications Kept Enhanced (CAKE; sch_cake in Linux code) is a combined traffic shaper and AQM algorithm presented by the bufferbloat project in 2018. It builds on the experience of using fq_codel with the HTB (Hierarchy Token Bucket) traffic shaper. It improves over the Linux htb+fq_codel implementation by reducing hash collisions between flows, reducing CPU utilization in traffic shaping, and in a few other ways. [18]
Re: the `dslreports_8dn` netperf test, bufferbloat, RRUL and the fluent GUI: https://news.ycombinator.com/item?id=26596395
OpenWRT has a SQM "Smart Queue Management" package and LuCI web UI config that defaults to the CAKE scheduler. SQM is not installed by default.
There is an x86 container with OpenWRT that can probably be used with load simulation and additional network interfaces on the container. IIRC I had to figure out how to start procd (~systemd) in the openwrt container myself.
/? OpenWRT bufferbloat https://www.google.com/search?q=openwrt+bufferbloat
/? automatically optimize bufferbloat https://www.google.com/search?q=automatically%2Boptimize%2Bb... :
sqm-autorate/sqm-autorate https://github.com/sqm-autorate/sqm-autorate :
> sqm-autorate is a program for OpenWRT that actively manages the CAKE Smart Queue Management (SQM) bandwidth settings through measurements of traffic load and latency. It is designed for variable bandwidth connections such as DOCSIS/cable and LTE/wireless, and is not so useful for connections that have a stable, fixed bandwidth.
speedtest-netperf.sh: https://github.com/openwrt/packages/blob/master/net/speedtes... :
opkg install speedtest-netperf
speedtest-netperf.sh [-4 | -6] [-H netperf-server] [-t duration] [-p host-to-ping] [-n simultaneous-streams ] [-s | -c]
You can run the Flent CLI in a Jupyter notebook with a heading for each test location. There are various ways to template Jupyter notebooks; e.g. `cp nbtemplate.ipynb wifitest_yyyymmddhhss+z.ipynb`
i386 in Ubuntu won't die
I just want to point out something a bit pedantic, but I feel is an important distinction: Linux community and devs need to stop saying i386 and say i586 or even i686 as this is closer to what is meant. Linux doesn’t support 386 anymore, you can make it work on 486 but most distributions don’t enable support for this. Pentiums (586) are usually the minimum and some even push toward Pentium MMX/Pentium II which would be 686.
Tell that to the big vendors. Only MinGW AFAIK ships i686 packages (with SIMD support); all others stay in the safe i386 camp, without SIMD support.
Gentoo ebuild USE and CFLAGS flags allow the user to specify custom compilation settings as necessary for optimization on a given processor architecture like i386, i486, i586, or i686.
For example, with ebuild (or dpkg-repack or rpmrebuild) a person could build the compiler packages with all of the appropriate optimization flags for the chipset on the box where the package will be installed: https://packages.gentoo.org/useflags/custom-cflags
Cross-compilation is easier with clean build containers.
With distrobox (and qemu, qemu-user-static, and binfmt-support) https://github.com/89luca89/distrobox/blob/main/docs/useful_... :
$ uname -m
x86_64
$ distrobox create -i aarch64/fedora -n fedora-arm64
$ distrobox enter fedora-arm64
user@fedora-arm64:$ uname -m
aarch64
Robert's Rules of Order
From "Free and Open Source Governance" https://fossgovernance.org/ :
> An indexed collection of governance documents from Free and Open Source Software (FOSS) projects.
> FOSS Governance Zotero Collection: https://www.zotero.org/groups/2310183/foss_governance/item-l...
Open-source governance: https://en.wikipedia.org/wiki/Open-source_governance
Communication in distributed software development > Forms of communication > Synchronous, Asynchronous, Hybrid: https://en.wikipedia.org/wiki/Communication_in_distributed_s...
Consumers have yet to develop a taste for Kernza, the environmental wonder grain
I had never heard of it.
From the article:
> Billed as the new wonder grain — a wheatgrass with a nutty, graham or rye-like flavor — Kernza uses very little nitrogen fertilizer, and its extremely long roots make it a powerhouse at soaking up nitrogen that would otherwise seep into groundwater, research has shown. It's the type of eco-crop promoted for farm country to help cut the nitrate leaching from nitrogen fertilizers into the state's waters.
From https://en.wikipedia.org/wiki/Thinopyrum_intermedium :
> Nutritional values and use of Kernza: Kernza contains higher values of protein, ash content and dietary fiber content when compared with wheat. Further 100 gram uncooked Kernza provides 1540 kilojoule (368 kcal) of food energy and is a good source of calcium (120 mg) as well as iron (5.5 mg). Comparing Kernza to white wheat berries, calcium contents are 4.8 times higher and iron values are more than double. Kernza contains gluten but is deficient in high molecular weight glutenin, which limits its use especially in baking. The higher fat content in Kernza may increase overall rancidity, but a higher antioxidant content than wheat may offer a protective effect. [31][32] There are existing products with Kernza such as Honey Toasted Kernza by Cascadian Farms[33] and Patagonia Provisions’ Kernza beer.
I'm curious about the PDCAAS of the protein in Kernza but I can't find any data for that.
Interpretable graph neural networks for tabular data
> At the same time, the results show that the explanations obtained from IGNNet are aligned with the true Shapley values of the features without incurring any additional computational overhead
TabPFN: https://github.com/automl/TabPFN https://twitter.com/FrankRHutter/status/1583410845307977733
"TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second" (2022) https://arxiv.org/abs/2207.01848
FWIU TabPFN is Bayesian-calibrated/trained with better performance than xgboost for non-categorical data
Right, the significance of the original article and the related field of research is that ChatGPT-like models don't handle tabular data well and there's a lot of need for things that do.
There are multiple metrics to optimize for when optimizing.
FWIU, from the diagram in the photo in the linked tweet, which is similar to a diagram on page 16 of the TabPFN paper [1], on the OpenML-CC18, TabPFN has a better ROC Receiver Operating Characteristic after 1 second than XGboost, Catboost, LightGBM, KNN, SAINT, Reg. Cocktail, and Autogluon after any amount of time, but Auto-sklearn 2.0 required 5 minutes to reach ~ROC parity with TabPFN.
1. "TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second" (May 2023) https://arxiv.org/abs/2207.01848
2. "Interpretable Graph Neural Networks for Tabular Data" (Aug 2023) https://arxiv.org/abs/2308.08945
Asyncio, twisted, tornado, gevent walk into a bar
Eventlet (Linden Labs) actually owns the bar, then.
Edit
Green threads > Green threads in other languages: https://en.wikipedia.org/wiki/Green_thread#Green_threads_in_...
Coroutine > Comparison with > Threads, Generators: https://en.wikipedia.org/wiki/Coroutine#Implementations_for_... :
> Generators, also known as semicoroutines, [8] are a subset of coroutines. Specifically, while both can yield multiple times, suspending their execution and allowing re-entry at multiple entry points, they differ in coroutines' ability to control where execution continues immediately after they yield, while generators cannot, instead transferring control back to the generator's caller.[9] That is, since generators are primarily used to simplify the writing of iterators, the yield statement in a generator does not specify a coroutine to jump to, but rather passes a value back to a parent routine.
> (However, it is still possible to implement coroutines on top of a generator facility)
Asynchronous I/O > Forms > Light-weight processes or threads: https://en.wikipedia.org/wiki/Asynchronous_I/O#Light-weight_...
Async/Await > History, Benefits and criticisms: https://en.wikipedia.org/wiki/Async/await
> That is, since generators are primarily used to simplify the writing of iterators, the yield statement in a generator does not specify a coroutine to jump to, but rather passes a value back to a parent routine.
Another term for this is asymmetric coroutine, as opposed to symmetric coroutine. Coroutines in Lua are asymmetric, but they're not generally referred to as generators, as they're much more capable than what are called generators in other languages, like Python. This is largely because Lua's coroutines are stackful rather than stackless, which is an orthogonal, more pertinent dimension when implementing concurrency frameworks.
You can implement symmetric coroutines using asymmetric coroutines, and vice versa, by implementing a higher-level library that implements one using the other. So in principle they have equivalent expressive power, formally speaking. Ultimately the distinction between these terms, including generator, comes down to implementation details and your objective. And just because a language name-drops one of these terms doesn't mean what they provide will be as useful or convenient in practice as a similarly named feature in another language. Other dimensions--stack semantics, type system integration, etc--can easily prove the determining factor in how useful they are.
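The claim that symmetric coroutines can be built from asymmetric ones can be sketched with Python generators: each generator yields the name of the coroutine to resume next, and a trampoline (the standard name for this dispatcher pattern; the ping/pong names are illustrative) performs the "symmetric" transfer on their behalf:

```python
def ping(n):
    # Asymmetric: yield always returns control to the caller (the
    # trampoline), here passing back the name of the next coroutine.
    for i in range(n):
        yield ("pong", i)

def pong(n):
    for i in range(n):
        yield ("ping", i)

def trampoline(coros, start, steps):
    """Simulate symmetric coroutine-to-coroutine transfer on top of
    asymmetric generators by dispatching in a loop."""
    log, current = [], start
    for _ in range(steps):
        target, i = next(coros[current])
        log.append((current, i))
        current = target
    return log

log = trampoline({"ping": ping(3), "pong": pong(3)}, "ping", 4)
```

The generators never transfer to each other directly; every yield bounces through the trampoline, which is exactly the asymmetric restriction the quoted Wikipedia passage describes.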
PROMPT: Generate a table of per- process/thread/greenthread/coroutine costs in terms of stack/less/heap overhead, CPU cache thrashing impact, and compatibility with various OS schedulers that are or are not preferred to eBPF
Is it necessary to use a library like trio for nonblocking io in lua, or are the stdlib methods all nonblocking with big-O complexity as URIs in the docstrings?
Trio docs > notes on async generators: https://trio.readthedocs.io/en/stable/reference-core.html#no...
Can you please rewrite this in English?
IIRC the history of the async things in TLA in order: Twisted (callbacks), Eventlet (for Second Life by Linden Labs), tornado, gevent; gunicorn, Python 3.5+ asyncio
[ tornado (FriendFeed, IPython Notebook (ZeroMQ (libzmq)),), Sanic (asyncio), fastapi, Django Channels (ASGI), ASGI: Asynchronous Server Gateway Interface, django-ninja, uvicorn (uvloop (libuv from nodejs)), ]
The async/await keywords were in F# (2007), then C# (2011), Haskell (2012), ... Python (2015), and JS/ES ECMAScript (2017), FWICS from the Wikipedia article.
When we talk about concurrency and parallelism, what are the different ~async patterns and language features?
Processes, Threads, "Green Threads", 'generator coroutines'
Necessarily, we attempt to define such terms, but the implementations of these now more specifically-named software patterns interpret them differently, so what Wikipedia has to teach on this is worth the time.
Uvicorn docs > Deployment > Gunicorn:
https://www.uvicorn.org/deployment/#gunicorn :
> The following will start Gunicorn with four worker processes:
gunicorn -w 4 -k uvicorn.workers.UvicornWorker
> The UvicornWorker implementation uses the uvloop and httptools implementations.
PROMPT: Generate a minimal ASGI app and run it with Gunicorn+Uvicorn. What is uvloop?
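An ASGI app is just an async callable taking (scope, receive, send), so a minimal one can be exercised with plain asyncio, no server required; the same `app` is what `gunicorn -w 4 -k uvicorn.workers.UvicornWorker module:app` would serve (a sketch; the drive harness below is a stand-in for a real server):

```python
import asyncio

async def app(scope, receive, send):
    """Minimal ASGI HTTP app: respond 200 with a plain-text body."""
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"hello, ASGI"})

async def drive():
    # Fake server harness: collect what the app sends.
    sent = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent

messages = asyncio.run(drive())
```

uvloop, which UvicornWorker plugs in, is a drop-in replacement for the asyncio event loop built on libuv; the app code above is unchanged either way.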
PROMPT: How does uvloop compare to libzmq and eBPF? Which asynchronous patterns do they support?
PROMPT: Which Uvicorn (security) http headers work with which k8s Ingress pod YAML attributes and kubectl?
PROMPT: (Generate an Ansible Role and Playbook to) Host an ASGI webapp with Kubernetes (k8s) Ingress maybe with ~k3d/microshift locally
If you want to write the history of Python async things in order, you'll need to put Twisted in many more of the places: it was callbacks only in the very beginning, then worked via generators, and once Python added async syntax used that. "Twisted (callbacks)" is a simplification that ignores the influence Twisted has had even on non-Python ecosystems; Deferreds have been imitated all over the place.
Before callbacks, there were function pointers and there was no garbage collection; and before function pointers, there were JMPs, trampolines, and labels in ASM.
I don't share any reverence for the Twisted callback patterns that AJAX also implements. And, the article isn't about callbacks.
Promises in JS have separate success and error callbacks. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guid...
MDN > Async function: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... :
> The async function declaration creates a binding of a new async function to a given name.
> The await keyword is permitted within the function body, enabling asynchronous, promise-based behavior to be written in a cleaner style and avoiding the need to explicitly configure promise chains
Show HN: PlotAI – Create Plots in Python and Matplotlib with LLM
Vega and Vega-lite visualization grammars: https://en.wikipedia.org/wiki/Vega_and_Vega-Lite_visualisati...
FWIU Vega/voyager suggests similar charts with CompassQL: https://github.com/vega/voyager
From http://vega.github.io/ re: CompassQL:
> COMPASSQL is a visualization recommendation engine. Given user query, it suggests visualizations, ranked by both data properties and perceptual principles
Altair is a Python implementation of Vega-Lite for rendering charts with JS.
mpld3 does matplotlib with d3.js: https://github.com/mpld3/mpld3
Princeton ‘AI Snake Oil’ authors say GenAI hype has ‘spiraled out of control’
The key way I try to distinguish "Snake Oil" is: Is anyone actually using this routinely, and are the problems it solves self-evident to new non-zealot users after a demo?
Take blockchain for example: people spent years and millions of dollars looking for additional uses for the technology besides coins/coin-contracts, and aside from the aforementioned uses it didn't really gain routine adoption. That is why non-coin blockchain always struck me as a "hype train" or "snake oil": it was a solution in search of a problem. NFTs are even worse, since they haven't found any purpose to exist yet (money laundering?).
Contrast that with the "GPT" offerings (ChatGPT/Bing/Copilot/Bard/etc). Multiple of my colleagues are actively using them routinely every workday and when you demo it to another person, they understand it, and they too start utilizing it in their workflow. Heck, my mom discovered it herself on Bing's homepage and was telling me I should check it out.
That's the opposite of "snake oil" or a hype train, it is arguably a competitive threat to search engines.
PS - Small disclaimer: "AI" is a nebulous term. I'm talking specifically about LLMs. Other types of "AI" have already been here for a long time making our lives better (e.g. computer vision).
Are people durably achieving a competitive advantage through the GPT?
I think we've seen some people striving for a competitive advantage, and willing to see one exist, but I don't know if we've seen any "AI company" turn a meaningful and durable profit yet (open to being wrong on this).
To me (so far), it seems like GPT use cases that apply it as a table stakes feature of some greater application (rather than an ecosystem or a platform), are the ones that are actually showing promise in terms of expected utility + value capture.
If GPT4+ level engines become as cheap to execute as a SQLite query, I think that's where things get interesting (you can start executing this stuff at the edge).
But I still can't see new companies (a la OpenAI, MidJourney, etc.) making a lot of money in this scenario, it seems to overwhelmingly favor companies that already have distribution.
We're likely way too early to call this one. Based on current hardware trends, GPT-4 will run on your phone within 6-10 years. Right now, folks are feeling out the edges of what makes sense and what doesn't make sense at extreme R&D and opex expense. In 10 years, we'd expect the winners of today to still be winners and have great margins due to declining compute costs.
Granted, if you are spending 10x the future value of the product you are offering ... then even a 10x decline in compute costs won't get you where you need to be.
> Based on current hardware trends, GPT-4 will run on your phone within 6-10 years
They quantized the model from 16 bits to 4 bits, which was low-hanging fruit, and it looks like they can't quantize it further to 2 bits.
CPUs/GPUs are general-purpose; if enough workload demand exists, specialized Transformer cores will be designed. Likewise, it's not at all clear that current O(N^2) self-attention is the ideal setup for larger context lengths. All to say, I'd believe we have another 8-10x algorithmic improvement in inference costs over the next 10 years, in addition to whatever Moore's law brings.
Mobile TPUs/NPUs:
Pixel 6+ phones have TPUs (in addition to CPUs and an iGPU/dGPU).
Tensor Processing Unit > Products > Google Tensor https://en.wikipedia.org/wiki/Tensor_Processing_Unit
TensorFlow lite; tflite: https://www.tensorflow.org/lite
From https://github.com/hollance/neural-engine :
> The Apple Neural Engine (or ANE) is a type of NPU, which stands for Neural Processing Unit.
From https://github.com/basicmi/AI-Chip :
> A list of ICs and IPs for AI, Machine Learning and Deep Learning
Interferometric imaging of amplitude and phase of spatial biphoton states
From "Interferometric imaging of amplitude and phase of spatial biphoton states" (2023) https://www.nature.com/articles/s41566-023-01272-3 :
> [...] The number of projective measurements necessary for a full-state tomography scales quadratically with the dimensionality of the Hilbert space under consideration [2]. This issue can be tackled with adaptive tomographic approaches [3,4,5] or compressive techniques [6,7], which are, however, constrained by a priori hypotheses on the quantum state under study. Moreover, quantum state tomography via projective measurement becomes challenging when the dimension of the quantum state is not a power of a prime number [8]. Here we try to tackle the tomographic challenge, in the specific contest of spatially correlated biphoton states, looking for an interferometric approach inspired by digital holography [9,10,11], familiar in classical optics. We show that the coincidence imaging of the superposition of two biphoton states, one unknown and one used as a reference state, allows retrieving the spatial distribution of phase and amplitude of the unknown biphoton wavefunction. Coincidence imaging can be achieved with [... Quantum Imaging]
From "Physicists use a 350-year-old theorem to reveal new properties of light waves" (yesterday, 2023) https://news.ycombinator.com/item?id=37226121 :
>> proves for the first time that a light wave's degree of non-quantum entanglement exists in a direct and complementary relationship with its degree of polarization. As one rises, the other falls, enabling the level of entanglement to be inferred directly from the level of polarization, and vice versa. This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity
[deleted]
H.R.2673 – American Innovation and R&D Competitiveness Act of 2023
Tax Cuts and Jobs Act (2017): https://en.wikipedia.org/wiki/Tax_Cuts_and_Jobs_Act
> The Act contains provisions that would open 1.5 million acres (6,100 km2) in the Arctic National Wildlife Refuge to oil and gas drilling
These people tried to sell our national parks, cut revenue, and increased expenses.
And also this bill changed the rules for R&E Amortization.
From "Ask HN: How are you handling Section 174 changes for bootstrapped companies?" (2023) https://news.ycombinator.com/item?id=34627712 :
> To be clear, you will eventually get the taxes you pay back over the next 5 years. But how are bootstrapped companies without access to large capital reserves or investment supposed to come up with the money to pay these tax bills while they wait it out? For every dollar you spend on making software, you've now got to have 30+ cents in reserve just to pay the tax bill for the year!
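A hedged sketch of the arithmetic behind the "30+ cents" figure above, assuming a 21% federal corporate rate (higher once state tax is included) and five-year amortization with a half-year convention, so only 10% of R&D spend is deductible in year one:

```python
def year1_extra_tax_per_dollar(tax_rate=0.21, years=5, half_year=True):
    # Before the Section 174 change: deduct 100% of R&D in year 1.
    # After: amortize over `years`; the half-year convention means only
    # half of one year's share (10% for 5 years) is deductible in year 1.
    year1_deduction = (0.5 if half_year else 1.0) / years
    deferred = 1.0 - year1_deduction        # income no longer offset
    return deferred * tax_rate              # extra year-1 tax per $1 of R&D

print(round(year1_extra_tax_per_dollar(), 3))        # ~0.189 federal-only
print(round(year1_extra_tax_per_dollar(0.30), 3))    # ~0.27 with state tax
```

With a combined federal+state rate above ~33%, the extra year-one tax crosses 30 cents per R&D dollar, matching the parent's figure.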
Q: "Did the 2017 tax cut—the Tax Cuts and Jobs Act—pay for itself?" (2020) https://www.brookings.edu/articles/did-the-2017-tax-cut-the-...
A: No. The TCJA did not pay for itself.
> Did the TCJA spur enough growth to maintain federal revenue levels?
> The right question: What would revenues have been without the TCJA?
Coffee grounds make concrete 30% stronger
If this helps to reduce the quantity of concrete used, that can only be a good thing. The cement industry is responsible for 8% of all greenhouse gas emissions and an enormous quantity of particulates in the atmosphere; we really need to use less concrete!
I wonder if there are other food waste products that can also be used? It would be awesome if they found something that is a waste product from food mass-production, something available in bulk with simple logistics.
"Baking Soda Saves the World: New Additive in Concrete Mix Could Slash Carbon Emissions" (2023) https://scitechdaily.com/baking-soda-saves-the-world-new-add... :
"Cementing CO2 into C-S-H: A step toward concrete carbon neutrality" (2023) https://academic.oup.com/pnasnexus/article/2/3/pgad052/70895...
Looks like Baking Soda / Sodium Bicarbonate may also be useful for Hydrogen storage:
"A baking soda solution for clean hydrogen storage" (2023) PNNL scientists investigate the promising properties of a common, Earth-abundant salt https://www.sciencedaily.com/releases/2023/06/230612200329.h...
Sodium Bicarbonate is made from Sodium Carbonate.
Sodium Bicarbonate: https://en.wikipedia.org/wiki/Sodium_bicarbonate
Sodium Carbonate: https://en.wikipedia.org/wiki/Sodium_carbonate :
> Historically, it was extracted from the ashes of plants grown in sodium-rich soils. Because the ashes of these sodium-rich plants were noticeably different from ashes of wood (once used to produce potash), sodium carbonate became known as "soda ash".[12][full citation needed] It is produced in large quantities from sodium chloride and limestone by the Solvay process, as well as by carbonating sodium hydroxide
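Sketching the standard textbook chemistry connecting the two (ordinary Solvay-process equations; not stated in the linked articles):

    NaCl + NH3 + CO2 + H2O -> NaHCO3 + NH4Cl   (Solvay: brine + ammonia + CO2)
    2 NaHCO3 -> Na2CO3 + H2O + CO2             (calcining bicarbonate to soda ash)
    Na2CO3 + CO2 + H2O -> 2 NaHCO3             (carbonating soda ash back to bicarbonate)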
Physicists use a 350-year-old theorem to reveal new properties of light waves
From https://phys.org/news/2023-08-physicists-year-old-theorem-re... :
> The work, led by Xiaofeng Qian, assistant professor of physics at Stevens and reported in the August 17 online issue of Physical Review Research, also proves for the first time that a light wave's degree of non-quantum entanglement exists in a direct and complementary relationship with its degree of polarization. As one rises, the other falls, enabling the level of entanglement to be inferred directly from the level of polarization, and vice versa. This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.
> [..] Qian's team interpreted the intensity of a light as the equivalent of a physical object's mass, then mapped those measurements onto a coordinate system that could be interpreted using Huygens' mechanical theorem. "Essentially, we found a way to translate an optical system so we could visualize it as a mechanical system, then describe it using well-established physical equations," explained Qian.
> Once the team visualized a light wave as part of a mechanical system, new connections between the wave's properties immediately became apparent—including the fact that entanglement and polarization stood in a clear relationship with one another.
Does light have mass?
Photons cannot have mass in GR (General Relativity) because E=mc^2?
How do solar sails achieve thrust from photons and solar pressure if photons are massless?
Photons are massless but have momentum.
So things with momentum but no mass can transfer such force though they are massless?
Yes (some people would say the photon has a "mass equivalence" but that's really just conceptual)! One of the greatest scientific discoveries of all time:
https://en.wikipedia.org/wiki/Radiation_pressure
"""Johannes Kepler put forward the concept of radiation pressure in 1619 to explain the observation that a tail of a comet always points away from the Sun.[9]
The assertion that light, as electromagnetic radiation, has the property of momentum and thus exerts a pressure upon any surface that is exposed to it was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900[10] and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901."""
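A back-of-envelope sketch of the solar-sail question above, using the standard radiation-pressure relation (photon momentum p = E/c, doubled on perfect reflection) and assuming ~1361 W/m^2 solar irradiance at 1 AU; the numbers are illustrative:

```python
C = 299_792_458.0        # speed of light in vacuum, m/s
IRRADIANCE_1AU = 1361.0  # approximate solar irradiance at 1 AU, W/m^2

def sail_thrust(area_m2, reflectivity=1.0, irradiance=IRRADIANCE_1AU):
    # Radiation pressure: absorbed light gives I/c, perfectly
    # reflected light gives 2*I/c (momentum reverses direction).
    pressure = (1.0 + reflectivity) * irradiance / C   # N/m^2
    return pressure * area_m2                          # newtons

# A 32 m^2 sail (roughly LightSail 2's sail area) at 1 AU:
f = sail_thrust(32.0)
print(f)   # a fraction of a millinewton: tiny, but continuous
```

The thrust is minuscule, but it never runs out of propellant, which is why it integrates into useful delta-v over months.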
Mass-energy equivalence: https://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalenc...
E=mc^2
E=(m)(c^2)
E=(0)(c^2)
E=0
But that's for at-rest inertia. Photons are only very rarely if ever "at rest". (Are photons "at relative rest" in Lagrangian points, accretion discs, superfluids, 0 Kelvin, and/or when created from just n-body gravity, etc.?)
Energy-Momentum relation: https://en.wikipedia.org/wiki/Energy%E2%80%93momentum_relati... :
> [Energy-Momentum relation] is the extension of mass–energy equivalence for bodies or systems with non-zero momentum. It can be written as the following equation:
E^2 = (p^2)(c^2)+(m^2)(c^4)
E^2 = (p^2)(c^2) + (0)(c^4)
E^2 = (p^2)(c^2)
E = pc
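The massless limit can be sanity-checked numerically against the full energy-momentum relation (illustrative values only):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def energy(p, m):
    # Full relation: E^2 = (pc)^2 + (m c^2)^2
    return math.sqrt((p * C) ** 2 + (m * C ** 2) ** 2)

# Massless photon: the relation collapses to E = pc.
p_photon = 1e-27           # kg*m/s, roughly a visible-light photon
assert math.isclose(energy(p_photon, 0.0), p_photon * C)

# Massive particle at rest: it collapses to E = m c^2.
m_e = 9.109e-31            # electron mass, kg
assert math.isclose(energy(0.0, m_e), m_e * C ** 2)
```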
c is the speed of light in a vacuum; without superfluidity (which occurs in helium in deep space, for example); and without nonlocal entanglement; and without loophole-free solutions to spacetime (quantum teleportation). Anyway, further from "Energy-momentum relation":
> The Dirac sea model, which was used to predict the existence of antimatter, is closely related to the energy–momentum relation
From "Dirac Sea" https://en.wikipedia.org/wiki/Dirac_sea :
> The Dirac sea interpretation and the modern QFT interpretation are related by what may be thought of as a very simple Bogoliubov transformation, an identification between the creation and annihilation operators of two different free field theories.
> [...] Dirac sea theory has been displaced by quantum field theory, though they are mathematically compatible
SQG (Superfluid Quantum Gravity) says there needn't be antimatter; there is pressure and there are phases of matter in a varyingly superfluidic cosmos. The Dirac sea Wikipedia article also mentions SQG.
And further from "Energy-Momentum relation":
> The quantities E, p, E′, p′ are all related by a Lorentz transformation. The relation allows one to sidestep Lorentz transformations when determining only the magnitudes of the energy and momenta by equating the relations in the different frames
> But that's for at-rest inertia. Photons are only very rarely if ever "at rest".
Not just rarely; photons are never at rest, period. In circumstances where one might intuitively think they might change speed, instead they change frequency.
That includes materials with a different index of refraction than vacuum: photons do not change speed there either, though they do appear to. As a shorthand it's often said that the speed of light in e.g. glass is slower than in a vacuum, but it really is just a shorthand; that's a can of worms.
Anyway your second equation is the full version of your much more famous first equation precisely because we need the extra term for photons (or any other massless things).
"Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://dx.doi.org/10.1103/PhysRevResearch.5.033110
Christiaan Huygens > Legacy: https://en.wikipedia.org/wiki/Christiaan_Huygens
Horologium Oscillatorium by Huygens: https://en.wikipedia.org/wiki/Horologium_Oscillatorium
Parallel axis theorem: https://en.wikipedia.org/wiki/Parallel_axis_theorem :
> The parallel axis theorem, also known as Huygens–Steiner theorem, or just as Steiner's theorem,[1] named after Christiaan Huygens and Jakob Steiner, can be used to determine the moment of inertia or the second moment of area of a rigid body about any axis, given the body's moment of inertia about a parallel axis through the object's center of gravity and the perpendicular distance between the axes
...
Harmonic oscillator: https://en.wikipedia.org/wiki/Harmonic_oscillator
Quantum Harmonic Oscillator: https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
Pendulum (mechanics) https://en.wikipedia.org/wiki/Pendulum_(mechanics)
Artificial intelligence is ineffective and potentially harmful for fact checking
What about systems like pdfgpt and knowledge_gpt; that, in returning citations from the trained texts, might be useful for legal discovery with admissibility?
pdfgpt: https://github.com/bhaskatripathi/pdfGPT
knowledge_gpt: https://github.com/mmz-001/knowledge_gpt
Are LLM tools better or worse than e.g. meilisearch or elasticsearch for searching with snippets over a set of document resources?
How does search compare to generating things with citations?
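A toy sketch of the "snippets with citations" idea: naive keyword counting over a document set, returning the best snippet together with its source id. This is nothing like meilisearch's or elasticsearch's actual scoring; it's purely illustrative, and all names here are made up.

```python
def search_with_citations(query, docs, top_k=1):
    # docs: {source_id: text}. Score = total count of query terms present.
    terms = query.lower().split()
    scored = []
    for source, text in docs.items():
        lowered = text.lower()
        score = sum(lowered.count(term) for term in terms)
        if score:
            # Keep a short snippet alongside the citation (source id).
            scored.append((score, source, text[:80]))
    scored.sort(reverse=True)
    return [(source, snippet) for _, source, snippet in scored[:top_k]]

docs = {
    "contract.pdf": "The licensee shall indemnify the licensor against claims.",
    "memo.txt": "Lunch is at noon.",
}
print(search_with_citations("indemnify licensor", docs))
```

Unlike a generative answer, every result here is trivially attributable to a source document, which is the property that matters for admissibility.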
[deleted]
Use both ChatGPT and Bard in one window - Save 5 hours a week
ChainForge also does LLM output comparisons
"Show HN: ChainForge, a visual tool for prompt engineering and LLM evaluation" https://news.ycombinator.com/item?id=37038053
[deleted]
US judge: Art created solely by artificial intelligence cannot be copyrighted
Cell therapy repairs cornea damage with patient's stem cells gives trial results
Which brand of CL (Contact Lens) did they use for the limbal stem cell delivery?
From "Sight for sore eyes" (2009) https://newsroom.unsw.edu.au/news/health/sight-sore-eyes :
"A contact lens-based technique for expansion and transplantation of autologous epithelial progenitors for ocular surface reconstruction" (2009) http://dx.doi.org/10.1097/TP.0b013e3181a4bbf2
In this study, they found that only one brand of contact lens (Bausch and Lomb, IIRC) worked well as a scaffold for the stem cells.
"Contact lens delivery of stem cells for restoring the ocular surface." (2016) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C33&q=Con...
List of soft contact lens materials: https://en.wikipedia.org/wiki/List_of_soft_contact_lens_mate... ; Hydrogels
All of Physics in 9 Lines
> The 9 lines contain all natural sciences. They contain physics, chemistry, material science, biology, medicine, geology, astronomy and all engineering disciplines.
so, can you go from these to predict the structure of DNA/RNA and how they work?
also, I don't like the use of "nature" as some sort of conscious entity that prefers some behaviors to others.
nature is left as an exercise to the reader
Neuroscientists successfully test theory that forgetting is a form of learning
"Forgetful? It might actually make you smarter, study says" (2019) https://www.cnn.com/2017/06/30/health/poor-memory-smarter-st... :
> [...] as new brain cells are formed in the hippocampus -- a region of the brain associated with learning new things -- those new connections overwrite old memories and make them harder to access.
Adult neurogenesis > Implications > Role in learning: https://en.wikipedia.org/wiki/Adult_neurogenesis
"The Absent-Minded Professor" (1961) https://en.wikipedia.org/wiki/The_Absent-Minded_Professor
Absent-minded professor (stock character) https://en.wikipedia.org/wiki/Absent-minded_professor
Sargablock: Bricks from Seaweed
In short: sargassum seaweed washes onto shores anyway, so why not use it to make bricks? They form the bricks with a standard block-making machine, dry them in the sun, build small houses out of them, and plan to donate these houses to local families in need.
They say the bricks are 40% seaweed and 60% "other organic material". I wonder what that material is.
I also wonder how well the sargassum bricks withstand weather.
> 60% is "other organic material". I wonder what material is this.
IIRC from the YT video in my top-level comment, it's dirt.
> I also wonder how well the sargassum bricks withstand weather.
dopa42365's comment (https://news.ycombinator.com/item?id=37176903) links a paper with the following claim:
> Studies carried out by the Secretariat of Ecology and Environment of Quintana Roo state, Mexico, found that the resistance of the bricks is 75–120 kgf/cm2, while its durability can be up to 120 years, regardless of the region or climate type where they are used (Desrochers et al., 2020). Currently there is no other sustainable material as resistant, or with such a long useful life. Sargabricks are therefore seen as a viable and safe material for the creation of ecofriendly buildings.
(edit: But the OP link also contains this 120y assertion. I haven't checked the paper's citations, so I'm not sure if everyone's depending on the cited study's analysis, or if that study is laundering claims by the people making the bricks)
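For readers more used to SI units, the cited 75-120 kgf/cm^2 converts as follows (straight unit arithmetic, not from the paper; the fired-clay comparison range is a rough textbook figure):

```python
KGF_CM2_TO_MPA = 0.0980665   # 1 kgf/cm^2 = 0.0980665 MPa (standard gravity)

lo, hi = 75, 120             # cited sargabrick compressive strength, kgf/cm^2
lo_mpa = lo * KGF_CM2_TO_MPA
hi_mpa = hi * KGF_CM2_TO_MPA
print(round(lo_mpa, 1), round(hi_mpa, 1))   # ~7.4 to ~11.8 MPa
# For scale: common fired clay bricks often fall in the ~7-20 MPa range,
# so the claimed strength is plausible for low-rise construction.
```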
> I also wonder how well the sargassum bricks withstand weather.
In the video they mention the first house built using these bricks was in 2018. It's still standing and looks like it's in good condition. They also mention the homes they build with these bricks are strong enough to withstand hurricanes.
Round homes without sharp corners fare better than rectangular homes in hurricanes probably because the wind shearing force is lower. Deltec Homes sells round, radial homes: https://www.google.com/search?q=deltec+youtube+hurricane
Hempcrete on structural forms like LEGOs another type of CEB Compressed Earth Block: https://en.wikipedia.org/wiki/Compressed_earth_block
Radial blocks make turning corners with various radii easier; so that the wind doesn't grab the ~K'Nex cube of the structure and shear it under the roof and around the stairs.
The link seems to be Slashdotted into oblivion.
Here's an article in the Mexico News Daily explaining it:
https://mexiconewsdaily.com/news/mr-sargassum-built-13-house...
From the article https://fortomorrow.org/explore-solutions/sargablock :
> I adjusted a machine designed to make adobe bricks so that it can process a mix of 40% sargassum and 60% other organic materials for the Sargablock. The machine can turn out 1,000 blocks a day, and after four hours of baking in the sun, they are dried and ready to be used
Bali rice experiment cuts greenhouse gas emissions and increases yields
The father of this method is Masanobu Fukuoka - One Straw Revolution, aka natural farming.
https://en.wikipedia.org/wiki/Masanobu_Fukuoka
"in 1947 he took up natural farming again with success, using no-till farming methods to raise rice and barley ... organic and chemical-free rice farming"
Nope. Nope. Nope.
"Organic" farming doesn't scale. You still need fertilizer to achieve reasonable yields long-term, though the new method optimizes the fertilizer use.
Historically, this was done by letting fields flood, bringing in new topsoil rich in nutrients. If you can do that, then it's fine. But there's simply not enough regularly flooded land for this to work.
You seem very certain that fertilizer is required. Given what we know of agriculture today, that seems reasonable.
But something, somehow grew 4-8ft of deep topsoil on the prairies in the US that are now prime farmland. Haber Bosch wasn't around 300 or 3000 years ago, nor were people tilling and planting. How did these plants get enough nitrogen to thrive?
How do current grass fed beef ranchers grow enough grass to grow cattle today? Surely they aren't fertilizing the grass.
Some folks say that there are free living nitrogen fixing bacteria that can live in the soil, provided that they're getting sugars from growing plants. It seems reasonable.
> Haber Bosch wasn't around 300 or 3000 years ago, nor were people tilling and planting. How did these plants get enough nitrogen to thrive?
Are you claiming that 9 billion people lived 300 or 3000 years ago and planted and regularly harvested high yield crops to feed themselves?
> How do current grass fed beef ranchers grow enough grass to grow cattle today? Surely they aren't fertilizing the grass.
Yes, yes they are.
Food security has probably been an issue since forever.
History of Agriculture: https://en.wikipedia.org/wiki/History_of_agriculture :
> The development of agriculture about 12,000 years ago changed the way humans lived. They switched from nomadic hunter-gatherer lifestyles to permanent settlements and farming. [1]
World population: https://en.wikipedia.org/wiki/World_population :
> It took over 200,000 years of human prehistory and history for the human population to reach one billion and only 219 years more to reach 8 billion. [3]
Food security: https://en.wikipedia.org/wiki/Food_security
Soil fertility > Soil depletion: https://en.wikipedia.org/wiki/Soil_fertility :
> Topsoil depletion occurs when the nutrient-rich organic topsoil, which takes hundreds to thousands of years to build up under natural conditions, is eroded or depleted of its original organic material.[13] Historically, many past civilizations' collapses can be attributed to the depletion of the topsoil. Since the beginning of agricultural production in the Great Plains of North America in the 1880s, about one-half of its topsoil has disappeared. [14]
> Depletion may occur through a variety of other effects, including overtillage (which damages soil structure), underuse of nutrient inputs which leads to mining of the soil nutrient bank, and salinization of soil.
Soil management : https://en.wikipedia.org/wiki/Soil_management :
> Soils can sequester carbon dioxide (CO2) from the atmosphere, primarily by storing carbon as soil organic carbon (SOC) through the process of photosynthesis. CO2 can also be stored as inorganic carbon but this is less common. Converting natural land to agricultural land releases carbon back into the atmosphere. The amount of carbon a soil can sequester depends on the climate and current and historical land-use and management.[6] Cropland has the potential to sequester 0.5–1.2 Pg C/year and grazing and pasture land could sequester 0.3–0.7 Pg C/year.[7] Agricultural practices that sequester carbon can help mitigate climate change.[8] Intensive farming deteriorates the functionality of soils.
> Methods that significantly enhance carbon sequestration in soil include no-till farming, residue mulching, cover cropping, and crop rotation, all of which are more widely used in organic farming than in conventional farming.[9][10] Because only 5% of US farmland currently uses no-till and residue mulching, there is a large potential for carbon sequestration
FWIU new methods of onsite green ammonia production for fertilizer could help solve food security, but the nitrogen runoff is causing algal blooms in waterways.
A (soap-like) surfactant like JADAM Wetting Agent (JWA) which causes the applied treatments to stick to the plants might reduce fertilizer runoff levels; but Nitrogen-based fertilizer alone does not regenerate all of the components of topsoil. https://www.google.com/search?q=jadam+jwa
Mycorrhizae fungus in the soil help get nutrients to plant roots, and they need to be damp in order to prevent soil from turning to dirt due to solar radiation and oxidation. https://youtube.com/@soilfoodwebschool
Urea breaks down into Nitrogen and CO2 in the soil. Maybe that used to scale (before we had 8b people on here).
TIL that rainwater is higher in nitrogen from falling through air (which contains N2, O2, H2O, CO2, and trace gases).
Irrigation > Challenges https://en.wikipedia.org/wiki/Irrigation :
> However, if the soil is under irrigated, it gives poor soil salinity control which leads to increased soil salinity with the consequent buildup of toxic salts on the soil surface in areas with high evaporation. This requires either leaching to remove these salts and a method of drainage to carry the salts away. Irrigation with saline or high-sodium water may damage soil structure owing to the formation of alkaline soil.
From "Idiocracy":
> "For the last time, I'm pretty sure what's killing the crops is this Brawndo stuff.
>> But Brawndo's got what plants crave. It's got electrolytes. [Sodium (NaCl), Potassium (K), Water (H2O)]
TIL that basic soil and plant health and farming things are new material for me, an otherwise qualified eater.
Agree, very interesting stuff. Before I get to the soils, animal ag intro.
https://www.colorado.edu/ecenter/2022/03/15/it-may-be-uncomf...
The animal agriculture industry is the leading cause of most environmental degradation that is currently occurring. These detrimental effects happen due to overgrazing, habitat loss, overfishing, and more. We are currently in the next mass extinction and animal agriculture is only fueling this catastrophe. Waste in the meat industry, too, is a major problem in and of itself.
https://www.pnas.org/doi/10.1073/pnas.142033699
Tracking the ecological overshoot of the human economy
Our accounts indicate that human demand may well have exceeded the biosphere's regenerative capacity since the 1980s. According to this preliminary and exploratory assessment, humanity's load corresponded to 70% of the capacity of the global biosphere in 1961, and grew to 120% in 1999.
https://www.nature.com/articles/s41893-020-00603-4
The carbon opportunity cost of animal-sourced food production on land ... shifts in global food production to plant-based diets by 2050 could lead to sequestration of 332–547 GtCO2, equivalent to 99–163% of the CO2 emissions budget consistent with a 66% chance of limiting warming to 1.5 °C
And back to soils ...
https://www.smithsonianmag.com/science-nature/soil-has-micro...
Soil Has a Microbiome, Too - The unique mix of microbes in soil has a profound effect on which plants thrive and which ones die
https://www.theguardian.com/environment/2022/may/07/secret-w...
The secret world beneath our feet is mind-blowing – and the key to our planet’s future - Don’t dismiss soil: its unknowable wonders could ensure the survival of our species
https://news.harvard.edu/gazette/story/2023/05/how-climate-c...
Getting to root of possible carbon storage changes due to climate change
https://regenerationinternational.org/2023/02/16/maximizing-...
Maximizing Photosynthesis and Root Exudates through Regenerative Agriculture to Increase Soil Organic Carbon to Mitigate Climate Change
https://www.theguardian.com/environment/2023/jul/04/improvin...
Improving soil could keep world within 1.5C heating target, research suggests
Better farming techniques across the world could lead to storage of 31 gigatonnes of carbon dioxide a year, data shows
The US climate law is fueling a factory frenzy
It's insane to me that the Democrats got this bill passed, along with the infrastructure law, CHIPS and others. Were any significant laws passed during the previous presidential administration besides the Tax Cuts and Jobs Act? I feel like nothing happened in those years.
Well they did put tariffs on imported solar panels, which was intended to (and to some degree did in fact) encourage local production, but there was a lot of opposition.
There's a saying: "Only Nixon could go to China", from the fact that only a president renowned for vehement anti-communism could open relations with communist China. The Democrats, at least since Bill Clinton's administration, had been very pro-free-trade; after 2016 the Republicans were ready to do this sort of thing, but the Democrats were still pro-globalization. Because the Democrats were the second party to swing against the idea that all trade is good no matter the country or industry, they were better positioned to get this stuff passed.
It should also be said that covid closing of borders, post-covid supply chain problems, and Russia's invasion of Ukraine all swayed opinions as well.
Did the solar industry ask for such trade protectionism instead of free trade?
Are industries with tariff subsidies competitive at international price points? Given domestic subsidy, will they be efficient enough to compete in international markets?
How do non- free-trade, non- fair-trade agreements affect other trade agreements?
Trade Protectionism: https://en.wikipedia.org/wiki/Protectionism
Yeah, I am a protectionist, for sure. Especially for industries like power generation or semiconductors that impact many other industries, and especially if the nation is in danger of losing that industry entirely, such that the skills necessary to compete in it in the future can be lost.
Is it ever shown that the net effect of subsidy is preferable to competing at international prices?
"Can’t build a decent car, can’t make a TV set or a VCR worth a f, got no steel industry left, can’t educate our young people, can’t get health care to our old people, but we can [... war euphemism]" -- George Carlin (1992)
Have subsidies saved comparably inefficient [domestic manufacturing] markets, or have trade protectionist subsidies resulted in net unintended consequences when the externalities of "all you peoples' special deals" are accounted for?
Are information services and financial services a viable strategy for long-term growth in GDP?
Privacy friendly ESP32 smart doorbell with Home Assistant local integration
Nice. I like the lights.
You can also slap a reed switch (e.g. a normal door open/close sensor) near your dumb doorbell's magnetic coils that do the ringer and have it send the info to home assistant.
I hooked a $0.50 reed switch up to my dumb doorbell and ran it to a digital IO port on a ESP that's powered from the same power source as the doorbell coils. When it senses a doorbell press, it sends me an email snapshot from my local-only front door cam and plays a recording of the doorbell chime on my upstairs stereo (where otherwise I can't really hear the chime). Very convenient and fun.
Originally, I tried monitoring voltage on the coils with the analog input, but it was way too unreliable. The simpler reed switch method of detecting current is rock solid.
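The press-detection logic can be sketched without any hardware. The function below is hypothetical (not the parent's actual code); it just shows the debounce idea, collapsing a bouncy 0/1 sample stream from a reed switch into discrete press events:

```python
def count_presses(samples, hold=3):
    # A press fires only after `hold` consecutive 1s, and re-arms only
    # after `hold` consecutive 0s, so contact bounce can't double-count.
    presses, ones, zeros, armed = 0, 0, 0, True
    for s in samples:
        if s:
            ones += 1
            zeros = 0
            if armed and ones >= hold:
                presses += 1
                armed = False
        else:
            zeros += 1
            ones = 0
            if zeros >= hold:
                armed = True
    return presses

# One physical press with contact bounce on both edges:
stream = [1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
print(count_presses(stream))  # -> 1
```

On an actual ESP32 the same logic would run against periodic GPIO reads, with the event forwarded to Home Assistant.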
> same power source as the doorbell coils
Beautiful thinking. I wonder what other obscure sources for voltage in a house could be used? The HVAC head unit, analog telephone, ...
The POTS [(Plain Old Telephone Service)] phone line, with all phones on-hook, should measure around 48 volts DC. This drops down to the 3 to 9 volt range when a telephone on the line goes off-hook. An off-hook telephone typically draws about 20 milliamps of DC current to operate, at a DC resistance around 180 ohms. The remaining voltage drop occurs over the copper wire path and over the telephone company circuits, where there is usually 200 to 400 ohms of series resistance to protect from short circuits and decouple the audio circuits.
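The quoted voltage ranges translate into a trivial line-state classifier; the thresholds below are illustrative, taken loosely from the numbers above (DC only, ignoring the ~90 V AC ring signal):

```python
def pots_line_state(volts_dc):
    # Thresholds loosely derived from the ranges quoted above:
    # ~48 V on-hook, ~3-9 V off-hook, ~0 V dead line.
    if volts_dc >= 40:
        return "on-hook"
    if 3 <= volts_dc <= 15:
        return "off-hook"
    if volts_dc < 1:
        return "dead/disconnected"
    return "unknown"

print(pots_line_state(48.0))  # -> on-hook
print(pots_line_state(6.5))   # -> off-hook
```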
If the POTS cables daisy-chained between the telephone jacks after the telco's box are 4-pair Ethernet cable, there are 8 wires between each receptacle/port, less 2 wires for each phone line. If only one phone line has ever been connected, that leaves 3 spare wire pairs that could be connected in a local private closed loop to carry (modulated) DC voltage for other applications; though you'd need to put a thermal sticker label after the box and behind/on all the phone jacks to warn about the voltage (which is a presumed risk when handling 4-pair cabling as an installer, as a (demo) contractor, or as a future occupant).
Gigabit Ethernet uses all 4 pairs for data, so Gigabit PoE and faster must have data+power on one or more pairs.
From https://en.m.wikipedia.org/wiki/Power_over_Ethernet :
> In addition to standardizing existing practice for spare-pair (Alternative B), common-mode data pair power (Alternative A) and 4-pair transmission (4PPoE), the IEEE PoE standards provide for signaling between the power sourcing equipment (PSE) and powered device (PD). This signaling allows the presence of a conformant device to be detected by the power source, and allows the device and source to negotiate the amount of power required or available while avoiding damage to non-compatible devices.
Karpathy's llama2.c ported to pure Python
I find these efforts impressive, but what is the value proposition here? (I'm not just talking about this fork, but about Karpathy's llama2.c as well.)
Quantum Echoes: A Revolutionary Method to Store Information as Sound Waves
From https://news.ycombinator.com/item?id=35622236 https://westurner.github.io/hnlog/#comment-35622236 :
> If quantum information is never destroyed – and classical information is quantum information without the complex term i – perhaps our brain states are already preserved in the universe; like reflections in water droplets in the quantum foam.
> Lagrangian points, non-intersecting paths through accretion discs, and microscopic black holes all preserve data - modulated energy; information - for some time before reversible or unreversible transformation.
How do phonons interact with such phenomena?
> Perhaps Superfluid quantum gravity can afford insight into the interior topology of black holes and other quantum foam phenomena?
“A quantum electromechanical interface for long-lived phonons” (2023) https://www.nature.com/articles/s41567-023-02080-w :
> Abstract: In single crystals, the suppression of intrinsic loss channels at low temperatures leads to exceptionally long mechanical lifetimes. Quantum electrical control of such long-lived mechanical oscillators would enable the development of phononic memory elements, sensors and transducers. The integration of piezoelectric materials is one approach to introducing electrical control, but the challenges of combining heterogeneous materials lead to severely limited phonon lifetimes. Here we present a non-piezoelectric silicon electromechanical system capable of operating in the gigahertz frequency band. Relying on a driving scheme based on electrostatic fields and the kinetic inductance effect in disordered superconductors, we demonstrate a parametrically enhanced electromechanical coupling of g/2π = 1.1 MHz, sufficient to enter the strong-coupling regime with a cooperativity of C = 1,200. In our best devices, we measure mechanical quality factors approaching Q ≈ 10^7, measured at low phonon numbers and millikelvin temperatures. Despite using strong electrostatic fields, we find the cavity mechanics system in the quantum ground state, verified by thermometry measurements. Simultaneously achieving ground-state operation, long mechanical lifetimes and strong coupling sets the stage for employing silicon electromechanical devices in hybrid quantum systems and as a tool for studying the origins of acoustic loss in the quantum regime.
"Gigahertz topological valley Hall effect in NEMS phononic crystals" (2022) https://news.ycombinator.com/item?id=31049907
Can qubits be stored by transforming a [regular] [crystal] lattice, playing signal and measuring the signal after it is affected by the encoding(s) in the lattice?
Isn't diffraction of photons in a crystal also enough to recreate a qubit / wave function / histogram?
I must be failing to comprehend some limit to applied holography; holographic data storage?
Phonon (quantum sound wave) https://en.wikipedia.org/wiki/Phonon
Pentium floating-point division bug (1994)
That’s why they called it the Pentium and not the 586: on the bench it kept coming out at 585.9999999. I’m sorry.
To pay my way for the silliness: I will highly recommend “The Pentium Chronicles” for a view into that time and place.
It covers the development of the P6 arch, but there is some inside baseball about the P5. And it’s just one of the best books I’ve ever read about really amazingly well-run engineering efforts.
0.999 repeating is an infinite summation that converges to 1.
There are multiple proofs of this, but my favorite simple, informal one is that there is no positive real number between 0.999 infinitely repeating and 1: the difference is smaller than every positive real, so it must be 0. So mathematically, they are equal.
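The informal argument can also be made formal as a geometric series:

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; \frac{9/10}{1 - 1/10} \;=\; 1 .
```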
Show HN: Tetris, but the blocks are ARM instructions that execute in the browser
OFRAK Tetris is a project I started at work about two weeks ago. It's a web-based game that works on desktop and mobile. I made it for my company to bring to events like DEF CON, and to promote our binary analysis and patching framework called OFRAK.
In the game, 32-bit, little-endian ARM assembly instructions fall, and you can modify the operands before executing them on a CPU emulator. There are two segments mapped – one for instructions, and one for data (though both have read, write, and execute permissions). Your score is a four byte signed integer stored at the virtual address pointed to by the R12 register, and the goal is to use the instructions that fall to make the score value in memory as high as possible. When it's game over, you can download your game as an ELF to relive the glory in GDB on your favorite ARM device.
The CPU emulator is a version of Unicorn (https://www.unicorn-engine.org/) that has been cross-compiled to WebAssembly (https://alexaltea.github.io/unicorn.js/), so everything on the page runs in the browser without the need for any complicated infrastructure on the back end.
Since I've only been working on this for a short period of time leading up to its debut at DEF CON, there are still many more features I'd eventually like to implement. These include adding support for other ISAs besides ARM, adding an instruction reference manual, and lots of little cleanups, bug fixes, and adjustments.
My highest score is 509,644,979, but my average is about 131,378.
I look forward to feedback, bug reports, feature requests, and strategy discussions!
ASM years ago. MOV'ing without any initial data feels unnerving. WASM and JupyterLite would be cool, too.
Category:Instruction_set_listings has x86 but no aarch64: https://en.wikipedia.org/wiki/Category:Instruction_set_listi...
/? jupyter asm [kernel]:
- "Introduction to Assembly Language Tutorial.ipynb" w/ the emu86 jupyter kernel which shows register state after ops: https://github.com/gcallah/Emu86/blob/master/kernels/Introdu...
- it looks like emu86 already also supports RISC, MIPS, and WASM but not yet ARM: https://github.com/gcallah/Emu86/blob/master/assembler/WASM/...
- DeepHorizons/iarm: https://github.com/DeepHorizons/iarm/blob/master/iarm_kernel... https://github.com/DeepHorizons/iarm/blob/master/iarm/arm_in... MOV https://github.com/DeepHorizons/iarm/blob/master/tests/iarm/... :
> IArm is an ARM interpreter for the ARMv6 THUMB instruction set (More specifically for the ARM Cortex M0+ CPU). It supports almost 100% of the instructions, and some assembler directives. There is also its Jupyter kernel counterpart so it can be used with Jupyter notebooks.
- JupyterLite > Adding other kernels; emu86, iarm, WASM, : https://jupyterlite.readthedocs.io/en/latest/howto/configure... [... https://github.com/jupyterlite/jupyterlite/discussions/968#d... ]
"Ask HN: How did you learn x86-64 assembly?" (2020) re: HLA, : https://news.ycombinator.com/item?id=23931373 re: the Diaphora bindiff tool and ghidra, which has GDB support now FWIU: https://news.ycombinator.com/item?id=36454485
Do Machine Learning Models Memorize or Generalize?
If you omit the training data points where the baseball hits the ground, what will a machine learning model predict?
You can train a classical ML model on the known orbits of the planets in the past, but it can presumably never predict orbits given unseen n-body gravity events like another dense mass moving through the solar system because of classical insufficiency to model quantum problems, for example.
Church-Turing-Deutsch doesn't say there could not exist a Classical / Quantum correspondence; but a classical model on a classical computer cannot be sufficient for quantum-hard problems. (e.g. Quantum Discord says that there are entanglement and non-entanglement nonlocal relations in the data.)
Regardless of whether they sufficiently generalize, [LLMs, ML Models, and AutoMLs] don't yet Critically Think and it's dangerous to take action without critical thought.
Critical Thinking; Logic, Rationality: https://en.wikipedia.org/wiki/Critical_thinking#Logic_and_ra...
MetaGPT: Meta Programming for Multi-Agent Collaborative Framework
Once we start talking about multi-agent AI systems I think my intuitions go out the window. If we consider that GPT-4 is about as capable as a reasonably intelligent high school graduate or college freshman, I wonder how much actual benefit we get from multi-agent setups.
I mean that intuitively I couldn't imagine replacing 1 experienced professional with 1, 2, 10, 100, or even 1000 intelligent high school graduates. Intelligence doesn't seem strictly additive across multiple individuals in all cases. But then I consider that one of the most powerful lifelines in the TV game show "Who Wants to Be a Millionaire" was "Ask the Audience". I am reminded of the cliché of the wisdom of crowds, even when the crowd is made up of non-experts.
This suggests to me that there is a kind of problem where multiple lower-power agents can reach a higher-quality solution. But there are also kinds of problems where a single higher-power intelligence will be necessary. I haven't developed an intuition for when each approach is valid.
Cosign, with extreme prejudice. We went through this with AutoGPT.
I see two camps, people quietly working, and people working on grandiose ideas from the initial rush that make good headlines but not good products.
Some informal warning signs I use subconsciously:
- "paper" on Arxiv, and it's about prompt engineering
- agent_S_
- state machine where the LLM is eating its own output repeatedly
- LLM makes decisions based on its own output.
There's 100x more alpha in the obvious stuff because even that's not well-implemented or shared widely. Ex. people are still stuck on hallucinations 90% of the time: an obvious way to handle that is doc retrieval.
Last n.b.: in 2021 I was frustrated with ~100% of models being behind locked doors. Then, I saw how GPT3.0 was working and evolving. My mantra became "products not papers." Maybe that applies downstream of the LLM now.
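As a sketch of the doc-retrieval approach mentioned above (toy word-overlap ranking for illustration; real systems use embeddings and a vector store):

```python
def retrieve(query, docs, k=2):
    """Toy retrieval: rank documents by word overlap with the query, then
    prepend the top-k to the prompt so the model can ground its answer."""
    qwords = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(qwords & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = ["the capital of France is Paris",
        "llamas are domesticated camelids",
        "Paris hosted the 1900 Olympics"]
context = retrieve("what is the capital of France", docs)
prompt = ("Answer using only this context:\n" + "\n".join(context) +
          "\nQ: What is the capital of France?")
```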
Are you saying AutoGPT has similar problems, or has it solved them?
We tried it recently and found it very challenging to get it to accomplish simple tasks.
TL;DR: AutoGPT has similar if not identical problems
re: AutoGPT
Got really excited at first; thought maybe I had missed that it was viable in a year of playing with LLMs. Tried a few demos; they didn't work. Looked into it more and confirmed a core loop involved the LLM eating its own output to make decisions.
I still kept it on my todo list to investigate, in case my earlier experiments with that approach were wrong.
I took it off the todo list later.
I saw near-universal feedback like yours; that general technique never worked IMHO, and sycophancy explains why. Paper here[1]; TL;DR: the model is very likely to agree, so critical feedback loops over multiple steps tend to settle into a loop of repeated steps.
IMHO this doesn't mean sycophancy breaks _all_ workflows, ex. a flow for writing a story involving outlining, writing, criticizing, then rewriting is a genuine real quality boost.
However, if a human does write => criticize 10 times, it keeps getting better each iteration. If you have an LLM do it 10 times, IMHO it's actively harmful after round 3.
"outline" => ["write page 1", "write page 2", "write page 3"] => ["feedback on page 1"..."feedback on page3" => ["use feedback and original draft to rewrite page1"..."page3"] => "combine pages 1 2 and 3 into cohesive story"
[1] https://www.anthropic.com/index/discovering-language-model-b...
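The staged flow sketched above, with a bounded number of criticize/rewrite rounds, might look like this (`llm()` is a hypothetical stand-in, not a real API):

```python
def llm(prompt):
    """Stand-in for a real model call; returns a canned string here."""
    return f"<output for: {prompt[:40]}>"

def write_story(topic, n_pages=3, max_rounds=3):
    outline = llm(f"Outline a story about {topic}")
    pages = [llm(f"Write page {i + 1} from outline: {outline}")
             for i in range(n_pages)]
    # Bounded criticize/rewrite rounds: past ~3, sycophantic feedback
    # tends to stop helping, so we cap the loop rather than iterating freely.
    for _ in range(max_rounds):
        feedback = [llm(f"Give critical feedback on: {p}") for p in pages]
        pages = [llm(f"Rewrite using feedback {f}: {p}")
                 for p, f in zip(pages, feedback)]
    return llm("Combine into one cohesive story: " + " ".join(pages))

story = write_story("a telephone game among robots")
```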
> However, if a human does write => criticize 10 times, it keeps getting better each iteration. If you have an LLM do it 10 times, IMHO it's actively harmful after round 3.
Telephone (game) https://en.wikipedia.org/wiki/Telephone_(game)#Game :
>> The game has no winner: the entertainment comes from comparing the original and final messages. Intermediate messages may also be compared; some messages will become unrecognizable after only a few steps.
Transmission chain method: https://en.wikipedia.org/wiki/Transmission_chain_method :
>> The transmission chain method is a method used in cultural evolution research to uncover biases in cultural transmission.[1] This method was first developed by Frederic Bartlett in 1932.[2][1]
Feedback > Electrical Engineering; Positive Feedback, Negative Feedback,: https://en.wikipedia.org/wiki/Feedback#Electronic_engineerin...
Control Theory > Stability: https://en.wikipedia.org/wiki/Control_theory#Stability
Multi-agent System > Concept: https://en.wikipedia.org/wiki/Multi-agent_system#Concept
Convergence (disambiguation) https://en.wikipedia.org/wiki/Convergence
Convergence (logic) https://en.wikipedia.org/wiki/Convergence_(logic) :
> In mathematics, computer science and logic, convergence is the idea that different sequences of transformations come to a conclusion in a finite amount of time (the transformations are terminating), and that the conclusion reached is independent of the path taken to get to it (they are confluent).
> More formally, a preordered set of term rewriting transformations are said to be convergent if they are confluent and terminating.[1]
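A tiny convergent rewriting system, for concreteness: the single rule "ba" → "ab" over strings of a's and b's is terminating and confluent, so every application order reaches the same normal form (the sorted string):

```python
def normalize(s):
    """Apply the rewrite rule 'ba' -> 'ab' until no rule applies.
    Terminating (each step reduces the number of inversions) and
    confluent (any application order yields the same normal form)."""
    changed = True
    while changed:
        changed = False
        for i in range(len(s) - 1):
            if s[i] == "b" and s[i + 1] == "a":
                s = s[:i] + "ab" + s[i + 2:]
                changed = True
                break
    return s
```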
And then
Consensus (disambiguation) https://en.wikipedia.org/wiki/Consensus_(disambiguation)
Revision is a step in a Writing Process.
Revision (writing) https://en.wikipedia.org/wiki/Revision_(writing)
Writing Process: https://en.wikipedia.org/wiki/Writing_process
Collaborative writing > Types, Tools: https://en.wikipedia.org/wiki/Collaborative_writing#Tools
And now it is time to underline the premise(s) and the conclusion(s), time to apply Critical Thinking; Logic and Rationality.
Critical Thinking > Logic and rationality: https://en.wikipedia.org/wiki/Critical_thinking#Logic_and_ra...
To simulate a full-scale multi-agent game, there would need to be a "Fourth Estate" (and maybe a Fifth Estate); a peanut gallery of dissenters with signs and no jobs.
Then, you could model consensus with LLMs and have something better than the mediocrity that sometimes results from committees.
More samples from the same LLM vs Sample different LLMs
Maybe feed one or more agents relevant encyclopedia articles as context first; which read the most encyclopedia articles first?
SQLedge: Replicate Postgres to SQLite on the Edge
This space is starting to get crowded. Can anyone compare this with some of the other solutions coming out recently?
List a couple?
#. SQLite WAL mode
From https://www.sqlite.org/isolation.html https://news.ycombinator.com/item?id=32247085 :
> [sqlite] WAL mode permits simultaneous readers and writers. It can do this because changes do not overwrite the original database file, but rather go into the separate write-ahead log file. That means that readers can continue to read the old, original, unaltered content from the original database file at the same time that the writer is appending to the write-ahead log
#. superfly/litefs: a FUSE-based file system for replicating SQLite https://github.com/superfly/litefs
#. sqldiff: https://www.sqlite.org/sqldiff.html https://news.ycombinator.com/item?id=31265005
#. dolthub/dolt: https://github.com/dolthub/dolt :
> Dolt is a SQL database that you can fork, clone, branch, merge, push and pull just like a Git repository. [...]
> Dolt can be set up as a replica of your existing MySQL or MariaDB database using standard MySQL binlog replication. Every write becomes a Dolt commit. This is a great way to get the version control benefits of Dolt and keep an existing MySQL or MariaDB database.
#. github/gh-ost: https://github.com/github/gh-ost :
> Instead, gh-ost uses the binary log stream to capture table changes, and asynchronously applies them onto the ghost table. gh-ost takes upon itself some tasks that other tools leave for the database to perform. As result, gh-ost has greater control over the migration process; can truly suspend it; can truly decouple the migration's write load from the master's workload.
#. vlcn-io/cr-sqlite: https://github.com/vlcn-io/cr-sqlite :
> Convergent, Replicated SQLite. Multi-writer and CRDT support for SQLite
> CR-SQLite is a run-time loadable extension for SQLite and libSQL. It allows merging different SQLite databases together that have taken independent writes.
> In other words, you can write to your SQLite database while offline. I can write to mine while offline. We can then both come online and merge our databases together, without conflict.
> In technical terms: cr-sqlite adds multi-master replication and partition tolerance to SQLite via conflict free replicated data types (CRDTs) and/or causally ordered event logs.
yjs also does CRDTs (Jupyter RTC,)
#. pganalyze/libpg_query: https://github.com/pganalyze/libpg_query :
> C library for accessing the PostgreSQL parser outside of the server environment
#. Ibis + Substrait [ + DuckDB ] https://ibis-project.org/blog/ibis_substrait_to_duckdb/ :
> ibis strives to provide a consistent interface for interacting with a multitude of different analytical execution engines, most of which (but not all) speak some dialect of SQL.
> Today, Ibis accomplishes this with a lot of help from `sqlalchemy` and `sqlglot` to handle differences in dialect, or we interact directly with available Python bindings (for instance with the pandas, datafusion, and polars backends).
> [...] `Substrait` is a new cross-language serialization format for communicating (among other things) query plans. It's still in its early days, but there is already nascent support for Substrait in Apache Arrow, DuckDB, and Velox.
#. ibis-project/ibis-substrait: https://github.com/ibis-project/ibis-substrait
#. tobymao/sqlglot: https://github.com/tobymao/sqlglot :
> SQLGlot is a no-dependency SQL parser, transpiler, optimizer, and engine. It can be used to format SQL or translate between 19 different dialects like DuckDB, Presto, Spark, Snowflake, and BigQuery. It aims to read a wide variety of SQL inputs and output syntactically and semantically correct SQL in the targeted dialects.
> It is a very comprehensive generic SQL parser with a robust test suite. It is also quite performant, while being written purely in Python.
> You can easily customize the parser, analyze queries, traverse expression trees, and programmatically build SQL.
> Syntax errors are highlighted and dialect incompatibilities can warn or raise depending on configurations. However, it should be noted that SQL validation is not SQLGlot’s goal, so some syntax errors may go unnoticed.
#. benbjohnson/postlite: https://github.com/benbjohnson/postlite :
> postlite is a network proxy to allow access to remote SQLite databases over the Postgres wire protocol. This allows GUI tools to be used on remote SQLite databases which can make administration easier.
> The proxy works by translating Postgres frontend wire messages into SQLite transactions and converting results back into Postgres response wire messages. Many Postgres clients also inspect the pg_catalog to determine system information so Postlite mirrors this catalog by using an attached in-memory database with virtual tables. The proxy also performs minor rewriting on these system queries to convert them to usable SQLite syntax.
> Note: This software is in alpha. Please report bugs. Postlite doesn't alter your database unless you issue INSERT, UPDATE, DELETE commands so it's probably safe. If anything, the Postlite process may die but it shouldn't affect your database.
#. > "Hosting SQLite Databases on GitHub Pages" (2021) re: sql.js-httpvfs, DuckDB https://news.ycombinator.com/item?id=28021766
#. >> - bittorrent/sqltorrent https://github.com/bittorrent/sqltorrent
>> Sqltorrent is a custom VFS for sqlite which allows applications to query an sqlite database contained within a torrent. Queries can be processed immediately after the database has been opened, even though the database file is still being downloaded. Pieces of the file which are required to complete a query are prioritized so that queries complete reasonably quickly even if only a small fraction of the whole database has been downloaded.
#. simonw/datasette-lite: https://github.com/simonw/datasette-lite datasette, *-to-sqlite, dogsheep
"Loading SQLite databases" [w/ datasette] https://github.com/simonw/datasette-lite#loading-sqlite-data...
#. awesome-db-tools: https://github.com/mgramin/awesome-db-tools
Lots of neat SQLite/vtable/pg/replication things
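The WAL-mode simultaneous readers/writers behavior quoted above (SQLite WAL mode) can be demonstrated with the stdlib sqlite3 module; a minimal sketch:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit txns
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # "wal"
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")

# A second connection can read while the write transaction is still open;
# it sees the last committed snapshot, not the in-flight insert:
reader = sqlite3.connect(path)
count_during_write = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
writer.execute("COMMIT")
count_after_commit = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```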
Bram Moolenaar has died
From https://www.reddit.com/r/vim/comments/3tluqr/my_list_of_appl... :
> What are some useful applications with Vim keybindings?
Rest in peace Bram Moolenaar, author of Vim
I had the privilege of interacting with Bram a few times through Github and he was always fast, thoughtful, and pragmatic. I know others who would say the same thing. His software will always be one of my favorites of all time.
I had one time indirect contact and confirm this. Polite, direct and helpful. Vim is reliable which is seldom.
And Moolenaar did care about other people. See his help for Uganda.
I always appreciated his charity work. Seeing that banner on vim was the first time I realized that I could apply my coding skills and turn around and give money to charity and have a real impact in the world.
Chrultrabook – Modify a Chromebook to Run Windows/Linux/macOS
A Chromebook requires no modification to run Linux. They all run Linux out of the box by definition.
On Chromebooks, there is now a "Turn on Linux" button that's only for non-student, non-family accounts.
Can the SecureBoot keys and Serial be overwritten, or are they e-waste after supported updates end and school districts are holding the bag for computers that the kids can't run `git --help` on?
If they can turn on developer mode, they can run sudo dev_install, which will bring up the Gentoo environment on which ChromeOS is based, and then they have access to portage, and git --help. Outside of that, GalliumOS runs pretty well on most of them.
> If they can turn on developer mode
"Developer mode" (aka the ChromeOS 'Turn on Linux' mode) is not available for Google FamilyLink accounts and is optional on a per-student basis with Chromebooks for Education.
We bought the students Chromebook "Linux" computers; and then Google locked them out of Linux and added a "Turn on Linux" button.
Students are no longer able to run bash, git, or python on their Chromebooks.
Students must only install approved apps from the "Play Store", which takes a 15-30% cut of paid apps and now requires GPay if they process payments. Students and FamilyLink accounts cannot "sideload APKs" like they can "install a package" on actual Linux, Mac, and Windows computers.
The SecureBoot kernel and module signing keys can be easily changed on all coreboot machines FWIU.
ChromiumOS was originally Gentoo, Gnome, and Chrome.
Students can't run `git --help`, `python`, or even `adb` on Chromebooks*.
(* Students can't run `git` on a Chromebook without transpiling to WASM (on a different machine) and running all apps as the same user account without SELinux or `ls -Z` on the host).
Students can run `bash`, `git`, and `python` on Linux / Mac / or Windows computers.
There should be a "Computers for STEM" spec.
It is a bait-and-switch shame that we bought Chromebooks for all of the kids and then they were home due to COVID, and they couldn't learn to git, bash, or python with pytest.
Microsoft's CBL-Mariner 2.0 Linux Distribution Adds Clippy
"Clippy" or Office Assistant: https://en.wikipedia.org/wiki/Office_Assistant
The AST libraries that linters like Clippy use to rewrite source code are also useful for mutation testing: https://en.wikipedia.org/wiki/Mutation_testing
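As a sketch of that idea: the stdlib `ast` module can apply a simple mutation operator (swap `+` for `-`), and a test suite that exercises the code should then "kill" the mutant:

```python
import ast

class AddToSub(ast.NodeTransformer):
    """Toy mutation operator: replace every `+` with `-`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

src = "def total(a, b):\n    return a + b\n"
tree = ast.fix_missing_locations(AddToSub().visit(ast.parse(src)))
ns = {}
exec(compile(tree, "<mutant>", "exec"), ns)

# A test suite that checks total(2, 3) == 5 would "kill" this mutant,
# since the mutated function returns 2 - 3 == -1:
mutant_killed = ns["total"](2, 3) != 5
```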
Pico Serial Bootloader (2021)
From "Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico" (2023) https://news.ycombinator.com/item?id=35120403 :
> From https://www.cnx-software.com/2023/02/11/raspberry-pi-pico-w-... :
>> The Raspberry Pi Pico W board was launched with a WiFi 4 and Bluetooth 5.2 module based on the Infineon CYW43439 wireless chip in June 2022
> bluekitchen/btstack is the basis for the Bluetooth support in Pi Pico SDK 1.5.0+: https://github.com/bluekitchen/btstack
https://github.com/raspberrypi/pico-sdk/tree/master/lib/btstack
> /? "pi pico" bluetooth 5.2 [BLE] https://www.google.com/search?q=%22pi+pico%22+bluetooth+5.2
Companies with good ESG scores pollute as much as low-rated rivals
From https://www.ft.com/content/b9582d62-cc6f-4b76-b0f9-5b37cf15d... :
> Keeran Beeharee, vice-president for ESG outreach and research at Moody’s, agreed that ESG investment does not necessarily help an investor create a low-carbon portfolio, or any other specific goal.
> “[There is a] perception that ESG assessments do something that they do not. ESG assessments are an aggregate product, their nature is that they are looking at a range of material factors, so drawing a correlation to one factor is always going to be difficult,” Beeharee said.
> “In 2015-16, post the SDGs [UN sustainable development goals] and COP21 [Paris Agreement], when people began to really focus on the issue of climate, they quickly realised that an ESG assessment is not going to be much use there and that they need the right tool for the right task. There are now more targeted tools available that look at just carbon intensity, for example,” he added.
Emission intensity includes CO2 (Carbon Intensity) and also Methane and other emissions.
Emission intensity: https://en.wikipedia.org/wiki/Emission_intensity
UN SDG Indicators 2023 revision (Goals, Targets, Indicators) https://unstats.un.org/sdgs/indicators/Global%20Indicator%20... ctrl-f "carbon", "emission", "methane"
> Goal 9. Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation
> 9.4.1 CO2 emission per unit of value added
> Goal 13. Take urgent action to combat climate change and its impacts
> 13.2.2 Total greenhouse gas emissions per year
SDG9 > "Target 9.4: Upgrade all industries and infrastructures for sustainability": https://en.wikipedia.org/wiki/Sustainable_Development_Goal_9...
SDG13 > "Target 13.2: Integrate climate change measures into policy and planning": https://en.wikipedia.org/wiki/Sustainable_Development_Goal_1...
How should ESG composite scores be updated to reflect Emission intensity (to include CO2, and CH4,) as a weighted factor?
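One hedged sketch of such a weighting (the weights and the 0-to-1 normalization are illustrative assumptions, not any rating agency's methodology):

```python
def esg_score(e, s, g, emission_intensity, weights=(0.3, 0.3, 0.2, 0.2)):
    """Toy composite: fold a normalized emission-intensity penalty into the
    usual E/S/G aggregate. All inputs are assumed normalized to [0, 1];
    emission_intensity covers CO2 and CH4 (0 = no emissions, 1 = worst)."""
    we, ws, wg, wi = weights
    return we * e + ws * s + wg * g + wi * (1.0 - emission_intensity)
```

With an explicit emissions term, two firms with identical E/S/G sub-scores but different carbon/methane intensity no longer receive the same composite.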
Many people in finance, sales and management feel their jobs are pointless
"‘[BS]’ After All? Why People Consider Their Jobs Socially Useless. Work, Employment and Society" (2023) https://journals.sagepub.com/doi/10.1177/09500170231175771 :
> Of course, this definition raises the question of how one is to decide whether a job makes a positive, a negative, or no contribution to society at all. To make such a decision, one clearly needs a theory of social value.
https://en.wikipedia.org/wiki/Welfare_economics#Approaches
https://en.wikipedia.org/wiki/Pareto_efficiency :
> Pareto efficiency or Pareto optimality is a situation where no action or allocation is available that makes one individual better off without making another worse off.
"Ask HN: Any well funded tech companies tackling big, meaningful problems?" (2020) https://news.ycombinator.com/item?id=24412493 ; Impact_investing, Strategic_alignment, #GlobalGoals (UN SDG Sustainable Development Goals) ;
> You can make an impact by solving important local and global problems by investing your time, career, and savings; by listing and comparing solutions.
Impact investing: https://en.wikipedia.org/wiki/Impact_investing
Strategic alignment: https://en.wikipedia.org/wiki/Strategic_alignment
There are 17 Global Goals. GRI Sustainability Reports are aligned with the Global Goals. Helping keep the information for the [GRI] Sustainability Report ready all year may afford additional opportunities to find meaning and impact.
Traceroute isn't real
Traceroute is problematic and sometimes lies. Paths aren't always symmetric. Sometimes routers are too busy to answer ICMP. And MPLS/encapsulating/tunneling is opaque.
On the other hand, real world network engineers rely a whole lot on traceroute from their machines and from looking glasses (and mtr is wonderful).
Is there any guarantee that there's a stable path for the duration of the traceroute packet series? FWIW, IP routes can change in the middle of a traceroute.
From "Gping – ping, but with a graph" https://news.ycombinator.com/item?id=36549005 :
> Scapy has a 3d visualization of one traceroute sequence with vpython. In college, I remember modifying it to run multiple traceroutes and then overlaying all of the routes; and wondering whether a given route is even stable through a complete traceroute packet sequence. https://scapy.readthedocs.io/en/latest/usage.html#tcp-tracer...
It's trivial to modify packet TTLs with e.g. iptables.
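For example, on Linux the mangle table's TTL target can rewrite TTLs (a sketch; requires root and legacy iptables built with the TTL target):

```shell
# Rewrite the TTL of all outgoing/forwarded packets to 64
sudo iptables -t mangle -A POSTROUTING -j TTL --ttl-set 64
```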
Virtual circuit: https://en.wikipedia.org/wiki/Virtual_circuit
There's no guarantees on path during a trace; after all, a connection could drop or come up during a trace, and routing may change as a result. We're packet switched, not circuit switched, so there's nothing out of spec if you route differently.
However, tcp is sensitive to getting packets out of order, and most multipacket udp applications aren't a big fan either. So it's common to ensure a packet with a given 5-tuple {ip dest, ip source, protocol, protocol port dest, port source} will route the same, including traveling over the same port in a multi-port bundle, unless that next hop goes down or a better next hop comes up.
That said, many traceroute implementations default to using a new port for each probe, and depending on the networks between you and what you're probing, you might see a variety of paths in a single short probe. When that happens, you really don't get much information.
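The per-flow path pinning described above (and why per-probe port changes can shift paths) can be sketched as a toy ECMP hash over the 5-tuple; this function is illustrative, not any router's actual algorithm:

```python
import hashlib

def pick_link(src_ip, dst_ip, proto, src_port, dst_port, n_links):
    """Toy ECMP: hash the 5-tuple so a given flow always uses the same link."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_links

# Same flow -> same link every time:
a = pick_link("10.0.0.1", "10.0.0.2", "udp", 33434, 80, 4)
b = pick_link("10.0.0.1", "10.0.0.2", "udp", 33434, 80, 4)
# A traceroute that bumps the source port per probe may hash to a different link:
c = pick_link("10.0.0.1", "10.0.0.2", "udp", 33435, 80, 4)
```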
Of course, the return path for the packets is likely to be different and isn't easy to know; if you control both ends, you obviously can; and if the destination network has a good looking glass, you can get some idea, but detailed traces aren't going to happen.
All that said, parts of the rant are accurate. Routers and hosts don't have to participate and many don't or are rate limited. Intermediate latency is often not very helpful. Where a trace ends only tells you were the trace ends, not where the packets left the network (possibly at the destination). Having useful diagnostic information doesn't help if you can't get in touch with someone who will use the information to fix the problem.
Being able to trace from multiple vantage points helps make sense of things.
If you're experiencing intermittent behavior like some tcp connections between two hosts work great and some don't, and most of the network involved participates in tracing and you have a good contact, you can do specific tracing and narrow down the routers they need to check to find the problem; sometimes that helps time to resolution a lot.
If you don't have a cooperative network, it's better to invest in making multiple connections and using the best one as measured; because that gives you agency to 'fix' network problems.
One perspective: http://shouldiblockicmp.com/ :
- Don't block ICMP: Echo Request and Echo Reply, Fragmentation Needed (IPv4) / Packet Too Big (IPv6), Time Exceeded, and only within the boundary allow NDP and SLAAC (IPv6)
But presumably none of that helps solve for what traceroute is used to help diagnose.
"ICMP: The Good, the Bad, and the Ugly" (2016) https://blog.securityevaluators.com/icmp-the-good-the-bad-an... ; blocking ICMP mitigates {Ping sweep, Ping flood, ICMP tunneling, Forged ICMP redirects,} but breaks {Path MTU discovery (PMTUD), TTL, ICMP Redirect}
ICMP > Redirect: https://en.wikipedia.org/wiki/Internet_Control_Message_Proto... ; Type 5
So this would drop just incoming ICMP redirects (type 5):
sudo iptables -A INPUT -p icmp --icmp-type 5 -j DROP
What I block or don't for ICMP doesn't much matter when I'm tracerouting through networks I don't control, which is nearly all of them. Yeah, if I block the returns from my probes, I won't get useful results for traces to pretty much anywhere, but that should solve itself.
Open-sourcing AudioCraft: Generative AI for audio
Generative AI > Modalities > [Music,]: https://en.wikipedia.org/wiki/Generative_artificial_intellig...
An open-source, free circuit simulator
Wokwi is free open source and neat with an MIT license https://github.com/wokwi/avr8js https://wokwi.com
From "Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico" https://news.ycombinator.com/item?id=35120757 ; wokwi/rp2040js
> Wokwi > New Pi Pico + MicroPython project: https://wokwi.com/projects/new/micropython-pi-pico
And from https://news.ycombinator.com/item?id=36408561 :
> "Learn PWM signal using Wokwi Logic Analyzer" https://blog.wokwi.com/explore-pwm-with-logic-analyzer/
> Wokwi > New (Pi Pico w/ Micropython) LED project: https://wokwi.com/projects/300504213470839309
jupyter-micropython-upydevice: https://pypi.org/project/jupyter-micropython-upydevice/
Researchers create open-source platform for Neural Radiance Field development
Docs: https://docs.nerf.studio/en/latest/
Src: https://github.com/nerfstudio-project/nerfstudio/
Draft: Neural_Radiance_Fields: https://en.wikipedia.org/wiki/Draft:Neural_Radiance_Fields :
> A Neural Radiance Field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. It is a type of generative model that can create 3D scenes from a set of 2D images.[1] The NeRF model can learn the scene geometry, camera poses, and the reflectance properties of objects in a scene, which allows it to render new views of the scene from novel viewpoints.
Electronic Structure of LK-99
"Electronic structure of the putative room-temperature superconductor [ Pb_9 Cu( PO_4)_6 O ]" (2023) https://arxiv.org/abs/2308.00676 :
> A recent paper [Lee {\em et al.}, J. Korean Cryt. Growth Cryst. Techn. {\bf 33}, 61 (2023)] provides some experimental indications that Pb10−xCux(PO4)6O with x≈1, coined LK-99, might be a room-temperature superconductor at ambient pressure. Our density-functional theory calculations show lattice parameters and a volume contraction with x -- very similar to experiment. The DFT electronic structure shows Cu2+ in a 3d9 configuration with two extremely flat Cu bands crossing the Fermi energy. This puts Pb9Cu(PO4)6O in an ultra-correlated regime and suggests that, without doping, it is a Mott or charge transfer insulator. If doped such an electronic structure might support flat-band superconductivity or an correlation-enhanced electron-phonon mechanism, whereas a diamagnet without superconductivity appears to be rather at odds with our results.
Superconductivity: https://en.wikipedia.org/wiki/Superconductivity
Superconductor classification: https://en.wikipedia.org/wiki/Superconductor_classification
Room-temperature superconductor: https://en.wikipedia.org/wiki/Room-temperature_superconducto...
Diamagnetism: https://en.wikipedia.org/wiki/Diamagnetism
So theoretically they agree that LK-99 could reach superconductivity?
More specifically, they agree with another, similar paper from Sinéad Griffin (Lawrence Berkeley National Laboratory):
"Origin of correlated isolated flat bands in copper-substituted lead phosphate apatite" (2023) https://arxiv.org/abs/2307.16892 :
> Abstract: A recent report of room temperature superconductivity at ambient pressure in Cu-substituted apatite (`LK99') has invigorated interest in the understanding of what materials and mechanisms can allow for high-temperature superconductivity. Here I perform density functional theory calculations on Cu-substituted lead phosphate apatite, identifying correlated isolated flat bands at the Fermi level, a common signature of high transition temperatures in already established families of superconductors. I elucidate the origins of these isolated bands as arising from a structural distortion induced by the Cu ions and a chiral charge density wave from the Pb lone pairs. These results suggest that a minimal two-band model can encompass much of the low-energy physics in this system. Finally, I discuss the implications of my results on possible superconductivity in Cu-doped apatite
Low-cost additive turns concrete slabs into super-fast energy storage
From "Low-cost additive turns concrete slabs into super-fast energy storage" (2023) https://newatlas.com/architecture/mit-concrete-supercapacito... :
> But the great thing here is that this energy storage device doesn't need to be small; concrete tends to get used in bulk. An average American 2,000-sq-ft (185.8-m2) home built on a reasonably standard five-inch-thick (13-cm) concrete slab uses about 31 cubic yards (~24 m3) of concrete. Add more if you've got a driveway or a concreted garage, and significantly more again if the house is built using concrete walls or columns.
> The MIT team says a 1,589-cu-ft (45 m3) block of nanocarbon black-doped concrete will store around 10 kWh of electricity – enough to cover around a third of the power consumption of the average American home, or to reduce your grid energy bill close to zero in conjunction with a decent-sized solar rooftop array. What's more, it would add little to no cost.
> The team has tested these concrete supercaps at small scale, cutting out pairs of electrodes to create tiny 1-volt supercapacitors about the size of button-cell batteries, and using three of them to light up a 3-volt LED. Now, it's working on blocks the size of car batteries, and targeting a 1,589-cu-ft, 10-kWh version for a larger-scale demonstration.
"Carbon–cement supercapacitors as a scalable bulk energy storage solution" (2023) https://www.pnas.org/doi/10.1073/pnas.2304318120
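The quoted figures imply a simple volumetric energy density; a back-of-envelope check in Python (all input values taken from the article excerpt above):

```python
# Figures from the New Atlas article quoted above:
block_volume_m3 = 45.0    # the 1,589 cu ft demonstration block
block_energy_kwh = 10.0   # its claimed storage
slab_volume_m3 = 24.0     # ~31 cu yd slab under a 2,000 sq ft home

# Implied energy density of the doped concrete, in kWh per cubic meter.
energy_density = block_energy_kwh / block_volume_m3

# What a bare house slab (no driveway, garage, or walls) would store.
slab_storage_kwh = energy_density * slab_volume_m3
```

So a plain slab alone comes in around half the 10 kWh figure; the article's "third of average home consumption" claim leans on the larger 45 m3 block.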
From https://news.ycombinator.com/item?id=35903823 :
>> Here's a discussion about the lower costs of hemp supercapacitors as compared with graphene super capacitors: https://news.ycombinator.com/item?id=16814022
FWIU, heat-treated hemp bast fiber is comparable to graphene electrodes and far less costly, because bast fiber is otherwise a waste output (and graphene production is hazardous and lacks hemp's natural dendritic branch structure).
"Hemp Carbon Makes Supercapacitors Superfast" (2013) https://www.asme.org/engineering-topics/articles/energy/hemp... :
> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.
> MIT's discovery appears to take things to the next level, since it does away with the need to lay mesh electrodes into the concrete, and instead allows the carbon black to form its own connected electrode structures as part of the curing process.
> This process takes advantage of the way that water and cement react together; the water forms a branching network of channels in the concrete as it starts to harden, and the carbon black naturally migrates into those channels. These channels exhibit a fractal-like structure, larger branches splitting off into smaller and smaller ones – and that creates carbon electrodes with an extremely large surface area, running throughout the concrete.
“…it's unclear whether this kind of concrete would be suitable for outdoor use where it'll get wet.”
Pretty much all concrete used in building a house — especially the slab or basement — will get wet, so that would render this idea moot for storing power in a home.
"Does rebar rust?" (2019) https://youtu.be/PLF18H9JGHs
A room-temperature superconductor? New developments
The ramifications of the inflection point we are currently at are mind-boggling. I had a hard time explaining this last night, but we may very well be witnessing the beginning of a technological transformation era much like when the p-n junction was invented. From a 1940s standpoint it would be hard to envision all we have today.
- Lossless transport of energy
- Batteries that don't take any time to recharge
- Faster CPUs. Much faster, with no heat to burn your lap.
Can I have my flying car now?
> Much faster with no heat to burn your lap.
Computation inherently creates heat, that's not something that superconductors will change.
Flipping a bit from 1 to 0 releases heat (because you can't just drop the 1 onto the negative/ground)
Resistance in non-superconductors wastes electricity as heat.
From "Thermodynamics of Computation Wiki" (2018) https://news.ycombinator.com/item?id=18146854 :
> "Quantum knowledge cools computers: New understanding of entropy" (2011) https://www.sciencedaily.com/releases/2011/06/110601134300.h...
>> The new study revisits Landauer's principle for cases when the values of the bits to be deleted may be known. (with QC)
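The floor this sub-thread alludes to is Landauer's principle: erasing one bit dissipates at least kT ln 2 of heat, regardless of whether the wires superconduct. The number at room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0            # room temperature, K

# Landauer's principle: minimum heat dissipated per erased bit.
landauer_j = k_B * T * math.log(2)   # ~2.87e-21 J per bit
```

Tiny per bit, but it is a hard lower bound that resistance-free interconnects cannot remove; only reversible or quantum-informed computing schemes (as in the linked study) sidestep it.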
BHP says battery electric cheaper than hydrogen as it dumps diesel
I thought this sentence at the end was quite interesting:
> BHP’s modelling shows that battery charging time may be the biggest cost on the electric side, something that dynamic charging can reduce significantly.
For an industry like mining, the biggest cost of switching to electric isn’t buying vehicles, or batteries or setting up charging stations: it’s the downtime they experience while charging.
Very interesting. Makes me imagine a battery swapping station for a huge mining truck. Probably not feasible but I’m sure it would look awesome if it was.
They have hybrid test trucks that can power themselves via overhead lines for the uphill loaded portion of the trip. I wonder if something like that could reduce battery usage and time to charge.
That makes a lot of sense.
I went on a road trip in our EV yesterday, pulling a trailer for the first time. We basically drove down into a valley in the morning and back up and out in the evening. As you could imagine, we got basically infinite range going downhill, and much, much shorter range going uphill.
A track that powered/charged mining trucks while they were hauling material up, coupled with regenerative braking, seems like it could pretty much solve charging for that application.
Even better, many BHP iron sites in the Pilbara are mesa mining ... they drive up unloaded and roll down with 100+ tonnes - plenty of opportunity to gravity charge while retarding downhill motion (free-wheeling a laden HaulPak down a mesa doesn't end well).
That’s the principle behind Fortescue’s infinity train:
https://newatlas.com/transport/fortescue-wae-infinity-train-...
Regenerative braking in trains: https://www.google.com/search?q=regenerative+braking+trains
Regenerative braking: https://en.wikipedia.org/wiki/Regenerative_braking
From https://news.ycombinator.com/item?id=27579732 :
> In Scandinavia the Kiruna to Narvik electrified railway [...] The regenerated energy is sufficient to power the empty trains back up to the national border
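The gravity-charging potential is easy to bound with E = mgh; a sketch where only the ~100 t payload comes from the comment above (the 200 m descent and 80% round-trip recovery are assumed for illustration):

```python
# Assumed figures: only the ~100 t payload comes from the comment above.
mass_kg = 100_000    # laden haul truck payload
drop_m = 200.0       # assumed mesa descent
g = 9.81             # gravitational acceleration, m/s^2
recovery = 0.8       # assumed regenerative-braking recovery efficiency

potential_j = mass_kg * g * drop_m          # gravitational potential energy
recovered_kwh = potential_j * recovery / 3.6e6   # 1 kWh = 3.6e6 J
```

Order of tens of kWh per laden descent, which is why the "charge on the way down" designs above pencil out.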
Is re-planning routes for regenerative braking solvable with the Modified Snow Plow Problem (variation on TSP Traveling Salesman Problem), on a QC Quantum Computer; with Quantum Algorithmic advantage due to the complexity of the problem?
From "Snow plow routing problem": https://en.wikipedia.org/wiki/Snow_plow_routing_problem :
> The snow plow routing problem is an application of the structure of Arc Routing Problems (ARPs) and Vehicle Routing Problems (VRPs) to snow removal that considers roads as edges of a graph.
> The problem is a simple routing problem when the arrival times are not specified.[1] Snow plow problems consider constraints such as the cost of plowing downhill compared to plowing uphill.[2] The Mixed Chinese Postman Problem is applicable to snow routes where directed edges represent one-way streets and undirected edges represent two-way streets. [3]
Arc routing > Algorithms: https://en.wikipedia.org/wiki/Arc_routing#Algorithms :
> Finding an efficient solution with large amounts data to the Chinese Postman Problem (CPP), the Windy Postman Problem (WPP), the Rural Postman Problem (RPP), the k-Chinese postman problem (KCPP), the mixed Chinese postman problem (MCPP), the Directed Chinese Postman Problem (DCPP),[8] the Downhill Plowing Problem (DPP), the Plowing with Precedence Problem (PPP), the Windy Rural Postman Problem (WRPP) and the Windy General Routing Problem (WGRP) requires using thoughtful mathematical concepts, including heuristic optimization methods, branch-and-bound methods, integer linear programming, and applications of traveling salesman problem algorithms such as the Held–Karp algorithm makes an improvement from O(n!) to O(2^{n}n^{2}).[9] In addition to these algorithms, these classes of problems can also be solved with the cutting plane algorithm, convex optimization, convex hulls, Lagrange multipliers and other dynamic programming methods. In cases where it is not feasible to run the Held–Karp algorithm because of its high computational complexity, algorithms like this can be used to approximate the solution in a reasonable amount of time.[10]
QC algos for TSP and similar:
- QISkit tutorials > Max-Cut and Traveling Salesman Problem: docs/tutorials/06_examples_max_cut_and_tsp.ipynb: https://qiskit.org/ecosystem/optimization/tutorials/06_examp...
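As a concrete classical reference point for the quoted O(n!) to O(2^n n^2) improvement, here is a minimal Held–Karp sketch on a toy 4-city instance (the distance matrix is made up for illustration):

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour cost via Held-Karp dynamic programming.

    dist is an n x n distance matrix; runtime O(2^n * n^2),
    versus O(n!) for brute-force enumeration of tours.
    """
    n = len(dist)
    # C[(S, j)] = min cost of a path from city 0 that visits set S, ending at j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for j in S:
                C[(Sf, j)] = min(
                    C[(Sf - {j}, k)] + dist[k][j] for k in S if k != j
                )
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# Toy symmetric 4-city instance; the optimal tour 0-1-3-2-0 costs 80.
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
```

The arc-routing variants above (downhill plowing, windy postman) add edge direction and asymmetric costs, but the same state-over-subsets structure is why exact solutions blow up and heuristics or QAOA-style formulations get used instead.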
Solar Powered Conways Game of Life
Neat. Can it zoom out instead of wrapping around?
From "Building arbitrary Life patterns in 15 gliders" (2022) https://news.ycombinator.com/item?id=33799981 :
> From "Convolution Is Fancy Multiplication" https://news.ycombinator.com/item?id=25194658 :
>> FWIW, (bounded) Conway's Game of Life can be efficiently implemented as a convolution of the board state: https://gist.github.com/mikelane/89c580b7764f04cf73b32bf4e94... ; scipy.ndimage.convolve
> Conway's Game is a 2D convolution
> Convolution theorem: https://en.wikipedia.org/wiki/Convolution_theorem
Conway's Game of Life > Algorithms: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Algori...
Rosetta Code > Conway's Game of Life: http://rosettacode.org/wiki/Conway%27s_Game_of_Life
scipy.ndimage.convolve / scipy.fftpack.dct*
From https://github.com/thearn/game-of-life :
> How it's written
> The game grid is encoded as a simple m by n array (default 100x100 in the code) of zeros and ones. In each program, a state transition is determined for each pixel by looking at the 8 pixel values all around it, and counting how many of them are "alive", then applying some rules based that number. Since the "alive" or "dead" states are just encoded as 1 or 0, this is equivalent to summing up the values of all 8 neighbors.
> If you want to do this for all pixels in a single shot, this is equivalent to computing the 2D convolution between the game state array and the matrix [[1,1,1],[1,0,1],[1,1,1]]. Convolution is an operation that basically applies that matrix as a "stencil" at every position around the game grid array, and sums up the values according to the values in that stencil. And convolution can be very efficiently computed using the FFT, thanks to the Fourier Convolution Theorem.
> One side effect: the convolution using FFT implicitly involves periodic boundary conditions, so the game grid is "wrapped" around itself (like in Pacman, or Mario Bros. 2). If you wanted to change this, you would just have to modify the 2D convolution function to use an orthogonal form of the DCT instead of the FFT. This would correspond to "hard" (i.e. Dirichlet) boundary conditions on the convolution operator.
> I think you could do this with scipy using the 1D orthogonal dct provided:
from scipy import fftpack
fftpack.dct(data, norm='ortho')
> but I haven't tried that yet. You'd have to define a dct2() method using fftpack.dct and separability (i.e. the 1D transform matrix is rank-one, and the 2D transform operator is the Kronecker product of two 1D transforms)
The (Inverse) Quantum Fourier Transform is also applied in QC: https://en.wikipedia.org/wiki/Quantum_Fourier_transform#Defi...
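The neighbor-sum/convolution equivalence described above fits in a few lines of numpy; np.roll gives the same periodic ("Pacman") boundary that the FFT approach implies:

```python
import numpy as np

def life_step(grid):
    """One Game of Life generation on a toroidal (wrapped) grid.

    Summing the eight shifted copies of the grid is equivalent to
    convolving it with the stencil [[1,1,1],[1,0,1],[1,1,1]].
    """
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "blinker" oscillates between a horizontal and a vertical bar of 3 cells.
board = np.zeros((5, 5), dtype=int)
board[2, 1:4] = 1
after = life_step(board)
```

Swapping the FFT for an orthogonal DCT, as the quoted README suggests, would only change the boundary handling, not this update rule.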
Conway's Game of Life: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
Solar power: https://en.wikipedia.org/wiki/Solar_power
E Ink: https://en.wikipedia.org/wiki/E_Ink
I am just curious, but what do you feel defining these three common terms, surely all well known to anyone who'd read HN, adds to the discussion?
Links are edges. When there are URL links, there are better edges that cost less to find than NLP tasks like conceptual entity recognition.
Because I've added edges from hn_uri to wikipedia_uris, I, others, and search engines can discover the hn_uris relevant to the concept URIs (by graph pattern search).
W3C SPARQL is a graph pattern query language. Some triple graph patterns enabled by there being links in the comments here in HN:
?s ?p ?o.
?s linksTo ?o.
?s linksTo <wikipedia_uri> .
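A minimal pure-Python sketch of answering such a triple pattern (the in-memory triple store, the abbreviated URIs, and the linksTo predicate here are all hypothetical, purely to illustrate the `?s linksTo <wikipedia_uri>` pattern above):

```python
# Hypothetical triples: HN items linking to Wikipedia concept URIs.
triples = {
    ("hn:36926784", "linksTo", "wp:Palladium"),
    ("hn:36926784", "linksTo", "wp:Methane_emissions"),
    ("hn:36926952", "linksTo", "wp:Methane_emissions"),
}

def match(s=None, p=None, o=None):
    """Answer a single triple pattern; None plays the role of a SPARQL variable."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# "?s linksTo <wikipedia_uri>": which HN items link to the Methane_emissions concept?
hits = match(p="linksTo", o="wp:Methane_emissions")
```

A real SPARQL engine joins many such patterns, but each individual pattern resolves exactly like this lookup.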
Also, recently, for example: because I linked to "methane" on Wikipedia unnecessarily, then checked the "See also" section and saw the "Global Methane Initiative" page, I realized there was an opportunity to forward the latest methane emissions tech to GMI (which I learned about by reading comparatively mundane background information from Wikipedia, which has Concept URIs that are reused by DBpedia, which contributes to the Linked Open Data cloud; so the linking does indeed pay off).
Single Atom Pd Catalyst Could Cut Methane Pollution 90% from Millions of Engines
"Single Atom Catalyst Could Cut Methane Pollution 90% From Millions of Engines" (2023) https://scitechdaily.com/single-atom-catalyst-could-cut-meth... :
> [...] Individual palladium atoms attached to the surface of a catalyst can remove 90% of unburned methane from natural-gas engine exhaust at low temperatures, scientists reported on July 20 in the journal Nature Catalysis.
> While more research needs to be done, they said, the advance in single-atom catalysis has the potential to lower exhaust emissions of methane, one of the worst greenhouse gases, which traps heat at about 25 times the rate of carbon dioxide.
> Promising Results Across Engine Operation Temperatures
> Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Washington State University showed that the catalyst removed methane from engine exhaust at both the lower temperatures where engines start up and the higher temperatures where they operate most efficiently, but where catalysts often break down.
"Dynamic and reversible transformations of subnanometre-sized palladium on ceria for efficient methane removal" (2023) https://www.nature.com/articles/s41929-023-00983-8
Methane emissions > Removal technology: https://en.wikipedia.org/wiki/Methane_emissions#Removal_tech...
Palladium: https://en.wikipedia.org/wiki/Palladium
See also:
- "Add-on [AGR] device makes home furnaces cleaner, safer and longer-lasting" (2023) https://news.ycombinator.com/item?id=36926952 :
> [...] removes more than 99.9% of acidic gases and other emissions to produce an ultraclean natural gas furnace.
Add-on device makes home furnaces cleaner, safer and longer-lasting
"Add-on device makes home furnaces cleaner, safer and longer-lasting" (2023) https://www.ornl.gov/news/add-device-makes-home-furnaces-cle... :
> [...] Even modern high-efficiency condensing furnaces produce significant amounts of corrosive acidic condensation and unhealthy levels of nitrogen oxides, carbon monoxide, hydrocarbons and methane. These emissions are typically vented into the atmosphere and end up polluting our soil, water and air.
> Now, scientists at the Department of Energy’s Oak Ridge National Laboratory have developed an affordable add-on technology that removes more than 99.9% of acidic gases and other emissions to produce an ultraclean natural gas furnace. This acidic gas reduction, or AGR, technology can also be added to other natural gas-driven equipment such as water heaters, commercial boilers and industrial furnaces.
> “Just as catalytic converters help reduce emissions from billions of vehicles worldwide, the new AGR technology can virtually eliminate problematic greenhouse gases and acidic condensation produced by today’s new and existing residential gas furnaces,” said Zhiming Gao, staff researcher with ORNL’s Energy Science and Technology Directorate. “An eco-friendly condensate eliminates the need to use corrosion-resistant stainless steel materials for furnace heat exchangers, which reduces manufacturing costs.”
"Nondestructive neutron imaging diagnosis of acidic gas reduction catalyst after 400-Hour operation in natural gas furnace" (2023) https://www.sciencedirect.com/science/article/abs/pii/S13858...
See also:
- "Single Atom [Palladium] Catalyst Could Cut Methane Pollution 90% from Millions of Engines" (2023) https://news.ycombinator.com/item?id=36926784
Blockchain company is granted a patent for a room-temperature superconductor
Ooh high temperature superconductors are in the news how can we work this into a crypto grift?
LPython: Novel, Fast, Retargetable Python Compiler
Looks very interesting! The authors talk about Numba, but does anyone know how it would compare to Codon? (https://news.ycombinator.com/item?id=33908576)
edit: after trying quickly, it seems that LPython really requires type annotations everywhere, while Codon is more permissive (or does type inference)
I would say LPython, Codon, Mojo and Taichi are structured similarly as compilers written in C++, see the links at the bottom of https://lpython.org/.
Internally they each parse the syntax to AST, then have some kind of an intermediate representation (IR), do some optimizations and generate code. The differences are in the details of the IR and how the compiler is internally structured.
Regarding the type inference, this is a topic for a blog post of its own. See this issue for now: https://github.com/lcompilers/lpython/issues/2168. Roughly speaking, there are implicit typing (inference), implicit declarations, and implicit casting. Rust disallows implicit declarations and casting, but allows implicit typing. As shown in that issue, they only meant to do single-line implicit typing, but (by mistake?) allowed multi-statement implicit typing (action at a distance).
LPython currently does not allow any implicit typing (type inference). As documented in the issue, the main problem with implicit typing is that there is no good syntax in CPython that would allow explicit type declaration but implicit typing. Typically you get both implicit declaration and implicit typing: `x = 5` both declares `x` as a new variable and types it as an integer. C++ and Rust do not allow implicit declarations (you have to use the `auto` or `let` keywords), and I think we should not either. We could do something like `x: var = 5`, but at that point you might as well just do `x: i32 = 5`, using the actual type instead of `var`.
Shedskin is a Python-to-C++ transpiler that does type inference and does not support the full standard library: https://en.wikipedia.org/wiki/Shed_Skin#Type_inference
From "Show HN: Python Tests That Write Themselves" (2019) https://news.ycombinator.com/item?id=21012133:
> pytype (Google) [1], PyAnnotate (Dropbox) [2], and MonkeyType (Instagram) [3] all do dynamic / runtime PEP-484 type annotation type inference [4]
Hypothesis (@given decorator tests) also does type inference IIUC? https://hypothesis.readthedocs.io/en/latest/
icontract and pycontracts do runtime Preconditions and Postconditions with Design-by-Contract patterns similar to Eiffel DbC; they check the types and values of arguments passed while the program in running and not just at coding or compile time.
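A toy sketch of the runtime type-inference idea behind tools like PyAnnotate and MonkeyType mentioned above (their real implementations and APIs differ; record_types here is a hypothetical helper):

```python
import functools

def record_types(fn, _log={}):
    """Record the argument and return types observed at each call,
    in the spirit of runtime PEP-484 inference tools."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        sig = (tuple(type(a).__name__ for a in args), type(result).__name__)
        _log.setdefault(fn.__name__, set()).add(sig)
        return result
    wrapper.observed = _log
    return wrapper

@record_types
def add(a, b):
    return a + b

add(1, 2)       # observed as (int, int) -> int
add("x", "y")   # observed as (str, str) -> str
```

The recorded signatures are what such tools later emit as stub files or inline annotations; the static checkers (pytype, mypy) work from source alone instead.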
Yes, we have Shedskin in the list at the bottom of https://lpython.org/. Note that the Shedskin compiler is written in Python, so the speed of compilation might be lower than other Python compilers written in C++. Unless it compiles itself, that would be interesting. We thought about eventually writing LPython in LPython, but for now we are focusing on delivering, so we are sticking to C++.
Can LPython transpile existing Python type inference libraries written in Python to C++?
Only if the libraries used the subset of Python that LPython can compile. So currently no.
Intent to approve PEP 703: making the GIL optional
Lots of C library code for decades carried man-page warnings that it was unsafe for use in async, re-entrant, and recursive contexts. We learned how to cope, and re-entrant-safe versions were incrementally deployed without too much API instability. Maybe time has healed the wounds and caused me to forget the pain of discovering you'd tripped over them.
String parsing which tokenised in-place. DNS calls which used static buffers. Things which exploited Vax specific stack behaviour.
I think the GIL has been a blessing and a curse.
While I'm happy to see optional GIL approved and happening,
I also suspect that the GIL has saved us from debugging reentrant and/or dangerously concurrent code for years, and I salute the GIL for forcing us to build Arrow for IPC in Python, in particular.
Someday, URI attributes in CPython docstrings might specify which functions are constant-time or non-re-entrant.
Reentrancy (computing): https://en.wikipedia.org/wiki/Reentrancy_(computing)
Global Interpreter Lock: https://en.wikipedia.org/wiki/Global_interpreter_lock
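One thing worth noting about the "blessing" side: even with the GIL, read-modify-write sequences like `counter += 1` are not atomic, so explicit locking has always been necessary for shared state. A minimal sketch:

```python
import threading

# Even under the GIL, `counter += 1` compiles to several bytecodes and a
# thread switch can land between them; without the lock the final count
# may fall short. No-GIL builds make the same discipline mandatory.
counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, four threads of 10,000 increments always land on exactly 40,000; the GIL alone never guaranteed that.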
An ultra-sensitive on-off switch helps axolotls regrow limbs
From TA https://scopeblog.stanford.edu/2023/07/26/how-an-ultra-sensi... :
> Axolotls, they discovered, have an ultra-sensitive version of mTOR, a molecule that acts as an on-off switch for protein production. And, like survivalists who fill their basements with non-perishable food for hard times, axolotl cells stockpile messenger RNA molecules, which contain genetic instructions for producing proteins. The combination of an easily activated mTOR molecule and a repository of ready-to-use mRNAs means that after an injury, axolotl cells can quickly produce the proteins needed for tissue regeneration.
From "Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://news.ycombinator.com/item?id=35871887 :
> "Direct neuronal reprogramming by temporal identity factors" (2023) https://www.pnas.org/doi/10.1073/pnas.2122168120#abstract :
> Abstract: Temporal identity factors are sufficient to reprogram developmental competence of neural progenitors and shift cell fate output, but whether they can also reprogram the identity of terminally differentiated cells is unknown. To address this question, we designed a conditional gene expression system that allows rapid screening of potential reprogramming factors in mouse retinal glial cells combined with genetic lineage tracing. Using this assay, we found that coexpression of the early temporal identity transcription factors Ikzf1 and Ikzf4 is sufficient to directly convert Müller glial (MG) cells into cells that translocate to the outer nuclear layer (ONL), where photoreceptor cells normally reside. We name these “induced ONL (iONL)” cells. Using genetic lineage tracing, histological, immunohistochemical, and single-cell transcriptome and multiome analyses, we show that expression of Ikzf1/4 in MG in vivo, without retinal injury, mostly generates iONL cells that share molecular characteristics with bipolar cells, although a fraction of them stain for Rxrg, a cone photoreceptor marker. Furthermore, we show that coexpression of Ikzf1 and Ikzf4 can reprogram mouse embryonic fibroblasts to induced neurons in culture by rapidly remodeling chromatin and activating a neuronal gene expression program. This work uncovers general neuronal reprogramming properties for temporal identity factors in terminally differentiated cells.
>> Is it possible to produce or convert Müller glial cells with Nanotransfection (stroma reprogramming), too?
Muller glial cells and mTOR: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&as_vi... :
- "Genetic and epigenetic regulators of retinal Müller glial cell reprogramming" (2023) https://www.sciencedirect.com/science/article/pii/S266737622... :
> A number of factors have been identified as the important regulators in Müller glial cell reprogramming. The early response of Müller glial cells upon acute retinal injury, such as the regulation in the exit from quiescent state, the initiation of reactive gliosis, and the re-entry of cell cycle of Müller glial cells, displays significant difference between mouse and zebrafish, which may be mediated by the diverse regulation of Notch and TGFβ (transforming growth factor-β) isoforms and different chromatin accessibility.
From "Fiber-infused ink enables 3D-printed heart muscle to beat" (2023) https://news.ycombinator.com/item?id=36894749 https://seas.harvard.edu/news/2023/07/fiber-infused-ink-enab... :
> In a paper published in Nature Materials, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) report the development of a new hydrogel ink infused with gelatin fibers that enables 3D printing of a functional heart ventricle that mimics beating like a human heart. They discovered the fiber-infused gel (FIG) ink allows heart muscle cells printed in the shape of a ventricle to align and beat in coordination like a human heart chamber.
> “People have been trying to replicate organ structures and functions to test drug safety and efficacy as a way of predicting what might happen in the clinical setting,” says Suji Choi, research associate at SEAS and first author on the paper. But until now, 3D printing techniques alone have not been able to achieve physiologically-relevant alignment of cardiomyocytes, the cells responsible for transmitting electrical signals in a coordinated fashion to contract heart muscle.
Hydrogel and gelatin.
Regenerative medicine: https://en.wikipedia.org/wiki/Regenerative_medicine
Regeneration in humans > Induced regeneration: https://en.wikipedia.org/wiki/Regeneration_in_humans#Induced...
Adding clean energy to the US power grid just got a lot easier
From TA https://www.theverge.com/2023/7/28/23811031/ferc-renewable-e... :
> As it is now, it takes an average of five years for a new energy project to connect to the grid. There’s a huge backlog of more than 2,000 gigawatts of clean energy generation and storage that’s just waiting in line for approval. That’s about as much capacity as the nation’s existing power plants have for generating electricity today.
> [...] To clear the backlog, the new federal rule will require grid managers to assess projects in clusters instead of one at a time. They’ll also face firm deadlines and penalties for failing to finish interconnection studies on time. The new rule prioritizes projects that are the farthest along in development and also includes new requirements for project developers, like financial deposits to discourage them from proposing projects that might not pull through.
Also from TA:
> Today, renewable energy makes up just over 20 percent of the US electricity mix. A five-year wait time isn’t going to cut it if the Biden administration wants to reach its goal of achieving a 100 percent clean power grid by 2035. Added capacity would also come in handy during sweltering heatwaves like those of this summer, when electricity demand spikes for air conditioning. Two of the nation’s largest grid operators issued alerts about potential energy shortages for that reason this week.
From "Computer simulation provides 4k scenarios for a climate turnaround" (2023) https://news.ycombinator.com/item?id=36578412 :
> Highlights:
> • Probabilistic assessment of 4000 decarbonisation pathways with the ETSAP-TIAM.
> • Delaying the action for the 2 °C target costs similar to acting now for the 1.5 °C.
> • Demand electrification is higher in 2050 than today in all model runs.
> • Hydrogen is used in 99% of the model runs to meet the 1.5 °C target.
Why SQLite does not use Git (2018)
I'm glad Fossil works for them, but this line bothered me:
> In contrast, Fossil is a single standalone binary which is installed by putting it on $PATH. That one binary contains all the functionality of core Git and also GitHub and/or GitLab. It manages a community server with wiki, bug tracking, and forums, provides packaged downloads for consumers, login managements, and so forth, with no extra software required
git, for all its issues, is not bundling the kitchen sink. I do prefer the "do one thing and do it well" approach.
Git actually is bundling a lot of stuff you probably don't realize. Run 'git instaweb' for example and it will spin up a local CGI server and perl CGI script to give a very simple web UI: https://git-scm.com/docs/git-instaweb Fossil isn't much different in core functionality here.
There is a ton of email client integration and functionality in git too that most people who only use GitHub probably have absolutely no idea exists or even why it's there. Stuff like sending email patches right from git: https://git-scm.com/docs/git-send-email
FWIW, that isn't actually part of the core git binary. Because unlike fossil, git is a collection of executables, not just one.
Yes, which is why git is a pain in the butt to install and you have to rely on your OS packages (which include all the kitchen-sink stuff) or a GUI installer with all the necessary dependencies, vs. Fossil, which is just: download executable and done.
Securing the software supply chain for software updaters and VCS systems means PKI and/or key distribution, cryptographic hashes and signatures, file manifests with per-archive-file checksums, and DAC extended filesystem attributes for installed files. Ironically, there's more to it than just `curl`'ing a binary into place and then forgetting to update it.
And that is why package managers.
And sigstore; for centralized software artifact (package,) hashes: https://sigstore.dev/
Wireless charging technique boosts long-distance efficiency to 80%
Does wireless power transmission increase atmospheric heat and adversely affect climate change?
Wireless power transfer: https://en.wikipedia.org/wiki/Wireless_power_transfer
First time I've read about it; so should actual Qi wireless charging also increase atmospheric heat?
Anyway, current wireless charging is less efficient, and the "dispersed" power is transformed into heat, so yes, it's always less environmentally friendly. The dissipated power per device is very small, but it adds up at a very big scale.
Space-based solar power > Safety: https://en.wikipedia.org/wiki/Space-based_solar_power#Safety :
> At the Earth's surface, a suggested microwave beam would have a maximum intensity at its center, of 23 mW/cm2 (less than 1/4 the solar irradiation constant), and an intensity of less than 1 mW/cm2 outside the rectenna fenceline (the receiver's perimeter).[91] These compare with current United States Occupational Safety and Health Act (OSHA) workplace exposure limits for microwaves, which are 10 mW/cm2
If the beam intensity is lower than the solar irradiation constant, what is the advantage over terrestrial solar/TPV?
I think that intensity could heat a metal object?
Is all of the inefficiency of wireless power transfer systems due to thermal conversion/loss?
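The quoted figures can be sanity-checked with a quick unit conversion (taking the solar irradiation constant as roughly 1361 W/m², a standard value not stated in the thread):

```python
# Quick unit check of the quoted beam intensity against the solar constant.
# Assumption: solar irradiation constant ~1361 W/m^2 (standard value).
SOLAR_CONSTANT_W_PER_M2 = 1361.0

# Convert W/m^2 to mW/cm^2: 1 W/m^2 = 1000 mW / 10_000 cm^2 = 0.1 mW/cm^2
solar_mw_per_cm2 = SOLAR_CONSTANT_W_PER_M2 * 0.1  # ~136.1 mW/cm^2

beam_center_mw_per_cm2 = 23.0   # quoted maximum at the rectenna center
osha_limit_mw_per_cm2 = 10.0    # quoted OSHA workplace exposure limit

ratio = beam_center_mw_per_cm2 / solar_mw_per_cm2
print(f"beam/solar ratio: {ratio:.2f}")  # ~0.17, i.e. less than 1/4
print(beam_center_mw_per_cm2 > osha_limit_mw_per_cm2)  # True: above OSHA limit
```

So the quoted numbers are internally consistent: below a quarter of the solar constant at beam center, yet above the occupational exposure limit.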
The first room-temperature ambient-pressure superconductor?
Room-temperature superconductor: https://en.wikipedia.org/wiki/Room-temperature_superconducto...
Superconducting quantum computing: https://en.wikipedia.org/wiki/Superconducting_quantum_comput...
AI/ML development, training and inference using Python and Jupyter
We just published an AI/ML development kit on AWS. This VM comes with the setup and installation for collaborative development, training, and inference of AI/ML models using Jupyter notebooks, Python libraries, and NVIDIA CUDA drivers. Check it out.
"Python Machine Learning (3rd Ed.) " https://github.com/rasbt/python-machine-learning-book-3rd-ed...
Vscode.dev: Local Development with Cloud Tools
FYI – you can go to github.dev (or press "." while browsing any repository) to get the same VS Code online experience, additionally with the repo automatically loaded into it.
Something else I find very confusing is that vscode.dev & github.dev are both free and run the editor fully locally in your browser. Github Codespaces on the other hand provisions a VM behind the scenes so you can run containers, build tooling and whatever else alongside the editor. That one charges by the hour.
github.dev now has a "Continue With..." call-to-action which spins up a Codespace, so that differentiates it a bit.
From https://github.com/jupyterlite/jupyterlite/issues/1086#issue... :
vscode://vscode.git/clone?url=https%3A%2F%2Fgithub.com%2FMicrosoft%2Fvscode-vsce.git
vscode://file/path/to/file/file.ext:55:20
vscode://file/path/to/project/
https://vscode.dev/github/organization/repo
https://vscode.dev/github/python/cpython https://vscode.dev/azurerepos/organization/project/repo
https://vscode.dev/azurerepos/organization/project/repo
Fancy quartz countertops are killing people who make them, doctors say
I’m curious whether quartzite carries the same risk? They only mention marble and granite as having a lower health risk
Retentive Network: A Successor to Transformer Implemented in PyTorch
"Retentive Network: A Successor to Transformer for Large Language Models" (2023) https://arxiv.org/abs/2307.08621 https://www.arxiv-vanity.com/papers/2307.08621/ https://huggingface.co/papers/2307.08621 https://scholar.google.com/scholar?hl=en&as_sdt=0%2C33&q=Ret...
- https://news.ycombinator.com/item?id=36831956 2 days ago
ElKaWe – Electrocaloric heat pumps
Natural refrigerant https://en.wikipedia.org/wiki/Natural_refrigerant :
> Natural refrigerants are considered substances that serve as refrigerants in refrigeration systems (including refrigerators, HVAC, and air conditioning). They are alternatives to synthetic refrigerants such as chlorofluorocarbon (CFC), hydrochlorofluorocarbon (HCFC), and hydrofluorocarbon (HFC) based refrigerants. Unlike other refrigerants, natural refrigerants can be found in nature and are commercially available thanks to physical industrial processes like fractional distillation, chemical reactions such as Haber process and spin-off gases. The most prominent of these include various natural hydrocarbons, carbon dioxide, ammonia, and water.[1] Natural refrigerants are preferred actually in new equipment to their synthetic counterparts for their presumption of higher degrees of sustainability. With the current technologies available, almost 75 percent of the refrigeration and air conditioning sector has the potential to be converted to natural refrigerants
Heat pump and refrigeration cycle > Thermodynamic cycles: https://en.wikipedia.org/wiki/Heat_pump_and_refrigeration_cy... :
> According to the second law of thermodynamics, heat cannot spontaneously flow from a colder location to a hotter area; work is required to achieve this.[3] An air conditioner requires work to cool a living space, moving heat from the interior being cooled (the heat source) to the outdoors (the heat sink). Similarly, a refrigerator moves heat from inside the cold icebox (the heat source) to the warmer room-temperature air of the kitchen (the heat sink). The operating principle of an ideal heat engine was described mathematically using the Carnot cycle by Sadi Carnot in 1824. An ideal refrigerator or heat pump can be thought of as an ideal heat engine that is operating in a reverse Carnot cycle.[4]
> Heat pump cycles and refrigeration cycles can be classified as vapor compression, vapor absorption, gas cycle, or Stirling cycle types.
Heat pump: https://en.wikipedia.org/wiki/Heat_pump :
> [...] When in heating mode, a refrigerant at the warmer temperature is compressed, becoming hot. Its thermal energy can be transferred to the cooler space. After being returned to the warmer space the refrigerant is decompressed — evaporated. It has delivered some of its thermal energy, so returns colder than the environment, and can again take up energy from the air or the ground in the warm space, and repeat the cycle.
> Air source heat pumps are the most common models, while other types include ground source heat pumps, water source heat pumps and exhaust air heat pumps. Large-scale heat pumps are also used in district heating systems.[2]
> The efficiency of a heat pump is expressed as a coefficient of performance (COP), or seasonal coefficient of performance (SCOP). The higher the number, the more efficient a heat pump is. When used for space heating, heat pumps are typically much more energy-efficient than electric resistance and other heaters. Because of their high efficiency and the increasing share of fossil-free sources in electrical grids, heat pumps can play a key role in climate change mitigation.[3][4] Consuming 1 kWh of electricity, they can transfer 3 to 6 kWh of thermal energy into a building.[5] The carbon footprint of heat pumps depends on how electricity is generated, but they usually reduce emissions in mild climates.[6] Heat pumps could satisfy over 80% of global space and water heating needs with a lower carbon footprint than gas-fired condensing boilers: however, in 2021 they only met 10%.[7]
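The quoted 3 to 6 COP range can be put next to the ideal (Carnot) limit with a one-line formula; this is a sketch of the textbook bound, not a model of any particular heat pump:

```python
# Ideal (Carnot) coefficient of performance for a heat pump in heating mode:
# COP = T_hot / (T_hot - T_cold), absolute temperatures in kelvin.
def carnot_cop_heating(t_hot_k: float, t_cold_k: float) -> float:
    return t_hot_k / (t_hot_k - t_cold_k)

# Example: 20 C indoors (293.15 K), 0 C outdoors (273.15 K).
ideal = carnot_cop_heating(293.15, 273.15)
print(f"ideal COP: {ideal:.1f}")  # ~14.7; real units reach the quoted 3-6
```

The gap between the ideal ~14.7 and real-world 3 to 6 comes from compressor, heat-exchanger, and cycle losses; the bound also shows why COP drops as the outdoor temperature falls.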
>> [...] When in heating mode, a refrigerant at the warmer temperature is compressed, becoming hot. Its thermal energy can be transferred to the cooler space. After being returned to the warmer space the refrigerant is decompressed — evaporated. It has delivered some of its thermal energy, so returns colder than the environment, and can again take up energy from the air or the ground in the warm space, and repeat the cycle.
Great to see that even Wikipedia struggles with this :-)
This is 100% wrong.
Transferring heat from a warm space to a cold space happens on its own and does not need a heat pump. In the contrary, you can even extract mechanical work from this process.
Heating mode means heating a warm space by transferring heat from a colder space, which requires work.
If you convert all of the local thermal gradient to electricity, there's no work left to do without a pressure gradient in the loop or passive exchange to keep a thermal gradient there?
According to Superfluid Quantum Gravity, black holes and the quantum foam have a gravitational pressure gradient; vortices given density.
Stem Formulas
Clean!
I submitted the first formula that popped into my head from uni years ago: the Generalised Linear Model definition from one of the later stats courses
This project vaguely reminds me of Gradshteyn and Ryzhik (https://en.wikipedia.org/wiki/Gradshteyn_and_Ryzhik)
Rosetta Code: https://rosettacode.org/
> Complexity Zoo > Petting Zoo > {P, NP, PP,}, Modeling Computation > Deterministic Turing Machine https://complexityzoo.net/Petting_Zoo#Deterministic_Turing_M...
From https://news.ycombinator.com/item?id=31719696 :
> Fixed-point combinator > Y Combinator, Implementations in other languages: https://en.wikipedia.org/wiki/Fixed-point_combinator
> Y-combinator in like 100 languages: https://rosettacode.org/wiki/Y_combinator #Python
Quantum Algorithm Zoo by Microsoft Quantum: https://quantumalgorithmzoo.org/
From https://westurner.github.io/hnlog/#comment-30784573 :
> Quantum Monte Carlo,
QFT and iQFT; Inverted Quantum Fourier Transform: https://en.wikipedia.org/wiki/Quantum_Fourier_transform
From "Common Lisp names all sixteen binary logic gates" https://news.ycombinator.com/item?id=32804463 :
- Quantum_logic
The matrix representations of quantum gates and operators are neat too; https://www.google.com/search?q=matrix+representations+of+qu...
Lean mathlib may already have the proofs for many of these Stem Formulas (in LaTeX)? These formulas as SymPy would also be useful.
latex2sympy parses LaTeX and generates SymPy symbolic CAS Python code (w/ ANTLR) and is now merged into SymPy core, but you must install ANTLR first because it's an optional dependency. Then, sympy.lambdify will compile a symbolic expression for use with e.g. NumPy, JAX, TensorFlow, or PyTorch.
"Ask HN: Did studying proof based math topics make you a better programmer?" Re: lean mathlib https://news.ycombinator.com/item?id=36463580
[Mathematics in mathlib > A mathlib overview]( https://leanprover-community.github.io/mathlib-overview.html )
From https://news.ycombinator.com/item?id=36159017 :
> sympy.utilities.lambdify.lambdify() https://github.com/sympy/sympy/blob/a76b02fcd3a8b7f79b3a88df... :
>> """Convert a SymPy expression into a function that allows for fast numeric evaluation [with the CPython math module, mpmath, NumPy, SciPy, CuPy, JAX, TensorFlow, SymPy, numexpr,]*
From https://westurner.github.io/hnlog/#comment-19084622 :
> "latex2sympy parses LaTeX math expressions and converts it into the equivalent SymPy form" and is now merged into SymPy master and callable with sympy.parsing.latex.parse_latex(). It requires antlr-python-runtime to be installed. https://github.com/augustt198/latex2sympy https://github.com/sympy/sympy/pull/13706
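A minimal sketch of the pipeline described above (assuming SymPy is installed; `parse_latex` additionally needs the ANTLR runtime, so the expression is built directly here):

```python
import sympy

x = sympy.symbols("x")
# sympy.parsing.latex.parse_latex(r"\frac{x^2}{2}") would yield the same
# expression, but it needs the optional ANTLR runtime; build it directly.
expr = x**2 / 2

# Compile to a plain-Python callable backed by the stdlib math module;
# modules="numpy", "jax", etc. target other numeric backends.
f = sympy.lambdify(x, expr, modules="math")
print(f(4))  # 8.0
```

The same `lambdify` call with a different `modules=` argument retargets the compiled function at the backends listed in the docstring quoted above.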
ENH: 'generate a Jupyter notebook' (nbformat .ipynb JSON) function from this stem formula
ENH: Store/export Stem formula attributes as JSON-LD Linked Data and/or RDFa (RDF in HTML Attributes) .
JSON-LD Playground has examples of JSON-LD, as does https://schema.org/CreativeWork : https://json-ld.org/playground/
Pending are the schema.org RDFS vocabulary MathSolver class and mathExpression property: https://schema.org/MathSolver and https://schema.org/mathExpression
Dbpedia has wikipedia infobox attributes as RDF.
IDK if there's an RDF interface to Wolfram?
ENH: Generate search urls from formula names
Computational Chemistry Using PyTorch
I tried using this last year or the year before. Ended up using DeepChem instead https://github.com/deepchem/deepchem.
That said, I’ve found RDKit to be the singularly most useful toolkit in this area.
Tequila wraps Psi4, Madness, and/or PySCF for Quantum Chemistry with Expectation Values: https://github.com/tequilahub/tequila#quantumchemistry
Vimspector – the Vim debugger rules all
The actual title is "Vimspector - A multi-language debugging plugin for Vim".
It is a UI around DAP: https://github.com/puremourning/vimspector#what-vimspector-i...
Microsoft made LSP and it does DAP, both are really useful for coders.
In fact even GDB added DAP support recently: https://www.phoronix.com/news/GDB-Debug-Adapter-Protocol
Bad numbers in the “gzip beats BERT” paper?
Probably disappointing to the authors but an excellent rebuttal.
This is the sort of mistake that's awfully easy to make in ML. The unfortunate thing about the field is that subtle methodological errors often cause subtle failures rather than catastrophic failures as we're used to in many other branches of engineering or science. You can easily make a system slightly worse or slightly better by contaminating your training set with bad data or accidentally leaking some information about your target and the ML system will take it in stride (but with slightly contaminated results).
This result makes sense to me because as much as I would like it to be true, applying existing compression algorithms to ML feels like too much of a "free lunch". If there was any special magic happening in compression algorithms we'd use compression algorithms as encoders instead of using transformers as compressors.
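For context, the paper's method is (roughly) k-nearest-neighbor classification under a gzip-based normalized compression distance. A minimal stdlib sketch on toy data (the texts and labels here are made up for illustration):

```python
import gzip

def clen(s: str) -> int:
    # Compressed length in bytes.
    return len(gzip.compress(s.encode()))

def ncd(a: str, b: str) -> float:
    # Normalized compression distance: smaller when a and b share structure.
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query: str, labeled: list[tuple[str, str]]) -> str:
    # 1-nearest-neighbor under NCD (the paper used k-NN; k=1 for brevity).
    return min(labeled, key=lambda pair: ncd(query, pair[0]))[1]

train = [
    ("the cat sat on the mat and purred at the cat", "animals"),
    ("the stock market index fell as traders sold shares", "finance"),
]
print(classify("the cat sat on the mat", train))  # "animals" on this toy data
```

The rebuttal's point is about how ties and the k-NN accuracy calculation were scored, not about this distance itself.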
It's true in many experiments. The desire to get the result you want can often overwhelm the need to validate what you are getting.
Especially true when the results confirm any pre-existing thinking you may have.
Yep, confirmation bias. Luckily helped with peer review!
Hasn’t this paper made it through peer review?
Yeah it was published at ACL ( https://aclanthology.org/2023.findings-acl.426/ ) which is one of the most prestigious conferences in NLP. So kinda disappointing.
But paper reviewers are usually not supposed to look at the actual source code of the papers, and definitely don't try to reproduce the results. They just read the paper itself, which of course doesn't talk about the error.
Not sure what the best solution is, other than having the most "hyped" papers double verified by researchers on Twitter.
Yeah, it’s not (entirely) the students’ faults that this slipped through peer review. I don’t envy the whiplash they’re going to experience over the next few weeks.
If I was the graduate chair of their department I might schedule a meeting with their supervisor to sort out how this happened.
What about the difference in CPU cost, RAM cost, and GPU training hours, though? What about the comparative Big-E's of the models?
Great topic: model minification and algorithmic complexity
These are good research topics, but then you really need to be comparing to other models in the same class.
The only other super cheap model they compare with is FastText, and FastText beat them quite substantially.
Cython 3.0 Released
From https://cython.readthedocs.io/en/latest/src/changes.html#com... :
> Since Cython 3.0.0 started development, CPython 3.8-3.11 were released. All these are supported in Cython, including experimental support for the in-development CPython 3.12
A PostgreSQL Docker container that automatically upgrades your database
This looks useful.
I've had issues in the past with PostgreSQL as a backend for NextCloud (all running in docker) and blindly pulling the latest PostgreSQL and then wondering why it didn't work when the major version jumped. (It's easy enough to fix once you figure out why it's not running - just do a manual export from the previous version and import the data into the newer version).
However, does this container automatically backup the data before upgrading in case you discover that the newer version isn't compatible with whatever is using it?
> However, does this container automatically backup the data before upgrading ...
Nope. This container is the official Docker Postgres 15.3-alpine3.18 image + the older versions of PostgreSQL compiled into it and some pg_upgrade scripting added to the docker entrypoint script to run the upgrade before starting PostgreSQL.
It goes out of its way to use the "--link" option when running pg_upgrade, to upgrade in-place and therefore avoid making an additional copy of the data.
That being said, this is a pretty new project (about a week old on GitHub), and having some support for making an (optional) backup isn't a bad idea.
I'll have to think on a good way to make that work. Probably needs to check for some environment variable as a toggle or something, for the people who want it... (unsure yet).
Handling backup definitely seems interesting; wondering if it could just be something simple that lets you downgrade if the upgrade fails.
Yeah. Probably going to need to add a bunch more error catching and handling-of-specific-situations. Edge cases being a thing and all. :)
Chapter 26. Backup and Restore: https://www.postgresql.org/docs/current/backup.html
Chapter 26. Backup and Restore > 26.3. Continuous Archiving and Point-in-Time Recovery (PITR) > 26.3.4. Recovering Using a Continuous Archive Backup: https://www.postgresql.org/docs/current/continuous-archiving...
IIRC there are fancier ways than pg_dump to do Postgres backups that aren't postgres native PITR?
gh topic postgresql-backup: https://github.com/topics/postgresql-backup
- pgsql-backup.sh: https://github.com/fukawi2/pgsql-backup/blob/develop/src/pgs...
- https://github.com/SadeghHayeri/pgkit#backup https://github.com/SadeghHayeri/pgkit/blob/main/pgkit/cli/co... :
$ sudo pgkit pitr backup <name> <delay>
> Recover: This command is used to recover a delayed replica to a specified point in time between now and the database's delay amount. The time can be given in the YYYY-mm-ddTHH:MM format. The latest keyword can also be used to recover the database up to the latest transaction available:
$ sudo pgkit pitr recover <name> <time>
$ sudo pgkit pitr recover <name> latest
> The database will then start replaying the WAL files. Its progress can be tracked through the log files at /var/log/postgresql/.
- "PostgreSQL-Disaster-Recovery-With-Barman" https://github.com/softwarebrahma/PostgreSQL-Disaster-Recove... :
> The solution architecture chosen here is a 'Traditional backup with WAL streaming' architecture implementation (Backup via rsync/SSH + WAL streaming). This is chosen as it provides incremental backup/restore & a bunch of other features.
Glossary of backup terms: https://en.wikipedia.org/wiki/Glossary_of_backup_terms
Continuous Data Protection > Continuous vs near continuous: https://en.wikipedia.org/wiki/Continuous_Data_Protection#Con...
Thanks. I hadn't come across pgkit before. :)
A default option is to warn about the possibility and then just leave the admins to do their own backups (which they should be doing anyway).
What do you reckon the right place(s) to warn people would be?
As an interactive prompt before mutating the data, with a `-y` to bypass the interactive check (and in the docs for `-y`)
Nah. I can't see anything that needs interaction as being workable, as this is supposed to be an "automatic upgrade" thing.
New study gives clues on why exercise helps with inflammation
Exercise powers the lymphatic system which increases the rate at which wastes are removed from tissues.
Exercise correlates with subsequent increased levels of endocannabinoids.
"The anti-inflammatory effect of bacterial short chain fatty acids is partially mediated by endocannabinoids" (2021) https://www.tandfonline.com/doi/full/10.1080/19490976.2021.1... https://www.medicalnewstoday.com/articles/exercise-lowers-in...
"The Endocannabinoid System and Physical Exercise" (2022) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9916354/
What are the relations between exercise and diet (and [health] conscientiousness)?
/? Omega 6:3 ratio (and inflammation): https://www.google.com/search?q=omega+6+3+ratio
A third of North America’s birds have vanished
One thing I dislike the most about these problems – and the information/media/activism around them – is the "solutioning".
Yes, we should recycle and reduce carbon emissions and learn to live in a more integrated fashion with earth. But the problem is so diffuse and requires substantive work against powerful forces (government, business, apathy).
What I'd like to see more of, when presented with these sorts of problems, is viable solutions proposed that can be implemented bottom-up, and in the following hierarchy:
1. Regular Individuals like me e.g. "build a bird-friendly yard"
2. Influential individuals like architects, urban planners,
3. Small groups, e.g. birdwatchers, Boy Scouts, churches, schools
4. Small towns & neighborhoods, e.g. "build bird friendly parks"
All too often the "solutioning" defaults to the highest concentration of power, e.g. government/regulation – but that obviously isn't working at the speed it has to, and I suspect it's because it's very easy to say "they should/we should" instead of "I will/we will".
"Bird-friendly yard" is probably the most counterproductive of these. Habitat loss is far and away the biggest problem that birds face. Anyone having a yard is the problem. People must learn to congregate in compact development forms and leave the land for the animals, if they care about having animals.
Probably the only easy life change individuals can do is not eat meat and not buy ethanol. That would return tons of land to natural uses.
"Air pollution particles may be cause of dramatic drop in global insect numbers" (2023) https://www.sciencedaily.com/releases/2023/07/230712124747.h... :
> Researchers report that an insect's ability to find food and a mate is reduced when their antennae are contaminated by particulate matter from industry, transport, bushfires, and other sources of air pollution.
So catalytic converters, catalytic combustors, AGR Acidic Gas Reduction, Net Zero homes and vehicles etc. may be pretty impactful.
(In addition to planting and allowing clover and dandelions for the bees; and leaving longer grass for fireflies and ticks and CO2 capture and grasshoppers)
An herbicide that doesn't kill clover would make for prettier lawns; but the non-grass plants in the lawn are the food of the insects, which are the food of the birds.
Building a safer FIDO2 key with privilege separation and WebAssembly
The USB driver itself can not access arbitrary memory. But it may be able to program the DMA controller of the USB peripheral to access arbitrary memory. So the WebAssembly sandboxing of a driver alone is not enough. You still need some hardware mechanism like an SMMU. Or a trusted module that abstracts the DMA controller.
Indeed we thought this would be a challenge and I didn’t explain this aspect in the blog post. But on this chip, DMA is its own peripheral and the DMA peripheral is not used by the USB driver. Instead, the USB peripheral and the main CPU share a small memory region. The USB peripheral is then programmed in terms of offsets into this shared memory region, rather than physical memory addresses—-the USB peripheral does not have access to all of physical memory. This is discussed at the bottom of page 48 of the thesis itself [1].
This saved a lot of trouble, but in intro work on this I was using another chip (nRF52840) that worked the way you describe. To safely handle DMA in that case, without an IOMMU, we had to add somewhat complex reasoning that looked at each memory read and write to see if it was modifying a DMA control register and reject the write if it could lead to unsafe behavior. More info is on pages 52-55 of the thesis PDF.
This was pretty messy, so it was fortunate that the chip we used had a different plan. Let me know if I’m misunderstanding you!
[1]: https://pdos.csail.mit.edu/papers/bkettle-meng.pdf#page48
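The per-write check described for the nRF52840 case can be sketched as a simple address-range guard; the register ranges below are hypothetical placeholders, not the chip's actual memory map:

```python
# Sketch of the per-write guard described above: reject sandboxed stores
# that would land in DMA control registers. The ranges are hypothetical,
# illustrative stand-ins for the real nRF52840 EasyDMA registers.
DMA_CTRL_RANGES = [
    (0x4000_0500, 0x4000_0520),  # hypothetical DMA pointer/count registers
]

def write_allowed(addr: int, size: int = 4) -> bool:
    # Allow the store only if [addr, addr+size) misses every guarded range.
    return not any(start < addr + size and addr < end
                   for start, end in DMA_CTRL_RANGES)

print(write_allowed(0x2000_0000))  # True: ordinary RAM
print(write_allowed(0x4000_0504))  # False: would reprogram the DMA engine
```

As the thesis notes, doing this on every memory write is costly, which is why the chip whose USB peripheral only sees offsets into a shared region avoided the problem entirely.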
Are there NX pages/flags?
NX bit: https://en.wikipedia.org/wiki/NX_bit
Tagged architecture / Memory tagging: https://en.wikipedia.org/wiki/Tagged_architecture & type unions
Harvard architecture > memory details > Contrast with modified Harvard architecture: https://en.wikipedia.org/wiki/Harvard_architecture#Contrast_...
IIUC Ideally there should be an NX bit on pages, registers, names, and/or variables; and the programming language supports it.
(IIRC, with CPython the NX bit doesn't work when any imported C extension has nested functions / trampolines?)
Matrices and Graph
This approach reminds me of RedisGraph[1] (which is now unfortunately EoL).
"RedisGraph is the first queryable Property Graph database to use sparse matrices to represent the adjacency matrix in graphs and linear algebra to query the graph."
RDF-star and SPARQL-star are basically Property Graph interfaces if you don't validate with e.g. RDFS (schema.org,), SHACL, json-ld-schema (jsonschema+shacl), and/or OWL.
Justify Linked Data; https://5stardata.info/
W3C RDF-star and SPARQL-star > 2.2 RDF-star Graph Examples: https://w3c.github.io/rdf-star/cg-spec/editors_draft.html#rd...
def to_matrices(g: rdflib.MultiDiGraph) -> Union[Matrix, Tensor]
rdflib.MultiDiGraph:
https://networkx.org/documentation/stable/reference/classes/...
Multigraph: https://en.wikipedia.org/wiki/Multigraph :
> In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges[1]), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge ... [which requires multidimensional matrices, netcdf (pydata/xarray,), tensors, or a better implementation of a representation; and edge reification in RDF]
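One common representation for such labeled multigraphs is one adjacency matrix per edge label (stacking them gives a |labels| × |V| × |V| tensor, which is roughly how RedisGraph's per-relation sparse matrices work). A toy sketch in plain Python, with illustrative node and label names:

```python
# Sketch: a labeled multigraph as one boolean adjacency matrix per edge
# label; stacking the matrices gives a |labels| x |V| x |V| tensor.
nodes = ["alice", "bob", "carol"]
idx = {n: i for i, n in enumerate(nodes)}
edges = [("alice", "knows", "bob"),
         ("alice", "knows", "carol"),
         ("bob", "worksWith", "carol")]

matrices: dict[str, list[list[int]]] = {}
for s, label, o in edges:
    m = matrices.setdefault(label, [[0] * len(nodes) for _ in nodes])
    m[idx[s]][idx[o]] = 1

print(matrices["knows"][idx["alice"]])    # [0, 1, 1]
print(matrices["worksWith"][idx["bob"]])  # [0, 0, 1]
```

Parallel edges with the same label would need edge multiplicities (integer entries) or RDF-style reification, which is the limitation the bracketed note above points at.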
From "Why tensors? A beginner's perspective" https://news.ycombinator.com/item?id=30629931 :
> https://en.wikipedia.org/wiki/Tensor
... Tensor product of graphs: https://en.wikipedia.org/wiki/Tensor_product_of_graphs
Hilbert space: https://en.wikipedia.org/wiki/Hilbert_space :
> The inner product between two state vectors is a complex number known as a probability amplitude.
Ziplm: Gzip-Backed Language Model
> Transformers (GPT-3, Copilot,) are built upon an expensive self-attention network. Instead, or in addition, FFT looks to be 7x GPU-cheaper.
"Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs" (2021) https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-ba...
The next step of Conway's Game can be calculated with FFT and also 2D Convolution.
Convolution > Visual explanation: https://en.wikipedia.org/wiki/Convolution
"Convolutions / Why X+Y in probability is a beautiful mess" by 3blue1brown https://youtu.be/IaSGqQa5O-M
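A minimal sketch of the convolution view of Life: one step computed by counting neighbors with a 3×3 kernel of ones minus the center (written here as an explicit loop with wraparound; an FFT implementation would compute the same neighbor counts via pointwise products of transforms):

```python
# One Game of Life step as a 2-D convolution: the neighbor count at each
# cell is the grid convolved with a 3x3 all-ones kernel, minus the cell.
def life_step(grid: list[list[int]]) -> list[list[int]]:
    h, w = len(grid), len(grid[0])
    def neighbors(r: int, c: int) -> int:
        return sum(grid[(r + dr) % h][(c + dc) % w]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    # Birth on 3 neighbors; survival on 2 or 3.
    return [[1 if (n := neighbors(r, c)) == 3 or (n == 2 and grid[r][c])
             else 0 for c in range(w)] for r in range(h)]

# Blinker oscillator: a vertical bar becomes horizontal after one step.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
print(life_step(blinker)[2])  # [0, 1, 1, 1, 0]
```

Because convolution is pointwise multiplication in the frequency domain, the FFT route pays O(n log n) per step regardless of kernel size, which is why it scales to much larger kernels (e.g. Lenia-style continuous Life variants).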
Google now requires and lists phone number in Play Store listings
What percentage of existing app developers on the Play Store do not have a DUNS number? Do they need to incorporate an LLC in order to get a DUNS number for their open source project?
In 2021, Google reduced its cut to 15% of app developers' first $1M in revenue, down from 30%. Google takes a 30% cut of all other Play Store app and in-app revenue, and you may only use Google Payments in your app (just like Apple).
$150,000 out of $1,000,000 is a lot of money to host downloads, comments, and scan APKs. Shopping malls don't even demand a 15%-30% cut of revenue.
F-droid charges $0 to host APK app downloads and comments but doesn't scan APKs IIUC? https://f-droid.org/en/docs/Security_Model/
How to donate to F-Droid: https://f-droid.org/en/donate/
The platform fee of 30% is completely independent from the cost of running the store. The value of the Play store is the reach it provides app developers.
Are there estimates of Play Store cost models and profit margins?
From https://news.ycombinator.com/item?id=28382186 :
>> JOSS (Journal of Open Source Software) has managed to get articles indexed by Google Scholar [rescience_gscholar]. They publish their costs [joss_costs]: $275 Crossref membership, DOIs: $1/paper:
>> Assuming a publication rate of 200 papers per year this works out at ~$4.75 per paper
> [joss_costs]: https://joss.theoj.org/about#costs
But that's for ScholarlyArticles without automated peer review.
(And then they need somewhere else to host their datasets, because journals aren't CDNs. And then they need someone else to host repo2docker container instances or repo2jupyterlite in WASM.)
When I loan my money to a bank, they go invest it and give me like a 1% interest rate.
SUSE is forking RHEL
I'm continually baffled that so many companies follow RHEL compatibility to this day.
I've been using Linux for nearly 30 years. Admining as a profession for at least a quarter of that. 20 years ago, it made a ton of sense. Today, less so.
The 'stable version but we backport patches' mantra doesn't make any sense today. I can't even describe how many things have broken that you can't find an answer for, because it was some RH-specific patch.
Between Debian, Nix, Arch, and others, I can't figure out why RHEL compat is so desirable. I'll go as far as to say that my Arch boxes have been far more predictable than my RHEL boxes.
> I can't figure out why RHEL compat is so desirable
RHEL compat per se is irrelevant; it's the 10 years of guaranteed maintenance enterprise customers are after, plus a path forward for their third-party software investments only certified on RHEL (because of said guarantees) and internal IT deployment/admin processes. Despite making the rounds on HN, enterprise and other commercial users couldn't give a flying fuck about io_uring, systemd (up until RHEL 7, when they were forced to), namespaces, and other Linux "innovations", which in fact are just annoyances next to contracts for 30+ years of ongoing maintenance of an age-old POSIX core operating system once developed in a couple of months and praised for its minimalism.
If LTS contracts are that lucrative, are the new Stream release models missing the market?
Are there time-to-patch metrics for comparison with and without stream as compared with IDK Fedora git?
Rpms/kernel explains: https://src.fedoraproject.org/rpms/kernel :
> The kernel is maintained in a source tree rather than directly in dist-git. The specfile is maintained as a template in the source tree along with a set of build scripts to generate configurations, (S)RPMs, and to populate the dist-git repository.
Here are fedora's kernel patches: https://gitlab.com/cki-project/kernel-ark/-/commits/ark-patc...
RHEL <= 9 has kernel-lt (Long Term) and kernel-ml (mainline) kernel patchsets and packages: https://pkgs.org/search/?q=kernel-lt https://www.google.com/search?q=kernel-lt+kernel-ml
Notable Red Hat Enterprise Linux derivatives: https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux_deriv...
List of commercial products based on Red Hat Enterprise Linux: https://en.wikipedia.org/wiki/List_of_commercial_products_ba...
There used to be a stack of numbered kernel patches in the kernel SRPM Source RPM, but now what is the best way to diff centos-stream-9's main branch with Torvalds/git:main? AFAIU kernel distributors must dist patches because GPLv2; packaging workflows are irrelevant to license terms? https://gitlab.com/redhat/centos-stream/src/kernel/centos-st...
Fedora and many other distros do a lot of valued work, too.
FWICS there are FIPS kernel variants for Ubuntu <= 20.04 LTS (2020) but not 22.04 LTS (2022), and Debian and Ubuntu don't have the selinux policy set that Fedora and RHEL+EPEL have. https://ubuntu.com/kernel
From https://news.ycombinator.com/item?id=36480033 :
> Would it be feasible to sed-replace the RHEL and/or Fedora selinux and container-selinux rulesets for use with other Linux distros?
> "AFAIU only SUSE can run both AppArmor and SELinux?
> And browsers are running as unconfined in selinux with like all major distros; even on ChromiumOS
Act like you added `systemd-nspawn respawn` to every SysV-init script and correctly formatted the epoch time in the correct column of each of the log files to merge and then logship again.
Conda-forge's GitHub PRs to feedstocks model works, but there are no grub or uboot or selinux-policy-targeted or container-selinux or kernel packages in conda-forge or Alpine linux.
containers/container-selinux: https://github.com/containers/container-selinux/tree/main https://src.fedoraproject.org/rpms/container-selinux/blob/ra...
From "Systemd service sandboxing and security hardening" (2022) https://news.ycombinator.com/item?id=29995566 :
> Which distro has the best out-of-the-box output for:

    systemd-analyze security

And e.g. these in systemd units:

    SystemCallFilter=@basic-io
    SystemCallLog=~@basic-io
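As a hedged sketch of what such directives look like in context (the service name and binary path are hypothetical; note that `@basic-io` alone is typically too narrow for a real daemon, so this example filters on the broader `@system-service` group and logs everything outside it):

```ini
# /etc/systemd/system/example.service -- illustrative fragment only;
# see systemd.exec(5) for what each directive does.
[Service]
ExecStart=/usr/local/bin/example-daemon
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
SystemCallFilter=@system-service
SystemCallLog=~@system-service
```

Running `systemd-analyze security example.service` then scores the unit's remaining exposure.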
Experiments with plane-filling curves and Fourier transform
Wow this is a cool idea. I wonder if it would also be interesting to analyze the aperiodic tilings, including monotiles, that have popped up recently.
They are often constructed that way.
Aperiodic lattices in the plane arise from projection of the periodic grid in R^5 into an inclined plane (with irrational coefficients). Now in R^5 you have the famous Poisson summation formula, that says that the Fourier transform of a dirac comb on the grid is a dirac comb. By projecting this Poisson formula in the spatial and frequential domains, you obtain an aperiodic Poisson formula in dimension 2. Thus the Fourier transform of an aperiodic lattice is another aperiodic lattice.
Reciprocal lattice: https://en.wikipedia.org/wiki/Reciprocal_lattice :
> In physics, the reciprocal lattice represents the Fourier transform of another lattice. The direct lattice or real lattice is a periodic function in physical space, such as a crystal system (usually a Bravais lattice). The reciprocal lattice exists in the mathematical space of spatial frequencies, known as reciprocal space or k space, where `k` refers to the wavevector.
> [...] The reciprocal lattice plays a fundamental role in most analytic studies of periodic structures, particularly in the theory of diffraction. In neutron, helium and X-ray diffraction, due to the Laue conditions, the momentum difference between incoming and diffracted X-rays of a crystal is a reciprocal lattice vector. The diffraction pattern of a crystal can be used to determine the reciprocal vectors of the lattice. Using this process, one can infer the atomic arrangement of a crystal.
Was wondering about biomedical NIRS applications and crystallography molecular identification and just found this. Cool. How are aperiodic and reciprocal lattices different from any molecule (or subatomic particles in a field of superfluidic vortices in a fluid with viscosity<=1)?
Is this potentially part of how to make a medical tricorder (and win the Tricorder XPRIZE)?
Algae powers computer for a year using only light and water (2022)
currently hugged. https://web.archive.org/web/20220624095811/https://www.anthr...
From the (archived) article:
> In the new work published in Energy & Environmental Science, researchers at Cambridge University harnessed algae’s ability to produce electricity during photosynthesis. They put blue-green algae, also called cyanobacteria, into a see-through case made of plastic and aluminum. Then they placed it on a windowsill and connected it to the microprocessor, which they had programmed to run for cycles of 45 minutes followed by 15 minutes of rest.
"Powering a microprocessor by photosynthesis" (2022) https://pubs.rsc.org/en/content/articlelanding/2022/ee/d2ee0... :
> Abstract: Sustainable, affordable and decentralised sources of electrical energy are required to power the network of electronic devices known as the Internet of Things. Power consumption for a single Internet of Things device is modest, ranging from μW to mW, but the number of Internet of Things devices has already reached many billions and is expected to grow to one trillion by 2035, requiring a vast number of portable energy sources (e.g., a battery or an energy harvester). Batteries rely largely on expensive and unsustainable materials (e.g., rare earth elements) and their charge eventually runs out. Existing energy harvesters (e.g., solar, temperature, vibration) are longer lasting but may have adverse effects on the environment (e.g., hazardous materials are used in the production of photovoltaics). Here, we describe a bio-photovoltaic energy harvester system using photosynthetic microorganisms on an aluminium anode that can power an Arm Cortex M0+, a microprocessor widely used in Internet of Things applications. The proposed energy harvester has operated the Arm Cortex M0+ for over six months in a domestic environment under ambient light. It is comparable in size to an AA battery, and is built using common, durable, inexpensive and largely recyclable materials.
How many wh/kg/cm3?
List of Unix binaries that can be used to bypass local security restrictions
Most of them, by far, rely on sudo. I've got a system without sudo. The only way to execute something as root on my system is to log in, using a U2F security key, from a second system (typically a small laptop sitting next to my workstation). That second system is connected to my workstation on a dedicated LAN used only by these two machines. Firewalls on both (for example, my workstation only allows SSH in from that one LAN, and only from the IP of my laptop on that LAN, etc.). The laptop has no Internet access and WiFi disabled (modules prevented from loading, firewall preventing network access, etc.). The laptop's sole purpose is to run commands as root on the workstation.
The downside is that I allow "root SSH", contrary to every single best practice out there. The upside... well, is that there's no sudo installed on my workstation.
I think no sudo even installed on the machine beats the "never allow root SSH" mantra but YMMV.
(To set up the system: I install it offline, from a Debian installation medium whose checksum and signature I've verified, do my thing as root on the workstation, set up the login using U2F from the laptop, then "lock myself out" of logging in as root directly at the workstation by entering a password so long I'll never enter it, something like "neverenterthisrootpasswordanymoreusethelaptopu2fkeyinstead#$bey8rUtIopTY6". A bit of a pain to enter twice, but then I'm good to go.)
Out of curiosity: What sensitive things does the root account protect on your workstation?
On my desktop (and probably 99% of people's desktops here) getting access to the user account is game over. The password manager? Runs as my user - one ptrace and the key can be extracted. Cookies for all my online services? Sitting right there in the home directory.
The only thing root access would give somebody on my machine is to uninstall some random packages or corrupt my install.
And don't get me wrong, I don't like this situation - I tried running some high-risk programs (browser, Libre Office) under flatpak to achieve at least some separation - but it breaks too many things.
> The only thing root access would give somebody on my machine is to uninstall some random packages or corrupt my install.
While I agree that compromise of an unprivileged account has significant costs, technically superusers do have significantly greater access to the system and so there are greater levels of risk.
RedoxOS is reimplementing Linux userspace utilities in Rust in order to avoid C vulns in suid binaries; like ping, which requires raw sockets for ICMP (though most of us only need the Echo Request capability).
Superuser: https://en.wikipedia.org/wiki/Superuser
Capability-based security: https://en.wikipedia.org/wiki/Capability-based_security
Privilege escalation: https://en.wikipedia.org/wiki/Privilege_escalation
Principle of least privilege: https://en.wikipedia.org/wiki/Principle_of_least_privilege
MAC: Mandatory Access Control: https://en.wikipedia.org/wiki/Mandatory_access_control
> in suid binaries; like ping
ping hasn't required suid in ages, there's net.ipv4.ping_group_range and CAP_NET_RAW.
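For reference, the unprivileged-ping mechanism can be enabled persistently with a one-line sysctl config (the file name is an arbitrary choice):

```
# /etc/sysctl.d/50-unprivileged-ping.conf
# Allow every group to open unprivileged ICMP Echo datagram sockets,
# so ping needs neither suid nor CAP_NET_RAW.
net.ipv4.ping_group_range = 0 2147483647
```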
Can a Rubik's Cube be brute-forced?
> My initial implementation worked at a whopping 200 permutations per second. That’s incredibly slow, and meant that it would take well over a century (in the worst case) for my program to finish. Now, it works at about 190,000 permutations per second, with an estimated worst-case search time of 2 months. (I haven’t encountered a scrambled cube position which has taken more than 10 hours.)
"The algorithmic trick that solves Rubik’s Cubes and breaks ciphers" (2022) https://youtu.be/wL3uWO-KLUE :
> Instead of 10^20 moves to find connecting path(s) towards Solved in a Rubik's cube, this algorithm solves in 2*10^10 from each side (and IIUC the solution side (1*10^10) can be cached?)
The manim code for the video and the c++ solution code are open source: https://github.com/polylog-cs/rubiks-cube-video/blob/main/co...
Interesting video. The approach in it is also inspired by meet-in-the-middle, but suffers from a significant deficiency: It requires a lot of memory—90 GiB—enough that the author had to borrow their university's compute cluster to conduct a solve, even when written and optimized in C++. Likewise, their "triple DES" presentation discussed another way that would require a database that's terabytes in size (specifically, memory that's O(N^0.5)). Nonetheless, I think it's a good exposition of meet-in-the-middle.
What's neat about the method in the post is that it requires O(N^0.5) time and O(N^0.25) memory, which allows running it on an entry-level consumer laptop (hours of time and a few GiB of memory) when implemented even in a dynamically typed, garbage-collected language.
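To make the meet-in-the-middle idea concrete, here is a toy sketch: BFS a few moves deep from both the scrambled and solved states of a hypothetical 4-slot permutation puzzle (a stand-in, not a real cube encoding), and join paths where the two frontiers meet. This illustrates the classic O(N^0.5)-memory variant, not the post's O(N^0.25) refinement:

```python
# Toy meet-in-the-middle solver over a tiny permutation puzzle.
def apply(perm, state):
    # apply(p, s)[i] == s[p[i]]
    return tuple(state[i] for i in perm)

MOVES = {"a": (1, 2, 0, 3),   # 3-cycle of slots 0,1,2
         "b": (0, 2, 3, 1)}   # 3-cycle of slots 1,2,3
INVERSE = {"a": (2, 0, 1, 3), "b": (0, 3, 1, 2)}

def bfs(start, depth):
    """Map each state reachable within `depth` moves to a shortest move string."""
    seen = {start: ""}
    frontier = [start]
    for _ in range(depth):
        nxt = []
        for s in frontier:
            for name, perm in MOVES.items():
                t = apply(perm, s)
                if t not in seen:
                    seen[t] = seen[s] + name
                    nxt.append(t)
        frontier = nxt
    return seen

def solve(scrambled, solved, half_depth=4):
    fwd = bfs(scrambled, half_depth)   # half-depth search from scrambled
    bwd = bfs(solved, half_depth)      # half-depth search from solved
    best = None
    for state, fpath in fwd.items():
        if state in bwd:
            # Walk forward to the meeting state, then undo the backward path.
            perms = [MOVES[c] for c in fpath]
            perms += [INVERSE[c] for c in reversed(bwd[state])]
            if best is None or len(perms) < len(best):
                best = perms
    return best

solved = (0, 1, 2, 3)
scrambled = apply(MOVES["b"], apply(MOVES["a"], solved))
plan = solve(scrambled, solved)
state = scrambled
for perm in plan:
    state = apply(perm, state)
print(state == solved, len(plan))
```

Each side stores only the states within `half_depth` moves, which is the square-root saving relative to a single full-depth search.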
> It requires a lot of memory—90 GiB—enough that the author had to borrow their university's compute cluster to conduct a solve
I remember something like fifteen years ago we needed a 128GB computer to solve IC modelling (timing closure) problems. While this was reasonably available, the only supplier we could find was gaming-oriented, so we had a blinky light tower PC in the corner of the server cupboard.
Anyway, never underestimate the power of a single computer, especially with a GPU.
MacBook Pro laptops can now be had with 96GB of RAM.
And terabyte class databases can be extremely fast if running on SSD.
It's also not hard these days to fire up an instance on AWS or other cloud service which will have up to a few terabytes of RAM.
Plus, you can always just bite the bullet and do it all out of swap - might not be as bad as you fear, and often can optimize whatever you're doing to be more online. There are countless tricks left over from the old days for doing out-of-core operations efficiently, whether disk or tape.
If you're firing up anything on AWS with that kind of RAM, you want to make sure you understand how much that's going to cost. The per second cost is always going to look really low and you will tend not to worry about it. However, once you do the math, you could find that you're easily and quickly spending more money than it would cost to buy a machine to run locally with that much RAM.
It's been fine when I've used terabyte instances for dynamic programming stuff similar to OP. You are babysitting it and waiting for a specific answer, not running a permanent service which will hemorrhage $$$ when you forget to turn it off. At dollars an hour, hard to forget...
So long as you actually pay close attention to it, that's not a problem.
The issue is that many people fail to pay close attention, and that's where the money can really rack up.
[deleted]
I wonder if the algorithm can be run on a GPU, that should lower the time a lot.
Google to explore alternatives to robots.txt
There are a number of opportunities to solve for carbon.txt, security.txt, content licenses, indication of [AI] provenance, and do better than robots.txt; hopefully with JSON-LD Linked Data.
> https://news.ycombinator.com/item?id=35888037 : security.txt, carbon.txt, SPDX SBOM, OSV, JSON-LD, blockcerts
"Google will label fake images created with its A.I." (re: IPTC, Schema.org JSON-LD) (2023) https://news.ycombinator.com/item?id=35896000
From "Tell HN: We should start to add “ai.txt” as we do for “robots.txt”" (2023) https://news.ycombinator.com/item?id=35888037 :
> How many parsers should be necessary for https://schema.org/CreativeWork https://schema.org/license metadata for resources with (Linked Data) URIs?
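A minimal sketch of the schema.org CreativeWork/license metadata in question, as JSON-LD (all values are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "CreativeWork",
  "name": "Example article",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "creator": {"@type": "Person", "name": "Example Author"}
}
```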
If PEP 703 is accepted, Meta can commit three engineer-years to no-GIL CPython
The editorialized title "Meta pledges Three-Year sponsorship for Python if GIL removal is accepted" seems misleading at best. The actual wording was:
> support in the form of three engineer-years [...] between the acceptance of PEP 703 and the end of 2025
It reads much more like a pledge of engineering time from Meta employees than the monetary sponsorship the title implies.
Offering dedicated CPython resources (i.e., engineering hours) is worth more than a monetary sponsorship, in my view.
How do I diff colesbury/nogil-3.12 with python/cpython? https://github.com/colesbury/nogil-3.12
32" E Ink screen that displays daily newspapers on your wall (2021)
I'm a news junkie, and the power and allure of a newspaper's front page have always fascinated me. When I stumbled upon an article written by a Google engineer who had built an e-ink device with the front page of his favorite daily newspaper prominently displayed on his wall, I was a bit jealous. So, I worked with an e-ink company called Visionect in Slovenia to build a version that ships ready to put on your wall. The screen isn't cheap ($2,500) because it's a huge 32" e-ink display. The beautiful glass screen is connected to WiFi, and we made it easy to choose your favorite newspaper front pages to put on display. It's a bit like an ever-changing artwork.
Why do larger e-ink screens cost more to produce?
Something like this for (e.g. Kanban card-based) project management would be cool, though the contrast at a distance could be a concern.
The PineNote is a 9" eInk development device; 3287 cm³ for $400. https://wiki.pine64.org/wiki/PineNote
But it's not going to look as nice as a 32" e-ink display on the wall.
Is there a yellow backlight, or a blueish backlight?
Is there a low-cost way to make a solar roof that varies in solar reflectivity? FWIU e-ink only requires voltage to cause the e-ink particles to flip over? https://en.wikipedia.org/wiki/E_Ink
> Why do larger e-ink screens cost more to produce?
Last I knew the bulk of cost for e-ink displays was just due to the patent holders decision to charge a lot, which is disappointing, but can't go on that much longer...
This is a myth that people keep repeating. E-ink prices have very little to do with patents.
E-ink displays are more expensive because they are utterly niche compared to LCD screens (there are over 6 billion smartphone users in the world right now, and plenty of people own a smartphone, laptop, desktop, TV, tablet, smartwatch, then use a computer at work, interact with a self-checkout kiosk at the supermarket, etc).
There are basically two common use-cases for e-ink right now: e-readers (and nowadays 'e-notes', which have stylus input), and smart-pricetags in supermarkets. Neither of these require large e-ink screens, so there's no existing production line to feed these 32" monitors off. The 32" screens are created by hand-fusing 4 16" screens together, IIRC. This isn't automated because there's not enough demand to automate it.
The reason there's no demand for e-ink screens is that the technology just isn't very useful - I love e-ink, but it's not particularly flexible and can't be used in general-purpose devices that need to display video. It has its niches, but they're niche.
Its lower battery usage is due to the low refresh rate, which is inherently incompatible with displaying video: if an e-reader refreshes once per minute and an LCD screen has to refresh 60 times a second, then the LCD refreshes 3600x more often, which means e-ink saves power even if an e-ink refresh takes 10x or 100x the power. But if the e-ink screen plays video at 60 FPS, then by definition it's refreshing 60 times a second and thus will use 10x or 100x the power of the equivalent LCD screen playing the same video.
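The refresh-rate arithmetic can be sketched numerically; all figures here are illustrative assumptions, not measurements:

```python
# Back-of-envelope power comparison in arbitrary energy units.
# Assumption: one e-ink refresh costs 100x one LCD refresh.
lcd_hz = 60.0
lcd_energy_per_refresh = 1.0
eink_energy_per_refresh = 100.0

lcd_power = lcd_hz * lcd_energy_per_refresh             # video or static
eink_static_power = (1 / 60) * eink_energy_per_refresh  # one refresh per minute
eink_video_power = lcd_hz * eink_energy_per_refresh     # forced to 60 FPS

print(eink_static_power, lcd_power, eink_video_power)
```

Even with a 100x cost per refresh, the static e-ink page wins by a wide margin, and loses just as widely the moment it must refresh at video rates.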
If we want prices to drop substantially, we need more use-cases for e-ink. There's hope here, as the recent fast-ACEP tech (the Gallery 3) is color-tech that doesn't sacrifice half of your contrast and resolution (like Color Filter Arrays do), but is fast enough for an interactive device and is almost as fast to refresh as a monochrome screen (previous ACEP screens took min 7 seconds to refresh the screen, the new tech is amazing). With some refinement, it might be enough to bring e-notes somewhat into the mainstream.
So, all that said: e-ink screens that are 6" or less aren't really any more expensive than an LCD. It's only when you get outside of normal e-reader sizes that the price goes up like crazy.
...not to mention, E Ink isn't the only "e-paper" company: ReInkstone are making their DES screens, which dodge at least some of the patents on microencapsulated electrophoretic displays, as their cofferdam tech doesn't use microcapsules at all and instead integrates the seals into the display.
Git and Jupyter Notebooks Guide
Curious that they discuss several options, but ignore the totally obvious one: just use jupytext [0]. Jupytext is a (tiny) jupyter extension that reads/writes notebooks as python files, with text cells being represented as comments. With jupytext, you do away with the stupid .ipynb format. As long as you don't need to save the cell outputs, which is the case for version control, jupytext is the way to go.
People: pip install jupytext. All your python files will become notebooks, and your notebooks will become python files.
What happens to the outputs in this case? I found the outputs to be both the most useful parts of notebooks, but also the most troublesome for diffing and versioning.
Why would you commit the outputs into git? That would be like committing compiled binary objects or pdfs. Of course the outputs are useful, but you just want to commit the sources.
The .ipynb stores inputs and outputs together in an unholy way. It is much cleaner to separate them. The inputs are python (or markdown) files that you can edit with a text editor and version control with git. The outputs are html, pdf, or whatever you want to nbconvert to and share.
The .ipynb file would only be useful if you want to share a stateful notebook, whose state cannot be easily reproduced by the people who you share it with. But that would be really bizarre and definitely in bad taste. Sharing the .ipynb is akin to sharing your .pyc files.
I love working with notebooks, but as a measure of hygiene I avoid .ipynb files altogether.
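The commit-sources-not-outputs hygiene can be sketched with the stdlib alone; this is the core of what tools like jupytext or nbstripout do for you (the tiny in-memory notebook below stands in for a real file read with json.load):

```python
import json

def strip_outputs(nb):
    """Scrub outputs and execution counts from a notebook dict before committing."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

# Minimal nbformat-4-shaped notebook as a stand-in for a real .ipynb:
nb = {
    "nbformat": 4, "nbformat_minor": 5, "metadata": {},
    "cells": [{"cell_type": "code", "source": "1 + 1", "metadata": {},
               "execution_count": 3,
               "outputs": [{"output_type": "execute_result",
                            "data": {"text/plain": "2"}}]}],
}
clean = strip_outputs(nb)
print(json.dumps(clean, indent=1, sort_keys=True))
```

Wired into a git filter or pre-commit hook, this keeps diffs limited to actual source changes.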
If you want to store (e.g. base64-encoded) outputs in Markdown, you're going to have to scroll past a lot of data to get to the next input cell in a notebook.
Jupyter notebooks store which Jupyter kernel they were run with to generate the outputs.
nbformat (.ipynb with inlined base64 outputs) isn't a sufficient package format: https://github.com/jupyter/enhancement-proposals/pull/103#is...
Papermill is one tool for running Jupyter notebooks as reports; with the date in the filename. https://papermill.readthedocs.io/en/latest/
A fun new feature we are working on in systemd: userspace-only reboot
I've thought about implementing this before (in a systemd alternative), but there are some good reasons to do a full reboot. The kernel does leak, and rebooting only user space is a force multiplier for certain types of attacks where userspace is limited to performing an action a finite number of times per boot.
One recent article that comes to mind is this (though incrementing the counter here doesn't need a reboot to get hit): https://googleprojectzero.blogspot.com/2023/01/exploiting-nu...
The other thing is that not rebooting the kernel removes the pressure on kernel developers to keep boot times fast. If people depend on it less, that can regress. systemd is big enough to affect that.
pivot_root doesn't replace the kernel or its state either. https://www.google.com/search?q=pivot_root
How much state is there in a kernel?
Ksplice > See also lists kexec, kGraft, kpatch, and KernelCare: https://en.wikipedia.org/wiki/Ksplice#See_also
Kgraft: https://en.wikipedia.org/wiki/KGraft
Live migration: https://en.wikipedia.org/wiki/Live_migration
libvirt: Guest migration > Offline migration: https://libvirt.org/migration.html#offline-migration
Memory forensics: https://en.wikipedia.org/wiki/Memory_forensics : Volatility
awesome-forensics > acquisition: https://github.com/cugu/awesome-forensics#acquisition : Memory forensics acquisition tools: AVML, POFR: PenguinOS Flight Recorder, LIME
An Architectural Overview of QNX (1992) [pdf]
...no longer the only true microkernel in industry (the paper was written in 1992, back then QNX was certainly leading in this respect). Nowadays, open source as well as commercial L4-based microkernels are widely used, e.g. seL4 (UNSW), Fiasco (TU Dresden) and PikeOS (SYSGO).
Microkernel: https://en.wikipedia.org/wiki/Microkernel
Category:Microkernel-based operating systems: https://en.wikipedia.org/wiki/Category:Microkernel-based_ope...
- redox (rust-based) https://en.wikipedia.org/wiki/Redox_(operating_system)
- Nintendo Switch runs a microkernel OS
- Linux started in 1991, inspired by and initially developed on MINIX, which is a microkernel OS: https://en.wikipedia.org/wiki/History_of_Linux
Photonic chip transforms lightbeam into multiple beams with different properties
"Universal visible emitters in nanoscale integrated photonics" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-7-8... :
> Visible wavelengths of light control the quantum matter of atoms and molecules and are foundational for quantum technologies, including computers, sensors, and clocks. The development of visible integrated photonics opens the possibility for scalable circuits with complex functionalities, advancing both science and technology frontiers. We experimentally demonstrate an inverse design approach based on the superposition of guided mode sources, allowing the generation and complete control of free-space radiation directly from within a single 150 nm layer of Ta2O5, showing low loss across visible and near-infrared spectra. We generate diverging circularly polarized beams at the challenging 461 nm wavelength that can be directly used for magneto-optical traps of strontium atoms, constituting a fundamental building block for a range of atomic-physics-based quantum technologies. Our generated topological vortex beams and the potential for spatially varying polarization emitters could open unexplored light–matter interaction pathways, enabling a broad new photonic–atomic paradigm. Our platform highlights the generalizability of nanoscale devices for visible-laser emission and will be critical for scaling quantum technologies.
Computer simulation provides 4k scenarios for a climate turnaround
From the linked article https://news.ycombinator.com/item?id=36577474 :
> A large computer simulation dealing with this issue is now providing some answers. It combines climate models with economic models and 1,200 technologies for supplying and using energy, as well as reducing greenhouse gas emissions. As part of the study, a supercomputer calculated 4,000 scenarios for 15 regions of the world, taking into account possible developments in ten-year steps up to the year 2100.
> [...] "We considered 18 uncertainty factors, including population and economic growth, climate sensitivity, resource potential, the impact of changes in agriculture and forestry, the cost of energy technologies and the decoupling of energy demand from economic development," explains James Glynn of Columbia University.
"Deep decarbonisation pathways of the energy system in times of unprecedented uncertainty in the energy sector" (2023) https://www.sciencedirect.com/science/article/pii/S030142152... :
> Abstract: Unprecedented investments in clean energy technology are required for a net-zero carbon energy system before temperatures breach the Paris Agreement goals. By performing a Monte-Carlo Analysis with the detailed ETSAP-TIAM Integrated Assessment Model and by generating 4000 scenarios of the world’s energy system, climate and economy, we find that the uncertainty surrounding technology costs, resource potentials, climate sensitivity and the level of decoupling between energy demands and economic growth influence the efficiency of climate policies and accentuate investment risks in clean energy technologies. Contrary to other studies relying on exploring the uncertainty space via model intercomparison, we find that the CO2 emissions and CO2 prices vary convexly and nonlinearly with the discount rate and climate sensitivity over time. Accounting for this uncertainty is important for designing climate policies and carbon prices to accelerate the transition. In 70% of the scenarios, a 1.5 °C temperature overshoot was within this decade, calling for immediate policy action. Delaying this action by ten years may result in 2 °C mitigation costs being similar to those required to reach the 1.5 °C target if started today, with an immediate peak in emissions, a larger uncertainty in the medium-term horizon and a higher effort for net-zero emissions.
> Highlights:
> • Probabilistic assessment of 4000 decarbonisation pathways with the ETSAP-TIAM.
> • Delaying the action for the 2 °C target costs similar to acting now for the 1.5 °C.
> • Demand electrification is higher in 2050 than today in all model runs.
> • Hydrogen is used in 99% of the model runs to meet the 1.5 °C target.
What about non-CO2 emissions and prices/fines/fees; like Methane (natural gas) which is like 80x worse than CO2 for climate?
Show HN: Python can make 3M+ WebSocket keys per second
The hardest part of debugging Python is "hitting the wall" when you come to a native library (compiled C code). And Python has achieved a lot of speed-ups from Py2 to Py3 by adding more compiled C code. This is a real blocker for understanding the foundation library.
On the other hand, in Java, with a few exceptions around Swing (native painting for GUIs), almost everything is written in pure Java, so you can debug all the way down if need be. It is a huge help for understanding the foundation library and all of its edge cases (normal for any huge library). Modern Java debuggers, like IntelliJ's, are so crazy, they will decompile JARs and allow you to set breakpoints and step into decompiled code. It is mind-blowing when trying to debug a library whose source code you don't have (random quant lib, ancient auth lib, etc.).
> The hardest part of debugging Python is "hitting the wall" when you come to a native library (compiled C code). And Python has achieved a lot of speed-ups from Py2 to Py3 by adding more compiled C code.
"Debugging a Mixed Python and C Language Stack" (2023) https://news.ycombinator.com/item?id=35710350
Quart is an async Python web microframework
How does one simply shove it in a container? That's my current headache; WSGI, ASGI, nginx, gunicorn is a maze.
ASGI > Implementations > Servers: https://asgi.readthedocs.io/en/latest/implementations.html#s... ; Daphne, Uvicorn, Hypercorn, Granian, NGINX Unit
asgiref: https://github.com/django/asgiref/blob/main/asgiref/sync.py AsyncToSync, SyncToAsync
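On the containerization question upthread: since Hypercorn speaks ASGI natively, the stack can be a single process with no nginx or gunicorn in the container at all. A hedged sketch (the `app:app` module path, port, and versions are assumptions):

```dockerfile
FROM python:3.11-slim
WORKDIR /srv
RUN pip install --no-cache-dir quart hypercorn
COPY app.py .
EXPOSE 8000
# Hypercorn serves the ASGI app directly; put a reverse proxy in front
# only if you need TLS termination or routing across services.
CMD ["hypercorn", "--bind", "0.0.0.0:8000", "app:app"]
```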
Ask HN: What are the limits of Quantum-on-Silicon and Photonic computers?
Photonic interactions, electron spin, and electron charge all are used to implement quantum gates and operators.
What are the comparative advantages of each approach?
Automated CPU Design with AI
From the abstract of "Pushing the Limits of Machine Design: Automated CPU Design with AI" (2023) https://arxiv.org/abs/2306.12456 :
> [...] This approach generates the circuit logic, which is represented by a graph structure called Binary Speculation Diagram (BSD), of the CPU design from only external input-output observations instead of formal program code. During the generation of BSD, Monte Carlo-based expansion and the distance of Boolean functions are used to guarantee accuracy and efficiency, respectively. By efficiently exploring a search space of unprecedented size 10^{10^{540}}, which is the largest one of all machine-designed objects to our best knowledge, and thus pushing the limits of machine design, our approach generates an industrial-scale RISC-V CPU within only 5 hours. The taped-out CPU successfully runs the Linux operating system and performs comparably against the human-designed Intel 80486SX CPU. In addition to learning the world's first CPU only from input-output observations, which may reform the semiconductor industry by significantly reducing the design cycle, our approach even autonomously discovers human knowledge of the von Neumann architecture.
The von Neumann (and Mark) architectures have an instruction pipeline bottleneck maybe by design for serial debuggability; as compared with IDK in-RAM computing with existing RAM geometries? (See also: "Rowhammer for qubits")
(Edit: High-Bandwidth Memory; hbm2e vs gddr6x (2023) https://en.wikipedia.org/wiki/High_Bandwidth_Memory )
Hopefully part of the fitness function is determined by the presence and severity of hardware side channels and electron tunneling; does it filter out candidate designs with side-channel vulnerabilities (that are presumed undetectable with TLA+)?
Ten-cent BPClip may soon let smartphones check blood pressure
From https://newatlas.com/medical/bpclip-smartphone-blood-pressur... :
> The BPClip simply gets clipped onto one corner of the user's smartphone, so that a tiny hole in the device sits over the phone's camera lens, and a light guide in it covers the phone's flash.
> Instructed by an accompanying app, the user then presses a fingertip down onto the hole – a spring in the clip provides resistance, allowing them to take multiple readings using different amounts of force.
> For each reading, the camera takes a photo of the flash-lit underside of the fingertip, looking through the hole. Because the hole is so small, each image just takes the form of a red dot. The diameter of that dot increases as more pressure is applied, while the brightness of the red color varies with the volume of blood being pumped in and out of the finger.
Some camera sensors detect infrared. An infrared LED (like in old remote controls) might be useful for this, too; like NIRS Near-Infrared Spectroscopy with chemometrics, which works because red and infrared light pass through tissue.
IDK what that would do for the cost of such a potential tricorder-like medical device attachment.
First 'tooth regrowth' medicine moves toward clinical trials in Japan
"Anti–USAG-1 therapy for tooth regeneration through enhanced BMP" (2021) https://www.science.org/doi/10.1126/sciadv.abf1798
https://scholar.google.com/scholar?cites=6855835701389665111...
From "Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://news.ycombinator.com/item?id=35871917: direct neuronal reprogramming (for retinal growth), Tissue NanoTransfection, Muller glia
"TNT" out of Ohio State: https://en.wikipedia.org/wiki/Tissue_nanotransfection ; direct (epithelial,) stroma reprogramming
Regenerative medicine: https://en.wikipedia.org/wiki/Regenerative_medicine
From https://www.ada.org/en/resources/research/health-policy-inst... :
> What share of U.S. children and adults have dental benefits?
> Dental benefits coverage varies by age. For children ages 2-18, 51.3 percent have private dental benefits, 38.5 percent have dental benefits through Medicaid or the Children’s Health Insurance Program (CHIP), and 10.3 percent do not have dental benefits. For adults ages 19-64, 59.0 percent have private dental benefits, 7.4 percent have dental benefits through Medicaid, and 33.6 percent do not have dental benefits. Source: Dental Benefits Coverage in the U.S.
TIL about Dental lasers: https://en.wikipedia.org/wiki/Dental_laser
What percentage of dentists offices have lasers in the United States? https://www.google.com/search?q=what+percentage+of+dentist+o...
FWIU, dental lasers cost from $4-40K per unit, per station in the office; and they may be tuned for specific applications or general purpose. There's a waveguide to guide the beam; and there are new waveguide methods for steering photon beams through opaque tissue (with e.g. dual beams).
"Scientists Have Discovered a Drug [Tideglusib] That Fixes Cavities and Regrows Teeth" (2017) that's already FDA approved for Alzheimer's, but there are as yet no clinical trials of it as a cavity fill?? https://futurism.com/neoscope/scientists-have-discovered-thi...
Tideglusib: https://en.wikipedia.org/wiki/Tideglusib
Can teeth be grown in molds from e.g. stem cells, or nanotransfected tissue with/without a 3-d printed scaffold or [...]?
Dental care probably has direct returns in labor productivity?
YouTube is testing a more aggressive approach against ad blockers
Students don't even have the option of paying for premium using their school account, and so if you assign them videos to watch (using that account) they must watch ads; is that correct?
I think YouTube should just drop the creators who demanded the greater subscription revenue that must've precipitated this change; go sell your own content.
FWIW, ad-free Khan Academy videos are subsidized by Alphabet/Google/YouTube.
Gping – ping, but with a graph
mtr does traceroute, too: https://en.wikipedia.org/wiki/MTR_(software) :
> The tool is often used for network troubleshooting. By showing a list of routers traversed, and the average round-trip time as well as packet loss to each router, it allows users to identify links between two given routers responsible for certain fractions of the overall latency or packet loss through the network.[4] This can help identify network overuse problems.[5]
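mtr's per-hop view boils down to an average round-trip time and a loss percentage per router. A minimal sketch over hypothetical probe samples (mtr gathers these per hop via TTL-limited probes):

```python
def hop_stats(samples):
    """samples: RTTs in ms for one hop, with None marking a lost probe."""
    received = [s for s in samples if s is not None]
    loss_pct = 100.0 * (len(samples) - len(received)) / len(samples)
    avg_rtt = sum(received) / len(received) if received else None
    return avg_rtt, loss_pct

avg, loss = hop_stats([12.1, 11.9, None, 12.0])  # one lost probe of four
```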
Scapy has a 3d visualization of one traceroute sequence with vpython. In college, I remember modifying it to run multiple traceroutes and then overlaying all of the routes; and wondering whether a given route is even stable through a complete traceroute packet sequence. https://scapy.readthedocs.io/en/latest/usage.html#tcp-tracer...
One way to avoid running tools that need root for crafting packets at layer 2 is to use setcap:
setcap cap_net_raw+eip /usr/bin/python-scapy
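The multi-run overlay idea above can be sketched without scapy: collect each run's hop sequence and count how often each hop-to-hop edge appears, so a stable prefix shows high counts while a forking tail shows the route changing between runs (hypothetical hop lists below):

```python
from collections import Counter

def overlay(routes):
    """routes: list of hop sequences (IP strings). Returns edge -> count."""
    edges = Counter()
    for route in routes:
        for a, b in zip(route, route[1:]):
            edges[(a, b)] += 1
    return edges

runs = [
    ["10.0.0.1", "192.0.2.1", "198.51.100.7"],
    ["10.0.0.1", "192.0.2.1", "198.51.100.9"],  # last hop differs: unstable tail
]
edges = overlay(runs)
```

The resulting edge counts are exactly what you'd feed a graph-drawing library to render the overlaid 3D view.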
Does traceroute inappropriately connect the dots?
My mind is blown by those 3D graphs. I almost want to make them dynamically drawn in a futuristic style and use that as a screensaver...
Like a WebGL VPN company logo screensaver composed of all those routes we shouldn't trust?
graph-drawing gh topic lists a number of JS libraries: https://github.com/topics/graph-drawing
Is there a good way to make a JS screensaver that doesn't leak memory yet? Maybe WebGL could get that done
Maybe. I don't know. But that link sent me to another rabbit hole, thanks:)
VUDA: A Vulkan Implementation of CUDA
As someone who has never programmed directly for a GPU, how does this compare to HIP? Can this be an efficient abstraction over Nvidia and AMD GPUs?
From https://news.ycombinator.com/item?id=34399633 :
>>> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced. [...]
(Edit) CUDA APIs supported by hipify-clang: https://rocm.docs.amd.com/projects/HIPIFY/en/latest/supporte...
Show HN: Tabserve.dev. HTTPS proxy using Web Workers and a Cloudflare Worker
Tabserve gives you a https url for localhost using only the browser (tabserve.dev).
Take a look: https://tabserve.dev
This is pretty awesome. I was going to ask if it is constrained by any Same Origin restrictions, but I see the note at the bottom of the page that you must "enable CORS" on the localhost server, so indeed it is constrained by such policies. Specifically, I imagine the requirement is that the server you're tunneling to respond with `Access-Control-Allow-Origin: app.tabserve.dev`.
Is this correct? Or should the value be the subdomain of the cloudflare worker? In other words, which is the "origin" of the proxied requests? I assume it's `app.tabserve.dev` since the user is accessing the tunneled service through the worker subdomain which would be the value of the `Host` header.
You can probably just use access-control-allow-origin: * to "turn off" CORS on the local server.
Since you are behind the reverse proxy anyway, there is probably not much of a security implication?
> Since you are behind the reverse proxy anyway, there is probably not much of a security implication?
If you access other sites in the same browser, they'll be able to send requests to your private localhost server. I don't expect this to be a problem right now, but if it becomes a popular thing to do that might change.
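The "just send `Access-Control-Allow-Origin: *`" suggestion above can be sketched with the stdlib dev server (the `make_server` helper and port are made up for illustration):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSHandler(SimpleHTTPRequestHandler):
    """Serves the current directory, adding a wildcard CORS header."""
    def end_headers(self):
        # Runs just before the headers flush, so every response gets it.
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

def make_server(port=0):
    # port=0 lets the OS pick a free port
    return HTTPServer(("127.0.0.1", port), CORSHandler)
```

Run it with `make_server(8000).serve_forever()`; as noted above, the wildcard also means any site open in the same browser can read responses from your localhost server.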
GitHub's Copilot may lead to global $1.5T GDP boost
But if it and we don't write tests for our software - and think through writing tests when we write the code - the losses and liability may exceed such measures of return.
Current-gen LLM (Large Language Model) AIs are trained to regurgitate non-formally-verified code; and even if they were exclusively trained on code labeled as formally verified, they would still produce code that does not pass formal verification.
The LLM AI doesn't care about your liability or others'; we need to think through our code to write sufficient tests (given incomplete code specifications).
That $1.5T figure must be gross not net.
Atom feed format was born 20 years ago
Atom vs RSS is a great example of how technical correctness is trumped by social factors, in this case namely support from makers of popular software and content as well as social influence and documentation skills of creators.
The person who pushed RSS to success (IMO) Dave Winer was superb at communicating and evangelizing his goals, connecting partners like Netscape and NYT, and documenting his work including the RSS related tools he built.
His spec was “worse” in the sense that it was under specified but better in the sense that it achieved wide support (both in text and podcast form) among people who made content. This is partly because Dave had first an influential email newsletter and Wired column (DaveNet) and second an influential very early blog Scripting News. He had also been working with news companies for years at prior startups. He could write well. He showed up for and arranged meetings with people who did not at first understand the need for something like rss. He was clear and relentless in his promotion which was borne out of what seemed to be a genuine desire for open standards in this area rather than greed / trying to do lock in.
People with technical backgrounds in places like this tend to fixate on the technical aspects of Atom vs RSS. There is no question Atom is more technically correct. There is also no question (IMO) it came too late and focused on the wrong things — being correct and complete at the cost of becoming complicated and hard to understand — and more importantly was led and promoted by people who lacked the social skills to make it popular outside of technical circles. (These folks could be brutal about rss's flaws without seeming to have awareness of this shortcoming in their own effort.)
> Atom vs RSS is a great example of how technical correctness is trumped by social factors
Or perhaps it's having a six-year head start (RSS 0.90, 1999; Atom, 2005) and having it compound.
This includes podcasting, as the term was coined in 2004, but which was happening even before that (and before Atom was finalized):
* https://en.wikipedia.org/wiki/Podcast#Etymology
First mover advantage, leading to network effects, can be a thing.
RSS > RSS compared with Atom https://en.wikipedia.org/wiki/RSS#RSS_compared_with_Atom
CDATA > CDATA in XML: https://en.m.wikipedia.org/wiki/CDATA#CDATA_sections_in_XML
IIRC RSS was not originally an XML document, so CDATA tags (to prevent XSS) didn't work; and the issue remains with content syndication: feed elements should somehow HTML escape their content to prevent XSS (arbitrary JS on a different Origin)
XSS: Cross-site Scripting: https://en.wikipedia.org/wiki/Cross-site_scripting
Same-origin policy: https://en.wikipedia.org/wiki/Same-origin_policy https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...
Content Security Policy (CSP) https://en.wikipedia.org/wiki/Content_Security_Policy https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
CSRF: Cross-site request forgery: https://en.wikipedia.org/wiki/Cross-site_request_forgery
JSONP > CSRF: https://en.wikipedia.org/wiki/JSONP#Cross-site_request_forge...
The whole internet was broken, and RSS helped us realize it: the one-way, one-time syndication advantage.
These days it's all about https://schema.org/CreativeWork JSON-LD instead of RDFa, which you can try to sanitize with Mozilla/bleach like arbitrary HTML in comments on the page.
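The escaping point above can be sketched with the stdlib: untrusted feed content gets HTML-escaped before it is re-rendered on a different origin, so markup arrives as inert text rather than executing (the payload string is made up):

```python
from html import escape

untrusted = '<a href="javascript:alert(1)">click</a><script>steal()</script>'
# quote=True also escapes quotes, so the value is safe inside attributes
safe = escape(untrusted, quote=True)
```

This only addresses the "render as text" case; syndicators that want to allow *some* HTML need an allowlist-based sanitizer (like the bleach library mentioned above) instead of blanket escaping.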
> IIRC RSS was not originally an XML document…
AFAIK RDF Site Summary (the original meaning of "RSS") used XML from the start¹. You may be thinking of spiritual predecessors like the Apple-developed MCF².
¹ https://www.rssboard.org/rss-0-9-0 ² https://en.wikipedia.org/wiki/History_of_web_syndication_tec...
Pretty sure I can remember the actionscript XML parser failing on RSS in like 2000 and the feed was to spec.
With that app, at least, I don't remember trying to just include RSS feed elements in HTML4 or XHTML; that app tried to parse the RSS and failed, it wasn't copying the feed elements without escaping href=javascript: links etc.
Patterns of Distributed Systems (2022)
Distributed computing > Theoretical foundations: https://en.wikipedia.org/wiki/Distributed_computing#Theoreti...
Distributed algorithm > Standard problems: https://en.wikipedia.org/wiki/Distributed_algorithm#Standard...
Notes from "Ask HN: Learning about distributed systems?" https://news.ycombinator.com/item?id=23932271 ; CAP(?), BSP, Paxos, Raft, Byzantine fault, Consensus (computer science), Category: Distributed computing
"Ask HN: Do you use TLA+?" (2022) https://news.ycombinator.com/item?id=30194993 :
> "Concurrency: The Works of Leslie Lamport" ( https://g.co/kgs/nx1BaB )
Lamport timestamp > Lamport's logical clock in distributed systems: https://en.wikipedia.org/wiki/Lamport_timestamp#Lamport's_lo... :
> In a distributed system, it is not possible in practice to synchronize time across entities (typically thought of as processes) within the system; hence, the entities can use the concept of a logical clock based on the events through which they communicate.
Vector clock: https://en.wikipedia.org/wiki/Vector_clock
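The quoted rule can be sketched as Lamport's three clock operations (event sequence below is made up):

```python
def tick(clock):
    """Local event: increment the logical clock."""
    return clock + 1

def send(clock):
    """Increment, then stamp the outgoing message with the new clock."""
    clock = tick(clock)
    return clock, clock  # (new local clock, timestamp on the wire)

def receive(clock, msg_ts):
    """Take the max of local and message clocks, then increment."""
    return max(clock, msg_ts) + 1

a = b = 0
a, ts = send(a)      # A sends; message carries ts
b = tick(b)          # B does local work
b = receive(b, ts)   # B receives; B's clock now exceeds both histories
```

Vector clocks extend this by keeping one counter per process, which recovers the partial order that a single Lamport counter collapses.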
> https://westurner.github.io/hnlog/#comment-27442819 :
>> Can there still be side channel attacks in formally verified systems? Can e.g. TLA+ help with that at all?
One initial time synchronization may be enough, given availability of new quantum time-keeping methods (with e.g. a USB interface).
"Quantum watch and its intrinsic proof of accuracy" (2022) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... https://www.sciencealert.com/scientists-just-discovered-an-e...
Anatomy of an ACH transaction
ACH transactions happen every day
Yes and they happen very s-l-o-w-ly.
Sometimes they can take up to 3 business days (not counting weekends and holidays). And they only work during "banker's hours" --- between 10am and 3pm each day.
In other words, ACH is the horse and buggy way to process financial transactions with lots of manual work required along the way.
It shouldn't be this way and most everyone knows it. The new and improved and long overdue way is called FedNow and will operate 24/7/365 and will offer almost immediate results --- so you can do amazing, wonderful, high tech things like electronically pay a friend your part of the dinner check while still sitting at the table.
https://www.federalreserve.gov/newsevents/pressreleases/othe...
I've always heard ACH is slow, and the article mentions that Venmo uses it. So why then can I deposit my Venmo balance into my linked bank account almost instantly for a very small fee?
Instant deposits usually don't use ACH, but rather debit network rails. These are instant (at least the payment confirmation is, if not the settlement – but that's usually good enough for the use case).
If you can really perform one using your account and routing number, it might also be RTP.
"Why we can't have nice things, why we can't just email money on the internet"
From "FedNow FAQ" https://news.ycombinator.com/item?id=32515531 :
> W3C ILP Interledger Protocol [1] specifies addresses [2]: [...]
>> Neighborhoods are leading segments with no specific meaning, whose purpose is to help route to the right area. At this time, there is no official list of neighborhoods, but the following list of examples should illustrate what might constitute a neighborhood:
>> - crypto. for ledgers related to decentralized crypto-currencies such as Bitcoin, Ethereum, or XRP.
>> - sepa. for ledgers in the Single Euro Payments Area.
>> - dev. for Interledger Protocol development and early adopters
> From "ILP Addresses - v2.0.0" [2]:
>> Example Global Allocation Scheme Addresses
>> - g.acme.bob - a destination address to the account "bob" held with the connector "acme"
>> - g.us-fed.ach.0.acmebank.swx0a0.acmecorp.sales.199.~ipr.cdfa5e16-e759-4ba3-88f6-8b9dc83c1868.2 - destination address for a particular invoice, which can break down as follows:
>> -- Neighborhoods: us-fed., ach., 0.
>> -- Account identifiers: acmebank., swx0a0., acmecorp., sales, 199 (An ACME Corp sales account at ACME Bank)
>> -- Interactions: ~ipr, cdfa5e16-e759-4ba3-88f6-8b9dc83c1868, 2*
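The breakdown above can be mimicked with a rough split (heuristic, not the official address grammar: the allocation scheme is the first segment, and the `~`-prefixed segment begins the interaction suffix):

```python
def split_ilp(address):
    """Split an ILP address into (scheme, middle segments, interaction)."""
    segments = address.split(".")
    scheme, rest = segments[0], segments[1:]
    for i, seg in enumerate(rest):
        if seg.startswith("~"):
            return scheme, rest[:i], rest[i:]
    return scheme, rest, []

addr = ("g.us-fed.ach.0.acmebank.swx0a0.acmecorp.sales.199"
        ".~ipr.cdfa5e16-e759-4ba3-88f6-8b9dc83c1868.2")
scheme, middle, interaction = split_ilp(addr)
```

Note the neighborhood/account-identifier distinction inside the middle segments isn't syntactically marked, so this sketch leaves them as one list.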
> And from [3] "Payment Pointers and Payment Setup Protocols":
>> The following payment pointers resolve to the specified endpoint URLS:
$example.com -> https://example.com/.well-known/pay
$example.com/invoices/12345 -> https://example.com/invoices/12345
$bob.example.com -> https://bob.example.com/.well-known/pay
$example.com/bob -> https://example.com/bob
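The four mappings above follow one rule, sketched here (a simplification covering just these examples): `$` becomes `https://`, and a bare host with no path gets `/.well-known/pay` appended.

```python
def resolve(pointer):
    """Resolve a payment pointer like '$example.com' to its endpoint URL."""
    assert pointer.startswith("$"), "payment pointers begin with '$'"
    rest = pointer[1:]
    if "/" in rest:
        return "https://" + rest          # explicit path: use it as-is
    return "https://" + rest + "/.well-known/pay"  # bare host: default path
```

For example, `resolve("$bob.example.com")` yields `https://bob.example.com/.well-known/pay`.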
> The WebMonetization spec [4] and docs [5] specify the `monetization` <meta> tag for indicating where supporting browsers can send payments and micropayments:

<meta name="monetization" content="$wallet.example.com/alice">
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">wallet.example.com/alice">
ILP also specifies settlement. From "Fed expects to launch long-awaited Faster Payments System by 2023" (2022) https://news.ycombinator.com/item?id=32658402 :
> And then you realize you're sharing payment address information over a different but comparably-unsecured channel in a non-standardized way.
From https://github.com/interledger/rfcs/blob/master/0009-simple-... :
>> Relation to Other Protocols: SPSP is used for exchanging connection information before an ILP payment or data transfer is initiated
> To do a complete business process, [there's] signaling around transactions, which then necessarily depend upon another - hopefully also cryptographically-secured and HA Highly Available - information system with API version(s) and database schema(s) unless there's something like Interledger SPSP Simple Payment Setup Protocol and Payment Pointers [...]
Clearing (finance) > US: https://en.wikipedia.org/wiki/Clearing_(finance)#United_Stat...
ACH Network: https://en.wikipedia.org/wiki/ACH_Network :
> The Federal Reserve's FedACH and The Clearing House Payments Company's Electronic Payments Network (EPN) are the two ACH operators in the United States.[3]
From https://news.ycombinator.com/item?id=28232243 :
> FWIU, each trusted ACH (US 'Direct Deposit') party has a (one) GPG key that they use to sign transaction documents sent over now (S)FTP on scout's honor - on behalf of all of their customers' accounts.
Payment rail: https://en.wikipedia.org/wiki/Payment_rail
EFT: Electronic Funds Transfer: https://en.wikipedia.org/wiki/Electronic_funds_transfer
From "Ask HN: What security is in place for bank-to-bank EFT?" (2021) : https://news.ycombinator.com/item?id=26111184 :
> AFAIU, no existing banking transaction systems require the receiver to confirm in order to receive a funds transfer. [...]
> ILP: Interledger Protocol > RFC 32 > "Peering, Clearing and Settling" describes how ~EFT with Interledger works: https://interledger.org/rfcs/0032-peering-clearing-settlemen...
Who/what are you replying to?
Is ORM still an anti-pattern?
I've been doing ORM on Java since Hibernate was new, and it has always sucked. One of the selling points, which is now understood to be garbage, is that you can use different databases. But no-one uses different databases. Another selling point which is "you don't need to know SQL", is also garbage. Every non-trivial long-lived application will require tweaks to individual queries at the string level. The proper way to build a data layer is one query at a time, as a string, with string interpolation. The closer you are to raw JDBC the better.
Oh yeah, another bad reason for ORM: to support the "Domain Model". Which is always, and I mean always, devoid of any logic. The so-called "anemic domain model" anti-pattern. How many man-hours have been wasted on ORM, XML, annotations, debugging generated SQL, and so on? It makes me cry.
> The proper way to build a data layer is one query at a time, as a string, with string interpolation.
Surely you mean a parameterized query string and not string interpolation, which is susceptible to SQL injection.
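The difference is easy to show with Python's stdlib sqlite3 driver (a minimal sketch; the table and the hostile input are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

name = "alice'; DROP TABLE users; --"  # hostile input

# String interpolation splices the input into the SQL text (injectable):
#   conn.execute(f"INSERT INTO users (name) VALUES ('{name}')")

# A parameterized query passes the value out-of-band, so the hostile
# string is stored as plain data and never parsed as SQL:
conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])
```

The placeholder syntax varies by driver (`?` for sqlite3, `%s` for psycopg, named bind parameters in JDBC), but the principle is the same: the query string and the values travel separately.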
Details. But sure. In truth the best way to do the data layer is to use stored procs with a generated binding at the application layer. This is absolutely safe from injection, and is wicked fast as well.
Having been on the team that inherited a project full on stored procedures (more than once): no thank you, not ever again.
Opaque, difficult to debug, high risk to invoke, difficult to version and version-control properly. I’m sure they could be done “less worse”, but I doubt they can be done well.
Stored procedure: https://en.wikipedia.org/wiki/Stored_procedure
Are there SQL migration tools like Alembic and South that also version and sync Stored Procedures? (Alembic on "Replaceable objects" and schema: https://alembic.sqlalchemy.org/en/latest/cookbook.html#repla... , alembic_utils: https://github.com/olirice/alembic_utils )
From "What ORMs have taught me: just learn SQL (2014)" https://news.ycombinator.com/item?id=15949610 :
> ORMs:
> - Are maintainable by a team. "Oh, because that seemed faster at the time."
> - Are unit tested: eventually we end up creating at least structs or objects anyway, and then that needs to be the same everywhere, and then the abstraction is wrong because "everything should just be functional like SQL" until we need to decide what you called "the_initializer2".
> - Can make it very easy to create maintainable test fixtures which raise exceptions when the schema has changed but the test data hasn't.
ORMs may help catch incomplete migrations; for example when every reference to a renamed column hasn't been updated, or worse when foreign key relations changed breakingly.
django_rest_assured generates many tests for the models (SQL)/views/API views in a very DRY way from ORM schema and view/controller/route registrations, but is not as fast as e.g. FastAPI w/ SQLalchemy or django-ninja.
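The renamed-column failure mode can be reproduced at the raw SQL layer (a sketch with Python's stdlib sqlite3; the table and column names are made up, and `ALTER TABLE ... RENAME COLUMN` requires SQLite >= 3.25). An ORM with mapped models and test fixtures surfaces the equivalent error at test time instead of at runtime:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# A migration renames the column...
conn.execute("ALTER TABLE users RENAME COLUMN name TO full_name")

# ...but a hand-written query string still references the old name,
# and only fails when that code path actually executes:
try:
    conn.execute("SELECT name FROM users")
except sqlite3.OperationalError as exc:
    print("stale query failed:", exc)
```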
Tinc, a GPLv2 mesh routing VPN
Tinc is incredible, it has worked flawlessly for me for 6+ years with exactly 0 maintenance.
As trustworthy as it is, I am sadly on the hunt to replace it. Compared to wireguard, the throughput ain't great, and it takes way too much CPU on my low power nodes. I would pay good money for "tinc, but with wireguard transport" -- there's of course projects purporting to do this but I haven't found one I trust yet.
Would Tailscale be an effective replacement for Tinc? It's built on top of Wireguard and works really well.
WebVM runs x86 binaries in WASM in any browser via CheerpX ("an x86-to-WebAssembly JIT compiler, a virtual block-based file system, and a Linux syscall emulator"), and for external sockets there's Tailscale networking. https://webvm.io/
IIUC that means an SSH (and/or MoSH Mobile Shell) client in a WASM WebVM in a browser tab could connect to a (tailscale (wg)) VPN mesh? (And JupyterLite+WebVM could ssh over an in-browser VPN mesh)
You'd probably need to compile a userspace wireguard implementation with a fork of the WebVM Dockerfile, or is that redundant because tailscale already wg's the sockets?: https://github.com/leaningtech/webvm/blob/main/dockerfiles/d...
I'm Done with Red Hat (Enterprise Linux)
I'm surprised at all of the drama surrounding Red Hat's decisions. Red Hat is a for-profit company and not a charity. They (Red Hat) don't make any claims of offering RHEL for free to the whole world. If you want that, Canonical's Ubuntu may be a better option for you. Am I missing something?
I mean, they’re within their right to pull support, but people are equally within their right to point out how that is shooting themselves and the RHEL communities in the foot.
This. What a good opportunity to compare: the (GitOps!) packaging workflows, build server security, software supply chain integrity controls, issue tracking / triage, wiki, documentation, kernel patching, cloud fuzzing / integration testing, and baseline MAC and DAC policies of the stable kernel patchset OSes within budget for schools, hobbyists, after workers, and corporations who can and for some services maybe should afford an SLA.
On worthwhile investments of time differentiating our offering in InfoSec and Operating Systems,
FWIU (RH) OpenShift (and MicroShift) does k8s containers most correctly in terms of separate SELinux contexts per container, which we should probably have for browser tabs, too. Do (a) browsers, (b) Cloudflare Workers, and (c) Docker WASM runtimes run WASM tasks without container-like process isolation; all as the same user and cgroup and context?
> which we should probably have for browser tabs, too
This would be incredible
Would it be feasible to sed-replace the RHEL and/or Fedora selinux and container-selinux rulesets for use with other Linux distros?
AFAIU only SUSE can run both AppArmor and SELinux?
And browsers run as unconfined in SELinux on just about all major distros; even on ChromiumOS (which was based on Gentoo, Gnome, and Chrome), where WASM or a paid shell (or a 15-30% cut via the Play Store only) is the only way for the kids to run Python on the Chromebooks we bought them for school.
Wouldn't it be great for them to not have to switch OSes and distros throughout the day?
I don't know about "sed-replace", but you can download the RHEL selinux-policy source package from https://gitlab.com/redhat/centos-stream/rpms/selinux-policy
Playing sounds of healthy coral on reefs makes fish return (2019)
Coral reef restoration: https://en.wikipedia.org/wiki/Coral_reef_restoration
"Coral restoration – A systematic review of current methods, successes, failures and future directions" (2020) https://journals.plos.org/plosone/article?id=10.1371/journal... ; citations of https://scholar.google.com/scholar?hl=en&as_sdt=5,47&sciodt=...
Sunscreen > Environmental effects: https://en.wikipedia.org/wiki/Sunscreen#Environmental_effect... :
> Additionally, the National Oceanic and Atmospheric Administration (NOAA) has indicated that coral decline is associated with effects from climate change (warming oceans, rising water levels, acidification), overfishing, and pollution from agriculture, wastewater, and urban run-off.[155]
Oxybenzone > Environmental effects: https://en.wikipedia.org/wiki/Oxybenzone#Environmental_effec... :
> [Oxybenzone-containing sunscreen is now banned] in many areas [45] such as Palau, [46] Hawaii, [5] nature reserves in Mexico, Bonaire, the Marshall Islands, the United States Virgin Islands, Thailand's marine natural parks, [47] the Northern Mariana Islands, [48] and Aruba. [49]
Thanks coral restoration volunteers!
Are there already coral restoration drones / RUVs?
Is there a SAR (Synthetic Aperture Radar)-like imaging capability for mapping coral health, with sensor fusion? https://en.wikipedia.org/wiki/Synthetic-aperture_radar
Sensor fusion: https://en.wikipedia.org/wiki/Sensor_fusion
"Bacterial sensors send a jolt of electricity when triggered" https://news.rice.edu/news/2022/bacterial-sensors-send-jolt-...
re: sensor nets: https://twitter.com/westurner/status/1612605800576335874 ... ansible-OpenWRT, [DuckLink] LoRa, MEMS wave power
LoRa: https://en.wikipedia.org/wiki/LoRa
#Goal14: Life below water: https://www.globalgoals.org/goals/14-life-below-water/
#SDG14: Conserve and sustainably use the oceans, seas and marine resources for sustainable development : https://sdgs.un.org/goals/goal14
Do coral fragments have to be planted ashore or could they be ~transplanted by a drone?
EU Advocate General: Technical Standards must be freely available [pdf]
IIUC this means that EU government standards must be public domain? Or, preexisting paywalled / committee_attendance_required standards are now conveniently by decree public domain?
What are some good examples of Open Standards and Open Web Standards for EU and other world regions?
We all benefit from Open Standards; but in a competitive market specifying any particular standard - open or not - may be disadvantageous for the market.
Open Standards: https://en.wikipedia.org/wiki/Open_standard
Open Standards > Specific definitions of an open standard: https://en.wikipedia.org/wiki/Open_standard#Specific_definit...
Web Standards: https://en.wikipedia.org/wiki/Web_standards
E.g. W3C, IETF, WHATWG, and ECMA develop open standards.
W3C Web Monetization: https://webmonetization.org/ :
> The Web Monetization API allows websites to automatically and passively receive payments from Web Monetization-enabled visitors.
From https://interledger.org/faq/ :
> Web Monetization is being proposed as a W3C standard. Using the Interledger Protocol, the Web Monetization proposed standard aims to make it easier for web creators to generate income from their work without relying on advertising, site-by-site subscriptions or tracking models.
IIUC, ILP messages would help financial regulators - such as the SEC's CAT Consolidated Audit Trail - audit traditional ledgers and cryptoassets?
Interledger was contributed to W3C and has undergone significant major revision; including packetization of liquidity into many small transactions. FWIU, W3C Interledger Protocol is a W3C spec, but by producing IETF-style numbered RFCs their process slightly differs from the W3C WG Working Group model (with a page, a mailing list, and one or more git repositories with Issues: github.com/orgname, github.com/orgname/readme, github.com/orgname/orgname.github.io).
I wouldn't call the "web standards" actual standards in any meaningful sense; real standards change slowly, while web "standards" churn routinely according to the whims of Big Tech.
All standards development groups have to deal with people that don't help; that just complain and diminish the work of others.
When "the work of others" is to further a monopoly, it should not be considered acceptable.
Are you arguing that there's gatekeeping at the standards organizations where all parties disclose their interests (Use Cases)?
Independent, state, academia, and industry are all invited to contribute time to open [web] standards that solve for many users and use cases.
W3C Process Document > 2. Members and the Team: https://www.w3.org/Consortium/Process/#Organization
"The IETF process: an informal guide" > 2.6. Bodies involved in the process: https://www.ietf.org/standards/process/informal/
Ecma By-laws > Art. 3: Membership: https://www.ecma-international.org/policies/by-laws/
Ecma TC39 (ECMAscript (JS)) Process matrix: https://tc39.es/process-document/
Free redistribution without modification is likely a better model
Everything that uses configuration files should report where they're located
A quick and dirty way to do this is run the program through `strace` and find out what syscall it issues to read it.
I had to do this to figure out where Cold Steel was reading save files from years back: https://vaughanhilts.me/blog/2018/02/16/playing-trails-of-co...
ps wxa | grep <cmd> # or pgrep
strace -f -e trace=file <cmd> | grep -v '-1 ENOENT (No such file or directory)$'
IIRC there's some way to filter ENOENT messages with strace instead of grep?
Strace: https://en.wikipedia.org/wiki/Strace
I use quick and dirty greps all the time, even when there is a "better" option available. It just works and is very intuitive in interactive contexts. Probably GP works in a similar way.
Remembered to add the `2>&1` stderr>stdout output redirection:
ps wxa | grep <cmd> # or pgrep
strace -f -e trace=file <cmd> 2>&1 | grep -v '-1 ENOENT (No such file or directory)$'
Bash manual > 3. Basic Shell Features > 3.6 Redirections: https://www.gnu.org/software/bash/manual/html_node/Redirecti...
In lieu of strace, IDK how fast `ls -al /proc/${pid}/fd/*` can be polled and logged in search of which config file it is; almost like `lsof -n`:
pid=$$  # this $SHELL's pid
help set; # help help
(set -B; cat /proc/"${pid}"/{cmdline,environ}) \
  | tr '\0' $'\n'
(while true; do echo "#${pid}"; ls -al /proc/"${pid}"/fd/; sleep 1; done) \
  | tee "file_handle_to_file_path.${pid}.log"
# Ctrl-C
It's good Configuration Management practice to wrap managed config with a preamble/postamble indicating at least that some code somewhere wrote it and may overwrite whatever a user manually changes it to (prior to the configuration management system copying over or re-templating the config file content, thus overwriting any manual modifications):
## < managed by config_mgmt_system_xyz (in path.to.module.callable) >
## </ managed>
I prefer to use process substitution (not in POSIX):
strace -f -e trace=file -o >(grep -v ENOENT) <cmd>
Ask HN: Did studying proof based math topics make you a better programmer?
Hello, Hope all is well. I love math but haven’t studied much of it.
I have only studied up to real analysis, and definitely not a whole semester's worth; my experience with analysis has been analogous to poor performance in a quarter-based analysis course. I did some exercises, but not many.
I have heard from an MIT professor[1] that math is the way to learn to think rigorously. So I'm curious about this, and I'd like to improve my thinking skills in the hope that I will become a better programmer.
In an article a math professor said that there was some evidence that math improved logical skills[2].
So I’m wondering if any of you noticed that your thinking skills improved thereby making you a better programmer after studying analysis or topology or some other proof based math course.
Also which math other than the math required in CS programs should one study to be a better programmer and thinker? Thanks [1] https://m.youtube.com/watch?v=SZF0MFm9pqw&pp=ygUTQWR2aWNlIHJ1c3MgdGVkcmFrZQ%3D%3D [2] https://www.nytimes.com/2018/04/13/opinion/sunday/math-logic-smarter.html
Yes, in the sense that "math is programming paper instead of computers", being better at one translates to being better at the other. This intuition can even be made precise via the "Curry-Howard isomorphism", upon which "proof assistants" such as Coq are built.
Lean was originally a type-checking proof assistant; now leanprover-community is formalizing much of mathematics as Lean proofs in the mathlib project.
Lean (proof assistant) https://en.wikipedia.org/wiki/Lean_(proof_assistant)
"Lean mathlib overview": https://leanprover-community.github.io/mathlib-overview.html
"Where to start learning Lean": https://github.com/leanprover-community/mathlib/wiki/Where-t...
leanprover-community/mathlib: https://github.com/leanprover-community/mathlib
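To make the Curry-Howard point concrete, here is a minimal Lean 4 sketch (theorem names are my own) in which each proof is literally a program:

```lean
-- Propositions are types; a proof of A → B is a function from proofs
-- of A to proofs of B, so modus ponens is just function application.
theorem modus_ponens (A B : Prop) (h : A → B) (a : A) : B := h a

-- A proof of A ∧ B is a pair; commutativity is swapping its components.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```

This is the sense in which "being better at one translates to being better at the other": writing the proof term and writing the program are the same act.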
Decades-long bet on consciousness ends
I wouldn't be surprised if consciousness ~conscience~ is just an emergent phenomenon arising from any sufficiently powerful cognitive system (either biological or artificial) with sufficient inputs from its environment and, crucially, from itself, enough to develop a rich model of itself and of its relationship with its environment. That thing will resemble a whole lot what we call consciousness ~conscience~. Then, of course, we will push the goalposts on what consciousness is, so as to protect our fragile human egos.
EDIT: fixed spelling of consciousness. Apologies from an english second language speaker.
Automata: https://en.wikipedia.org/wiki/Automata_theory#Hierarchy_in_t...
Sentience: https://en.wikipedia.org/wiki/Sentience#Digital_sentience
Artificial consciousness > Testing: https://en.wikipedia.org/wiki/Artificial_consciousness
Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory :
> In constructor theory, a transformation or change is described as a task. A constructor is a physical entity that is able to carry out a given task repeatedly. A task is only possible if a constructor capable of carrying it out exists, otherwise it is impossible. To work with constructor theory, everything is expressed in terms of tasks. The properties of information are then expressed as relationships between possible and impossible tasks. Counterfactuals are thus fundamental statements, and the properties of information may be described by physical laws.[4] If a system has a set of attributes, then the set of permutations of these attributes is seen as a set of tasks. A computation medium is a system whose attributes permute to always produce a possible task. The set of permutations, and hence of tasks, is a computation set. If it is possible to copy the attributes in the computation set, the computation medium is also an information medium.
Is a computation medium sufficient for none, some, or all of the sentient computational tasks done by humans?
Notice of Intent to Amend the Prescription Drug List: Vitamin D (2020)
Reminder that taking Vitamin D in excess without having enough Vitamin K could lead to vascular calcification, whereas sufficient levels of Vitamin K promote proper absorption of calcium into bones.
Vitamin K supplementation for the primary prevention of osteoporotic fractures: is it cost-effective and is future research warranted? https://pubmed.ncbi.nlm.nih.gov/22398856/
Matrix Gla protein is an independent predictor of both intimal and medial vascular calcification in chronic kidney disease https://www.nature.com/articles/s41598-020-63013-8
Matrix Gla protein https://en.wikipedia.org/wiki/Matrix_Gla_protein
What's considered excess for Vitamin D?
UK NHS page says:
"If you choose to take vitamin D supplements, 10 micrograms a day will be enough for most people."
"Do not take more than 100 micrograms (4,000 IU) of vitamin D a day as it could be harmful. This applies to adults, including pregnant and breastfeeding women and the elderly, and children aged 11 to 17 years."
https://www.nhs.uk/conditions/vitamins-and-minerals/vitamin-...
The problem with vitamin D is that there's a lot of wrong information. It's because of this[0]:
>A statistical error in the estimation of the recommended dietary allowance (RDA) for vitamin D was recently discovered; in a correct analysis of the data used by the Institute of Medicine, it was found that 8895 IU/d was needed for 97.5% of individuals to achieve values ≥50 nmol/L. Another study confirmed that 6201 IU/d was needed to achieve 75 nmol/L and 9122 IU/d was needed to reach 100 nmol/L.
>The largest meta-analysis ever conducted of studies published between 1966 and 2013 showed that 25-hydroxyvitamin D levels <75 nmol/L may be too low for safety and associated with higher all-cause mortality
10 mcg = 400 IU. It's based on the previous (erroneous) daily recommended intake of 20 mcg per day.
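The unit conversion behind these numbers is fixed: 1 microgram of vitamin D is 40 IU. A trivial helper makes the arithmetic explicit:

```python
IU_PER_MCG = 40  # vitamin D: 1 mcg = 40 IU by definition

def mcg_to_iu(mcg: float) -> float:
    """Convert a vitamin D dose from micrograms to International Units."""
    return mcg * IU_PER_MCG

# 10 mcg (NHS maintenance dose) is 400 IU; 100 mcg (upper limit) is 4000 IU.
```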
Also, if you look at how much vitamin D you get from sunlight then [1]:
>A minimum erythema dose (1 MED) amounts to about 30 minutes midsummer midday sun exposure in Oslo (=20 mJ/cm2). Such an exposure given to the whole skin surface is equivalent to taking between 10,000 and 20,000 IU vitamin D orally, i.e. as much as 250-500 μg vitamin D, a similar amount as that obtained when consuming 200-375 mL of cod liver oil
It makes me wonder if health authorities use a dart board to set recommended amounts for vitamins. But hey, at least they're finally changing things and 4000 IU (100 mcg) seems reasonable.
I'm just wondering how they justified keeping recommendations at 10-20 mcg a day when the sunlight study says you can get 250-500 mcg in 30 minutes.
Edit: I believe it's 30 minutes of midday sun. You're going to get a lot less vitamin D at other times of the day.
> is equivalent to
How does 30m exposure compare to a full workday in the sun? Does it scale up to a limit? What are the serum correlates to these dimensional studies?
def gridsearch(person, intake=("vitamin_d", "vitamin_k", "calcium", "fiber")): ...
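Spelled out, such a grid search might look like the following sketch. The intake grids and the scoring function are entirely hypothetical placeholders for illustration, not medical guidance:

```python
from itertools import product

# Hypothetical candidate daily intakes; units are in the key names.
GRID = {
    "vitamin_d_iu": (400, 2000, 4000),
    "vitamin_k_mcg": (0, 90, 120),
    "calcium_mg": (500, 1000),
}

def gridsearch(score, grid=GRID):
    """Exhaustively evaluate every intake combination and return the one
    that maximizes the caller-supplied score(intake_dict) function."""
    names = list(grid)
    combos = (dict(zip(names, values)) for values in product(*grid.values()))
    return max(combos, key=score)
```

The `score` function is where a per-person model (serum targets, interactions) would go; the grid search itself is just exhaustive enumeration.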
I don't know. But it is my understanding that how high up in the sky the sun is matters a lot. Ie 5 pm sun is going to be a lot less effective than midday sun, because UV-B light has to go through more of the atmosphere.
>Does it scale up to a limit?
Yes, because sunlight only really makes a precursor to vitamin D. Ie you can't overdose on vitamin D from sunlight.
Prompt: generate a dataframe that lists: the amount of sunlight exposure by time, given the UV index, that corresponds to a given dose of Vitamin D in IUs, and milligrams (for each and/or an average patient profile)
How much vitamin D supplementation results in the same serum level as sunlight exposure?
Gas Is Here to Stay for Decades, Say Fossil Fuel Heavyweights
Natural gas is CH4 (methane).
Methane: https://en.wikipedia.org/wiki/Methane
/? methane emissions rules: https://www.google.com/search?q=methane+emissions+rules
From https://www.epa.gov/newsreleases/biden-harris-administration... :
> Oil and natural gas operations are the nation’s largest industrial source of methane. Methane is a potent greenhouse gas that traps about 80 times as much heat as carbon dioxide, on average, over the first 20 years after it reaches the atmosphere and is responsible for approximately one third of the warming from greenhouse gases occurring today. Sharp cuts in methane emissions are among the most critical actions the U.S. can take in the short term to slow the rate of climate change. Oil and natural gas operations are also significant sources of other health-harming air pollutants, including smog-forming volatile organic compounds (VOCs) and toxic air pollutants such as benzene.
Diaphora: an open-source program diffing IDA plugin
What would it take to add an adapter to or port Diaphora to Ghidra?
A bunch of open source Ghidra plugins, some ported from IDA: https://github.com/fr0gger/awesome-ida-x64-olly-plugin/blob/... ctrl-f 'diff', 'bindiff'
ghidra-patchdiff-correlator#how-does-it-work: https://github.com/threatrack/ghidra-patchdiff-correlator#ho...
https://ghidra.re/ghidra_docs/api/ghidra/python/PythonPlugin...
ghidra-jython-kernel + jupyter_console: https://github.com/AllsafeCyberSecurity/ghidra-jython-kernel
ghidrathon https://www.mandiant.com/resources/blog/ghidrathon-snaking-g... :
> Ghidrathon replaces the existing Python 2 extension implemented via Jython. This includes the interactive interpreter window, integration with the Ghidra Script Manager, and script execution in Ghidra headless mode. You can build and install Ghidrathon using the steps outlined in our README to start using the features described below [...]
> Alternatives: Ghidrathon is one of multiple solutions, including Ghidraal, Ghidra Bridge, and pyhidra, that enables Python 3 scripting in Ghidra. Each solution is implemented differently with accompanying benefits and limitations. We encourage you to explore all solutions and choose which best fits your needs.
Removing official support for Red Hat enterprise Linux
> For all of my open source projects, effective immediately, I am no longer going to maintain 'official' support for Red Hat Enterprise Linux.
> I will still support users of CentOS Stream, Rocky Linux, and Alma Linux, as I am able to test against those targets.
Open source projects cannot afford to support (post-IBM-acquisition) Red Hat RHEL license terms. How does that change how developers (coders, actual merge maintainers), and maybe also influencers, value an acquired Linux distribution (with a new Stream model)?
Unexpected downsides of UUID keys in PostgreSQL
Can UUID v7 still be created independently, on a client for instance? Or do they ideally need to be generated in the same place?
Yes, UUID v7 can be generated independently. The UUID has three parts, which maintain strict chronological ordering for UUIDs generated in one process, and ordering to within clock skew when UUIDs are generated on different systems.
The three parts are:
- time-based leading bits.
- sequential counter, so that multiple UUID 7s generated very rapidly within the same process will be monotonic even if the time counter does not increment.
- enough random bits to ensure no collisions and UUIDs cannot be guessed.
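A minimal Python sketch of that layout per the draft (this version fills the 12 middle bits with randomness rather than the optional sequence counter):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """UUIDv7 sketch: 48-bit Unix-millisecond timestamp, 4-bit version,
    12 random bits, 2-bit variant (0b10), 62 random bits."""
    ts_ms = (time.time_ns() // 1_000_000) & ((1 << 48) - 1)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0xFFF          # 12 bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 bits
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)
```

Because the timestamp occupies the most significant bits, UUIDs from different milliseconds sort by creation time when compared as plain 128-bit integers.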
> There are new UUID formats that are timestamp-sortable; for when blockchain cryptographic hashes aren't enough entropy.
Note that multiple rounds of cryptographic hashing are not considered sufficient anymore; PBKDF2 and Argon2 are Key Derivation Functions (KDFs), and those are used instead of bare hash functions.
"New UUID Formats – IETF Draft" https://news.ycombinator.com/item?id=28088213
draft-peabody-dispatch-new-uuid-format-04 Internet-Draft "New UUID Formats" https://datatracker.ietf.org/doc/html/draft-peabody-dispatch... ; UUID6, UUID7, UUID8
Hackers can steal cryptographic keys by video-recording power LEDs 60 feet away
Air-gap malware: https://en.wikipedia.org/wiki/Air-gap_malware
Some Blue LEDs contain sapphire, which is apparently macrostate entangleable.
> Some Blue LEDs contain sapphire, which is apparently macrostate entangleable.
I’ve studied quantum cryptography rather extensively, and I have no idea what you’re trying to say.
You could have a power LED that contains an actual magical quantum computer running attacker controlled software and with unlimited entanglement with the attacker, and it would not have a qualitatively greater ability to exfiltrate information to the attacker than a plain old LED would have. At best you would get a bandwidth increase by a small constant factor, improved tolerance to noise, and the ability to prevent anyone else from decoding the transmission.
Is there a published way to cause nonlocal entanglement between (blue, sapphire) LEDs; with just high/low voltage regulation, and/or with better or different control of the electron lepton particle properties other than just voltage on or off (i.e. spin,)?
What is the maximum presumed distance over which photon emissions from blue LEDs can be entangled? What about with [time-synchronized] applied magnetic fields? Could newer waveguide approaches - for example, dual beams - improve the distance and efficiency of transceivers operating with such a quantum communication channel?
From "Experiment demonstrates continuously operating optical fiber made of thin air" (2023) https://news.ycombinator.com/item?id=35812168 :
> "Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33490730
"Trillionths of a second: Photon pairs compress an electron beam into short pulses" (2023-06-19) https://phys.org/news/2023-06-trillionths-photon-pairs-compr...
> What is also remarkable: Plane electromagnetic waves like a light beam normally cannot cause permanent velocity changes of electrons in vacuum, because the total energy and the total momentum of the massive electron and a zero rest mass light particle (photon) cannot be conserved. However, having two photons simultaneously in a wave traveling slower than the speed of light solves this problem (Kapitza-Dirac effect).
> For Peter Baum, physics professor and head of the Light and Matter Group at the University of Konstanz, these results are still clearly basic research, but he emphasizes the great potential for future research: "If a material is hit by two of our short pulses at a variable time interval, the first pulse can trigger a change and the second pulse can be used for observation—similar to the flash of a camera."
Chirped Pulse Amplification: https://en.wikipedia.org/wiki/Chirped_pulse_amplification
What Hz rate is necessary to do CPA (Chirped Pulse Amplification) with a laser, or with a [blue] LED? FWIU lasers have repetition rates between 0.1 Hz and 1 MHz, and pulse widths between 1 picosecond and 1 millisecond?
Looks like there are already commercial LED CPA systems.
Aren't there also weird signal effects with e.g. PWM and a blue LED connected to a Pi with a sufficient clock rate? What are the maximum binary data transmission distances for [blue] LEDs?
How do you fade an LED in and out when you can only switch 5 volts on and off? PWM (Pulse Width Modulation): you vary the duty cycle:
PWM: https://en.wikipedia.org/wiki/Pulse-width_modulation
"Learn PWM signal using Wokwi Logic Analyzer" https://blog.wokwi.com/explore-pwm-with-logic-analyzer/
Wokwi > New (Pi Pico w/ Micropython) LED project: https://wokwi.com/projects/300504213470839309
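As a sketch of the duty-cycle math in plain Python: the 0-65535 range below matches what MicroPython's `PWM.duty_u16()` on the Pico expects, but the function itself just computes the values for a triangle-wave fade:

```python
def fade_duty_cycles(steps: int, levels: int = 65536) -> list[int]:
    """Duty-cycle values for one fade-in/fade-out cycle, in the 0..65535
    range used by 16-bit PWM. The pin is still only ever fully on or
    fully off; perceived brightness tracks the fraction of time it is on."""
    up = [int(i * (levels - 1) / (steps - 1)) for i in range(steps)]
    return up + up[-2::-1]  # ramp up, then back down

cycle = fade_duty_cycles(8)
```

On real hardware you would loop over `cycle`, writing each value to the PWM channel with a short sleep between steps.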
Reddit 1.0 was written in Lisp
What language is HN written in ?
Arc I believe.
OMG, Paul Graham actually released Arc in some form? I thought he never finished it.
I was going to make snarky comment about how between spez and PG, Lisp must actually be extended-release brain poison for people in executive management positions; now I wonder if spez just knows PG personally and takes his advice.
[deleted]
[deleted]
Measuring blood pressure for less than a dollar using a phone
I miss the SpO2 sensor on my old S5. It used to work up until a couple years ago when S Health took out the functionality.
Oh Gawd.
My wife did not feel well last year and thought her pulse was going up. I figured I'd reassure her quickly (or notice an issue) with my Note 8.
Unbeknownst to me, some Samsung update put the hardware functionality behind some health app that wanted to set up a tracking profile and two-factor authentication and my email and my phone and name and details and preferences and goals and a 12-point health plan and whatnot.
By the end, I was the one with an anxiety attack. I'm still livid when I think about it. I'm not a violent or profane man, but... I have some thoughts reserved for the people who planned, approved, and implemented that. Not all evil is big evil. Little evil is evil too. (Note for all fellow software engineers here. :)
An open source app off F-Droid like HeartBeat may or may not have a similar error rate: https://github.com/berdosi/HeartBeat
I think I picked up a dedicated unit finger pulse oximeter for like $20.
FWIU the Apple Watch is one product with FDA clearance as a medical device (ECG: electrocardiogram) for detection of AFib (atrial fibrillation), and for Parkinson's symptom monitoring.
Medical device design > Pathway to clearance or approval: https://en.wikipedia.org/wiki/Medical_device_design#Pathway_...
My Apple Watch detected an AFib about a year and a half ago. Was mildly freaked out at the time but glad that I got the alert. Nothing major, thankfully. Was on a mostly plant-based diet (no meat at all) and wasn't getting enough iron. Easy fix.
>...Was on a mostly plant-based diet (no meat at all)...
i.e., A vegetarian diet?
Sounds like it was vegetarian, bordering on vegan.
This is correct. Or to paraphrase The Princess Bride, “Only mostly Vegan”.
The hemoglobin (blood iron level) finger-prick test is used to screen blood donations; they give you a free iron test when you donate blood.
https://en.wikipedia.org/wiki/Veganism#Nutrients_and_potenti... :
> Vegan diets tend to be high in dietary fiber, folate, vitamins C and E, potassium, magnesium, and unsaturated fats.[34] However, consuming no animal products increases the risk of deficiencies of vitamins B12 and D, calcium, iron, and omega-3 fatty acids. [34][264]
> The American Academy of Nutrition and Dietetics states that special attention may be necessary to ensure that a vegan diet will provide adequate amounts of vitamin B12, omega-3 fatty acids, vitamin D, calcium, iodine, iron, and zinc. It also states that concern that vegans and vegan athletes may not consume an adequate amount and quality of protein is unsubstantiated. [265]
FWIU, high-red-meat diet outcomes are probably on average worse.
"The Plant-Based Athlete: A Game-Changing Approach to Peak Performance" (2021) https://www.amazon.com/Plant-Based-Athlete-Game-Changing-App...
There are also low cost Smartphone fundoscopy devices for eye / retinal health chart images: https://www.aao.org/eyenet/article/smartphone-funduscopy
You can't use AI to classify and predict e.g. diabetic retinopathy from retinal images unless you have retinal images in the chart, over time.
Mars Declared Unsafe For Humans: No one can survive for longer than four years
Tardigrade: https://en.wikipedia.org/wiki/Tardigrade
"Scientists Put Tardigrade DNA Into Human Stem Cells. They May Create Super Soldiers." (2023) https://www.popularmechanics.com/science/animals/a43509580/t...
- https://news.ycombinator.com/item?id=35447885
Tardigrade: https://en.wikipedia.org/wiki/Tardigrade#Physiology :
> Tardigrades are thought to be able to survive even complete global mass extinction events caused by astrophysical events, such as gamma-ray bursts, or large meteorite impacts.[9][10] Some of them can withstand extremely cold temperatures down to 0.01 K (−460 °F; −273 °C) (close to absolute zero), while others can withstand extremely hot temperatures up to 420 K (300 °F; 150 °C)[36][37] for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space.[38] Tardigrades that live in harsh conditions undergo an annual process of cyclomorphosis, allowing for survival in subzero temperatures. [39]
What selected for tardigrades' radiation shielding features?
Perhaps humanoid robots (that children can push over) could better withstand high-radiation environments.
What are some of the other advanced solar and cosmic radiation shielding approaches? IIRC spinning Superconducting materials have better shielding, but may not be as shielded as drilling into an asteroid already on a trajectory similar to the desired transit.
Barring that, I think it's going to need to be matter teleportation (which we don't want to imagine un-friendly aliens having either).
A Case of the MUMPS (2007)
HL7 starts to make more sense to me now.
The computer graphics industry got started at the university of Utah
That's why you see this at the computer history museum:
https://en.wikipedia.org/wiki/Utah_teapot
sort of like the great-great-grandparent of 3dbenchy:
Computer graphics > History: https://en.wikipedia.org/wiki/Computer_graphics#History
History of computer animation: https://en.wikipedia.org/wiki/History_of_computer_animation
Reminds me of the Stanford Bunny:
https://en.wikipedia.org/wiki/Stanford_bunny
Or, more distantly related, the test photo Lena, which recently fell victim to political correctness:
Yeah, I don't really get what happened there. They found out 20 years later that it was a cropped image from Playboy, and eventually both Playboy and the model were fine with it being used for research purposes. Then, another 20 years later, people re-discovered this and now it was sexist?
Regardless, I read some other literature and I do agree that it was probably well past time to move to a reference image that wasn't a 1970s 512x512 image. But the non-technical reasoning around it is baffling. Well, not baffling for American culture, but disappointing indeed.
FWIW, this isn't an accurate summary of what happened with the Lena image at all, and the story and reasons behind its decline are easily available online. If you want to understand it, maybe it's worth looking up?
Lena is publicly on board with the movement to stop using her image for research, and it is irrelevant that Playboy has waived their copyrights. The reasons the image is discouraged are to promote professionalism, diversity, and respect in research. The journal Nature says "the Lena image clashes with the extensive efforts to promote women undertaking higher education in science and engineering". [1] The paper "On alternatives to Lenna" says "Whatever its merits, the Lenna image's origin is incompatible with our community's sincere attempts to encourage diversity and respect in Science and Technology". [2] (And references "A centerfold does not belong in the classroom". [3])
It’s more or less the same as hanging pin-ups at work- most people understand why that’s a bad idea, regardless of whether it was acceptable in the past. Imagine if female teachers put up Chippendale’s posters in the classroom, or nurses put them in the hospital, and argued that anyone who complained was just being PC police and unreasonably sensitive. Just because some people might not care doesn’t mean everyone is okay with it, and even if you don’t care about it, it’s still clearly unprofessional.
I’m sure it’s completely unintentional and that you and parent comment don’t mean to sound unsupportive of our social advances of women. I’m sure like most people here you actively support and encourage women to learn and work in STEM fields, and advocate for equality. So please consider more carefully the message that pushing back on the Lenna image sends and how it might reflect on you from other people’s point of view.
[1] https://www.nature.com/articles/s41565-018-0337-2
[2] https://www.tandfonline.com/doi/full/10.1080/09500340.2016.1...
[3] https://www.washingtonpost.com/opinions/a-playboy-centerfold...
I reject your accusations. Claiming without evidence that the Lena image is harmful is about as absurd as claiming the face of the statue of David is harmful. That's another often used picture, and of course completely ignored by the breathless "sexism!" yellers.
David wasn’t made with the intent to be pornographic. Lena’s picture was. Being unaware of the evidence doesn’t mean there isn’t any. There’s plenty of evidence of the mild bias inherent in the use of Lena, if you actually want to find the evidence. Lots of women have mentioned their mild discomfort with seeing it in a professional setting. There’s a documentary about this story called “Losing Lena”.
When someone experiences "mild discomfort" when seeing David (it does display an idealized nude man, which could also be said about the Lena photo, which is far too artistic to be merely called "pornography") then this is still no reason to banish the David photo. If everything gets banned that someone could potentially be mildly offended by, then we would have to ban a hell of a lot of images in research and academia. Another commenter made a comparison to Christian puritanism. A devout Christian might be offended both by Lena and by David. Or think of a devout Muslim woman being deeply offended by the sight of da Vinci's Vitruvian Man.
A nonsecular humanist could take issue with religious imagery, whether the model was fairly paid, and the articles. All NN (Non-Nude) references are suspect.
"A Utah school district has removed the Bible from some schools' shelves" (2023) https://www.npr.org/2023/06/02/1179906120/utah-bible-book-ch...
Persons from Utah: are there even good examples of healthy, very heteronormative marriages in the aforementioned now-unmentionable text?
[flagged]
(All humans have those. New York already fought for their legal right to public anatomy, too.)
Are there higher-resolution test images for ML? Maybe something inanimate that can't be judged and thus can't go to heaven anyway.
sklearn.datasets: load_iris, load_digits: https://scikit-learn.org/stable/datasets/toy_dataset.html
Sklearn.datasets sample images has an image of a flower but not a USA!: https://scikit-learn.org/stable/datasets/loading_other_datas...
The W3C RDF Primer could describe Shape rdfs:Class'es instead of People and their contact information rdfs:Property(s). An Object-Oriented Programming and Linked Data exercise: arrange these classes into a hierarchy according to their features: Shape, Square, Rectangle, Triangle, Quadrilateral.
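One possible answer to that exercise, sketched as Python classes with rdfs:subClassOf modeled as inheritance:

```python
class Shape: ...
class Triangle(Shape): ...            # 3 sides
class Quadrilateral(Shape): ...       # 4 sides
class Rectangle(Quadrilateral): ...   # 4 right angles
class Square(Rectangle): ...          # 4 right angles and equal sides

# Every Square is a Rectangle, a Quadrilateral, and a Shape;
# a Triangle is a Shape but not a Quadrilateral.
```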
> Adobe and Pixar founders created tech that shaped modern animation
Pixar!
Pixar in a Box curriculum: https://www.khanacademy.org/computing/pixar
AI Game Development Tools (AI-GDT) > 3D Model: https://github.com/Yuan-ManX/ai-game-development-tools#3d-mo... : BlenderGPT, Blender-GPT,
Pixar > History: https://en.wikipedia.org/wiki/Pixar :
> [...] Coincidentally, one of Steve Jobs's first jobs was under Bushnell in 1973 as a technician at his other company Atari, which Bushnell sold to Warner Communications in 1976 to focus on PTT.[22] PTT would later go bankrupt in 1984 and be acquired by ShowBiz Pizza Place.[23]
PTT: Chuck E. Cheese's Pizza Time Theatres
Pizza Planet! (Toy Story (1995))
Adobe Inc. > History: https://en.wikipedia.org/wiki/Adobe_Inc.#History
Gravitational Machines
It's an article from 1962:
"After the detection of the gravitational wave GW170817, Jason T. Wright (Physics Today, 72, 5, 12, 2019) reminded the community that many of its features had been predicted by Dyson more than half a century earlier. Dyson’s article was published only once, in Cameron’s long out of print collection, though a scan may be found at the web site of the Gravity Research Foundation (https://www.gravityresearchfoundation.org). Dyson thought it had been reprinted (in his Selected Papers, AMS Press, 1996, forward by Elliot H. Lieb) but it was not. Hoping to make the article easier to find, I wrote Dyson for his permission to post it at the arXiv"
It's about using two big bodies, A and B, to accelerate objects: "The energy source of the machine is the gravitational potential between the stars A and B. As the machine continues to operate, the stars A and B will gradually be drawn closer together, their negative potential energy will increase and their orbital velocity V will also increase."
Nice idea.
I wonder if one could calculate an upper bound on the potential gravitational energy available in the entire universe by estimating how far every massive point (baryon) is from all the others.
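A back-of-the-envelope version of that estimate is the pairwise potential-energy sum, which for a uniform sphere of mass M and radius R reduces to the standard gravitational self-energy formula (applying it to the whole observable universe is, of course, only an order-of-magnitude heuristic):

```latex
U \;=\; -\sum_{i<j}\frac{G\,m_i m_j}{r_{ij}}
\qquad\xrightarrow{\ \text{uniform sphere}\ }\qquad
U \;=\; -\frac{3}{5}\,\frac{G M^2}{R}
```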
[flagged]
What is the issue with this comment?
https://chat.openai.com/share/39580cd0-b5b2-4bec-89d4-09297c...
“[…] 5. Perceived Irrelevance or Lack of Relevance: Depending on the context of the Hacker News post, copy-pasting AI-generated content might be seen as irrelevant or not contributing to the topic at hand. This can be frustrating for users who are looking for meaningful and relevant insights from other community members.”
If I wanted to know what Bard or ChatGPT had to say about this, I would ask them.
But you don't know to ask: that's the problem. They're all trying new QG and dark energy correction factors.
One can't solve 3-body problems without Superfluid Quantum Gravity, thus this that I took the time to add is very relevant.
Not worth helping then.
> One can't solve 3-body problems without Superfluid Quantum Gravity
The article (from 1962) does not predict 3-body accelerations of large masses or particles using current methods (probably because they had not yet been developed at the time).
(Ironically, in the context of the rejection of AI methods to summarize Superfluid Quantum Gravity for the thread's benefit (and not my own).)
N-body gravity problems are probably best solved by AI methods; it is well understood that there is no known general closed-form solution to the n-body gravity problem.
Three-body problem: https://en.wikipedia.org/wiki/Three-body_problem
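Closed-form solutions aside, n-body trajectories are routinely computed by direct numerical integration. A minimal leapfrog sketch in G = 1 units, with initial conditions I chose to give a circular two-body orbit:

```python
import numpy as np

def accel(pos, mass, G=1.0):
    """Pairwise Newtonian accelerations for n bodies."""
    a = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                a[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return a

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick integration; symplectic, so energy drift stays bounded."""
    pos, vel = pos.copy(), vel.copy()
    a = accel(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos, mass)
        vel += 0.5 * dt * a
    return pos, vel

# Two unit masses, 2 apart; circular-orbit speed is v = 0.5 in these units.
mass = np.array([1.0, 1.0])
pos0 = np.array([[-1.0, 0.0], [1.0, 0.0]])
vel0 = np.array([[0.0, -0.5], [0.0, 0.5]])
```

For a circular orbit the separation should stay near 2 over many steps, which is a simple sanity check on the integrator.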
I'd love to see a chatbot try lol. That's all I'll say about that...
[deleted]
Royal Navy says quantum navigation test a success
This is the most obvious practical application of my PhD topic. A Bose-Einstein Condensate is an extremely sensitive detector of gravity; a nuclear submarine could use it to make an ultra-accurate map of the world based on variations in the local gravitational acceleration g. This would remove one of the primary reasons subs need to surface: to get a GPS lock.
It’s a naval chart that would not require surfacing.
A French postdoc in my lab swore this was the moneymaker for our entire subfield, and it seems he was on to something …
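The navigation idea reduces to map-matching: slide the measured gravity-anomaly profile along the chart and find the best fit. A toy 1-D sketch (the anomaly values are made up for illustration):

```python
def locate(chart, measured):
    """Return the chart offset where the measured gravity-anomaly window
    best matches the reference chart (least sum of squared differences)."""
    def sse(i):
        return sum((chart[i + j] - m) ** 2 for j, m in enumerate(measured))
    return min(range(len(chart) - len(measured) + 1), key=sse)

# Hypothetical anomaly chart (mGal) along a track, and a window measured
# by the onboard gravimeter:
chart = [0.0, 1.2, 0.4, -0.8, 2.0, 1.1, -0.3, 0.6]
reading = [-0.8, 2.0, 1.1]
```

A real system would match in 2-D against a gravimetric chart and fuse the fix with inertial navigation, but the core correlation step looks like this.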
A submarine itself is quite massive, and has big moving parts (e.g. the rotor). The crew has less mass, but it would be very close, and also in constant movement. Wouldn’t this interfere with those measurements? Would they need to stop the engine and play “frozen TikTok” while the measurement device is active?
Surely the influence of some of these sources of noise can be modeled and filtered out. A group at the University of Vienna currently conducts experiments to measure the gravitational force at very small scales, and they already have to correct for things like crowds moving about the city during rush hour.
Sources of Noise?!
Doesn't that mean that the gravitational field is an information storage and transmission medium; a signal channel?
Is it lossy?
Is there a routable signal path to block harassing calls?
Is it slower or cheaper than nonlocal entanglement; like quantum radar?
What is the minimum energy necessary to cause a propagating, thresholdable-as-binary perturbation of a gravitational field?
Not a formal definition, but noise is simply the undesirable part of the output of any information processing device, be it a sensor, an amplifier, or a transmission medium. But on Earth it's simply not practical as such. Gravitational wave observatories are used to detect events such as neutron star or black hole mergers that are otherwise unobservable, but they require an insane level of precision.
Are there yet any instruments better than LIGO at a lagrangian point outside Earth's atmosphere and the Van Allen radiation belt?
No, there’s nothing yet. I think you‘re thinking about (e)LISA btw.
Here is a link for that:
https://en.wikipedia.org/wiki/Laser_Interferometer_Space_Ant...
LISA: Laser Interferometer Space Antenna: https://en.wikipedia.org/wiki/Laser_Interferometer_Space_Ant... :
> LISA would be the first dedicated space-based gravitational-wave observatory. It aims to measure gravitational waves directly by using laser interferometry. The LISA concept has a constellation of three spacecraft arranged in an equilateral triangle with sides 2.5 million kilometres long, flying along an Earth-like heliocentric orbit. The distance between the satellites is precisely monitored to detect a passing gravitational wave.[2]
It also says the ESA LISA projected launch date is in year 2037.
Could 3 or 4 cubesats per cluster solve for space-based gravitational wave observation?
CubeSat: https://en.wikipedia.org/wiki/CubeSat
Li-Fi: https://en.wikipedia.org/wiki/Li-Fi
Could the gravitational wave sensors be mounted on Starlink or OneWeb ULEO Ultra Low Earth Orbit satellites with a 5 year lifecycle? (Also, how sealed and shielded do consumer radio telescopes like Unistellar's eVscope and eQuinox need to be to last a few years in microgravity? Maybe mando blackout noise.)
What is the SOTA (State of the Art) in what is called "matter-wave interferometry"?
/? matter-wave interferometry wikipedia: https://www.google.com/search?q=matter-wave+interferometry+w...
/? matter-wave interferometry: https://www.google.com/search?q=matter-wave+interferometry
Can low-cost lasers and Rydberg atoms (e.g. Rydberg Technology) solve for [space-based] matter-wave interferometry?
/? from:me LIGO https://twitter.com/search?q=from%3A%40westurner%20ligo :
- "Massive Black Holes Shown to Act Like Quantum Particles" (2022) https://www.quantamagazine.org/massive-black-holes-shown-to-... :
> Physicists are using quantum math to understand what happens when black holes collide. In a surprise, they’ve shown that a single particle can describe a collision’s entire gravitational wave.
- GitHub topic: gravitational-waves: https://github.com/topics/gravitational-waves
Further notes regarding Superfluid Quantum Gravity (instead of dark energy): https://news.ycombinator.com/item?id=36258299
It sure is a medium! One Denver (IIRC) gravitational experiment could measure the field distortion caused by 100,000 people and cars at a local sports arena. [citation needed]
Is there a butterfly-effect-like minimum perturbation for gravitational wave propagation through matter or massful things, at least?
Is there something that's low-enough power to not EM-burn a brainstem in a cryptographically-keyed medical device with key revocation?
Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform and/or other methods?
Butterfly effect: https://en.wikipedia.org/wiki/Butterfly_effect
Chaos theory: https://en.wikipedia.org/wiki/Chaos_theory
Quantum chaos: https://en.wikipedia.org/wiki/Quantum_chaos
Superfluid Quantum Gravity:
Backscatter: https://en.wikipedia.org/wiki/Backscatter
Apple should pull the plug on the iPhone (2007)
Competitors at the time included Nokia (Symbian OS), Palm, Windows Mobile, Blackberry, OpenMoko, and Android; but not Apple Newton OS (1987-1998).
Cisco IOS (for network gear) was created in the 1980s.
It was possible to run Linux on the 1st gen iPod (2001). IIRC the Archos Jukebox also had a 2.5" drive enclosure with audio (and then video on a color LCD) decoding.
Nvidia releases new AI chip with 480GB CPU RAM, 96GB GPU RAM
They also have a turnkey product with 256 of these things.
1 exaflop + 144TB memory
https://nvidianews.nvidia.com/news/nvidia-announces-dgx-gh20...
I watched Jensen’s announcement for this.
He calls it the world's largest GPU. It’s just one giant compute unit.
Unlike supercomputers, which are highly distributed, Nvidia says this is 140 TERABYTES of UNIFIED MEMORY.
My mind still gets blown just thinking about it. My poor desktop GPU has 4 gigabytes of memory. Heck, it only has 2 terabytes of storage!
It may be presented as seamless unified memory, but it isn’t. The underlying framework still has to figure out how to allocate your data to minimize cross-unit talk. Each unit has an independent CPU, GPU and (V)RAM, but units are interconnected via a very fast network.
How does this compare to HBM High Bandwidth Memory (and GDDR5)? https://en.wikipedia.org/wiki/High_Bandwidth_Memory
From https://news.ycombinator.com/item?id=36211785 :
> EDIT: found answer to my own question in the datasheet: "The NVIDIA Grace CPU combines 72 Neoverse V2 Armv9 cores with up to 480GB of server-class LPDDR5X memory with ECC."
So, this is not stacked RAM like HBM, it's LPDDR5X which a quick search says is 8.5Gbps.
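The per-unit and cluster-level figures quoted in this thread are easy to cross-check; a quick arithmetic sketch (treating GB/TB as decimal units, so the result only roughly matches the marketed "~144 TB"):

```python
cpu_ram_gb = 480   # Grace CPU LPDDR5X, per unit (from the datasheet quote above)
gpu_ram_gb = 96    # GPU memory, per unit
units = 256        # turnkey cluster size

total_tb = units * (cpu_ram_gb + gpu_ram_gb) / 1000
print(total_tb)  # 147.456 -> in the same ballpark as the ~144 TB unified memory figure
```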
The Open-Source Software in Our Pockets Needs Our Help
It looks like the Federal Source Code policy at sourcecode.cio.gov is 404'ing; though there are many references to the URL [1][2]: https://github.com/WhiteHouse/source-code-policy/blob/gh-pag...
[1] https://www.cio.gov/2016/08/11/peoples-code.html
[2] https://code.gov/agency-compliance/compliance/procurement
> Section 7.2 and 7.3 of the Source Code Policy require agencies to provide an inventory of their 'custom-developed code' to support government-wide reuse and make Federal open source code easier to find.
> Using these inventories, Code.gov will provide a platform to search federally funded open source software and software available for government-wide reuse.
https://news.ycombinator.com/item?id=35888037 : security.txt, carbon.txt, SPDX SBOM, OSV, JSON-LD, blockcerts
/? "Critical Technology Security Centers Act" of 2023 https://www.google.com/search?q=Critical%2BTechnology%2BSecu...
Text - H.R.2866 - 118th Congress (2023-2024): Critical Technology Security Centers Act of 2023. (2023, April 25). https://www.congress.gov/bill/118th-congress/house-bill/2866...
Demo: Fully P2P and open source Reddit alternative
Every time I see social media recreated in a decentralised fashion I realise why the successful ones are centralised. The loading times of this are dreadful and I don't see how you'll be able to solve that problem when you're relying on P2P.
In any case, it sounds like we'll be getting many new Reddit clones so I may as well plug what I've built recently: a Reddit alternative API that is free. Check it out if you're a dev that is being caught out by the Reddit pricing changes[1].
I don't really find Mastodon that slow.
I do think it's got a lot, lot of sharp edges that need smoothing out before it's really viable. But speed isn't a problem, and it's decentralised.
Japanese Joinery, 3D printing, and FreeCAD
Is anyone aware of attempts at Japanese Joinery 3D printed parts, hopefully using FreeCAD?
https://github.com/topics/joinery :
- chantzlarge/joinery : https://github.com/chantzlarge/joinery
https://github.com/topics/woodworking
- petrasvestartas/compas_wood: https://github.com/petrasvestartas/compas_wood gallery: https://petrasvestartas.github.io/compas_wood/latest/#galler... (Python)
- marialarsson/tsugite: https://github.com/marialarsson/tsugite (Python)
https://github.com/topics/freecad
Japanese carpentry: https://en.wikipedia.org/wiki/Japanese_carpentry
/? Japanese joinery: https://m.youtube.com/results?sp=mAEA&search_query=japanese+...
New open-source datasets for music-based development
I’ll bite: Anyone doing fun stuff on the personal-music-collection front with these datasets?
I’m a big fan of Picard (via beets), and the ListenBrainz dataset seems like a fun resource for finding new music… but despite being intrigued by the rest of the datasets, I have no clue what I’d ever do with them from a practical standpoint.
"Show HN: Polymath: Convert any music-library into a sample-library with ML" (2022) https://news.ycombinator.com/item?id=34756949
Show HN: A Database Generator for EVM with CRUD and On-Chain Indexing
Hey all,
We are working on a database generator that takes in your data schema in YAML format and uses it to generate a database in Solidity with CRUD and on-chain indexing capabilities. You can deploy the generated contracts to any EVM blockchain.
We have a demo and playground if you want to test it out. It's in the early stages, we have planned some exciting features, and we'd appreciate your feedback!
Additional schema languages may be worth supporting: Arrow, JSONschema, SHACL, json-ld-schema
react-jsonschema-form: https://github.com/rjsf-team/react-jsonschema-form
google/react-schemaorg: https://github.com/google/react-schemaorg : Type-checked Schema.org JSON-LD for React
pydantic_schemaorg: https://github.com/lexiq-legal/pydantic_schemaorg
"RDFJS"
YAML-LD (application/ld+yaml) round-trips to JSON-LD (application/ld+json) which round trips to RDF; W3C Linked Data Signatures / W3C Verified Claims cryptographic signatures verify regardless of graph representation.
YAML-LD > 3.1 JSON vs YAML comparison: https://json-ld.github.io/yaml-ld/spec/#json-vs-yaml
Could there be on-chain indexes / indices for Linked Data documents?
rdflib-leveldb: https://github.com/RDFLib/rdflib-leveldb https://github.com/RDFLib/rdflib-leveldb/blob/a0f3386c71e6b1...
Fwiu, "e.g. Graph token (GRT) supports Indexing (search) and Curation of datasets"
Could code generated with SolidQuery serve Certificate Transparency logs and e.g. Sigstore artifact signatures? https://news.ycombinator.com/item?id=32760170 https://westurner.github.io/hnlog/#comment-32760170 :
> "Google's Certificate Transparency Search page to be discontinued May 15th, 2022" https://news.ycombinator.com/item?id=30781698
> - LetsEncrypt Oak is also powered by Google/trillian, which is a trustful centralized database
See also: cosmos/iavl
> - e.g. Graph token (GRT) supports Indexing (search) and Curation of datasets
>> And what about indexing and search queries at volume, again without replication?
> My understanding is that the Sigstore folks are now more open to the idea of a trustless DLT? "W3C Verifiable Credentials" is a future-proof standardized way to sign RDF (JSON-LD) documents with DIDs.
WebNN: Web Neural Network API
W3C WebNN
- Src: https://github.com/webmachinelearning/webnn
- Spec: https://www.w3.org/TR/webnn/
WebNN API: https://www.w3.org/TR/webnn/#api
Vectorization: Introduction
> Vectorization is a process by which floating-point computations in scientific code are compiled into special instructions that execute elementary operations (+,-,*, etc.) or functions (exp, cos, etc.) in parallel on fixed-size vector arrays.
I guess this is a scientific computing course or something but I feel like even so it’s important to point out that most processors have many vector instructions that operate on integers, bitfields, characters, and the like. The fundamental premise of “do a thing on a bunch of data at once” isn’t limited to just floating point operations.
Vectorization (disambiguation) https://en.wikipedia.org/wiki/Vectorization :
> Array programming, a style of computer programming where operations are applied to whole arrays instead of individual elements
> Automatic vectorization, a compiler optimization that transforms loops to vector operations
> Image tracing, the creation of vector from raster graphics
> Word embedding, mapping words to vectors, in natural language processing
> Vectorization (mathematics), a linear transformation which converts a matrix into a column vector
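The last sense above, vec(A), is easy to illustrate; a minimal pure-Python sketch of column-major stacking:

```python
def vec(matrix):
    """Vectorization vec(A): stack the columns of A into a single flat list."""
    rows, cols = len(matrix), len(matrix[0])
    return [matrix[i][j] for j in range(cols) for i in range(rows)]

print(vec([[1, 2],
           [3, 4]]))  # [1, 3, 2, 4]: column [1, 3] then column [2, 4]
```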
Vector (disambiguation) https://en.wikipedia.org/wiki/Vector
> Vector (mathematics and physics):
> Row and column vectors, single row or column matrices
> Vector space
> Vector field, a vector for each point
And then there are a number of CS usages of the word vector for 1D arrays.
Compute kernel: https://en.wikipedia.org/wiki/Compute_kernel
GPGPU > Vectorization, Stream Processing > Compute kernels: https://en.wikipedia.org/wiki/General-purpose_computing_on_g...
sympy.utilities.lambdify.lambdify() https://github.com/sympy/sympy/blob/a76b02fcd3a8b7f79b3a88df... :
> """Convert a SymPy expression into a function that allows for fast numeric evaluation [with the CPython math module, mpmath, NumPy, SciPy, CuPy, JAX, TensorFlow, SymPy, numexpr,]
pytorch lambdify PR, sympytorch: https://github.com/sympy/sympy/pull/20516#issuecomment-78428...
sympytorch: https://github.com/patrick-kidger/sympytorch :
> Turn SymPy expressions into PyTorch Modules.
> SymPy floats (optionally) become trainable parameters. SymPy symbols are inputs to the Module.
sympy2jax https://github.com/MilesCranmer/sympy2jax :
> Turn SymPy expressions into parametrized, differentiable, vectorizable, JAX functions.
> All SymPy floats become trainable input parameters. SymPy symbols become columns of a passed matrix.
[deleted]
You can link an OpenPGP key to a German eID
Is there a reputable identity provider that would verify a passport, SSN or similar, preferably in person, and link that to an OpenPGP key with metadata same as in the ID?
Similar to this service, but linking not just the name, but more secure unique identity data. Linking the person’s name to the key is not very useful, since there are many people with that name.
That’s basically a government-issued smart card that would allow the use of OpenPGP A-E-S keys for arbitrary data through a FOSS API.
Keybase was a good idea, but it’s semi dead.
It's not exactly what you're saying, but it has all the best ideas of Keybase. Basically, if you trust that someone has control over multiple different accounts, you can also trust their PGP key.
Re: WoT Web of Trust, `keybase pgp -h`, and Web standards: W3C DID Decentralized Identifiers, W3C VC Verifiable Credentials, "Linked Data Signatures for GPG"; there's a URI for the GpgSignature2020 signature suite: https://news.ycombinator.com/item?id=28814802 https://news.ycombinator.com/item?id=35302650
https://news.ycombinator.com/item?id=26758099 ; blockcerts.org, blockcerts-verifier-js, ILP ledger addresses
"Verifiable Credentials Data Model v1.1" W3C Recommendation 03 March 2022 https://www.w3.org/TR/vc-data-model/#ecosystem-overview :
> Holder, Issuer, Subject, Verifier, Verifiable Data Registry
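A minimal sketch of what a credential looks like in those terms; the issuer and subject DIDs below are hypothetical placeholders, and a real credential would also carry a `proof` block produced by a signature suite and checked by the Verifier:

```python
import json

# Shape follows the VC Data Model v1.1 ecosystem roles quoted above
vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",        # the Issuer
    "issuanceDate": "2022-03-03T00:00:00Z",
    "credentialSubject": {                    # claims about the Subject
        "id": "did:example:subject456",
    },
    # "proof": {...}  # signature over the document, verified by the Verifier
}
print(json.dumps(vc, indent=2))
```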
Evaluate and Track Your LLM Experiments: Introducing TruLens for LLMs
https://news.ycombinator.com/item?id=35810320 :
> - [ ] dvc, GitHub Actions, GitLab CI, Gitea Actions,: how to add PROV RDF Linked Data metadata to workflows like DVC.org's & container+command-in-YAML approach
https://news.ycombinator.com/item?id=34619424 https://westurner.github.io/hnlog/#comment-34619424 :
- XAI: Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
- > Right to explanation: https://en.wikipedia.org/wiki/Right_to_explanation
- > A more logged approach with IDK all previous queries in a notebook and their output over time would be more scientific-like and thus closer to "Engineering": https://en.wikipedia.org/wiki/Engineering
This Is Why I Teach My Law Students How to Hack
def hasRight(person) -> bool: ...

def haveEqualRights(persons) -> bool: ...
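Filling those stubs in, purely as a hypothetical sketch (the rights mapping and the function bodies are illustrative, not from any legal-NLP library):

```python
def hasRight(person, right, rights) -> bool:
    """Does `person` hold `right` under a mapping of person -> set of rights?"""
    return right in rights.get(person, set())

def haveEqualRights(persons, rights) -> bool:
    """Do all `persons` hold exactly the same set of rights?"""
    sets = [rights.get(p, set()) for p in persons]
    return all(s == sets[0] for s in sets)

rights = {"alice": {"vote", "speech"}, "bob": {"vote", "speech"}}
print(haveEqualRights(["alice", "bob"], rights))  # True
```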
awesome-legal-nlp: https://github.com/maastrichtlawtech/awesome-legal-nlp
A blocky based CAD program
What a great idea.
TIL about jupyterlab-blockly https://jupyterlab-blockly.readthedocs.io/en/latest/ https://jupyterlab-blockly.readthedocs.io/en/latest/other_ex... :
> The JupyterLab-Blockly extension is ready to be used as a base for other projects: you can register new Blocks, Toolboxes and Generators. It is a great tool for fast prototyping.
jupyter-cadquery: https://github.com/bernhard-42/jupyter-cadquery
"Generate code from GUI interactions" https://github.com/Kanaries/pygwalker/issues/90
Why cadquery instead of OpenSCAD: https://cadquery.readthedocs.io/en/latest/intro.html#why-cad...
/? awesome finite element analysis site:github.com https://www.google.com/search?q=awesome+finite+element+analy...
AI Game Development Tools (AI-GDT) > 3D Model https://github.com/Yuan-ManX/ai-game-development-tools#3d-mo... :
> blender_jarvis - Control Blender through text prompts with help of ChatGPT*
> Blender-GPT - An all-in-one Blender assistant powered by GPT3/4 + Whisper [speech2text] integration
> BlenderGPT
> chatGPT-maya - Simple Maya tool that utilizes open AI to perform basic tasks based on descriptive instructions.
Removing PGP from PyPI
> Of those 1069 unique keys, about 30% of them were not discoverable on major public keyservers, making it difficult or impossible to meaningfully verify those signatures.
I don't know if it applies to any of those 1069 keys, but note that there is a way of hosting PGP keys that does not depend on key servers: WKD https://datatracker.ietf.org/doc/draft-koch-openpgp-webkey-s... . You host the key at a .well-known URI under the domain of the email address. It's a draft as you can see, but I've seen a few people using it (including myself), and GnuPG supports it.
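The draft's "direct method" URL construction (z-base-32 encoding of the SHA-1 of the lowercased local part of the address) can be sketched in a few lines. This follows my reading of draft-koch-openpgp-webkey-service and GnuPG's behavior; it is not a vetted implementation:

```python
import hashlib

# z-base-32 alphabet as used by WKD
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h768"

def zbase32(data: bytes) -> str:
    """Encode bytes as z-base-32 (MSB-first bit packing, no padding)."""
    bits = nbits = 0
    out = []
    for byte in data:
        bits = (bits << 8) | byte
        nbits += 8
        while nbits >= 5:
            nbits -= 5
            out.append(ZB32[(bits >> nbits) & 31])
    if nbits:
        out.append(ZB32[(bits << (5 - nbits)) & 31])
    return "".join(out)

def wkd_direct_url(email: str) -> str:
    """Build the WKD 'direct method' URL for an email address."""
    local, domain = email.split("@")
    digest = hashlib.sha1(local.lower().encode()).digest()
    return (f"https://{domain}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")

# the draft's example address hashes the lowercased local part "joe.doe"
print(wkd_direct_url("Joe.Doe@example.org"))
```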
This is interesting, but it doesn't really solve the key distribution problem: with well-known hosting you now have a (weak) binding to some DNS path, but you're not given any indication of how to discover that path. It's also not clear that a DNS identity is beneficial to PyPI in particular (since PyPI doesn't associate or namespace packages at all w/r/t DNS).
More generally, these kinds of improvements are not a sufficient reason to retain PGP: even with a sensible identity binding and key distribution, PGP is still a mess under the hood. The security of a codesigning scheme is always the weakest signature and/or key, and PGP's flexibility more or less ensures that that weakest link will always be extremely weak.
Right. I've never used PyPI, but TFA makes it sound like the existing support for signing is "We allow the uploader to upload a signature, and the downloader can look up the key indicated in the signature to do the verification." Is that correct? If so, then yes there is a key ID involved but no email address, so a generic downloader would have no choice but to look it up from a key server.
That's correct!
PyPI's support for PGP is very old -- it's hard to get an exact date, but I think it's been around since the very earliest versions of the index (well before it was a storing index like it is now). If I had to guess (speculate wildly), my guess would be that the original implementation was done with a healthy SKS network and strong set in mind -- without those things, PGP's already weak identity primitives are more or less nonexistent with just signatures.
GPG ASC upload support was quietly added later IIRC. EWDurbin might recall
Well, I think there should be broader discussion of this inadequacy.
"Implement "hook" support for package signature verification." (2013) https://github.com/pypi/warehouse/issues/1638#issuecomment-2...
"GPG signing - how does that really work with PyPI?" https://github.com/pypa/twine/issues/157#issuecomment-101460...
"Better integration with conda/conda-forge for building packages" https://github.com/pyodide/pyodide/issues/795#issuecomment-1...
Conda now has its own package cryptographic signature software supply chain security control. Unfortunately, conda's isn't yet W3C DIDs with Verifiable Credentials and sigstore either.
Also, [CycloneDX] SBOMs don't have any package archive or package file signatures; so when you try to audit what software you have on all the containers on your infrastructure there's no way to check the cryptographic signatures of the package authors and maintainers against what's installed on disk.
docker help sbom
# check_signatures python conda zipapps apt/dnf/brew/chocolatey_nuget git /usr/local
And without clients signing before uploading, we can only verify Data Integrity (1) at the package archive level; (2) with pypi's package signature key.
Now that you have removed GPG ASC signature upload support, is there any way for publishers to add cryptographic signatures to packages that they upload to pypi? FWIU only "the server signs uploads" part of TUF was ever implemented?
Why do we use GPG ASC signatures instead of just a checksum over the same channel?
> Why do we use GPG ASC signatures instead of just a checksum over the same channel?
Could you elaborate on what you mean by this? PyPI computes and supplies a digest for every uploaded distribution, so you can already cross-check integrity for any hosted distribution.
GPG was nominally meant to provide authenticity for distributions, but it never really served this purpose. That's why it's being removed.
> Why do we use GPG ASC signatures instead of just a checksum over the same channel?
You can include an md5sum or a sha512sum string next to the URL that the package is downloaded from (for users to optionally check after downloading a package); but if that checksum string is uploaded over the same channel (HTTPS/TLS w/ a CA cert bundle) as the package, the checksum string could have been MITM'd/tampered with, too. A cryptographically-signed checksum can be verified once the pubkey is retrieved over a different channel (GPG: HKP is HTTPS/TLS with cert pinning IIRC), and a MITM would have to spend a lot of money to forge that digital publisher signature.
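To make the distinction concrete: a bare checksum check only gives integrity relative to wherever you obtained the expected digest, which is exactly the same-channel problem described above. A minimal sketch:

```python
import hashlib

def verify_sha256(path: str, expected_hex: str) -> bool:
    """Integrity check only: it trusts whoever supplied `expected_hex`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```

A MITM that can alter the package download can usually alter the adjacent digest string too; a publisher signature over that digest, verified against a key fetched out-of-band, is what closes that gap.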
Twine COULD/SHOULD download uploads to check the PyPI TUF signature, which could/should be shipped as a const in twine?
And then Twine should check publisher signatures against which trusted map of package names to trusted keys?
1) the server signs what's uploaded using one or more TUF keys shared to RAM on every pypi upload server.
2) the client uploads a cryptographic signature (made using their own key) along with the package, and the corresponding public key is trusted to upload for that package name, and the client retrieves said public key and verifies the downloaded package's cryptographic signature before installing.
FWIU, 1 (PyPI signs uploads with TUF) was implemented, but 2 (users sign their own packages before uploading the signed package and signature, (and then 1)) was never implemented?
Your understanding is a little off: we worked on integrating TUF into PyPI for a while, but ran into some scalability/distributability issues with the reference implementation. It's been a few years, but my recollection was that the reference implementation assumed a lot of local filesystem state, which wasn't compatible with Warehouse's deployment (no local state other than tempfiles, everything in object storage).
To the best of my knowledge, the current state of TUF for PyPI is that we performed a trusted setup ceremony for the TUF roots[1], but that no signatures were ever produced from those roots.
For the time being, we're looking at solutions that have less operational overhead: Sigstore[2] is the main one, and it uses TUF under the hood to provide the root of trust.
python-tuf [1] back then assumed that everything was manipulated locally, yes, but a lot has changed since then: you can now read/write metadata entirely in memory, and integrate with different key management backend systems such as GCP.
More importantly, I should point out that while Sigstore's Fulcio will help with key management (think of it as a managed GPG, if you will), it will not help with securely mapping software projects to their respective OIDC identities. Without this, how will verifiers know in a secure yet scalable way which Fulcio keys _should_ be used? Otherwise, we would then be back to the GPG PKI problem with its web of trust.
This is where PEP 480 [2] can help: you can use TUF (especially after TAP 18 [3]) to do this secure mapping. Marina Moore has also written a proposal called Transparent TUF [4] for having Sigstore manage such a TUF repository for registries like PyPI. This is not to mention the other benefits that TUF can give you (e.g., protection from freeze, rollback, and mix-and-match attacks). We should definitely continue discussing this sometime.
[1] https://github.com/theupdateframework/python-tuf
[2] https://peps.python.org/pep-0480/
[3] https://github.com/theupdateframework/taps/blob/master/tap18...
[4] https://docs.google.com/document/d/1WPOXLMV1ASQryTRZJbdg3wWR...
> [4] [Transparent TUF]: https://docs.google.com/document/d/1WPOXLMV1ASQryTRZJbdg3wWR...
W3C ReSpec: https://github.com/w3c/respec/wiki
blockcerts-verifier (JS): https://github.com/blockchain-certificates/blockcerts-verifi...
blockchain-certificates/cert-verifier (Python): https://github.com/blockchain-certificates/cert-verifier
https://news.ycombinator.com/item?id=35896445 :
> Can SubtleCrypto accelerate any of the W3C Verifiable Credential Data Integrity 1.0 APIs? vc-data-integrity: https://w3c.github.io/vc-data-integrity/ ctrl-f "signature suite"
>> ISSUE: Avoid signature format proliferation by using text-based suite value The pattern that Data Integrity Signatures use presently leads to a proliferation in signature types and JSON-LD Contexts. This proliferation can be avoided without any loss of the security characteristics of tightly binding a cryptography suite version to one or more acceptable public keys. The following signature suites are currently being contemplated: eddsa-2022, nist-ecdsa-2022, koblitz-ecdsa-2022, rsa-2022, pgp-2022, bbs-2022, eascdsa-2022, ibsa-2022, and jws-2022.
https://github.com/theupdateframework/taps/blob/master/tap18... :
> TUF "targets" roles may delegate to Fulcio identities instead of private keys, and these identities (and the corresponding certificates) may be used for verification.
s/fulcio/W3C DID/g may have advantages, or is there already a way to use W3C DID Decentralized Identifiers to keep track of key material in RDFS properties of a DID class?
Where does the user specify the cryptographic key to sign a package before uploading?
Serverside TUF keys are implemented FWICS, but clientside digital publisher signatures (like e.g. MS Windows .exe's have had when you view the "Properties" of the file for many years now) are not yet implemented.
Hopefully I'm just out of date.
> Where does the user specify the cryptographic key to sign a package before uploading?
With Sigstore, they perform an OIDC flow against an identity provider: that results in a verifiable identity credential, which is then bound to a short-lived (~15m) signing key that produces the signatures. That signing key is simultaneously attested through a traditional X.509 PKI (it gets a certificate, that certificate is uploaded to an append-only transparency log, etc.).
So: in the normal flow, the user never directly specifies the cryptographic key -- the scheme ensures that they have a strong, ephemeral one generated for them on the spot (and only on their client device, in RAM). That key gets bound to their long-lived identity, so verifiers don't need to know which key they're verifying; they only need to determine whether they trust the identity itself (which can be an email address, a GitHub repository, etc.).
Signature tells you who signed it.
Of course, if you haven't put any effort in system to end-to-end verify whether it's right signature it doesn't matter.
pip checks that a given package was signed with the pypi key, but does not check for a signature from the publisher. And now there's no way to host any type of cryptographic signature on pypi.
There is no e2e: pypi signs what's uploaded.
(Noting also that packages don't have to be encrypted in order to have cryptographic signatures; signing produces a signature over the package's digest, and the package itself is never encrypted)
Yeah, the whole thing looks like throwing the baby out with the bathwater; the package should:

* get a signature for the author ("the actual author published it"), plus some metadata with a list of valid signing keys (in case the project has more authors, or just for key rotation)

* get a signature from the hosting provider that confirms "yes, that actual user logged in and uploaded the package"

* (the hardest part) key management on the client side, so the user has to do the least amount of work possible when downloading/updating a valid package

If the user doesn't want to go to the effort of validating whether the author's public key is valid, so be it; but at the very least the system should alert on tampering by the provider (checking the hosting signature) or on the author key changing (compromised credentials at the hosting provider).
It still doesn't prevent "the attacker steals key off author's machine" but that is by FAR the rarest case and could be pretty reasonably prevented by just using hardware tokens. Hell, fund them for key contributors.
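The two-layer scheme sketched above (an author signature, plus a hosting-provider attestation over it) can be illustrated with stdlib primitives; HMAC stands in here for real asymmetric signatures, so this is a toy for the structure only, with made-up key values:

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    # HMAC as a stand-in for an asymmetric signature in this sketch
    return hmac.new(key, data, hashlib.sha256).digest()

package = b"example-1.0.tar.gz contents"
author_sig = sign(b"author-key", package)            # "the actual author published it"
host_sig = sign(b"host-key", package + author_sig)   # host attests the authored upload

def verify(pkg: bytes, a_sig: bytes, h_sig: bytes) -> bool:
    """Both layers must check out: author signature, then host attestation over it."""
    ok_author = hmac.compare_digest(sign(b"author-key", pkg), a_sig)
    ok_host = hmac.compare_digest(sign(b"host-key", pkg + a_sig), h_sig)
    return ok_author and ok_host

print(verify(package, author_sig, host_sig))  # True
```

Tampering with either the package or the author signature invalidates the host attestation as well, which is the alerting property the list above asks for.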
Nginx 1.25.0: experimental HTTP/3 support
HTTP/3: https://en.wikipedia.org/wiki/HTTP/3
QUIC: https://en.wikipedia.org/wiki/QUIC
DoQ: DNS-over-QUIC: https://en.wikipedia.org/wiki/Domain_Name_System#Transport_p...
RFC 9250: DNS over Dedicated QUIC Connections: https://datatracker.ietf.org/doc/html/rfc9250
If perpetual motion machines don't exist, what powers earth's natural systems?
Earth's rotation > Origin: https://en.wikipedia.org/wiki/Earth%27s_rotation#Origin :
> Earth's original rotation was a vestige of the original angular momentum of the cloud of dust, rocks, and gas that coalesced to form the Solar System. This primordial cloud was composed of hydrogen and helium produced in the Big Bang, as well as heavier elements ejected by supernovas. As this interstellar dust is heterogeneous, any asymmetry during gravitational accretion resulted in the angular momentum of the eventual planet. [58]
> However, if the giant-impact hypothesis for the origin of the Moon is correct, this primordial rotation rate would have been reset by the Theia impact 4.5 billion years ago. Regardless of the speed and tilt of Earth's rotation before the impact, it would have experienced a day some five hours long after the impact. [59] Tidal effects would then have slowed this rate to its modern value.
Solar energy: fission, fusion; [thermal] radiation,: https://en.wikipedia.org/wiki/Solar_energy
Circumstellar habitable zone: https://en.wikipedia.org/wiki/Circumstellar_habitable_zone :
> In astronomy and astrobiology, the circumstellar habitable zone (CHZ), or simply the habitable zone, is the range of orbits around a star within which a planetary surface can support liquid water given sufficient atmospheric pressure. [1][2][3][4][5] The bounds of the CHZ are based on Earth's position in the Solar System and the amount of radiant energy it receives from the Sun.
Earth > Hydrosphere, Atmosphere: https://en.wikipedia.org/wiki/Earth :
> Earth's hydrosphere is the sum of Earth's water and its distribution. Most of Earth's hydrosphere consists of Earth's global ocean. Nevertheless, Earth's hydrosphere also consists of water in the atmosphere and on land, including clouds, inland seas, lakes, rivers, and underground waters down to a depth of 2,000 m (6,600 ft).
Meta AI announces Massive Multilingual Speech code, models for 1000+ languages
I was super excited at this, but digging through the release [0] one can see the following [1]. While using Bible translations is indeed better than nothing, I don't think the stylistic choices in the Bible are representative of how people actually speak the language, in any of the languages I can speak (i.e. that I am able to evaluate personally).
Religious recordings tend to be liturgical, so even the pronunciation might be different to the everyday language. They do address something related, although more from a vocabulary perspective to my understanding [2].
So one of their stated goals, to enable people to talk to AI in their preferred language [3], might be closer but certainly a stretch to achieve with their chosen dataset.
[0]: https://about.fb.com/news/2023/05/ai-massively-multilingual-...
[1]: > These translations have publicly available audio recordings of people reading these texts in different languages. As part of the MMS project, we created a dataset of readings of the New Testament in more than 1,100 languages, which provided on average 32 hours of data per language. By considering unlabeled recordings of various other Christian religious readings, we increased the number of languages available to more than 4,000. While this data is from a specific domain and is often read by male speakers, our analysis shows that our models perform equally well for male and female voices. And while the content of the audio recordings is religious, our analysis shows that this doesn’t bias the model to produce more religious language.
[2]: > And while the content of the audio recordings is religious, our analysis shows that this doesn’t bias the model to produce more religious language.
[3]: > This kind of technology could be used for VR and AR applications in a person’s preferred language and that can understand everyone’s voice.
This paragraph justifies the decision, IMHO.
> Collecting audio data for thousands of languages was our first challenge because the largest existing speech datasets cover 100 languages at most. To overcome this, we turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research.
I think the choice was just between having any data at all, or not being able to support that language.
While AFAIU the UN UDHR (United Nations Universal Declaration of Human Rights) is the most-translated document in the world, relative to the given training texts there likely hasn't been as much subjective translation analysis of the UDHR.
Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law https://github.com/maastrichtlawtech/awesome-legal-nlp#bench...
A "who hath done it" exercise:
[For each of these things, tell me whether God, Others, or You did it:] https://twitter.com/westurner/status/1641842843973976082?
"Did God do this?"
The UN's UDHR likely yields <5 mins of audio data in any language, which is kinda useless as far as training data goes.
Compared to e.g. religious text translation, I don't know how much subjective analysis there is of UDHR translations. It's pretty cut and dried: e.g. "Equal Protection of Equal Rights" is pretty clear.
"About the Universal Declaration of Human Rights Translation Project" https://www.ohchr.org/en/human-rights/universal-declaration/... :
> At present, there are 555 different translations available in HTML and/or PDF format.
E.g. Buddhist scriptures are also multiply translated; probably with more coverage in East Asian languages.
Thomas Jefferson, who wrote the US Declaration of Independence, had read into Transcendental Buddhism and FWIU is thus significantly responsible for the religious (and nonreligious) freedom we appreciate in the United States today.
Weapons contractors hitting Department of Defense with inflated prices
[flagged]
> is it any wonder we see mainstream media enthusiastically endorsing more and more weapons for Ukraine? this is your "textbook" demonstration of the military-industrial-complex
I love how people are so jaded when America is on the right side of a conflict for the first time in a long time. America is supplying aid to a country defending itself from an imperialistic invader, and it gets to get rid of its expiring weapon stocks by donating them to a country that is decimating its biggest military rival.
This arrangement is win win win for everyone involved.
[flagged]
This narrative does not jive with my memory. There were extremely vocal critics of the invasion of Iraq, long before troops were on the ground. Some of the U.S. closest allies refused to participate. Much of the international community was opposed to the invasion, even with the U.S. doing whatever it could to push for a coalition.
Comparing the invasion of Iraq with the current situation in Ukraine is like comparing apples with hand grenades.
> This narrative does not jive with my memory.
In the years since, lots of people came out and said "they were always against the war" likely creating false memories. Here is one example, it's very common: https://www.factcheck.org/2019/09/bidens-record-on-iraq-war/
The war went on for decades; during that time both Democrats and Republicans held the presidency and Congress at various points, and nobody saw fit even to bring a vote to end the war. They would go years without mentioning it, and now that they want to start another, they "were always against it" all of a sudden.
Public opinion was really 50-50 [1] in the US so you're going to hear a lot of people that say they were against it because a lot of people were.
Of course, it went from a high of 2/3 for to now 2/3 against, so the 1/3 of flip-floppers will also be a lot of people.
[1]: https://en.wikipedia.org/wiki/Public_opinion_in_the_United_S...
List of congressional opponents of the Iraq War: https://en.wikipedia.org/wiki/List_of_congressional_opponent...
That administration increased expenses by trillions and then cut tax revenue by trillions ("starve the beast", "scorched earth"), and now we're at a shutdown showdown again without money for healthcare, infrastructure, or education.
We need to fund Q12 and K12CS education and update curricula to win the actual competition. Without funding, due to defense (offense) spending without ROI, we cannot fund education and will thus also lose the real war: the War on Healthcare.
Repeal of the 2002 AUMF: https://en.wikipedia.org/wiki/Repeal_of_the_2002_AUMF
Senate Report on Iraqi WMD Intelligence: https://en.wikipedia.org/wiki/Senate_Report_on_Iraqi_WMD_Int...
Had they floated "we're going to start a multi-trillion-dollar war and then cut taxes" in the election, I doubt they would've won Florida by a hanging chad in 2000.
(The full cost of war calculations should include lifetime medical for now-disabled veterans, whose families also need healthcare.)
I think you should pay the bills for your war now.
FWIW the initial requested outlay, to relieve them of apparently over-spec mortar tubes / new WMD production, was ~$80b in 2003.
Real costs of war included fuel surcharges on groceries, which increased the CPI (Consumer Price Index) and stressed families' budgets leading into the Great Recession, the worst recession since the Great Depression (1929-1939) prior to WWII (1939-1945).
The actual impact on the (OPEC-specified) production rate (not price) of oil, due to Iraq's relatively minimal production capacity having been taken offline by the US invasion to world-police and save the day, does not at all explain the action in the oil commodity market through those years; and costs-of-war calculations, after deaths and orphans and missed school person-years, do not include what we paid for gas at the pump.
The Great Recession: https://en.wikipedia.org/wiki/Great_Recession
World oil market chronology: https://en.wikipedia.org/wiki/World_oil_market_chronology_fr... ; from $30/barrel in 2000 to $140/barrel in 2008 (and DoD is the largest institutional consumer of oil in the world, especially when at war)
PyTorch for WebGPU
Very impressive work. Would be interesting to do some benchmarks versus PyTorch.
On a side-note, I'm not sure if it is because I've looked at so many autograd engines by now, but it is really cool to see that after the years of different frameworks having been developed, most people seem to agree on some concepts and structure on how to implement something like this. It is pretty easy to dive into this, even without being particularly skilled in JS/TS.
Wondering how such frameworks will look in a couple years.
Could there be something like emscripten-forge/requests-wasm-polyfill for PyTorch with WebGPU? https://github.com/emscripten-forge/requests-wasm-polyfill
How does the performance of webgpu-torch compare to compiling PyTorch to WASM with emscripten and WebGPU?
tfjs benchmarks: Environment > backend > {WASM, WebGL, CPU, WebGPU, tflite} https://tensorflow.github.io/tfjs/e2e/benchmarks/local-bench... src: https://github.com/tensorflow/tfjs/tree/master/e2e/benchmark...
tensorflow/tfjs https://github.com/tensorflow/tfjs
tfjs-backend-wasm https://github.com/tensorflow/tfjs/tree/master/tfjs-backend-...
tfjs-backend-webgpu https://github.com/tensorflow/tfjs/tree/master/tfjs-backend-...
([...], tflite-support, tflite-micro)
From facebookresearch/shumai (a JS tensor library) https://github.com/facebookresearch/shumai/issues/122 :
> It doesn't make sense to support anything besides WebGPU at this point. WASM + SIMD is around 15-20x slower on my machine[1]. Although WebGL is more widely supported today, it doesn't have the compute features needed for efficient modern ML (transformers etc) and will likely be a deprecated backend for other frameworks when WebGPU comes online.
tensorflow rust has a struct.Tensor: https://tensorflow.github.io/rust/tensorflow/struct.Tensor.h...
"ONNX Runtime merges WebGPU backend" https://github.com/microsoft/onnxruntime https://news.ycombinator.com/item?id=35696031 ... TIL about wonnx: https://github.com/webonnx/wonnx#in-the-browser-using-webgpu...
microsoft/onnxruntime: https://github.com/microsoft/onnxruntime
Apache/arrow has language-portable Tensors for cpp: https://arrow.apache.org/docs/cpp/api/tensor.html and rust: https://docs.rs/arrow/latest/arrow/tensor/struct.Tensor.html and Python: https://arrow.apache.org/docs/python/api/tables.html#tensors https://arrow.apache.org/docs/python/generated/pyarrow.Tenso...
Fwiw it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
Are there recommended ways to cast e.g. Arrow Tensors to PyTorch/TensorFlow?
FWIU, Rust has a better compilation to WASM; and that's probably faster than already-compiled-to-JS/ES TensorFlow + WebGPU.
What's a fair benchmark?
> What's a fair benchmark?
- /? pytorch tensorflow benchmarks webgpu 2023 site:github.com https://www.google.com/search?q=pytorch+tensorflow+benchmark...
- [tfjs benchmarks]
- huggingface/transformers:src/transformers/benchmark https://github.com/huggingface/transformers/tree/main/src/tr...
Tell HN: The next generation of videogames will be great with midjourney
carson-katri/dream-textures (Stable Diffusion, Blender) https://github.com/carson-katri/dream-textures
nv-tlabs/GET3D https://github.com/nv-tlabs/GET3D
/? midjourney stable diffusion DALL-E "Imagen" comparison https://www.google.com/search?q=midjourney+stable+diffusion+...
"Google will label fake images created with its [Imagen] A.I" (2023) https://news.ycombinator.com/item?id=35896000
"Show HN: Polymath: Convert any music-library into a sample-library with ML" https://news.ycombinator.com/item?id=34782526 :
> Other cool #aiart things:
JavaScript state machines and statecharts
State machine: https://en.wikipedia.org/wiki/Finite-state_machine
https://docs.viewflow.io/bpmn/index.html :
> Unlike Finite state machine based workflow, BPMN allows parallel task execution, and suitable to model real person collaboration patterns.
BPMN: Business Process Model and Notation: https://en.wikipedia.org/wiki/Business_Process_Model_and_Not...
On the difference between the map and the territory.
> - Can TLA+ find side-channels (which bypass all software memory protection features other than encryption-in-RAM)?
From https://news.ycombinator.com/item?id=31617335 :
>> Can there still be side channel attacks in formally verified systems? Can e.g. TLA+ help with that at all?
From https://news.ycombinator.com/item?id=33563857 :
> - TLA+ Model checker https://en.wikipedia.org/wiki/TLA%2B#Model_checker :
>> The TLC model checker builds a finite state model of TLA+ specifications for checking invariance properties
This library implements Harel state charts, which are a superset of FSMs that allow for parallel and nested states.
Not sure what TLA+ has to do with either.
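For reference, a plain finite-state machine is just a transition table; Harel statecharts add nesting and parallelism on top of that. A minimal Python sketch of the flat-FSM core (the states and events here are hypothetical, loosely echoing the fetch example common in statechart libraries):

```python
# Minimal finite-state machine as a transition table.
# (state, event) -> next state; states/events are hypothetical.
TRANSITIONS = {
    ("idle", "FETCH"): "loading",
    ("loading", "RESOLVE"): "success",
    ("loading", "REJECT"): "failure",
    ("failure", "RETRY"): "loading",
}

def transition(state, event):
    # A common FSM convention: unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```

Harel statecharts extend this with hierarchical (nested) and orthogonal (parallel) states, which a flat table like this cannot express; that is the gap libraries like XState fill.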
Rising number of lithium battery incidents on airplanes worry crew
"Researchers craft a fully edible battery" (2023) https://news.ycombinator.com/item?id=35885696
The Simplest Universal Turing Machine Is Proved
https://en.wikipedia.org/wiki/Church–Turing–Deutsch_principl... :
> In computer science and quantum physics, the Church–Turing–Deutsch principle (CTD principle) is a stronger, physical form of the Church–Turing thesis formulated by David Deutsch in 1985.[1] The principle states that a universal computing device can simulate every physical process.
Are qubits enough to simulate qudits and qutrits?
Run Llama 13B with a 6GB graphics card
From skimming, it looks like this approach requires CUDA and thus is Nvidia only.
Anyone have a recommended guide for AMD / Intel GPUs? I gather the 4 bit quantization is the special sauce for CUDA, but I’d guess there’d be something comparable for not-CUDA?
4-bit quantization reduces the amount of VRAM required to run the model. You can run it 100% on CPU if you don't have CUDA. I'm not aware of any AMD equivalent yet.
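As a rough sketch of what 4-bit quantization does (a toy symmetric scheme in plain Python, not the actual GPTQ/llama.cpp algorithm): each weight is mapped onto one of 16 integer levels sharing a scale factor, so it needs 4 bits of storage instead of 16 or 32, at the cost of rounding error.

```python
# Toy symmetric 4-bit quantization; a sketch of the idea only.
def quantize_4bit(values):
    # Map each float onto one of 16 integer levels in [-8, 7],
    # sharing a single per-tensor scale factor.
    scale = max(abs(v) for v in values) / 7.0
    quantized = [max(-8, min(7, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    # Reconstruct approximate floats; error is bounded by ~scale/2.
    return [q * scale for q in quantized]
```

At 4 bits per weight instead of 16, a 13B-parameter model's weights shrink from roughly 26 GB toward 6.5 GB, which (with some offloading) is what brings it within reach of a 6 GB card.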
Looks like there are several projects that implement the CUDA interface for various other compute systems, e.g.:
https://github.com/ROCm-Developer-Tools/HIPIFY/blob/master/R...
https://github.com/hughperkins/coriander
I have zero experience with these, though.
"Democratizing AI with PyTorch Foundation and ROCm™ support for PyTorch" (2023) https://pytorch.org/blog/democratizing-ai-with-pytorch/ :
> AMD, along with key PyTorch codebase developers (including those at Meta AI), delivered a set of updates to the ROCm™ open software ecosystem that brings stable support for AMD Instinct™ accelerators as well as many Radeon™ GPUs. This now gives PyTorch developers the ability to build their next great AI solutions leveraging AMD GPU accelerators & ROCm. The support from PyTorch community in identifying gaps, prioritizing key updates, providing feedback for performance optimizing and supporting our journey from “Beta” to “Stable” was immensely helpful and we deeply appreciate the strong collaboration between the two teams at AMD and PyTorch. The move for ROCm support from “Beta” to “Stable” came in the PyTorch 1.12 release (June 2022)
> [...] PyTorch ecosystem libraries like TorchText (Text classification), TorchRec (libraries for recommender systems - RecSys), TorchVision (Computer Vision), TorchAudio (audio and signal processing) are fully supported since ROCm 5.1 and upstreamed with PyTorch 1.12.
> Key libraries provided with the ROCm software stack including MIOpen (Convolution models), RCCL (ROCm Collective Communications) and rocBLAS (BLAS for transformers) were further optimized to offer new potential efficiencies and higher performance.
https://news.ycombinator.com/item?id=34399633 :
>> AMD ROcm supports Pytorch, TensorFlow, MlOpen, rocBLAS on NVIDIA and AMD GPUs: https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...
https://github.com/intel/intel-extension-for-pytorch :
> Intel® Extension for PyTorch extends PyTorch with up-to-date features optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through PyTorch xpu device, Intel® Extension for PyTorch provides easy GPU acceleration for Intel discrete GPUs with PyTorch
https://pytorch.org/blog/celebrate-pytorch-2.0/ (2023) :
> As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.
> The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP-based thread parallelization
DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
Matter Raspberry Pi GPIO Commander – Turn Your Pi into a Matter Lighting Device
(Context: Matter is an application layer protocol to control smart home devices, bring them into your Wifi or Thread network, and they ought to work with Homekit/Google Home/Alexa/Smartthings/Home Assistant etc. at the same time)
I'm currently working on a Matter device myself and the overengineering is unreal.
The idea is great (standardized messages over IP, no internet required, open source SDK on Github) but if they had just used MQTT with some standardized JSON messages and two or three fixed onboarding flows it would have solved the same problems with 1% of the effort.
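For illustration, the "MQTT with some standardized JSON messages" idea might look like this; the topic scheme and payload shape are hypothetical, not any real spec:

```python
import json

# Hypothetical topic scheme and message shape for a light device;
# sketches the "MQTT + standardized JSON" alternative, not a standard.
def light_command(device_id, on, brightness):
    topic = f"home/light/{device_id}/set"
    payload = json.dumps(
        {"state": "ON" if on else "OFF", "brightness": brightness})
    return topic, payload
```

With a real broker this pair would be handed to e.g. paho-mqtt's `client.publish(topic, payload)`. Onboarding, discovery, and security are exactly what such a sketch leaves out, and they are where Matter spends most of its nearly 1k pages.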
But now the project is absolutely massive: almost 1k pages for the base spec, 1M+ lines of (mostly C++) code for the core SDK, no stable or even well-defined API, and almost no documentation. Progress is glacial, unsurprisingly. Bug fixes do not seem to be backported to older release branches, but there is no source code compatibility either, so the security story will be interesting for sure...
Most discussions seem to happen on an internal Discord server and Wiki for paying members only, so the project is not that open either.
I really hope that the entire thing gets scrapped and a simpler approach will rise from the ashes...
> I really hope that the entire thing gets scrapped and a simpler approach will rise from the ashes...
Do we need a new thing? ZigBee and mqtt already exist. Maybe something a little nicer over IP networks?
"Securing name resolution in the IoT: DNS over CoAP" https://news.ycombinator.com/item?id=32186286
From https://github.com/project-chip/connectedhomeip#architecture... :
> Matter aims to build a universal IPv6-based communication protocol for smart home devices. The protocol defines the application layer that will be deployed on devices and the different link layers to help maintain interoperability. The following diagram illustrates the normal operational mode of the stack:
> [...] It is built with market-proven technologies using Internet Protocol (IP) and is compatible with Thread and Wi-Fi network transports.
> Matter was developed by a Working Group within the Connectivity Standards Alliance (Alliance). This Working Group develops and promotes the adoption of the Matter standard, a royalty-free connectivity standard to increase compatibility among smart home products, with security as a fundamental design tenet. The vision that led major industry players to come together to build Matter is that smart connectivity should be simple, reliable, and interoperable.
> [...] The code examples show simple interactions, and are supported on multiple transports -- Wi-Fi and Thread -- starting with resource-constrained (i.e., memory, processing) silicon platforms to help ensure Matter’s scalability.
Would it make sense to have an mqtt transport for matter? Or a bridge; like homeassistant or similar?
https://github.com/home-assistant/core#featured-integrations lists mqtt
Is there already a good (security) comparison of e.g. http basic auth, x10, ZigBee, mqtt, matter?
Exploring the native use of 64-bit posit arithmetic in scientific computing
Unum (number format) > Posit (Type III Unum) https://en.wikipedia.org/wiki/Unum_(number_format) :
> Posits have superior accuracy in the range near one, where most computations occur. This makes it very attractive to the current trend in deep learning to minimise the number of bits used. It potentially helps any application to accelerate by enabling the use of fewer bits (since it has more fraction bits for accuracy) reducing network and memory bandwidth and power requirements.
> [...] Note: 32-bit posit is expected to be sufficient to solve almost all classes of applications [citation needed]
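For context on "superior accuracy in the range near one": IEEE-754 floats keep a fixed number of fraction bits at every magnitude, so the gap between adjacent representable values grows with magnitude, whereas posits taper their exponent encoding to spend more fraction bits near 1. Python's `math.ulp` shows the float64 spacing:

```python
import math

# float64 keeps 52 fraction bits at every magnitude, so the spacing
# (ulp) between adjacent representable values grows with magnitude.
near_one = math.ulp(1.0)       # spacing near 1: 2**-52
far_out = math.ulp(2.0 ** 40)  # spacing near 2**40: 2**-12
```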
Researchers craft a fully edible battery
"Researchers find major storage capacity in metal-free aqueous batteries" https://www.inceptivemind.com/researchers-finds-major-storag... :
"The role of the electrolyte in non-conjugated radical polymers for metal-free aqueous energy storage electrodes." (2023) DOI: 10.1038/s41563-023-01518-z
"A sustainable battery with a biodegradable electrolyte made from crab shells" (2022) https://www.sciencedaily.com/releases/2022/09/220901135827.h... :
"A sustainable chitosan-zinc electrolyte for high-rate zinc-metal batteries" (2022) http://dx.doi.org/10.1016/j.matt.2022.07.015
Chitosan https://en.wikipedia.org/wiki/Chitosan :
> Chitosan /ˈkaɪtəsæn/ is a linear polysaccharide composed of randomly distributed β-(1→4)-linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). It is made by treating the chitin shells of shrimp and other crustaceans with an alkaline substance, such as sodium hydroxide.
Is there a sustainable way to produce chitosan for use in electrolytes?
https://westurner.github.io/hnlog/ Ctrl-F "anode" :
> Here's a discussion about the lower costs of hemp supercapacitors as compared with graphene super capacitors: https://news.ycombinator.com/item?id=16814022
"Why Salt Water may be the Future of Batteries" https://youtu.be/vm2hNNA4lvM ($5/kwh, energy density, desalinization too, graphene production too)
Flow battery > Other types > Membraneless https://en.wikipedia.org/wiki/Flow_battery#Other_types
MSG is the most misunderstood ingredient of the century. That’s finally changing
MSG: Monosodium Glutamate: https://en.wikipedia.org/wiki/Monosodium_glutamate
Glutamate–glutamine cycle: https://en.wikipedia.org/wiki/Glutamate%E2%80%93glutamine_cy...
"Multiple Mechanistically Distinct Modes of Endoc annabinoid Mobilization at Central Amygdala Glutamatergic Synapses" (2014) https://www.cell.com/neuron/fulltext/S0896-6273(14)00017-8
Glutamine https://en.wikipedia.org/wiki/Glutamine :
> Glutamine is synthesized by the enzyme glutamine synthetase from glutamate and ammonia. The most relevant glutamine-producing tissue is the muscle mass, accounting for about 90% of all glutamine synthesized. Glutamine is also released, in small amounts, by the lungs and brain. [20] Although the liver is capable of relevant glutamine synthesis, its role in glutamine metabolism is more regulatory than producing, since the liver takes up large amounts of glutamine derived from the gut. [7]
"Glutamine-to-glutamate ratio in the nucleus accumbens predicts effort-based motivated performance in humans" (2020) https://www.nature.com/articles/s41386-020-0760-6
> High glutamine-to-glutamate ratio predicts the ability to sustain motivation: The researchers found that individuals with a higher glutamine-to-glutamate ratio had a higher success rate and a lower perception of effort. https://neuroscienceschool.com/2020/10/11/how-to-sustain-mot...
ICYMI: You may appreciate this video...
Is cancer all about cellular energy from oxygen?
/? lactic acid atp: https://www.google.com/search?q=lactic+acid+atp
- "What is lactic acid?" https://www.livescience.com/what-is-lactic-acid
> Although blood lactate concentration does increase during intense exercise, the lactic acid molecule itself dissociates and the lactate is recycled and used to create more ATP.
> "Your body naturally metabolizes the lactic acid, clearing it out. The liver can take up some of the lactic acid molecules and convert them back to glucose for fuel," says Grover. "This conversion also reduces the acidity in the blood, thus removing some of the burning sensation. This is a natural process that occurs in the body. Things such as stretching, rolling, or walking will have little to no impact."
> The burning sensation you feel in your legs during a heavy workout probably isn't caused by lactic acid, but instead by tissue damage and inflammation.
> It’s also important to remember that lactate itself isn’t 'bad'. In fact, research in [Bioscience Horizons] suggests that lactate is beneficial to the body during and after exercise in numerous ways. For example, lactate can be used directly by the brain and heart for energy or converted into glucose in the liver or kidneys, which can then be used by nearly any cell in the body for energy.
- https://chem.libretexts.org/Courses/University_of_Kentucky/U... :
> Lactic Acid Fermentation: You may have not been aware that your muscle cells can ferment. Fermentation is the process of producing ATP in the absence of oxygen, through glycolysis alone. Recall that glycolysis breaks a glucose molecule into two pyruvate molecules, producing a net gain of two ATP and two NADH molecules. Lactic acid fermentation is the type of anaerobic respiration carried out by yogurt bacteria (Lactobacillus and others) and by your own muscle cells when you work them hard and fast.
Is that relevant to cancer and oxygen? IDK.
Hopefully NIRS + ultrasound ("HIFU" & modern waveguides) can inexpensively treat many more forms of cancer.
From https://news.ycombinator.com/item?id=35812168 :
> https://news.ycombinator.com/item?id=35617859 ... "A simple technique to overcome self-focusing, filamentation, supercontinuum generation, aberrations, depth dependence and waveguide interface roughness using fs laser processing" [w/ Dual Beams]
See this page fetch itself, byte by byte, over TLS
https://github.com/jawj/subtls (A proof-of-concept TypeScript TLS 1.3 client) is implemented with the SubtleCrypto API.
TIL about SubtleCrypto https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt... :
> The SubtleCrypto interface of the Web Crypto API provides a number of low-level cryptographic functions. Access to the features of SubtleCrypto is obtained through the subtle property of the Crypto object you get from the crypto property.
decrypt()
deriveBits()
deriveKey()
digest()
encrypt()
exportKey()
generateKey()
importKey()
sign()
unwrapKey()
verify()
wrapKey()
Can SubtleCrypto accelerate any of the W3C Verifiable Credential Data Integrity 1.0 APIs? vc-data-integrity: https://w3c.github.io/vc-data-integrity/ ; Ctrl-F "signature suite":

> ISSUE: Avoid signature format proliferation by using text-based suite value. The pattern that Data Integrity Signatures use presently leads to a proliferation in signature types and JSON-LD Contexts. This proliferation can be avoided without any loss of the security characteristics of tightly binding a cryptography suite version to one or more acceptable public keys. The following signature suites are currently being contemplated: eddsa-2022, nist-ecdsa-2022, koblitz-ecdsa-2022, rsa-2022, pgp-2022, bbs-2022, eascdsa-2022, ibsa-2022, and jws-2022.
But what about "Kyber, NTRU, {FIPS-140-3}? [TLS1.4/2.0?]" i.e. PQ Post-Quantum signature suites? Why don't those need to be URIs, too?
Loophole-free Bell inequality violation with superconducting circuits
"Loophole-free Bell test ‘Spooky action at a distance’, no cheating (tudelft.nl)" (2015) https://news.ycombinator.com/item?id=10430675
/? ./hnlog: "nonlocal", "unitarity":
From "The fundamental thermodynamic costs of communication" (2023) https://westurner.github.io/hnlog/#story-34765393 :
> Isn't there more entropy if we consider all possible nonlocal relations between bits; or, is which entropy metric independent of redundant coding schemes between points in spacetime?
From "New neural network architecture inspired by neural system of a worm" (2023) https://news.ycombinator.com/item?id=34809725 :
> Is there an information metric which expresses maximal nonlocal connectivity between bits in a bitstring; that takes all possible (nonlocal, discontiguous) paths into account?
> `n_nodes^2` only describes all of the binary, pairwise possible relations between the bits or qubits in a bitstring?
> "But what is a convolution" https://www.3blue1brown.com/lessons/convolutions
> Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord
From "Formal methods only solve half my problems" (2022) https://westurner.github.io/hnlog/#comment-31625499 :
> It might be argued that side channels exist whenever there is a shared channel; which is always, because plenoptic functions, wave function, air gap, ram bus mhz, nonlocal entanglement
> Category:Side-channel attacks https://en.wikipedia.org/wiki/Category:Side-channel_attacks
/? ./hnlog: "entanglement" ... Quantum FlyTrap
Show HN: Mineo.app – Better Python Notebooks
Hello everyone,
I would like to introduce our startup to HN: Mineo.app. Mineo.app is a production-ready SaaS Python notebook that provides a complete environment for building your data applications: Dashboards, Reports, and Data Pipelines based on Python notebooks.
Key features:
* Superpowered Jupyter-compatible Python notebooks with extra goodies like version control, commenting support, custom Docker images, etc., enhanced with no-code components that allow you to create beautiful dashboards and reports.
* Data Pipelines: Ability to schedule and run one or more notebooks.
* Integrated file system to manage your files and projects with detailed permissions and groups.
We have a freemium licensing model, so you can start using Mineo just by registering with your Github/Google/Microsoft account for free without a credit card. And it's free for educational purposes ;-)
Diego.
I think it was a little too early to share, but since you asked...
Crafting workflows out of notebooks is a really bad idea; an anti-pattern. If you want to go down the road of "workflows for data scientists" look at https://featurebyte.com/
Your messaging indicates you also want to appeal to business intelligence users, with their data exploration needs. In this space I give the crown to https://hex.tech/ How are you going to be different or better?
Also, you should fix your showcase so prospective users don't have to log in. I'll just nope out of your site.
Have you considered going open core? That would make me look more favorably at your product given your lack of indisputable product- and technical superiority.
And last but not least, I wish you success!
> Crafting workflows out of notebooks is a really bad idea; an anti-pattern. If you want to go down the road of "workflows for data scientists"
https://westurner.github.io/hnlog/ Ctrl-F "DVC" ( https://dvc.org/ ) , https://westurner.github.io/hnlog/#comment-24261118 "Ten Simple Rules for Reproducible Computational Research", “Ten Simple Rules for Creating a Good Data Management Plan”, PROV
pygwalker https://github.com/Kanaries/pygwalker :
> PyGWalker: Turn your pandas dataframe into a Tableau-style User Interface for visual analysis
"Generate code from GUI interactions; State restoration & Undo" https://github.com/Kanaries/pygwalker/issues/90
The Scientific Method is testing, so testing (tests, assertions, fixtures) should be core to any scientific workflow system.
- [ ] (It's not possible to run `!pytest` in a JupyterLite notebook without installing an extension, because JupyterLite runs in WASM only, where there's not yet a terminal, or even yet a slow-but-usable [cheerpx] webvm bridged to the Jupyter kernel's WASM ~process-space.)
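Where `!pytest` isn't available, a test suite can still run in-process in a notebook cell with only the stdlib; a minimal sketch (the test case here is hypothetical):

```python
import unittest

# A hypothetical test for a notebook pipeline step.
class TestPipelineStep(unittest.TestCase):
    def test_normalize_sums_to_one(self):
        data = [2.0, 4.0]
        normalized = [x / sum(data) for x in data]
        self.assertAlmostEqual(sum(normalized), 1.0)

# Run the suite in-process: no shell, no pytest extension required.
suite = unittest.TestLoader().loadTestsFromTestCase(TestPipelineStep)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```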
awesome-jupyter#testing: https://github.com/markusschanta/awesome-jupyter#testing
ml-tooling/best-of-jupyter lists papermill/papermill under "Interactive Widgets/Visualization" https://github.com/ml-tooling/best-of-jupyter#interactive-wi...
"Markdown based notebooks" would store files next to the .ipynb.md, which implies a need for an MHTML/ZIP-like archive (for report notebook artifacts produced by scientific workflow systems with provenance metadata); but W3C Web Bundles avoid modifying linked resources with new specs: https://github.com/jupyter/enhancement-proposals/pull/103#is...
Google will label fake images created with its A.I
> Google’s approach is to label the images when they come out of the AI system, instead of trying to determine whether they’re real later on. Google said Shutterstock and Midjourney would support this new markup approach. Google developer documentation says the markup will be able to categorize images as trained algorithmic media, which was made by an AI model; a composite image that was partially made with an AI model; or algorithmic media, which was created by a computer but isn’t based on training data.
Can said markup specification store at least: (1) the prompt; and (2) the model with which a Turing robot purportedly generated the image? Is it schema.org JSON-LD?
It's IPTC: https://developers.google.com/search/docs/appearance/structu...
If IPTC-to-RDF i.e./e.g. schema:ImageObject (schema:CreativeWork > https://schema.org/ImageObject) mappings are complete, it would be possible to sign IPTC metadata with W3C Verifiable Credentials (and e.g. W3C DIDs) just like any other [JSON-LD,] RDF; but is there an IPTC schema extension for appending signatures, and/or is there an IPTC graph normalization step that generates equivalent output to a (web-standardized) JSON-LD normalization function?
/? IPTC jsonschema: https://github.com/ihsn/nada/blob/master/api-documentation/s...
/? IPTC schema.org RDFS
IPTC extension schema: https://exiv2.org/tags-xmp-iptcExt.html
[ Examples of input parameters & hyperparameters: from e.g. the screenshot in the README.md of stablediffusion-webui or text-generation-webui: https://github.com/AUTOMATIC1111/stable-diffusion-webui ]
How should input parameters and e.g. LLM model version & signed checksum and model hyperparameters be stored next to a generated CreativeWork? filename.png.meta.jsonld.json or similar?
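One possible (hypothetical, non-standard) sidecar layout: a schema.org ImageObject document carrying the prompt, model name, and a checksum, written next to the image as `filename.png.meta.jsonld.json`. The field names beyond the schema.org ones are made up for illustration:

```python
import hashlib
import json

def generation_sidecar(image_bytes, prompt, model_name):
    # Hypothetical layout: a schema.org ImageObject plus
    # non-standard provenance fields (prompt, model, sha256).
    return {
        "@context": "https://schema.org",
        "@type": "ImageObject",
        "encodingFormat": "image/png",
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prompt": prompt,
        "model": model_name,
    }

sidecar = json.dumps(
    generation_sidecar(b"<png bytes>", "a cat in a hat", "example-model-v1"),
    indent=2)
# write `sidecar` to e.g. filename.png.meta.jsonld.json alongside the image
```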
If an LLM passes the Turing test ("The Imitation Game") - i.e. has output indistinguishable from a human's output - does that imply that it is not possible to stylometrically fingerprint its outputs without intentional watermarking?
https://en.wikipedia.org/wiki/Turing_test
Implicit in the Turing test is the entity doing the evaluation. It's quite possible that a human evaluator could be tricked, but a tool-assisted human, or an AI itself could not be. Or even just some humans could be better at not being tricked than others.
Tell HN: We should start to add “ai.txt” as we do for “robots.txt”
I started to add an ai.txt to my projects. The file is just a basic text file with some useful info about the website: what it is about, when it was published, the author, and so on.
It can be great if the website somehow ends up in a training dataset (who knows), and it can be super helpful for AI website crawlers: instead of using thousands of tokens to figure out what your website is about, they can do it with just a few hundred.
Your HTML already has semantic meta elements like author and description you should be populating with info like that: https://developer.mozilla.org/en-US/docs/Learn/HTML/Introduc...
and also opengraph meta tags https://ogp.me/
And also schema.org: https://schema.org/
Thing > CreativeWork > WebSite https://schema.org/WebSite ... scroll down to "Examples" and click the "JSON-LD" and/or "RDFa" tabs. (And if there isn't an example then go to the schema.org/ URL of a superClassOf (rdfs:subClassOf) of the rdfs:Class or rdfs:Property; there are many markup examples for CreativeWork and subtypes).
Also: https://news.ycombinator.com/item?id=35891631
extruct is one way to parse linked data from HTML pages: https://github.com/scrapinghub/extruct
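extruct handles JSON-LD, RDFa, and microdata; the JSON-LD case is simple enough to sketch with the stdlib alone (extruct's actual API differs — this just shows the idea of pulling structured data out of script tags):

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        self._in_jsonld = (tag == "script"
                           and dict(attrs).get("type") == "application/ld+json")

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.items.append(json.loads(data))

html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "WebSite", "name": "Example"}
</script></head><body></body></html>"""
parser = JSONLDExtractor()
parser.feed(html)
# parser.items now holds the parsed JSON-LD documents
```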
security.txt https://github.com/securitytxt/security-txt :
> security.txt provides a way for websites to define security policies. The security.txt file sets clear guidelines for security researchers on how to report security issues. security.txt is the equivalent of robots.txt, but for security issues.
Carbon.txt: https://github.com/thegreenwebfoundation/carbon.txt :
> A proposed convention for website owners and digital service providers to demonstrate that their digital infrastructure runs on green electricity.
"Work out how to make it discoverable - well-known, TXT records or root domains" https://github.com/thegreenwebfoundation/carbon.txt/issues/3... re: JSON-LD instead of txt, signed records with W3C Verifiable Credentials (and blockcerts/cert-verifier-js)
SPDX is a standard for specifying software licenses (and now SBOMs Software Bill of Materials, too) https://en.wikipedia.org/wiki/Software_Package_Data_Exchange
It would be transparent to disclose the SBOM in AI.txt or elsewhere.
How many parsers should be necessary for https://schema.org/CreativeWork https://schema.org/license metadata for resources with (Linked Data) URIs?
Having a security.txt doesn't stop security researchers asking "Do you have a bounty program?". We've already replied to dozens that such a file exists; it's not well known enough yet. On the other hand, there are search engines crawling those files and creating reports, which is nice.
JSON-LD or RDFa (RDF in HTML attributes) in at least the /index.html footer should be sufficient to indicate that there is structured linked data metadata; crawlers then don't need an extra HTTP request to a .well-known URL like /.well-known/ai_security_reproducibility_carbon.txt.jsonld.json
OSV is a new format for reporting security vulnerabilities like CVEs, with an HTTP API for looking up vulnerabilities by software component name and version. https://github.com/ossf/osv-schema
A number of tools integrate with OSV-schema data hosted by osv.dev: https://github.com/google/osv.dev#third-party-tools-and-inte... :
> We provide a Go based tool that will scan your dependencies, and check them against the OSV database for known vulnerabilities via the OSV API.
> Currently it is able to scan various lockfiles [repo2docker REES config files like requirements.txt, Pipfile.lock, environment.yml, or a custom Dockerfile], debian docker containers, SPDX and CycloneDX SBOMs, and git repositories.
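The OSV API itself is a simple JSON-over-HTTP lookup: POST a package/version query to api.osv.dev. A minimal sketch (payload shape per the osv.dev docs; actually sending it requires network access):

```python
import json
import urllib.request

def osv_query_payload(name: str, ecosystem: str, version: str) -> dict:
    # Query shape per the osv.dev v1/query documentation.
    return {"package": {"name": name, "ecosystem": ecosystem},
            "version": version}

def osv_query(name: str, ecosystem: str, version: str) -> dict:
    # Requires network access; returns {"vulns": [...]} if any are known.
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(osv_query_payload(name, ecosystem, version)).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = osv_query_payload("jinja2", "PyPI", "2.4.1")
```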
Language models can explain neurons in language models
LLMs are quickly going to be able to start explaining their own thought processes better than any human can explain their own. I wonder how many new words we will come up with to describe concepts (or "node-activating clusters of meaning") that the AI finds salient that we don't yet have a singular word for. Or, for that matter, how many of those concepts we will find meaningful at all. What will this teach us about ourselves?
First of all, our own explanations about ourselves and our behaviour are mostly lies, fabrications, hallucinations, faulty re-memorization, post hoc reasoning:
"In one well-known experiment, a split-brain patient’s left hemisphere was shown a picture of a chicken claw and his right hemisphere was shown a picture of a snow scene. The patient was asked to point to a card that was associated with the picture he just saw. With his left hand (controlled by his right hemisphere) he selected a shovel, which matched the snow scene. With his right hand (controlled by his left hemisphere) he selected a chicken, which matched the chicken claw. Next, the experimenter asked the patient why he selected each item. One would expect the speaking left hemisphere to explain why it chose the chicken but not why it chose the shovel, since the left hemisphere did not have access to information about the snow scene. Instead, the patient’s speaking left hemisphere replied, “Oh, that’s simple. The chicken claw goes with the chicken and you need a shovel to clean out the chicken shed”" [1]. Also [2] has an interesting hypothesis on split-brains: not two agents, but two streams of perception.
[1] 2014, "Divergent hemispheric reasoning strategies: reducing uncertainty versus resolving inconsistency", https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4204522
[2] 2017, "The Split-Brain phenomenon revisited: A single conscious agent with split perception", https://pure.uva.nl/ws/files/25987577/Split_Brain.pdf
Not supported by neuroimaging. Promoted without evidence or sufficient causal inference.
https://www.health.harvard.edu/blog/right-brainleft-brain-ri... :
> But, the evidence discounting the left/right brain concept is accumulating. According to a 2013 study from the University of Utah, brain scans demonstrate that activity is similar on both sides of the brain regardless of one's personality.
> They looked at the brain scans of more than 1,000 young people between the ages of 7 and 29 and divided different areas of the brain into 7,000 regions to determine whether one side of the brain was more active or connected than the other side. No evidence of "sidedness" was found. The authors concluded that the notion of some people being more left-brained or right-brained is more a figure of speech than an anatomically accurate description.
Here's wikipedia on the topic: "Lateralization of brain function" https://en.wikipedia.org/wiki/Lateralization_of_brain_functi...
Furthermore, "Neuropsychoanalysis" https://en.wikipedia.org/wiki/Neuropsychoanalysis
Neuropsychology: https://en.wikipedia.org/wiki/Neuropsychology
Personality psychology > ~Biophysiological: https://en.wikipedia.org/wiki/Personality_psychology
MBTI > Criticism: https://en.wikipedia.org/wiki/Myers%E2%80%93Briggs_Type_Indi...
Connectome: https://en.wikipedia.org/wiki/Connectome
This is not relevant to GP's comment. It has nothing to do with "are there fixed 'themes' that are operated in each hemisphere." It has to do with more generally, does the brain know what the brain is doing. The answer so far does not seem to be "yes."
Says who? There is actual evidence to support that our brain doesn't "know" what it is doing on a subconscious level? As far as I'm aware it's more that conscious humans don't understand how our brain works.
I think the correct statement is "so far the answer is we don't know"
The split brain experiments very very clearly indicate that different parts of the brain can independently conduct behavior and gain knowledge independently of other parts.
How or if this generalizes to healthy brains is not super clear, but it does actually provide a good explanatory model for all sorts of self-contradictory behavior (like addiction): the brain has many semi-independent “interests” that are jockeying for overall control of the organism’s behavior. These interests can be fully contradictory to each other.
Correct, ultimately we do not know. But it’s actually a different question than your rephrasing.
Given that functional localization varies widely from subject to subject per modern neuroimaging, how are split brain experiments more than crude attempts to confirm functional specialization (which is already confirmed without traumatically severing a corpus callosum) "hemispheric" or "lateral"?
Neuroimaging indicates high levels of redundancy and variance in spatiotemporal activation.
Studies of cortices and other tissues have already shown that much of the neural tissue of the brain is general purpose.
Why is executive functioning significantly but not exclusively in the tissue of the forebrain, the frontal lobes?
Because there’s a version of specialization that is, “different regions are specialized but they all seem to build consensus” and there’s a version that is “different regions are specialized and consensus does not seem to be necessary or potentially even usual or possible.”
These offer very different interpretations of cognition and behavior, and the split brain experiments point toward the latter.
Functional specialization > Major theories of the brain> Modularity or/and Distributive processing: https://en.wikipedia.org/wiki/Functional_specialization_(bra... :
> Modularity: [...] The difficulty with this theory is that in typical non-lesioned subjects, locations within the brain anatomy are similar but not completely identical. There is a strong defense for this inherent deficit in our ability to generalize when using functional localizing techniques (fMRI, PET etc.). To account for this problem, the coordinate-based Talairach and Tournoux stereotaxic system is widely used to compare subjects' results to a standard brain using an algorithm. Another solution using coordinates involves comparing brains using sulcal reference points. A slightly newer technique is to use functional landmarks, which combines sulcal and gyral landmarks (the groves and folds of the cortex) and then finding an area well known for its modularity such as the fusiform face area. This landmark area then serves to orient the researcher to the neighboring cortex. [7]
Is there a way to address the brain with space-filling curves around ~loci/landmarks? For brain2brain etc
FWIU, Markram's lab found that the brain is at max 11D in some places; but an electron wave model (in the time domain) may or must be sufficient according to psychoenergetics (Bearden)
> Distributive processing: [...] McIntosh's research suggests that human cognition involves interactions between the brain regions responsible for processes sensory information, such as vision, audition, and other mediating areas like the prefrontal cortex. McIntosh explains that modularity is mainly observed in sensory and motor systems, however, beyond these very receptors, modularity becomes "fuzzier" and you see the cross connections between systems increase.[33] He also illustrates that there is an overlapping of functional characteristics between the sensory and motor systems, where these regions are close to one another. These different neural interactions influence each other, where activity changes in one area influence other connected areas. With this, McIntosh suggest that if you only focus on activity in one area, you may miss the changes in other integrative areas.[33] Neural interactions can be measured using analysis of covariance in neuroimaging [...]
FWIU electrons are most appropriately modeled with Minkowski 4-space in the time-domain; (L^3)t
Neuroplasticity: https://en.wikipedia.org/wiki/Neuroplasticity :
> The adult brain is not entirely "hard-wired" with fixed neuronal circuits. There are many instances of cortical and subcortical rewiring of neuronal circuits in response to training as well as in response to injury.
> There is ample evidence [53] for the active, experience-dependent re-organization of the synaptic networks of the brain involving multiple inter-related structures including the cerebral cortex.[54] The specific details of how this process occurs at the molecular and ultrastructural levels are topics of active neuroscience research. The way experience can influence the synaptic organization of the brain is also the basis for a number of theories of brain function
"Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
So, re-running the same [fMRI, NIRS] stimulus-response activation observation/burn-in weeks or months later with the same subjects is likely necessary, given representational drift.
IPyflow: Reactive Python Notebooks in Jupyter(Lab)
That’s a pretty nice idea. The problem of knowing what state has been invalidated often drives me away from using a notebook. So it is nice to see this solved.
I often thought we would benefit from having some kind of shell, a mix between ipython qtconsole and jupyter.
Not an editor like jupyter, rather a shell with a REPL flow. But each prompt is like a jupyter cell, and the whole history is saved in a file.
Even if you don't create a file, it should still work. One of the annoying things about jupyter is that you can't use it without a file on disk, unlike the ipython shell.
jupyter_console is the IPython REPL for non-ipykernel jupyter kernels.
This magic command logs IPython REPL input and output to a file:
%logstart -o example.log.py
Machine Learning Containers Are Bloated and Vulnerable
https://repo2docker.readthedocs.io/en/latest/ :
> jupyter-repo2docker is a tool to build, run, and push Docker images from source code repositories.
> repo2docker fetches a repository (from GitHub, GitLab, Zenodo, Figshare, Dataverse installations, a Git repository or a local directory) and builds a container image in which the code can be executed. The image build process is based on the configuration files found in the repository.
> repo2docker can be used to explore a repository locally by building and executing the constructed image of the repository, or as a means of building images that are pushed to a Docker registry.
> repo2docker is the tool used by BinderHub to build images on demand
There are maintenance advantages, but also a longer time to patch, with kitchen-sink ML containers like kaggle/docker-python: it takes work to bump all of the versions in the requirements specification files (and run integration tests to make sure code still runs after upgrading everything, or one thing at a time).
What's best practice for including a sizeable dataset in a container (that's been recently re-) built with repo2docker?
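One common approach (hedged: only for datasets small enough to bake into the image) is a repo2docker postBuild script that fetches the data at build time, so the download is cached in an image layer and container starts are fast. The URL below is a placeholder:

```shell
#!/bin/bash
# postBuild -- repo2docker runs this once, during the image build.
# Baking the dataset into the image trades image size for reproducibility.
set -euo pipefail
mkdir -p data
# Placeholder URL; pin a versioned/immutable release artifact if possible.
wget -q -O data/dataset.tar.gz "https://example.org/dataset-v1.tar.gz"
tar -xzf data/dataset.tar.gz -C data
rm data/dataset.tar.gz
```

For datasets too large to bake in, alternatives are mounting a volume at runtime or fetching in a `start` script instead, at the cost of reproducibility.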
Health advisory on social media use in adolescence
This recent article suggests other factors might be more important to adolescent mental health:
Time spent on social media among the least influential factors in adolescent mental health: preliminary results from a panel network analysis https://www.nature.com/articles/s44220-023-00063-7
> There is growing concern about the role of social media use in the documented increase of adolescent mental health difficulties. However, the current evidence remains complex and inconclusive. While increasing research on this area of work has allowed for notable progress, the impact of social media use within the complex systems of adolescent mental health and development is yet to be examined. The current study addresses this conceptual and methodological oversight by applying a panel network analysis to explore the role of social media on key interacting systems of mental health, wellbeing and social life of 12,041 UK adolescents. Here we find that, across time, estimated time spent interacting with social media predicts concentration problems in female participants. However, of the factors included in the current network, social media use is one of the least influential factors of adolescent mental health, with others (for example, bullying, lack of family support and school work dissatisfaction) exhibiting stronger associations. Our findings provide an important exploratory first step in mapping out complex relationships between social media use and key developmental systems and highlight the need for social policy initiatives that focus on the home and school environment to foster resilience.
> Time spent on social media among the least influential factors in adolescent mental health
But it's the one that's easy to assign blame to: it's the evil (sometimes foreign) big tech companies' fault.
Increasingly poor economic prospects, environmental crisis, and an over-competitive society are much tougher issues to crack; and perhaps, in the case of housing for instance, certain demographics would prefer to blame the "evil screens" rather than their own generation's behavior over the years...
Although I believe the big tech companies are doing their very best to capture attention at all costs, it is a good question to ask why people are in a position to spend so much time on social media, and are doing so in place of other activities: for example, the lack of public spaces where one can spend time without spending money, relatively recent parenting practices, and legal obstacles to unsupervised time outdoors, etc.
I can see it in myself and I suspect others feel similarly. And I suspect kids feel it even more strongly but that's just an assumption of mine. The way "big tech companies are doing their very best to capture attention at all costs" is by designing their apps such that it's physically addicting. Once you're addicted it's hard to do the other things you mentioned because they don't release the same chemicals in your brain.
Would it be abusive or advisable to adapt edtech offerings in light of social media's and slot machines' UX (user experience) findings?
While they should never appease students, can't infotech and edtech learn how to keep their attention, too?
Perhaps prompt engineering can help to create engaging educational content with substantive progress metrics?
"Build a game in JS (like game category XYZ) to teach quantum entropy to beginners"
And then what prompt additions could help to social media-ify the game?
How should social media reinforce human communication behaviors with or without the stated age of the user? Should there be a "D- because that's harassment" panda video to reinforce? Which presidential role models' communication styles should AI emulate?
I find it sad to consider that the most impactful thing to do to improve children's lives would be to ban them from social media due to their age; though, for the record, e.g. Facebook did originally require a .edu email address at an approving institution.
Hopefully, Khanmigo and similar AI edtech offerings will be more engaging than preferentially reviewing unacceptable abuse online; but kids and people still need to learn to interact respectfully online in order to succeed.
Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration
"Direct neuronal reprogramming by temporal identity factors" (2023) https://www.pnas.org/doi/10.1073/pnas.2122168120#abstract :
> Abstract: Temporal identity factors are sufficient to reprogram developmental competence of neural progenitors and shift cell fate output, but whether they can also reprogram the identity of terminally differentiated cells is unknown. To address this question, we designed a conditional gene expression system that allows rapid screening of potential reprogramming factors in mouse retinal glial cells combined with genetic lineage tracing. Using this assay, we found that coexpression of the early temporal identity transcription factors Ikzf1 and Ikzf4 is sufficient to directly convert Müller glial (MG) cells into cells that translocate to the outer nuclear layer (ONL), where photoreceptor cells normally reside. We name these “induced ONL (iONL)” cells. Using genetic lineage tracing, histological, immunohistochemical, and single-cell transcriptome and multiome analyses, we show that expression of Ikzf1/4 in MG in vivo, without retinal injury, mostly generates iONL cells that share molecular characteristics with bipolar cells, although a fraction of them stain for Rxrg, a cone photoreceptor marker. Furthermore, we show that coexpression of Ikzf1 and Ikzf4 can reprogram mouse embryonic fibroblasts to induced neurons in culture by rapidly remodeling chromatin and activating a neuronal gene expression program. This work uncovers general neuronal reprogramming properties for temporal identity factors in terminally differentiated cells.
Is it possible to produce or convert Müller glial cells with Nanotransfection (stroma reprogramming), too?
From https://news.ycombinator.com/item?id=33127646 https://westurner.github.io/hnlog/#comment-33129531 re: "Retinoid restores eye-specific brain responses in mice with retinal degeneration" (2022) :
> Null hypothesis: A Nanotransfection (vasculogenic stromal reprogramming) intervention would not result in significant retinal or corneal regrowth
> ... With or without: a nerve growth factor, e.g. fluoxetine to induce plasticity in the adult visual cortex, combination therapy with cultured conjunctival IPS, laser mechanical scar tissue evisceration and removal, local anesthesia, robotic support, Retinoid
> Nanotransfection: https://en.wikipedia.org/wiki/Tissue_nanotransfection :
>> Most reprogramming methods have a heavy reliance on viral transfection. [22][23] TNT allows for implementation of a non-viral approach which is able to overcome issues of capsid size, increase safety, and increase deterministic reprogramming
Ask HN: Did anyone ever create GitLaw?
A thread from 11 years ago[0] proposed a "A GitHub for Laws and Legal Documents". Did anyone ever develop something similar to this? Even without the ability to propose changes, being able to view a git history of passed bills would be useful for anyone who would want to analyze the changes over time.
If a tool has been built, please share it. If not, what are the major blockers to at least building the history / diff tool?
[0]: https://news.ycombinator.com/item?id=3967921
Various localities upload their statutes to a git repository hosted as a GitHub project; but I'm not aware of any using Pull Request (PR) workflows to propose or debate legislative bills (or to enter or link to case law precedent for review).
The US Library of Congress operates the THOMAS system for legislative bills. How could THOMAS be improved, in order to support evidence-based policy and improve democracy in democratic republics like the US? https://en.wikipedia.org/wiki/THOMAS
How do state systems for legislative bills, document workflows, and beyond compare to THOMAS, the US federal Congress system? Why do we need 50+1 independent [open source?] software applications; couldn't states work together on such essential systems?
IIRC, LOC invested in improving THOMAS with support for converting its markup language to typographically readable HTML?
It may be helpful to send an email to your state with the regex (regular expression) pattern necessary to make online statute section and subsection references a-href links instead of non-clickable strings. TIL the section sign ("§") is older than the hyperlink: https://en.wikipedia.org/wiki/Section_sign
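A sketch of such a pattern in Python; the statute URL scheme below is hypothetical and would need to match the state's actual site layout:

```python
import re

# Matches e.g. "§ 12-34.5" or "§7.2"; real statute numbering schemes vary.
SECTION_RE = re.compile(r"§\s*([0-9]+(?:[.-][0-9A-Za-z]+)*)")

def linkify_sections(html_text: str, base_url: str) -> str:
    """Replace section-sign references with links (hypothetical URL layout)."""
    def repl(m):
        return ('<a href="%s/section/%s">§ %s</a>'
                % (base_url, m.group(1), m.group(1)))
    return SECTION_RE.sub(repl, html_text)

out = linkify_sections("See § 12-34.5 and §7.2 for details.",
                       "https://legislature.example.gov/statutes")
```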
Parliamentary informatics: https://en.wikipedia.org/wiki/Parliamentary_informatics
FWIU, diffing documents is a solved problem.
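e.g. Python's stdlib already produces usable statute diffs, given two versions of the text split into lines:

```python
import difflib

old = ["Sec. 1. The fee is $10.",
       "Sec. 2. Permits expire annually."]
new = ["Sec. 1. The fee is $15.",
       "Sec. 2. Permits expire annually.",
       "Sec. 3. Electronic filing is permitted."]

# unified_diff yields the familiar -/+ patch format line by line.
diff = list(difflib.unified_diff(old, new, fromfile="statute@2022",
                                 tofile="statute@2023", lineterm=""))
```

The hard parts are upstream of diffing: getting each locality's enacted text into a consistent, versioned plain-text or markup form in the first place.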
How could pull request workflows be improved in order to support legislative and post-legislative workflows (like controlling for whether a funded plan actually has the prescribed impact)?
High School and College Policy debate (CX; Cross Examination debate) enjoy equal rights to 'fiat'. IRL, we must control for whether the plan and funding have the predicted impact.
E.g. GitHub and GitLab have various features that could support legislative workflows: code owners, multiple approvals required to merge, emoji reactions on Issues and Pull Requests and comments therein such that you don't have to say why you voted a particular way, GPG-signatures required for commit and merge, 2FA.
FWIU, the Aragon / district0x projects have some of the first DLT-based (blockchain) systems for democracy; all of the rest of us trust one party to share root access and multi-site backup responsibilities, and have insufficient DDoS protections.
There was a discussion on HN where the real blocker identified was that everyone uses Microsoft Word, and all the related tools built on it, and no one was going to change their workflow.
Any such legislative solution may involve meeting the users where they are, first. Understanding the whole lifecycle, so to speak.
Show HN: ReRender AI - Realistic Architectural Renders for AutoCAD/Blender Users
Different but the same problem: "Generate a heat sink heat exchanger with maximum efficiency, shaped like a passive solar home"
TIL from off-gridders and homesteaders about passive solar design, so that air moves through the home without HVAC (compare high-rise buildings, where it is necessary to pump water up, like a gravitational-potential water tower).
Does ReRender AI have features for sustainable architecture?
Prompt: "Design a passive solar high-rise building with maximal energy storage and production"
Notes from "Zero energy ready homes are coming" (2023) https://news.ycombinator.com/item?id=35064493
ReRender AI focuses on rendering, not sustainable design features. Architects must incorporate sustainability themselves. However, the AI can visually represent a passive solar high-rise if provided with the necessary design elements.
Exciton Fission Breakthrough Could Revolutionize Photovoltaic Solar Cells
> Researchers have resolved the mechanism of exciton fission, which could increase solar-to-electricity efficiency by one-third, potentially revolutionizing photovoltaic technology.
“Orbital-resolved observation of singlet fission” by Alexander Neef, Samuel Beaulieu, Sebastian Hammer, Shuo Dong, Julian Maklar, Tommaso Pincelli, R. Patrick Xian, Martin Wolf, Laurenz Rettig, Jens Pflaum and Ralph Ernstorfer, 12 April 2023, Nature. DOI: 10.1038/s41586-023-05814-1 : https://www.nature.com/articles/s41586-023-05814-1 :
> Abstract: Singlet fission may boost photovoltaic efficiency by transforming a singlet exciton into two triplet excitons and thereby doubling the number of excited charge carriers. The primary step of singlet fission is the ultrafast creation of the correlated triplet pair. Whereas several mechanisms have been proposed to explain this step, none has emerged as a consensus. The challenge lies in tracking the transient excitonic states. Here we use time- and angle-resolved photoemission spectroscopy to observe the primary step of singlet fission in crystalline pentacene. Our results indicate a charge-transfer mediated mechanism with a hybridization of Frenkel and charge-transfer states in the lowest bright singlet exciton. We gained intimate knowledge about the localization and the orbital character of the exciton wave functions recorded in momentum maps. This allowed us to directly compare the localization of singlet and bitriplet excitons and decompose energetically overlapping states on the basis of their orbital character. Orbital- and localization-resolved many-body dynamics promise deep insights into the mechanics governing molecular systems and topological materials
Energy conversion efficiency: https://en.wikipedia.org/wiki/Energy_conversion_efficiency
Multiple exciton generation: https://en.wikipedia.org/wiki/Multiple_exciton_generation
Latex users are slower than Word users and make more errors (2014)
And neither typesetting activity results in reusable Linked Data, because it's still not possible to publish Linked Data (or indeed data and its schema) with LaTeX, Word, or PDF.
A ScholarlyArticle's most relevant purpose is to link to the schema:Dataset that the presented analysis is predicated upon.
ScholarlyArticle authors should consider the value of data reuse and (linked data) schema in choosing a typesetting and publishing format.
Is there a better way to publish Linked Data with existing tools like LaTeX, PDF, or Word? Which support CSVW? Which support RDF/RDFa/JSON-LD?
How could authors express experimental study controls with URIs; with qualified typed edges to the data (and a cryptographic signature from an IRB and the ScholarlyArticle's authors).
> Is there a better way to publish Linked Data with existing tools like LaTeX, PDF, or Word? Which support CSVW? Which support RDF/RDFa/JSON-LD?
- [ ] ~"How to publish Linked Data with Jupyter notebooks" #todo #LinkedReproducibility #LinkedResearch
- [ ] westurner/nbmeta, jupyter*/?: Linked Data result object for notebooks; with a _repr_mimebundle_() and application/ld+json
- [ ] python 3.x+ grammar: de-restrict use of the walrus assignment operator := so that users can assign to the LD dict result object and implicitly IPython.display.display() it because walrus assignment returns the object assigned (whereas normal = assignment does not return a value)
- [ ] westurner/nbmeta?, JLab, jupyter-book: JSON-LD Playground widget to display and reframe JSON-LD (& maybe YAML-LD, too)
- [ ] jupyterlab: schema.org notebook level bibliographic schema.org/CreativeWork metadata
- [ ] jupyterlab: JS/TypeScript: JSON-LD/YAML-LD editor widget also built on codemirror like JLab or a different existing RDFJS library or?
- [ ] rdflib, jupyter nb, sphinx, pygments: add YAML-LD syntax support (now that there's a W3C spec)
- [ ] MyST markdown, sphinx, docutils: YAML-LD in MyST [in: code-fence attr syntax,] in order to add Linked Data to MyST Markdown documents and thus Jupyter Notebooks and Jupyter Books
- [ ] hypothesis/h, sphinx-comments: Linked Data Annotations (within nested Markdown, like hypothesis' W3C Web Annotations support); cc re: 'MyST-LD' Markdown: quoting "@id", "@context" in YAML-LD
- [ ] atomspace-like TrustValues with RDFstar/SPARQLstar (in YAML-LD because that implies JSON-LD)
- [ ] dvc, GitHub Actions, GitLab CI, Gitea Actions: how to add PROV RDF Linked Data metadata to workflows like DVC.org's container+command-in-YAML approach
- [ ] Microsoft/excel, Microsoft/VSCode: How to CSVW and PROV with spreadsheets and/or code; see also Excel speed-running competitions
- [ ] REQ: jupyter/rtc+linkeddata/dokieli: howto integrate Jupyter notebooks with the list of specs in the dokieli README
- [ ] jupyter/nbformat#?: a post-.ipynb notebook + resources spec? W3C Web Bundles have advantages, including: you don't have to rewrite URLs in content saved offline like MHTML (mv $1.mhtml $1.mhtml.zip)
- [ ] JupyterLab, VSCode: CoCalc - which supports Time Travel version control over notebooks, LaTeX docs, etc. - added a TODO: ~not_editable code cell metadata item, which is more like lab notebooks in pen; but there's no GUI support in JLab or other Jupyter notebook nbformat implementations yet
- [ ] Jupyter/nbformat, ipywidgets, jupyterlab,: How to save widget state: https://github.com/jupyter-widgets/ipywidgets/issues/2465
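The nbmeta idea above - a Linked Data result object for notebooks - can be sketched against Jupyter's existing display protocol, where the frontend calls `_repr_mimebundle_()` and picks the richest representation it supports. The class name and JSON-LD payload here are hypothetical:

```python
import json

class LDResult(dict):
    """Hypothetical Linked Data result object for notebooks.

    Subclasses dict so the data is directly usable in Python, while
    Jupyter frontends that understand application/ld+json could render
    it as structured Linked Data.
    """

    def _repr_mimebundle_(self, include=None, exclude=None):
        # Jupyter's display machinery calls this; keys are MIME types.
        return {
            "application/ld+json": dict(self),
            "text/plain": json.dumps(self, indent=2),
        }

result = LDResult({
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "example measurements",
})
bundle = result._repr_mimebundle_()
```

In a live notebook, a bare `result` at the end of a cell would trigger this automatically; the walrus-operator item above is about getting the same implicit display for assignments.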
> How could authors express experimental study controls with URIs; with qualified typed edges to the data (and a cryptographic signature from an IRB and the ScholarlyArticle's authors).
Verifiable Credentials Data Model v1.1: https://www.w3.org/TR/vc-data-model/
Verifiable Credential Data Integrity 1.0: Securing the Integrity of Verifiable Credential Data: https://www.w3.org/TR/vc-data-integrity/ :
> Abstract: This specification describes mechanisms for ensuring the authenticity and integrity of Verifiable Credentials and similar types of constrained digital documents using cryptography, especially through the use of digital signatures and related mathematical proofs.
Because data quality, data reuse, code reuse, web standard specifications, structured data outputs with provenance metadata for partially-automated meta-analyses; https://5stardata.info/
The skills gap for Fortran looms large in HPC
> The good news for some HPC simulations and models, both inside of the National Nuclear Security Administration program at the DOE and in the HPC community at large, is that many large-scale physics codes have been rewritten or coded from scratch in C++
When C++ one day goes out of fashion, like Fortran already has, this will turn out to be bad news. Fortran is not a big language, and any programmer can learn Fortran in a matter of weeks. But if you'd need to teach people C++ just so that they'd be able to maintain a legacy code base, that will be considerably more difficult than with Fortran.
Probably in a thousand years nobody will be writing C++. In one year, there will still be a helluva lot of C++, including new projects.
Point being, the premise of your argument isn't granted. Is C++ going away soon? Or is it holding steady and even growing? How can someone objectively know either way?
> Is C++ going away soon? Or is it holding steady and even growing? How can someone objectively know either way?
No. It's not going away anytime soon, but you can look at high-performance languages that are gaining in popularity, and that may point to change in the future. TIOBE isn't perfect, but it's a good indicator of interest, and Rust and Go seem to continue to gain popularity.
Interestingly, and related to this article, Fortran has moved up from #31 to #20 on TIOBE's index, which really does speak to the importance of a math-optimized, high performance, compiled language. Another interesting change is MATLAB moving up from #20 to #14.
The TIOBE index is rivaled in some regards by the annual Stack Overflow Developer Survey https://survey.stackoverflow.co/2022/#technology-most-popula... :
> Most popular technologies: This year, we're comparing the popular technologies across three different groups: All respondents, Professional Developers, and those that are learning to code.
Most popular technologies > Programming, scripting, and markup languages
The Top500 and Green500 lists (and the TechEmpower Web Framework benchmarks) are also great resources for estimating what people did this past year; what is the "BigE" of our models in terms of water and kWh of [directly or PPA-offset] sourced clean energy.
The Linux Kernel Module Programming Guide
Ctrl-F "rust"
https://rust-for-linux.com/ links to LWN articles at https://lwn.net/Kernel/Index/#Development_tools-Rust that suggest that only basic modules are yet possible with the rust support in Linux kernels 6.2 and 6.3.
Rust-for-linux links to the Android binder module though: https://rust-for-linux.com/Android-Binder-Driver.html :
> Android Binder Driver: This project is an effort to rewrite Android's Binder kernel driver in Rust.
> Motivation: Binder is one of the most security and performance critical components of Android. Android isolates apps from each other and the system by assigning each app a unique user ID (UID). This is called "application sandboxing", and is a fundamental tenet of the Android Platform Security Model.
> The majority of inter-process communication (IPC) on Android goes through Binder. Thus, memory unsafety vulnerabilities are especially critical when they happen in the Binder driver
... "Rust in the Linux kernel" (2021) https://security.googleblog.com/2021/04/rust-in-linux-kernel... :
> [...] We also need designs that allow code in the two languages to interact with each other: we're particularly interested in safe, zero-cost abstractions that allow Rust code to use kernel functionality written in C, and how to implement functionality in idiomatic Rust that can be called seamlessly from the C portions of the kernel.
> Since Rust is a new language for the kernel, we also have the opportunity to enforce best practices in terms of documentation and uniformity. For example, we have specific machine-checked requirements around the usage of unsafe code: for every unsafe function, the developer must document the requirements that need to be satisfied by callers to ensure that its usage is safe; additionally, for every call to unsafe functions (or usage of unsafe constructs like dereferencing a raw pointer), the developer must document the justification for why it is safe to do so.
> We'll now show how such a driver would be implemented in Rust, contrasting it with a C implementation. [...]
Is this the source for the rust port of the Android binder kernel module?: https://android.googlesource.com/platform/frameworks/native/...
This guide, with unsafe Rust that calls into the C and then much safer next-gen Rust right next to it, would be a helpful resource too.
Which parts of the post-Docker container support (with userspaces also written in Go) should be ported to Rust first?
What are some good examples of non-trivial Linux kernel modules written in Rust?
On Android the Linux kernel is its own thing, and after Project Treble, it follows a microkernel like approach to drivers, where standard Linux drivers are considered "legacy" since Android 8.
https://source.android.com/docs/core/architecture/hal
Don't take how the Linux kernel does things on Android as an indication of how upstream works.
In your opinion, do you think that the microkernel approach is more secure? (Should processes run as separate users with separate SELinux contexts like Android 4.4+)
Why do you think that the Android binder module rust implementation is listed as an example of a Rust for Linux kernel module on the site?
"Android AOSP Can Boot Off Mainline Linux 5.9 With Just One Patch" (2020) https://www.phoronix.com/news/Android-AOSP-Close-Linux-5.9 :
> The Android open-source project "AOSP" with its latest code is very close to being able to boot off the mainline Linux kernel when assuming the device drivers are all upstream.
Other distros support kmods and akmods: https://www.reddit.com/r/PINE64official/comments/ijfbgl/comm... :
> How kmod / akmod // DKMS work is something that the community is maybe not real familiar with.
Of course the microkernel approach is more secure: if a driver gets p0wned, corrupts data structures, or plain keeps crashing, it doesn't take the whole kernel with it.
Naturally the issue might be so bad that the whole stack can't recover from it, but that's still much better than corrupting the kernel.
One of the SecDevOps guidelines when hardening servers is that every process should have its own user, yes.
Kmods and akmods run in kernel memory space and aren't ABI stable.
Binder is not a HAL, since binder is how HALs communicate with each other. It's an actual proper Linux driver. The C version is in the upstream kernel as a module you can enable when building the kernel.
Waydroid (Android in containers) requires binder and optionally ashmem, though ashmem is not required anymore because memfd works with vanilla kernel: https://wiki.archlinux.org/title/Waydroid#Kernel_Modules
There is a Google Play Certification process for waydroid devices: https://docs.waydro.id/faq/google-play-certification
(That manual provisioning step is not necessary for e.g. optional widevine DRM on other OSes)
xPrize Wildfire – $11M Prize Competition
On one hand this effort sounds like a good way to generate some interesting concepts. On the other hand, as a long-time resident of a rural, wildfire-prone area, it's also important to educate the public and politicians that there is a large body of proven land management best practices that can already significantly improve prevention, control and mitigation. While we can always do better, we often aren't sufficiently utilizing the tools and techniques we already have.
The issues are more often political, economic and systemic than they are not knowing effective ways to further reduce risks and harm. Things like properly managed controlled burns are under-utilized because they are politically controversial. Enacting and enforcing codes mandating land management around private structures are politically unpopular. Out where I live, the local fire rangers just point-blank tell property owners "If you don't clear out all the brush and downed trees within a couple hundred feet of your structures, we're defending your house after the others that meet code" but no politician is willing to tell voters that. Worse, land owners continue to get permits issued to construct permanent dwellings in isolated locations which are extremely difficult to defend. If they want to build there, I say let them but also issue fair warning if they choose to proceed, they are on their own in the event of fire. Volunteer firefighters shouldn't need to risk their lives to defend houses which should never have been built in inaccessible, indefensible locations.
Wildfire suppression: https://en.wikipedia.org/wiki/Wildfire_suppression#Tactics
History of wildfire suppression in the United States: https://en.wikipedia.org/wiki/History_of_wildfire_suppressio...
Controlled burn: https://en.wikipedia.org/wiki/Controlled_burn
> A controlled or prescribed burn, also known as hazard reduction burning, [1] backfire, swailing, or a burn-off, [2] is a fire set intentionally for purposes of forest management, fire suppression, farming, prairie restoration or greenhouse gas abatement. A controlled burn may also refer to the intentional burning of slash and fuels through burn piles. [3] Fire is a natural part of both forest and grassland ecology and controlled fire can be a tool for foresters.
Hydrogel: https://en.wikipedia.org/wiki/Hydrogel
Aerogel: https://en.wikipedia.org/wiki/Aerogel
Water-based batteries have less fire risk FWIU
"1,000% Difference: Major Storage Capacity in Water-Based Batteries Found" (2023) https://scitechdaily.com/1000-difference-major-storage-capac...
> The metal-free water-based batteries are unique from those that utilize cobalt in their lithium-ion form. The research group’s focus on this type of battery stems from a desire for greater control over the domestic supply chain as cobalt and lithium are commonly sourced from outside the country. Additionally, the batteries’ safer chemistry could prevent fires.
That's a good list of useful links. From my experience as a home owner in a high-risk area, I'd add some of the things which can make a big difference are often surprisingly simple. In addition to managing your own land properly, make sure you have a large enough water tank to support firefighters in protecting your land. We got a well tank five times bigger than our typical usage needs. Another key thing is to put your tank next to where a fire truck can access it easily and get the proper adapter installed so the fire truck can direct connect to your tank (it's not included in most).
We also widened the access road on our property to support larger fire trucks and dozers. Obviously, fire-resistant construction materials and techniques are a huge help and not all require extensive retrofitting. There are spark resistant attic air access covers which are easy to install. We needed to re-roof and remodel anyway so we went with fire-resistant roofing and concrete siding. It's also important to have non-cellular local communications because the cell towers often go out first (our community uses 2-meter handsets for emergency comms). Doing those things along with the fact we've cleared all trees and brush out beyond 300 feet and only have metal out-buildings caused our local fire chief to comment that if everyone did what we've done, protecting the whole region would be three times easier. We also have enough generator power (on auto-failover) to pump our own well water to wet down our house and surroundings. With that our house has a really strong probability of surviving with no external help even if our property is directly hit by a major wildfire.
Edit to Add: I forgot to mention another key thing. If you live in a place like this, there's often only a single narrow public road serving your property and others nearby. Make sure you have transportation that'll let you get out cross-country if the road is blocked by either fire or heavy equipment. We have AWD ATVs fueled and always ready to go (plus they are handy around big properties like this). Between us and our neighbors we have a plan to get everyone on our road out without outside help - including a few elderly folk who will need assistance. Firefighters needing to rescue unprepared residents diverts vital resources from fighting the fire and clogs roads - which just makes everything else worse.
Great comments and advice. My neighborhood burned in the #GlassFire [0]. While our property was severely damaged, our home survived, due to many of the hardening techniques you mentioned and great work by CalFire. We had evacuated the night before and were staying in a local hotel watching the flames advance on our house via our security cameras. (Redundant net connections and a generator kept everything online). As our deck started to burn I called 911 to report it, figuring nothing would happen. About 20 minutes later a fireman walked into the frame of the camera and 2 minutes later the deck was out and the house was saved. I was cheering like I won the Super Bowl. So I really appreciate all the hardening I did, on-site water and CalFire.
When we rebuilt the exterior, I installed a set of sprinklers that ring the house and cover the roof/deck. They are not designed to fight the fire, instead I can activate them before a fire arrives to get things good and wet. This perimeter reduces the available fuels, reduces the heat load on the structure and reduces the risk of ember cast. It was a fun project with an ESP controller to sequence the valves and provide remote control.
Over the last few years, the Alert Wildfire Camera system, now over 700 cameras, has been a valuable resource. [1] Early detection of the fire, before it grows to extreme size has made a big difference. In the North Bay of California last year, this early detection and CalFire having nearby Air Attack resources on standby kept many small fires from becoming large ones in this area.
Another great resource is the Watch Duty app. [2] This is a non-profit, volunteer, but extremely professional service. They started in Northern California and are rapidly expanding. It is the go-to resource for Wildfire related information in the area.
Lastly, if anyone is looking at the XPRIZE for satellite detection, I'd spend some time researching MODIS/VIIRS data products from NASA. [3] It's a good starting point into some of the challenges of wildfire detection from space.
[0]. Fire tornado outside my window. https://youtu.be/1tZjWqh3-EU
[1]. https://www.alertwildfire.org/
TIL about building materials and fire hazard:
Fire resistance rating: https://en.wikipedia.org/wiki/Fire-resistance_rating
R-value: https://en.wikipedia.org/wiki/R-value_(insulation)
Here's a prompt to help with selecting sustainable building materials: """Generate a JSON-LD document of data with citations comparing building materials by: R-value, Fire Resistance Rating, typical structural integrity longevity in terms of years, VOCs at year 1 and year 5, and sustainability factors like watt-hours and carbon and other pollutive outputs to produce, transport, distribute, install, and maintain. Include hempcrete (hemp and lime, which can be made from algae), rammed earth with and without shredded hemp hurds, wood, structural concrete, bamboo, and other emerging sustainable building materials.
Demonstrate loading the JSON data into a Pandas DataFrame and how to sort by multiple columns (with references to the docs for the tools used), and how to load the data into pygwalker."""
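Without the AI round-trip, the multi-column sort the prompt asks for is a one-liner in pandas (`df.sort_values([...], ascending=False)`) or in plain Python. A sketch with the stdlib; the records and figures below are placeholders, not real material data:

```python
# Placeholder records -- NOT real material properties.
materials = [
    {"name": "hempcrete",    "r_value": 2.1, "fire_rating_h": 1.0},
    {"name": "rammed earth", "r_value": 0.4, "fire_rating_h": 4.0},
    {"name": "wood",         "r_value": 1.4, "fire_rating_h": 0.5},
]

# Sort by fire rating (descending), then R-value (descending):
# the stdlib equivalent of pandas' df.sort_values([...], ascending=False).
ranked = sorted(
    materials,
    key=lambda m: (m["fire_rating_h"], m["r_value"]),
    reverse=True,
)
print([m["name"] for m in ranked])  # ['rammed earth', 'hempcrete', 'wood']
```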
> a set of sprinklers that ring the house and cover the roof/deck. They are not designed to fight the fire, instead I can activate them before a fire arrives to get things good and wet. This perimeter reduces the available fuels, reduces the heat load on the structure and reduces the risk of ember cast. It was a fun project with an ESP controller to sequence the valves and provide remote control.
I searched a bit and couldn't find any smart home integrations that automatically turn the lawn and garden sprinklers on when a fire alarm goes off.
Are there any good reasons to not have that be a standard default feature if both smoke detectors and sprinklers are connected to a smart home system?
Coming from an operations perspective, you want to look at the downstream consequences of automation which depend on the details of the installed system. E.g. from this thread: automation might drain water from the 5x oversized well tank before firefighters arrive, seems to me that whether or not that is preferable depends on the details.
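One way to encode that detail-dependence is a reserve guard in the automation rule itself. A minimal sketch; the tank sizes, reserve fraction, and function names are hypothetical, and no real smart-home API is assumed:

```python
# Hypothetical rule-engine logic: run perimeter sprinklers when a fire
# alarm fires, but stop at a reserve level so the well tank still has
# water left for arriving fire crews. All constants are illustrative.
TANK_CAPACITY_L = 20_000
FIREFIGHTER_RESERVE_L = 15_000  # keep 75% back for the fire truck

def sprinklers_should_run(alarm_active: bool, tank_level_l: float) -> bool:
    """Run only while the alarm is active and the reserve is untouched."""
    return alarm_active and tank_level_l > FIREFIGHTER_RESERVE_L

print(sprinklers_should_run(True, TANK_CAPACITY_L))        # True
print(sprinklers_should_run(True, FIREFIGHTER_RESERVE_L))  # False
```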
> Hydrogel
From https://twitter.com/westurner/status/1572664456210948104 :
>> What about #CO² -based #hydrogels for fire fighting?
>> /? Hydrogel firefighting (2022) https://www.google.com/search?q=hydrogel+firefighting […]
>> IDEA: How to make #hydrogels (and #aerogels) from mostly just {air*, CO², algae, shade, and sunshine,} [on earth, and eventually in space,]?
> Aerogel
From https://twitter.com/westurner/status/1600820322567041024 :
> Problem: #airogel made of CO² is an excellent insulator that's useful for many applications; but it needs structure, so foam+airogel but that requires smelly foam
> Possible solution: Cause structure to form in the airogel.
Backpack shoulder straps on e.g. Jansport backpacks have a geometric rubber mesh that's visible through a plastic window.
## Possible methods of causing structure to form in aerogel
- EM Hz: literally play EM waves (and possibly deliberately inverse convolutions) at the {#airogel,} production process
- Titration
- Centrifugation
- "Volt grid": apply volts/amps/Hz patterns with a probe array
- Thermal/photonic bath 3d printing
- Pour a lattice-like lens as large as the aerogel sections and allow solar to cause it to slowly congeal to a more structural form in advantageous shapes
- Die-casting, pressure-injection molding
> Water-based batteries have less fire risk FWIU
CAES (Compressed Air Energy Storage) and gravitational potential energy storage also have less fire risk than lithium-ion batteries.
FWIU we already have enough abandoned mines in the world to meet all of our energy storage needs?
Could CAES tanks filled with air+CO² help suppress wildfire?
> FWIU we already have enough abandoned mines in the world to do all of our energy storage needs?
Here's a prompt for this one, for the AI:
"Let's think step-by-step: how much depth or volume of empty mines are needed to solve for US energy storage demand with gravitational potential energy storage drones on or off tracks in abandoned mines? Please respond with constants and code as Python SymPy code with a pytest test_main and Hypothesis @given decorator tests"
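Short of the full SymPy/pytest harness the prompt asks for, the underlying formula is just E = mgh. A back-of-envelope sketch; the mass and shaft depth are illustrative assumptions, not sourced figures:

```python
# Gravitational potential energy: E = m * g * h
G = 9.81  # m/s^2

def storage_kwh(mass_kg: float, drop_m: float) -> float:
    """Recoverable energy (before conversion losses) for one mass lowered once."""
    joules = mass_kg * G * drop_m
    return joules / 3.6e6  # J -> kWh

# Illustrative: a 1,000 t weight lowered down a 500 m mine shaft.
print(round(storage_kwh(1_000_000, 500), 1))  # 1362.5 (kWh)
```

At ~1.4 MWh per thousand-tonne weight per 500 m drop, the mine-capacity question turns on how many such masses and deep shafts are actually available, which is why the prompt's step-by-step framing matters.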
Show HN: Frogmouth – A Markdown browser for the terminal
Hi HN,
Frogmouth is a TUI to display Markdown files. It does a passable job of displaying Markdown, with code blocks and tables. No image support as yet.
It's very browser like, with navigation stack, history, and bookmarks. Works with both the mouse and keyboard.
There are shortcuts for viewing README.md files and other Markdown on GitHub and GitLab.
License is MIT.
Let me know what you think...
Could Frogmouth display Markdown cells in Jupyter Notebooks on the CLI?
FWIW, IPython is built with Python Prompt Toolkit and Jupyter_console is IPython for non-python kernels; `conda install -y jupyter_console; jupyter kernelspec -h`. But those only do `%logstart -o example.out.py`; Markdown in notebooks on the CLI is basically unheard-of.
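A notebook is just JSON, so pulling out the Markdown cells (which a TUI like Frogmouth could then render) needs only the stdlib. A minimal sketch, assuming an nbformat-4-shaped document:

```python
import json

def markdown_cells(nb_json: str) -> list[str]:
    """Return the source text of each Markdown cell in an .ipynb document."""
    nb = json.loads(nb_json)
    return [
        "".join(cell["source"])
        for cell in nb.get("cells", [])
        if cell.get("cell_type") == "markdown"
    ]

# Minimal nbformat-4-shaped document for demonstration:
nb = json.dumps({
    "nbformat": 4, "nbformat_minor": 5, "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Title\n", "text"]},
        {"cell_type": "code", "metadata": {}, "source": ["print('hi')"],
         "outputs": [], "execution_count": None},
    ],
})
print(markdown_cells(nb))  # ['# Title\ntext']
```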
I would like to run VSCode with a TUI like this in a terminal via SSH.
Does anyone know a VSCode extension to do this?
FWIU coc.vim works with vim and nvim and works with VSCode LSP support: https://github.com/neoclide/coc.nvim
There are many LSP implementations: https://langserver.org/
awesome-neovim Ctrl-F "DAP": https://github.com/rockerBOO/awesome-neovim#lsp
mason.nvim: https://github.com/williamboman/mason.nvim :
> Portable package manager for Neovim that runs everywhere Neovim runs. Easily install and manage LSP servers, DAP servers, linters, and formatters.
conda-forge
DAP: Debug Adapter Protocol https://microsoft.github.io/debug-adapter-protocol/implement...
CoC has been mostly deprecated in favor of native LSP support, too. I run the ~same pyright in my Neovim that I'd use in VSCode.
Stanford, Harvard data science no more
Do Stanford or Harvard have an equivalent of BIDS, the UC Berkeley Institute for Data Science?
> This is the [open] textbook for the Foundations of Data Science class at UC Berkeley: "Computational and Inferential Thinking: The Foundations of Data Science" http://inferentialthinking.com/ (JupyterBook w/ notebooks and MyST Markdown)
> [#1 Undergrad Data Science program, #2 ranked Graduate Statistics program]
Data literacy: https://en.wikipedia.org/wiki/Data_literacy :
> Data literacy is distinguished from statistical literacy since it involves understanding what data means, including the ability to read graphs and charts as well as draw conclusions from data.[6] Statistical literacy, on the other hand, refers to the "ability to read and interpret summary statistics in everyday media" such as graphs, tables, statements, surveys, and studies. [6]
Data Literacy and Statistical Literacy are essential for good leadership. For citizens to be capable of Evidence-Based Policy, we need Data Driven Journalism (DDJ) and curricular data science in the public high schools.
Debugging a Mixed Python and C Language Stack
GDB can decode Python stack traces as Python code, as long as you have python-gdb.py from the Python source distribution in the same place as your Python executable.
From https://github.com/jupyterlab/debugger/issues/284 "ENH: Mixed Python/C debugging (GDB,) #284":
> Users can do [mixed mode] debugging with GDB (and/or other debuggers) and log the session in a notebook; in particular in order to teach.
> Your question is specifically about IDEs with support for mixed-mode debugging (with gdb), so I went looking for an answer:
> https://wiki.python.org/moin/DebuggingWithGdb (which is not responsive and almost unreadable on a mobile device) links to https://fedoraproject.org/wiki/Features/EasierPythonDebuggin... , which mentions the py-list, py-up and py-down, py-bt, py-print, and py-locals GDB commands that are also described in * https://devguide.python.org/gdb/ *
> https://wiki.python.org/moin/PythonDebuggingTools Ctrl-F "gdb" mentions: DDD, pyclewn (vim), trepan3k (which is gdb-like and supports breaking at c-line and also handles bytecode disassembly)
> Apparently, GHIDRA does not have a debugger but there is a plugin for following along with gdb in ghidra called https://github.com/Comsecuris/gdbghidra [...] https://github.com/Comsecuris/gdbghidra/blob/master/data/gdb... (zero dependencies)
> https://reverseengineering.stackexchange.com/questions/1392/... lists a number of GUIs for GDB; including Voltron:
>> There's Voltron, which is an extensible Python debugger UI that supports LLDB, GDB, VDB, and WinDbg/CDB (via PyKD) and runs on macOS, Linux and Windows. For the first three it supports x86, x86_64, and arm with even arm64 support for lldb while adding even powerpc support for gdb. https://github.com/snare/voltron
> "The GDB Python API" https://developers.redhat.com/blog/2017/11/10/gdb-python-api... describes the GDB Python API.
> https://pythonextensionpatterns.readthedocs.io/en/latest/deb... may be helpful [for writing-a-c-function-to-call-any-python-unit-test]
> The GDB Python API docs: https://sourceware.org/gdb/onlinedocs/gdb/Python-API.html
> The devguide gdb page may be the place to list IDEs with support for mixed-mode debugging of Python and C/C++/Cython specifically with gdb?
These days, we have CFFI and Apache Arrow for C+Python etc.
https://wiki.python.org/moin/DebuggingWithGdb lists the GDB debugging symbol packages for various distros
FWIU Fedora GDB now optionally automatically installs debug syms; from attempting to debug TuxMath SDL with GDB before just installing the Flatpak.
https://developers.redhat.com/articles/2021/09/08/debugging-... (2021) describes more recent efforts to improve Python debugging with c extensions
Teach at a Community College
> Learning to teach
To learn, teach.
For contrast, there's also:
Those who can, do. Those who can't, teach.
Not that I 100% agree with that.
If none teach, few can do or teach.
"This is where teachers are paid the most" (2021) https://www.weforum.org/agenda/2021/10/teachers-pay-countrie... :
> Globally, teachers’ salaries vary hugely, according to the OECD’s Education At A Glance 2021 report
Twitter drops “Government-funded”/“state-affiliated“ from NPR, BBC, RT, Xinhua
Perhaps it would be more appropriate to retain the existing label, link to the resource indicating how much state financial support and/or editorial control, and also label other publishers by their known revenue methods:
- advertising supported
- directly politically supported
- PAC-funded
- SPAC-funded
- nonprofit organization [in territories x, y, z]
And maybe also their methods of journalism; in the interest of evidence-based policy by way of evidence-based journalism:
- Source and Method
- Motive and Intent
- Admissibility vs Publishability
- What else?
Media literacy > Media literacy education https://en.wikipedia.org/wiki/Media_literacy#Media_literacy_...
- Do they require disclosure of editorial conflicts of interest?
- Do they require disclosure of journalistic conflicts of interest?
- Do they allow citations?
- Do they require citations?
- Do they list the DOI string in the citation or a dx.doi.org URL for NewsArticles about ScholarlyArticles?
- Which contributors are paid?
- Which parent company owns the media outlet?
Physicists discover that gravity can create light
All quantum fields have their vacuum energy fluctuations. When we change the parameters of the space (e.g. stretch or compress it), the current state is no longer the ground state but becomes a so-called squeezed vacuum state.
This effect is used in laser physics to "split photons" in spontaneous parametric downconversion. That is, an intense laser changes the refractive index of a medium periodically. These oscillations generate a squeezed vacuum state.
Is there a corollary SQG Superfluid Quantum Gravity fluidic description of squeezed coherent states?
And what of a Particle-Wave-Fluid triality?
Noting that I attempted to link to relevant research and cite the source for fluidic corollaries but was prevented from contributing. https://westurner.github.io/hnlog/#comment-35661155
FWIW, so inspired, I continued to write a few better prompts for all of this.
"""Explain how specific (for example Fedi and Turok's) theories of gravitons and superfluidity differ from General Relativity, the Standard Model, prevailing theories of Quantum Gravity and dark matter and/or dark energy non-uniform correction coefficients, classical fluid dynamics, and quantum chaos theory in regards to specific phenomena in the quantum foam.
Also, is there one wave function or are there many; and are they related by operators expressible with qubit quantum computers, or are qudits and qutrits necessary to sufficiently model this domain of n-body gravity gravitons in a fluid field?
If graviton fields result in photons, what are the conservation symmetry relations in regards to the exchange of gravitons for photons?"""
And then (though apparently currently one must remove "must use `dask_ml.model_selection.GridSearchCV`" presumably due to current response length limits of Google Bard):
"""Design an experiment as a series of steps and Python code to test (1) whether Bernoulli's equations describe gravitons in a superfluidic field; and also (2) there is conservational symmetry in exchange of gravitons and photons. The Python code must use `dask_ml.model_selection.GridSearchCV`, must use SymPy, define constants in the `__dict__` attribute of a class, return experimental output as a `dict`, have pytest tests and Hypothesis `@given` decorator tests, and a `main(argv=sys.argv)` function with a pytest `test_main`, and use `asyncio`."""
Manipulative Consent Requests
"fraudulent redefinition of consent"
Unconscionability: https://en.wikipedia.org/wiki/Unconscionability :
> Unconscionability (sometimes known as unconscionable dealing/conduct in Australia) is a doctrine in contract law that describes terms that are so extremely unjust, or overwhelmingly one-sided in favor of the party who has the superior bargaining power, that they are contrary to good conscience. Typically, an unconscionable contract is held to be unenforceable because no reasonable or informed person would otherwise agree to it. The perpetrator of the conduct is not allowed to benefit, because the consideration offered is lacking, or is so obviously inadequate, that to enforce the contract would be unfair to the party seeking to escape the contract.
Statute of frauds: https://en.wikipedia.org/wiki/Statute_of_frauds :
> A statute of frauds is a form of statute requiring that certain kinds of contracts be memorialized in writing, signed by the party against whom they are to be enforced, with sufficient content to evidence the contract. [1][2] […]
> Raising the defense: A defendant in a contract case who wants to use the statute of frauds as a defense must raise it as an affirmative defense in a timely manner. [7] The burden of proving that a written contract exists comes into play only when a statute of frauds defense is raised by the defendant.
Statute of frauds > United States > Uniform Commercial Code: https://en.m.wikipedia.org/wiki/Statute_of_frauds#Uniform_Co... :
> Uniform Commercial Code: In addition to general statutes of frauds, under Article 2 of the Uniform Commercial Code (UCC), every state except Louisiana has adopted an additional statute of frauds that relates to the sale of goods. Pursuant to the UCC, contracts for the sale of goods where the price equals $500 or more fall under the statute of frauds, with the exceptions for professional merchants performing their normal business transactions, and for any custom-made items designed for one specific buyer. [42]
IIUC, that means that if the USD amount of a contract for future performance is over $500 the court would regard a statute of frauds argument as just cause for dismissal? Or just goods?
Proliferation of AI weapons among non-state actors could be impossible to stop
Artificial intelligence arms race > Proposals for international regulation: https://en.wikipedia.org/wiki/Artificial_intelligence_arms_r...
Counterproliferation: https://en.wikipedia.org/wiki/Counterproliferation :
> In contrast to nonproliferation, which focuses on diplomatic, legal, and administrative measures to dissuade and impede the acquisition of such weapons, counterproliferation focuses on intelligence, law enforcement, and sometimes military action to prevent their acquisition. [2]
...
from typing import Callable, Iterable

def hasRightTo_(person) -> bool: ...
def hasRight(thing, right) -> bool: ...
def haveEqualRights(persons: Iterable, rights: Iterable[Callable]) -> bool: ...
Practical methods of evidentiary proofs: Source, Method; Motive, Intent
Only one pair of distinct positive integers satisfy the equation m^n = n^m
I think taking logs is an unnecessary indirection in the given proof. If n^m = m^n then raising both sides to the power of 1/(nm) gives us n^(1/n) = m^(1/m). So we are looking for two distinct positive integers at which the function x ↦ x^(1/x) takes the same value. The rest of the proof then goes as before.
Differentiating the above function yields (1/x^2)(1-log(x))x^(1/x), which is positive when log(x) < 1 and negative when log(x) > 1. So the function has a maximum at e and decreases on either side of it. Therefore one of our integers must be less than e, and the other greater than it. For the smaller integer there are only two possibilities, 1 and 2. Using 1 doesn't give a solution since the equation x^(1/x) = 1 only has the solution 1. So the only remaining possibility for the smaller number is 2, which does yield the solution 2^4 = 4^2. Since x^(1/x) is strictly decreasing when x > e, there can't be any other solutions with the same value.
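The conclusion can also be checked by brute force; a minimal sketch in Python (the search bound is arbitrary):

```python
# Search for distinct positive integers m < n with m**n == n**m.
# Per the argument above, (2, 4) should be the only pair.
def equal_power_pairs(bound):
    return [(m, n)
            for m in range(1, bound)
            for n in range(m + 1, bound)
            if m ** n == n ** m]

print(equal_power_pairs(100))  # [(2, 4)]
```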
Logs will appear no matter what, at some point in the line of reasoning. They are only held back until differentiating in this approach. I think the expression for the derivative of log(n)/n was much nicer to grapple with.
Seeing as we're in the integer world, I'm not sure logs need to show up ever, there's probably a proof path around unique prime factorisations. E.g. it's more or less immediately obvious that n and m would need to share the same prime factors in the same ratios.
This back-and-forth makes me wonder, is it possible to write proofs about proofs? e.g., you cannot prove this result without logarithms or there must exist a proof that involves prime factors?
I suppose at least some proofs vaguely like that are possible because Gödel's incompleteness theorem is one, although I suspect that that same theorem puts some constraints on these "metaproofs."
Axiomatic system > Axiomatic method https://en.wikipedia.org/wiki/Axiomatic_system#Axiomatic_met...
Inference https://en.wikipedia.org/wiki/Inference ; inductive, deductive, abductive
Propositional calculus > Proofs in propositional calculus: https://en.wikipedia.org/wiki/Propositional_calculus
Quantum logic: https://en.wikipedia.org/wiki/Quantum_logic
https://twitter.com/westurner/status/1609495237738496000 :
> [ Is quantum logic the correct or a sufficient logic for propositional logic? ]
What are quantum "expectation values"; and how is that Axiomatic wave operator system different from standard propositional calculus?
Show HN: IPython-GPT, a Jupyter/IPython Interface to Chat GPT
This would be great in conjunction with e.g. papermill for running the same prompts over time and with the model as a parameter.
Do IPython-GPT or jetmlgpt work in JupyterLite in WASM in a browser tab?
I have only tested jetmlgpt using Jupyter notebooks.
JupyterLite How to docs: https://jupyterlite.readthedocs.io/en/latest/howto/index.htm...
"Create a custom kernel" https://jupyterlite.readthedocs.io/en/latest/howto/extension... :
> We recommend checking out how to create a server extension first
It may also be possible to just pip install the plain Python package with micropip, or include it in a `jupyterlite build`. From https://github.com/jupyterlite/jupyterlite/issues/237#issuec... re: 'micropip':
%pip install $@
# __import__('piplite').install($@)
There's also micromamba, for mamba-forge packages (which are built with empack (emscripten) into WASM).
Congrats on the launch!
I created Mito, a spreadsheet that lives inside of Jupyter. We've seen a ton of our users using Jupyter and the ChatGPT playground side by side.
We're also experimenting with bringing chatgpt into Jupyter through the Mito spreadsheet. (https://blog.trymito.io/5-lessons-learned-from-adding-chatgp...)
Looking forward to trying this out.
Cocalc also has new "generate a Jupyter Notebook from a [ChatGPT] prompt" functionality. https://twitter.com/cocalc_com/status/1644427430223028225
'2023-04-19: LaTeX + ChatGPT "Help me fix this..." buttons into our LaTeX editor' https://cocalc.com/news/latex-chatgpt-help-me-fix-this-butto...
Building a ChatGPT-enhanced Python REPL
"Show HN: IPython-GPT, a Jupyter/IPython Interface to Chat GPT" https://news.ycombinator.com/item?id=35580959 lists a few REPLs
When you buy a book, you can loan it to anyone – a judge says libraries can’t
The headline isn't mentioned in the opinion piece? What the judge ruled was:
> An ebook recast from a print book is a paradigmatic example of a derivative work.
In other words, if a library buys a book, they can't just digitise it and share it without permission.
I'd go so far as to say the article directly contradicts the headline.
The author spends an enormous amount of time explaining how the rights of "end users" have been cemented, such as the right to change formats. If a consumer lends a book, they are, by definition, no longer the "end user" and none of that precedent applies.
It's probably true that anyone can lend a book but that's not an "end user" right. In fact, that allowance is the very reason libraries can exist in the first place.
The case and article are about something completely different (making copies, then distributing the copies). Odd headline.
> that allowance
Right to property: https://en.wikipedia.org/wiki/Right_to_property
Brain images just got 64 million times sharper
9.4 T is quite a strong magnetic field, as it is half the strength of the magnetic field used for plasma confinement in Tokamak Energy’s Demo4 facility. Scaling this technology for human-sized animals will likely be incredibly expensive, but the 5 micron accuracy is surely worth the investment.
7T is already regularly used for human research, and approval for human usage has been granted for 10.5T and I believe for 11.7T (though I'm not sure how many images they've gotten out of that yet).
Yes it is incredibly expensive, but it is in fact already done.
https://thedebrief.org/impossible-photonic-breakthrough-scie... :
> For decades, that [Abbe diffraction] limit has operated as a sort of roadblock to engineering materials, drugs, or other objects at scales smaller than the wavelength of light manipulating them. But now, the researchers from Southampton, together with scientists from the universities of Dortmund and Regensburg in Germany, have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
> According to that research, the key to confining light below the previous impermeable Abbe diffraction limit was accomplished by “storing a part of the electromagnetic energy in the kinetic energy of electric charges.” This clever adaptation, the researchers wrote, “opened the door to a number of groundbreaking real-world applications, which has contributed to the great success of the field of nanophotonics.”
> “Looking to the future, in principle, it could lead to the manipulation of micro and nanometre-sized objects, including biological particles,” De Liberato says, “or perhaps the sizeable enhancement of the sensitivity resolution of microscopic sensors.”
"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33493885
Could such inexpensive coherent laser light sources reduce medical and neuroimaging costs?
"A simple technique to overcome self-focusing, filamentation, supercontinuum generation, aberrations, depth dependence and waveguide interface roughness using fs laser processing" https://scholar.google.com/scholar?start=10&hl=en&as_sdt=5,4... :
> Several detrimental effects limit the use of ultrafast lasers in multi-photon processing and the direct manufacture of integrated photonics devices, not least, dispersion, aberrations, depth dependence, undesirable ablation at a surface, limited depth of writing, nonlinear optical effects such as supercontinuum generation and filamentation due to Kerr self-focusing. We show that all these effects can be significantly reduced if not eliminated using two coherent, ultrafast laser-beams through a single lens - which we call the Dual-Beam technique. Simulations and experimental measurements at the focus are used to understand how the Dual-Beam technique can mitigate these problems. The high peak laser intensity is only formed at the aberration-free tightly localised focal spot, simultaneously, suppressing unwanted nonlinear side effects for any intensity or processing depth. Therefore, we believe this simple and innovative technique makes the fs laser capable of much more at even higher intensities than previously possible, allowing applications in multi-photon processing, bio-medical imaging, laser surgery of cells, tissue and in ophthalmology, along with laser writing of waveguides.
Transfer Learning (TL) might be useful for training a model to predict e.g. [portable] low-field MRI from NIRS Infrared and/or Ultrasound? FWIU, "Mind2Mind" is one way to ~train a GAN from another already-trained GAN?
From https://twitter.com/westurner/status/1609498590367420416 :
> Idea: Do sensor fusion with all available sensors timecoded with landmarks, and then predict the expensive MRI/CT from low cost sensors
> Are there implied molecular structures that can be inferred from low-cost {NIRS, Light field, [...]} sensor data?
> Task: Learn a function f() such that f(lowcost_sensor_data) -> expensive_sensor_data
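As a toy illustration of that task framing (not the referenced work; purely synthetic data and an assumed linear relationship between the sensors), fitting f() by ordinary least squares in plain Python:

```python
# Toy sketch: learn f(lowcost_sensor_data) -> expensive_sensor_data with
# 1-D ordinary least squares on synthetic paired readings.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # (slope, intercept)

# Synthetic data: pretend the expensive sensor reads 2*lowcost + 1.
lowcost = [0.0, 1.0, 2.0, 3.0, 4.0]
expensive = [1.0, 3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_linear(lowcost, expensive)

def predict(x):
    return slope * x + intercept

print(predict(5.0))  # 11.0
```

Real sensor fusion would need time-aligned multimodal data and a far richer model; this only shows the supervised f(lowcost) -> expensive setup.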
FWIU OpenWater has moved to NIRS+Ultrasound for ~ live in surgery MRI-level imaging and now treatment?
FWIU certain Infrared light wavelengths cause neuronal growth; and Blue and Green inhibit neuronal growth.
What are the comparative advantage and disadvantages of these competing medical imaging and neuroimaging capabilities?
Can a technology like this at some point be used for brain uploads? Or like a cheaper and better version of cryonics - to "backup" the brain until it can be resurrected in some way?
If quantum information is never destroyed – and classical information is quantum information without the complex term i – perhaps our brain states are already preserved in the universe; like reflections in water droplets in the quantum foam.
Lagrangian points, non-intersecting paths through accretion discs, and microscopic black holes all preserve data - modulated energy; information - for some time before reversible or unreversible transformation.
Perhaps Superfluid quantum gravity can afford insight into the interior topology of black holes and other quantum foam phenomena?
(Edit)
Coping: https://en.wikipedia.org/wiki/Coping
Defence mechanisms § Level 4: mature: https://en.wikipedia.org/wiki/Defence_mechanism#Level_4:_mat...
MRI brain images become 64M times sharper
Other thread from earlier today: "Brain images just got 64 million times sharper" https://news.ycombinator.com/item?id=35615778
There’s no universal cordless power tool battery – why?
From "USB-C is about to go from 100W to 240W, enough to power beefier laptops" https://news.ycombinator.com/item?id=27295621 :
> What are the costs to add a USB PD module to an electronic device? https://hackaday.com/2021/04/21/easy-usb%E2%80%91c-power-for...
> - [ ] Create an industry standard interface for charging and using [power tool,] battery packs; and adapters
From "Zero energy ready homes are coming" https://news.ycombinator.com/item?id=35065366 :
# USB power specs (DC)
7.5w = 1.5amp * 5volts # USB
15w = 3a * 5v # USB-C
100w = 5a * 20v # USB-C PD
240w = 5a * 48v # USB-C PD 3.1
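These figures are just P = I × V; a quick sanity check of the table above:

```python
# USB power levels as (amps, volts) -> watts, matching the spec list above.
usb_specs = {
    "USB":          (1.5, 5),
    "USB-C":        (3.0, 5),
    "USB-C PD":     (5.0, 20),
    "USB-C PD 3.1": (5.0, 48),
}
watts = {name: amps * volts for name, (amps, volts) in usb_specs.items()}
print(watts)  # {'USB': 7.5, 'USB-C': 15.0, 'USB-C PD': 100.0, 'USB-C PD 3.1': 240.0}
```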
Others could also send emails to power tool manufacturers about USB-C PD support and evolving the spec to support power tool battery packs.
Aura – Python source code auditing and static analysis on a large scale (2022)
Unfortunately, there hasn't been a release since 2021; similarly, there have been no commits to the master or dev branches in the past ~14 months.
Hello, author of Aura here. The project is in fact active, but in a different branch called "ambience", which is a very big refactor transforming Aura (now designed to be run locally as a TUI) into a server/web application. It would allow an organization to automatically monitor and audit all of the Python packages it uses, by using an HTTP reverse proxy to intercept package installations. The big refactor is taking me a long time to finish as I am currently the only active developer, so apologies if the project seems abandoned. I'm hesitant to merge the changes from the ambience branch into main (which is what people see) because the new refactor is not yet as stable as master & dev, which were tested and tuned on the whole of PyPI.
Very early alpha version is available here: https://ambience.sourcecode.ai if someone is interested in checking it out.
Could aura scan packages at the pulp pypi proxy? https://github.com/pulp
I haven't used pulp so I am not sure but yes in theory. Several schemes are currently supported via URIs (pypi://, git://, http(s):// etc...) so if the destination to scan can be formatted as one of the already supported URI schemes then you can already scan it. URI providers are also using plugin architecture so adding a new one for better integration with pulp (such as autodiscovering packages) should be trivial. Thank you for the suggestion, the pulp project looks interesting and I would definitely check it out!
Gitea can also (scan and build and test and) host python packages [1], conda packages [2], container images, etc.
[1] https://docs.gitea.io/en-us/usage/packages/pypi/
[2] https://docs.gitea.io/en-us/usage/packages/conda/
https:// URLs probably already solve for scanning Python packages hosted by Gitea and/or Pulp with Aura.
From https://news.ycombinator.com/item?id=33563857 :
> Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools: https://news.ycombinator.com/item?id=24511280 https://analysis-tools.dev/tools?languages=python
A new approach to computation reimagines artificial intelligence
Before reading the papers obliquely mentioned in the article: How is this different from embeddings?
After reading the papers: How is this different from embeddings?
All three of the papers linked to are closed access. (Did I miss one?)
If anyone can link a favorite review paper on this approach, you'd be doing us a favor.
Edit: one of them does link to a preprint version: https://arxiv.org/abs/2203.04571
Great, and the implementation (Neuro-vector-symbolic architectures):
"A Neuro-vector-symbolic Architecture for Solving Raven's Progressive Matrices" (2023) https://arxiv.org/abs/2203.04571 :
> […] our proposed neuro-vector-symbolic architecture (NVSA) [implements] powerful operators on high-dimensional distributed representations that serve as a common language between neural networks and symbolic AI. The efficacy of NVSA is demonstrated by solving the Raven's progressive matrices datasets. Compared to state-of-the-art deep neural network and neuro-symbolic approaches, end-to-end training of NVSA achieves a new record of 87.7% average accuracy in RAVEN, and 88.1% in I-RAVEN datasets. Moreover, compared to the symbolic reasoning within the neuro-symbolic approaches, the probabilistic reasoning of NVSA with less expensive operations on the distributed representations is two orders of magnitude faster.
A number system invented by Inuit schoolchildren
I dislike these pseudo-scientific claims about alternative number systems and methods of paper and pencil arithmetic:
> Because of the tally-inspired design, arithmetic using the Kaktovik numerals is strikingly visual. Addition, subtraction and even long division become almost geometric. The Hindu-Arabic digits are an awkward system, Bartley says, but “the students found, with their numerals, they could solve problems a better way, a faster way.”
I think the students can be praised for having come up with a number system that is simple to understand and write and that corresponds to the conventions for counting in the Alaskan Inuit language, and it seems appropriate to capture these notations in upcoming Unicode standards.
However, spending time learning base 20 arithmetic has obvious disadvantages that the article ignores. The times tables, memorized in grade school and fundamental to paper and pencil calculations, are now four times larger. Base 20 is not a popular notation for numbers. One important advantage of the number system (Hindu-Arabic) that most of the world uses is that most of the world uses it. I grew up with inches and degrees Fahrenheit and had to learn the metric system to pursue my science education. I'm glad I didn't have to learn how to count as well. We shouldn't make it harder for these kids to enjoy the rest of the world's books, journals, and internet resources about math and science.
> “the students found, with their numerals, they could solve problems a better way, a faster way”
> Base 20 is not a popular notation for numbers […] We shouldn't make it harder for these kids
So much of what you object to is that the thing they've found more intuitive and engaging isn't the unintuitive, disengaging stuff they'll encounter elsewhere. But developing intuition for math is far more valuable than developing conformance to how it's supposed to be done. Who cares if that intuition is developed with some idiosyncrasy from what you consider normal? The math is math, the principles are consistent, the knowledge is transferable. Insisting they learn the same things a different way is totally arbitrary and counterproductive.
I think the parent's point is not cultural, as you are implying, but rather practical. Learning in a base that nobody uses could be easier today and a hindrance later.
I think, all in all, this should not be a big deal. For the gifted kid, they'll find a way to adapt and become the next Einstein. As for the ungifted, it might give them a better leg up and allow them to perform better than they would have, so it's probably a plus anyways.
Finger binary: https://en.wikipedia.org/wiki/Finger_binary :
> Finger binary is a system for counting and displaying binary numbers on the fingers of either or both hands. Each finger represents one binary digit or bit. This allows counting from zero to 31 using the fingers of one hand, or 1023 using both: that is, up to 2**5−1 or 2**10−1 respectively.
- "How to count to 1000 on two hands" by 3blue1brown https://youtu.be/1SMmc9gQmHQ
- "Polynesian People Used Binary Numbers 600 Years Ago - Scientific American" https://www.scientificamerican.com/article/polynesian-people...
What is the comparative value of radixes like Binary, Octal, and Hexadecimal compared to Decimal (radix 10)?
Perhaps a radix like eπi would be more useful; though some almost-mystic physicists do tend toward radix 9: "nonary" (which is actually ~ also radix-3).
List of numeral systems > By culture / time period, By type of notation https://en.wikipedia.org/wiki/List_of_numeral_systems :
> Numeral systems are classified here as to whether they use positional notation (also known as place-value notation), and further categorized by radix or base.
Digital computing hardware design
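For comparing radixes concretely, a generic positional conversion sketch (digits as integers, most significant first):

```python
# Convert a non-negative integer to its digit list in an arbitrary radix,
# e.g. base 20 for Kaktovik-style numerals or base 2 for finger binary.
def to_radix(n, base):
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1]

print(to_radix(2023, 20))  # [5, 1, 3]: 5*400 + 1*20 + 3 = 2023
print(to_radix(132, 2))    # [1, 0, 0, 0, 0, 1, 0, 0]
```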
I always make a point of demonstrating finger-binary with the number 132. 4 will suffice, though.
California DMV wants to issue car titles as NFTs
Can this also be solved with W3C Verifiable Claims and ~ https://blockcerts.org/ (and an append-only, multiply distributed/synchronized/replicated/redundantly-backed-up permissioned database/Blockchain)?
And then, again, what is the TPS of the DB? How many read and write Transactions Per Second do the candidate datastores support with or without strong Consistency, Availability, and/or Partition Tolerance? And then what about cryptographic Data Integrity assurances.
FWIU, issuing shares on a Blockchain is challenging due to lost keys / shared cryptographic keys and reissuance?
"Possible" and "beneficial" are separate questions entirely.
The whole point of government is to be a central authority. That's the DMV's entire job: to be a central authority for driver's licenses and vehicle titles. Their database isn't open for everybody to modify, and it also shouldn't be. It also shouldn't be open for everybody to read, because we have a right to privacy.
What benefit does the DMV get from hosting its records on a blockchain? And what happens if that blockchain stops attracting miners? Does the DMV have to mine it? Does the price of mining rigs start to affect how much it costs to register your car? What happens if they get hit by a 51% attack?
Your average Californian is definitely not a crypto user, and doesn't have a digital wallet. One in twenty don't even speak English. What happens when somebody loses their private key? If you have to keep those keys in a conventional database, then a blockchain makes zero sense except as a money-waster.
There is nothing wrong with them just running a regular database, with normal SRE stuff like backups and replication. The California DMV has a lot of problems. But "how do we store records?" isn't one of them.
It takes patience to keep politely explaining the inadequacies of "5 people and the manager and the owner share the sole root backup credentials - which aren't cryptographic keys, so there is no way to tell whether data has been rewritten without the correct key - and tape backup responsibilities; and we have no way to verify the online database against the tape backups [so, can we write it off now]"
Notes on centralized databases with Merkle hashes: https://westurner.github.io/hnlog/ Ctrl-F "trillian", "iavl"
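The tamper-evidence property those tools provide can be sketched with a stdlib hash chain (a simplification of a Merkle tree: each entry's hash commits to everything before it; the record strings are illustrative):

```python
import hashlib

# Append-only hash chain: each record's hash covers all prior records,
# so rewriting history changes every subsequent hash.
def chain(records):
    h = b""
    hashes = []
    for rec in records:
        h = hashlib.sha256(h + rec.encode()).digest()
        hashes.append(h.hex())
    return hashes

original = chain(["title: car A -> alice", "title: car A -> bob"])
tampered = chain(["title: car A -> mallory", "title: car A -> bob"])
print(original[-1] != tampered[-1])  # True: the final hash reveals the rewrite
```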
If you've ever set up SQL replication before, you wouldn't be claiming it's adequate.
Are RDS and CloudSQL good enough? What does Accumulo do differently?
Show HN: Skip the SSO Tax, access your user data with OSS
As the former CTO of an Insurtech and Fintech startup I always had the “pleasure” to keep regulators and auditors happy. Think of documenting who has access to what, quarterly access reviews, yearly audits and so on…
Like many others we couldn’t justify the Enterprise-plan for every SaaS tool to simply get access to SSO and SCIM/SAML APIs. For Notion alone the cost would have nearly doubled to $14 per user per month. That’s insane! Mostly unknown to people, SSO Tax also limits access to APIs that are used for managing user access (SCIM/SAML).
This has proven to be an incredibly annoying roadblock that prevented me from doing anything useful with our user data: - You want to download the current list of users and their permissions? Forget about it! - You want to centrally assign user roles and permissions? Good luck with that! - You want to delete user accounts immediately? Yeah right, like that's ever gonna happen!
It literally cost me hours to update our access matrix at the end of every quarter for our access reviews and manually assigning user accounts and permissions.
I figured, there must be a better way than praying to the SaaS gods to miraculously make the SSO Tax disappear (and open up SCIM/SAML along the way). That’s why I sat down a few weeks ago and started building OpenOwl (https://github.com/AccessOwl/open_owl). It allows me to just plug in my user credentials and automatically download user lists, including permissions from SaaS tools.
Granted, OpenOwl is still a work in progress, and it's not perfect. At the moment it's limited to non-SSO login flows and covers only 7 SaaS vendors. My favorite part is that you can configure integrations as “recipes”. The goal was for anybody to be able to add new integrations (IT managers and developers alike). Therefore you ideally don’t even have to write any new code, just tell OpenOwl how the new SaaS vendor works.
What do you think? Have you dealt with manually maintaining a list of users and their permissions? Could this approach get us closer to overcoming parts of the SSO Tax?
For those wondering what the "SSO Tax" is, it refers to the excessive pricing SaaS providers charge for access to the SSO feature of their product.
A documented rant has made the rounds at https://sso.tax , which lists all vendors and their pricing of SSO.
I always thought this was insane, but now I wonder if "SSO Pricing"/tax is just the "real price" and the "Base pricing" is really the new free trial? Of course the SSO/real pricing is too high, and everyone negotiates it down, but the point is I suspect the "base pricing" is just a trial teaser that's probably not sustainable for many vendors in terms of margins. I'm just guessing here, maybe someone with some inside insight from one of these vendors can advise.
I'd rather consider it "SME pricing" vs "Enterprise pricing". Typically only companies above a certain size use SSO systems, and even larger ones require it for everything. Coincidentally bigger companies are also willing to pay more, so putting a high price on SSO enables SaaS to profit from those deep pockets without pricing themselves out of the market for smaller companies.
Schools, colleges, and universities typically have SSO but no budget or purchase authority.
Just slap an education discount on it and call it a day. There are plenty of reasons to do that anyway, you want students to get trained on your software and use it in their formative years as much as possible.
Many go even further and just give the product away for free for educational institutions and individual students (Github, Jetbrains and Tableau come to mind as examples)
https://github.com/doncicuto/glim :
> Glim is a simple identity access management system that speaks some LDAP and has a REST API to manage users and groups
"Proxy LDAP to limit scope of access #60" https://github.com/doncicuto/glim/issues/60
Twitter Is Blocking Likes and Retweets that Mention Substack
In December they tried to ban mentioning any non-Twitter usernames:
https://techcrunch.com/2022/12/18/twitter-wont-let-you-post-...
Isn't that completely contradictory to their supporting free speech mission?
It sure looks like someone has hijacked Elon and Jack and run off with those operations.
Maybe you have to add the year to the search query to find search results from before when Donny got kicked out fairly for serial disrespect.
Become a more effective censorship apparatus.
What "supporting free speech mission" are you referring to?
https://investor.twitterinc.com/contact/faq/default.aspx :
> What is Twitter's mission statement?
> The mission we serve as Twitter, Inc. is to give everyone the power to create and share ideas and information instantly without barriers. Our business and revenue will always follow that mission in ways that improve – and do not detract from – a free and global conversation
"Defending and respecting the rights of people using our service" https://help.twitter.com/en/rules-and-policies/defending-and...
Ha nice. That doesn't actually say "free speech" but point taken. Of course mission statements are always meaningless, but still, point taken.
PEP 684 was accepted – Per-interpreter GIL in Python 3.12
This is great and was a huge effort by Eric Snow and collaborators to make it happen.
To be clear, this is not "PEP 703 – Making the Global Interpreter Lock Optional in CPython", by Sam Gross. PEP 703 is more ambitious and adds quite a bit of complexity to the CPython implementation. PEP 684 mostly cleans things up and encapsulates global state better. The most disruptive change is to allow immortal objects. I hope PEP 703 can go in too, even with the downsides (complexity, slightly slower single-threaded performance).
"PEP 703 – Making the Global Interpreter Lock Optional in CPython" (2023) https://github.com/python/peps/blob/main/pep-0703.rst https://peps.python.org/pep-0703/
colesbury/nogil https://github.com/colesbury/nogil :
docker run -it nogil/python
docker run -it nogil/python-cuda
Detection of Common Cold from Speech Signals Using Deep Neural Network
"Detection of COVID-19 from voice, cough and breathing patterns: Dataset and preliminary results" (2021) https://scholar.google.com/scholar?hl=en&as_sdt=5,47&sciodt=...
Pandas 2.0
I'm curious if there will be any appreciable performance gains here that are worthwhile. FWIW, last I checked[0], Polars still smokes Pandas in basically every way.
I’ve moved entirely to Polars (which is essentially Pandas written in Rust with new design decisions) with DuckDB as my SQL query engine. Since both are backed by Arrow, there is zero copy and performance on large datasets is super fast (due to vectorization, not just parallelization)
I keep Pandas around for quick plots and legacy code. I will always be grateful for Pandas because there truly was no good dataframe library during its time. It has enabled an entire generation of data scientists to do what they do and built a foundation — a foundation which Polars and DuckDB are now building on and have surpassed.
How well does Polars play with many of the other standard data tools in Python (scikit learn, etc.)? Do the performance gains carry over?
It works as well as Pandas (realize that scikit actually doesn’t support Pandas — you have to cast a dataframe into a Numpy array first)
I generally work with with Polars and DuckDB until the final step, when I cast it into a data structure I need (Pandas dataframe, Parquet etc)
All the expensive intermediate operations are taken care of in Polars and DuckDB.
Also a Polars dataframe — although it has different semantics — behaves like a Pandas dataframe for the most part. I haven’t had much trouble moving between it and Pandas.
You do not have to cast pandas DataFrames when using scikit-learn, for many years already. Additional in recent version there has been increasing support for also returning DataFrames, at least with transformers and checking column names/order.
Yes that support is still not complete. When you pass a Pandas dataframe into Scikit you are implicitly doing df.values which loses all the dataframe metadata.
There is a library called sklearn-pandas which doesn’t seem to be mainstream and dev has stopped since 2022.
From pandas-dataclasses #166 "ENH: pyarrow and optionally pydantic" https://github.com/astropenguin/pandas-dataclasses/issues/16... :
> What should be the API for working with pandas, pyarrow, and dataclasses and/or pydantic?
> Pandas 2.0 supports pyarrow for so many things now, and pydantic does data validation with a drop-in dataclasses.dataclass replacement at pydantic.dataclasses.dataclass.
Model output may or may not converge given the enumeration ordering of Categorical CSVW columns, for example; so consistent round-trip (Linked Data) schema tool support would be essential.
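The ordering concern can be illustrated without pandas; a stdlib sketch of integer-encoding a categorical column under two different declared category orders (all names illustrative):

```python
# Integer-encode categorical values; the codes depend entirely on the
# declared category order, so a round-trip schema must pin that order.
def encode(values, categories):
    index = {c: i for i, c in enumerate(categories)}
    return [index[v] for v in values]

values = ["red", "green", "red", "blue"]
print(encode(values, ["blue", "green", "red"]))  # [2, 1, 2, 0]
print(encode(values, ["red", "green", "blue"]))  # [0, 1, 0, 2]
```

Same data, different codes; a model trained on one encoding cannot be fed the other.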
CuML is scikit-learn API compatible and can use Dask for distributed and/or multi-GPU workloads. CuML is built on CuDF and CuPY; CuPy is a replacement for NumPy arrays on GPUs with 100x relative performance.
CuPy: https://github.com/cupy/cupy :
> CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy acts as a drop-in replacement to run existing NumPy/SciPy code on NVIDIA CUDA or AMD ROCm platforms.
> CuPy is an open-source array library for GPU-accelerated computing with Python. CuPy utilizes CUDA Toolkit libraries including cuBLAS, cuRAND, cuSOLVER, cuSPARSE, cuFFT, cuDNN and NCCL to make full use of the GPU architecture.
> The figure shows CuPy speedup over NumPy. Most operations perform well on a GPU using CuPy out of the box. CuPy speeds up some operations more than 100X. Read the original benchmark article Single-GPU CuPy Speedups on the RAPIDS AI Medium blog
CuDF: https://github.com/rapidsai/cudf
CuML: https://github.com/rapidsai/cuml :
> cuML is a suite of libraries that implement machine learning algorithms and mathematical primitives functions that share compatible APIs with other RAPIDS projects.
> cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn.
> For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the cuML Benchmarks Notebook.
FWICS there's now a ROCm version of CuPy, so it says CUDA (NVIDIA only) but also compiles for AMD. IDK whether there are plans to support Intel OneAPI, too.
What of the non-Arrow parts of other pandas-compatible and not pandas-compatible DataFrame libraries can be ported back to Pandas (and R)?
RFdiffusion: Diffusion model generates protein backbones
So I guess this means easier drug discovery? Honestly, those wiggly diagrams are meaningless to me; I have no bio background.
Easier drug discovery is what they tell the public and grant agencies. In a roundabout way it’s true. Maybe. Many other hurdles still exist. But what this and other similar tools really do is significantly advance basic science in creating our own protein designs.
Before AlphaFold changed this field, creating your own protein design was considered an insane task (not impossible; Baker's lab and others have done it a couple of times). But these tools (now we have multiple) allow you to create new proteins from scratch that can do exactly what you want (caveats galore). New enzymes that can catalyze reactions never found in nature, for example.
Before this all we could do was take proteins that already exist in nature and modify them. So you can imagine how new this world is.
Large language models can also generate novel and working protein structures that adhere to a specified purpose: https://www.nature.com/articles/s41587-022-01618-2
Can optical tweezers construct such proteins; or is there a more efficient way?
Optical tweezers: https://en.wikipedia.org/wiki/Optical_tweezers
"'Impossible' photonic breakthrough: scientists manipulate light at subwavelength scale" https://thedebrief.org/impossible-photonic-breakthrough-scie... :
> have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
> According to that research, the key to confining light below the previous impermeable Abbe diffraction limit was accomplished by “storing a part of the electromagnetic energy in the kinetic energy of electric charges.” This clever adaptation, the researchers wrote, “opened the door to a number of groundbreaking real-world applications, which has contributed to the great success of the field of nanophotonics.”
> “Looking to the future, in principle, it could lead to the manipulation of micro and nanometre-sized objects, including biological particles,” De Liberato says, “or perhaps the sizeable enhancement of the sensitivity resolution of microscopic sensors.”
"Digging into DNA Repair with Optical Tweezer Technology" https://www.genengnews.com/topics/digging-into-dna-repair-wi...
It's much easier than that! Living cells already have ribosomes that construct proteins and all the other molecular machinery needed to go from DNA sequence to assembled protein. You can order a DNA sequence online and put it into E. coli or yeast cells, and those cells will make that protein for you.
TIL about mail-order CRISPR kits. "Mail-Order CRISPR Kits Allow Absolutely Anyone to Hack DNA" (2017) https://www.scientificamerican.com/article/mail-order-crispr...
Protein production: https://en.wikipedia.org/wiki/Protein_production
Tissue Nanotransfection reprograms e.g. fibroblasts into neurons and endothelial cells (for ischemia) using electric charge. Are there different proteins then expressed? Which are the really useful targets?
> The delivered cargo then transforms the affected cells into a desired cell type without first transforming them to stem cells. TNT is a novel technique and has been used on mice models to successfully transfect fibroblasts into neuron-like cells along with rescue of ischemia in mice models with induced vasculature and perfusion
> [...] This chip is then connected to an electrical source capable of delivering an electrical field to drive the factors from the reservoir into the nanochannels, and onto the contacted tissue
https://en.wikipedia.org/wiki/Tissue_nanotransfection#Techni...
Are there lab safety standards for handling yeast or worse? https://en.wikipedia.org/wiki/Gene_drive
"Bacterial ‘Nanosyringe’ Could Deliver Gene Therapy to Human Cells" (2023) https://www.scientificamerican.com/article/bacterial-nanosyr... :
> In a paper published today in Nature, researchers report refashioning Photorhabdus’s syringe—called a contractile injection system—so that it can attach to human cells and inject large proteins into them. The work could provide a way to deliver various therapeutic proteins into any type of cell, including proteins that can “edit” the cell’s DNA. “It’s a very interesting approach,” says Mark Kay, a gene therapy researcher at Stanford University who was not involved in the study. “Where I think it could be very useful is when you want to express proteins that can do genome editing” to correct or knock out a gene that is mutated in a genetic disorder, he says.
> The nano injector could provide a critical tool for scientists interested in tweaking genes. “Delivery is probably the biggest unsolved problem for gene editing,” says study investigator Feng Zhang, a molecular biologist at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology and the Broad Institute of M.I.T. and Harvard. Zhang is known for his work developing the gene editing system CRISPR-Cas9. Existing technology can insert the editing machinery “into a few tissues, blood and liver and the eye, but we don’t have a good way to get to anywhere else,” such as the brain, heart, lung or kidney, Zhang says. The syringe technology also holds promise for treating cancer because it can be engineered to attach to receptors on certain cancer cells.
From "New neural network architecture inspired by neural system of a worm" (2023) https://news.ycombinator.com/item?id=34715188 :
> "I’m skeptical that biological systems will ever serve as a basis for ML nets in practice"
>> First of all, ML engineers need to stop being so brainphiliacs, caring only about the 'neural networks' of the brain or brain-like systems. Lacrymaria olor has more intelligence, in terms of adapting to exploring/exploiting a given environment, than all our artificial neural networks combined and it has no neurons because it is merely a single-cell organism [1].
Which proteins code for organisms that compute?
Llama.cpp 30B runs with only 6GB of RAM now
Author here. For additional context, please read https://github.com/ggerganov/llama.cpp/discussions/638#discu... The loading time performance has been a huge win for usability, and folks have been having the most wonderful reactions after using this change. But we don't have a compelling enough theory yet to explain the RAM usage miracle. So please don't get too excited just yet! Yes, things are getting more awesome, but like all things in science, a small amount of healthy skepticism is warranted.
Didn't expect to see two titans today: ggerganov AND jart. Can y'all slow down? You make us mortals look bad :')
Seeing such clever use of mmap makes me dread to imagine how much Python spaghetti probably tanks OpenAI's and other "big ML" shops' infra when they should've trusted in zero copy solutions.
Perhaps SWE is dead after all, but LLMs didn't kill it...
You can mmap from Python.
The CPython mmap module docs: https://docs.python.org/3/library/mmap.html
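Per the mmap docs above, a minimal sketch of zero-copy reads from a memory-mapped file (file name and offsets are illustrative, not llama.cpp's actual layout):

```python
import mmap
import os
import tempfile

# Write a small file, then map it read-only. Slicing a memoryview over the
# mapping reads pages on demand; no copy into the Python heap happens until
# bytes() is called.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 16 + b"payload")

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm)        # zero-copy view over the mapping
    chunk = bytes(view[16:23])   # the copy happens only here, on demand
    view.release()               # must release before closing the map
    mm.close()
```

This is the same idea llama.cpp exploits: the OS page cache backs the mapping, so "loading" is mostly deferred until pages are touched.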
zero_buffer (CFFI, 2013) https://github.com/alex/zero_buffer/blob/master/zero_buffer....
"Buffers on the edge: Python and Rust" (2022) https://alexgaynor.net/2022/oct/23/buffers-on-the-edge/ :
> If you have a Python object and want to obtain its buffer, you can do so with memoryview in Python or PyObject_GetBuffer in C. If you’re defining a class and want to expose a buffer, you can do so in Python by… actually you can’t, only classes implemented in C can implement the buffer protocol. To implement the buffer protocol in C, you provide the bf_getbuffer and bf_releasebuffer functions which are called to obtain a buffer from an object and when that buffer is being released, respectively.
iocursor (CPython C API, ~Rust std::io::Cursor) https://github.com/althonos/iocursor
Arrow Python (C++) > On disk and MemoryMappedFile s: https://arrow.apache.org/docs/python/memory.html#on-disk-and...
"Apache Arrow: Read DataFrame With Zero Memory" (2020) https://towardsdatascience.com/apache-arrow-read-dataframe-w...
pyarrow.Tensor: https://arrow.apache.org/docs/python/generated/pyarrow.Tenso...
ONNX is built on Protocol Buffers (protocolbuffers/protobuf), while Arrow is built on google/flatbuffers.
FlatBuffers https://en.wikipedia.org/wiki/FlatBuffers :
> It supports “zero-copy” deserialization, so that accessing the serialized data does not require first copying it into a separate part of memory. This makes accessing data in these formats much faster than data in formats requiring more extensive processing, such as JSON, CSV, and in many cases Protocol Buffers. Compared to other serialization formats however, the handling of FlatBuffers requires usually more code, and some operations are not possible (like some mutation operations).
We updated our RSA SSH host key
The fact that this key was apparently not stored in an HSM, and that GH employees had access to this private key (allowing them to accidentally push it) means that effectively all communication with GH since the founding of the company has to be considered compromised. This basically means that, depending on your level of paranoia, you will have to review all code that has ever interacted with Github repositories, and any code pushed or pulled from private repositories can no longer be considered private.
GitHub's customers trust GH/MS with their code, which, for most businesses, is a high value asset. It wouldn't surprise me if this all results in a massive lawsuit. Not just against GH as a company, but also against those involved (like the CISO). Also, how on earth was it never discovered during an audit that the SSH private key was plain text and accessible? How was GH able to become ISO certified last year [0] when they didn't even place their keys in an HSM?
Obviously, as a paying customer I am quite angry with GH right now. So I might be overreacting when I write this, but IMO the responsible persons (not talking about the poor dev that pushed the key, but the managers, CISO, auditors, etc.) should be fined, and/or lose their license.
[0] https://github.blog/2022-05-16-github-achieves-iso-iec-27001...
SSH uses ephemeral keys. It's not enough to have the private key and listen to the bytes on the wire, you have to actively MITM the connection. A github employee who has access to the private key and enough network admin privileges to MITM your connection already has access to the disk platters your data is saved on.
Regarding the secrecy of the data you host at github, you should operate under the assumption that a sizeable number of github employees will have access to it. You should assume that it's all sitting unencrypted on several different disk platters replicated at several different geographically separated locations. Because it is.
One of the things that you give up when you host your private data on the cloud is controlling who and under what circumstances people can view or modify your private data. If the content of the data is enough to sink you/your company without remedy you should not store it on the cloud.
I would also add that an attacker's ability to pretend to be the client to the server is limited, if SSH key based client authentication is used. This means that even if the host key is leaked, an attacker will not be able to push in the name of the attacked client. The attacker will be able to pretend to be the server to the client, and thus be able to get the pushed code from the client (even if the client just added one patch, the attacker can pretend to be an empty repo on the server side and receive the entire repo).
If ssh token based auth is used, it's different of course, because then the server gets access to the token. Ideally Github would invalidate all their auth tokens as well.
The fun fact is that a token compromise (or any other attack) can still happen at any point in the future with devices that still have the outdated SSH host key. That's a bit unfortunate, as no revocation mechanism exists for SSH keys... ideally clients would blacklist the SSH key, given the huge importance of GitHub.
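On the client side, the practical "revocation" is removing the old host key and learning the new one; a sketch with OpenSSH tooling, using a scratch known_hosts file here (the key line is a stub, not GitHub's real key; on a real client the file is ~/.ssh/known_hosts):

```shell
# Remove every entry for a host from a known_hosts file, then (optionally)
# re-learn the current keys over the network.
KNOWN_HOSTS=$(mktemp)
echo "github.com ssh-rsa AAAAB3NzaC1yc2EAAAAoldkeystub" > "$KNOWN_HOSTS"

# ssh-keygen -R deletes all keys for the named host (a backup is kept
# alongside as "$KNOWN_HOSTS.old").
ssh-keygen -R github.com -f "$KNOWN_HOSTS"

# To fetch the replacement keys (requires network access):
# ssh-keyscan -t rsa,ecdsa,ed25519 github.com >> "$KNOWN_HOSTS"
```

Without `-f`, `ssh-keygen -R github.com` operates on ~/.ssh/known_hosts directly, which is what end users actually ran after the rotation.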
WKD also lacks key revocation and Certificate Transparency (CT).
E.g. Keybase could do X.509-style Certificate Transparency.
:
$ keybase pgp -h
NAME:
keybase pgp - Manage keybase PGP keys
USAGE:
keybase pgp <command> [arguments...]
COMMANDS:
gen Generate a new PGP key and write to local secret keychain
pull Download the latest PGP keys for people you follow.
update Update your public PGP keys on keybase with those exported from the local GPG keyring
select Select a key from GnuPG as your own and register the public half with Keybase
sign PGP sign a document.
encrypt PGP encrypt messages or files for keybase users
decrypt PGP decrypt messages or files for keybase users
verify PGP verify message or file signatures for keybase users
export Export a PGP key from keybase
import Import a PGP key into keybase
drop Drop Keybase's use of a PGP key
list List the active PGP keys in your account.
purge Purge all PGP keys from Keybase keyring
push-private Export PGP keys from GnuPG keychain, and write them to KBFS.
pull-private Export PGP from KBFS and write them to the GnuPG keychain
help, h Shows a list of commands or help for one command
$ keybase pgp drop -h
NAME:
keybase pgp drop - Drop Keybase's use of a PGP key
USAGE:
keybase pgp drop <key-id>
DESCRIPTION:
"keybase pgp drop" signs a statement saying the given PGP
key should no longer be associated with this account. It will **not** sign a PGP-style
revocation cert for this key; you'll have to do that on your own.
/?q=PGP-style revocation cert
https://www.google.com/search?q=PGP-style+revocation+cert :
- "Revoked a PGP key, is there any way to get a revocation certificate now?" https://github.com/keybase/keybase-issues/issues/2963
- "Overview of Certification Systems: X.509, CA, PGP and SKIP"
...
- k8s docker vault secrets [owasp, inurl:awesome] https://www.google.com/search?q=k8s+docker+vault+secrets+owa... https://github.com/gites/awesome-vault-tools
- Why secrets shouldn't be passed in $ENVIRONMENT variables; though e.g. the "12 Factor App" pattern advises parametrizing applications mostly with environment variables, which show in /proc/<pid>/environ but not /proc/<pid>/cmdline
W3C DID supports GPG proofs and revocation IIRC:
"9.6 Key and Signature Expiration" https://www.w3.org/TR/did-core/#key-and-signature-expiration
"9.8 Verification Method Revocation" https://www.w3.org/TR/did-core/#verification-method-revocati...
Blockcerts is built upon W3C DID and W3C Verifiable Credentials, W3C Linked Data Signatures, and Merkle trees (and JSON-LD). From the Blockcerts FAQ https://www.blockcerts.org/guide/faq.html :
> How are certificates revoked?
> Even though certificates can be issued to a cohort of people, the issuer can still revoke from a single recipient. The Blockcerts standard supports a range of revocation techniques. Currently, the primary factor influencing the choice of revocation technique is the particular schema used.
> The Open Badges specification allows a HTTP URI revocation list. Each id field in the revokedAssertions array should match the assertion.id field in the certificate to revoke.
Re: CT and W3C VC Verifiable Credentials (and DNS record types for cert/pubkey hashes that must also be revoked; DoH/DoT + DNSSEC; EDNS): https://news.ycombinator.com/item?id=32753994 https://westurner.github.io/hnlog/#comment-32753994
"Verifiable Credential Data Integrity 1.0: Securing the Integrity of Verifiable Credential Data" (Working Draft March 2023) > Security Considerations https://www.w3.org/TR/vc-data-integrity/#security-considerat...
If a system does not have key revocation it cannot be sufficiently secured.
Ask HN: Where can I find a primer on how computers boot?
As a developer, I recently encountered challenges with GRUB and discovered I lacked knowledge about my computer's boot process. I realized terms like EFI partition, MBR, GRUB, and Bootloader were unfamiliar to me and many of my colleagues. I'm seeking introductory and easy-to-understand resources to learn about these concepts. Any recommendations would be appreciated!
Booting process of Linux: https://en.wikipedia.org/wiki/Booting_process_of_Linux
Booting process of Windows NT since Vista: https://en.wikipedia.org/wiki/Booting_process_of_Windows_NT_...
UEFI > Secure Booting, Boot Stages: https://en.wikipedia.org/wiki/UEFI#Boot_stages
The EFI system partition is conventionally mounted at /boot/efi on a Linux system. A signed "shim" loader launches GRUB, which loads the initrd into RAM (a "RAM drive") and launches the kernel with the initrd mounted as the initial root filesystem /. That initial root is pivot_root'd away from after the copy of /sbin/init (systemd) in the initrd mounts the actual root fs; systemd then launches all the services according to their unit files, in an order given by a topological sort of the dependency edges: https://en.wikipedia.org/wiki/EFI_system_partition
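That unit-ordering step can be sketched with Python's stdlib graphlib; the unit names and dependency edges here are illustrative, not systemd's real graph:

```python
from graphlib import TopologicalSorter

# Map each unit to the units it depends on (its predecessors).
deps = {
    "network.target": {"systemd-networkd.service"},
    "sshd.service": {"network.target"},
    "multi-user.target": {"sshd.service", "network.target"},
}

# static_order() yields dependencies before their dependents, which is the
# start order an init system needs.
order = list(TopologicalSorter(deps).static_order())
```

systemd additionally parallelizes units whose dependencies are satisfied, which `TopologicalSorter`'s incremental `get_ready()` API also models.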
Runlevels: https://en.wikipedia.org/wiki/Runlevel
Runlevel 5 is runlevel 3 (multi-user with networking) plus a GUI. On a GNOME system, GDM is the GUI process that is launched; GDM launches the user's GNOME session upon successful login. `systemctl restart gdm` restarts the GDM (GNOME Display Manager) "greeter" login screen, which basically runs ~startx after `bash --login`. Systemd maps the numbered runlevels to groups of unit files to launch:
telinit 6 # reboot
telinit 3 # kill -15 GDM and all logged in *GUI* sessions
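On a systemd distro the numbered runlevels are aliases for named targets; a sketch of the correspondence (switch live with `systemctl isolate <target>`, which is roughly `telinit N`):

```shell
# Map a legacy runlevel number to its systemd target alias.
runlevel_to_target() {
  case "$1" in
    0) echo poweroff.target ;;
    1) echo rescue.target ;;
    2|3|4) echo multi-user.target ;;   # text console + networking
    5) echo graphical.target ;;        # multi-user + display manager
    6) echo reboot.target ;;
    *) echo "unknown" >&2; return 1 ;;
  esac
}

runlevel_to_target 3   # multi-user.target
runlevel_to_target 5   # graphical.target
```

`systemctl get-default` / `systemctl set-default multi-user.target` read and set the boot-time default target.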
You can pass a runlevel number as a kernel parameter by editing the GRUB menu item (press 'e', if there's no GRUB password set); just the number '3' will make the machine skip starting the login greeter (which may be what's necessary to troubleshoot GPU issues). The word 'rescue' as a kernel parameter launches single-user mode, and may be what is necessary to rescue a system failing to boot. You may be able to `telinit 5` from the rescue runlevel, or it may be best to reboot.

> The EFI system partition is conventionally /boot/efi on a Linux system;
Still often mounted at /boot/efi, but physically residing at the root of the hidden ESP volume as EFI/boot and EFI/ubuntu, for distros like Ubuntu.
So when mounted you have something like /boot/efi/EFI/boot/bootx64.efi and /boot/efi/EFI/ubuntu/grub etc., if visible and accessible.
Internet Control Message Protocol (ICMP) Remote Code Execution Vulnerability
Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico
TV/VGA + Serial Console on an RP2040 would be cool.
- rs232
- UART
- USB-TTL w/ configurable (3.3v/5v) voltage levels
- (optional) VGA output
- (optional) WiFi/Bluetooth (RP2040W Pi Pico W)
The OpenWRT hardware/port.serial wiki page explains how to open a serial console with screen and a tty /dev; and about 3.3V/5V bricking a router and/or the cable: https://openwrt.org/docs/techref/hardware/port.serial#use_yo...
PiKVM may have console support but with a Pi 3/4+, not a $4/$6 RP2040 /W with or without low-level WiFi+Bluetooth
"Raspberry Pi Pico Serial Communication Example (MicroPython)" https://electrocredible.com/raspberry-pi-pico-serial-uart-mi...
> The RP2040 microcontroller in Raspberry Pi Pico has two UART peripherals, UART0 and UART1. The UART in RP2040 has the following features: [pinout]
The reason for this port is that I'm using PicoVGA as a VGA terminal for an 8bit retro computer. I'm using USB host mode on the Pico for keyboard and mouse input. I found a Pico W for sale for a reasonable price + shipping so now I can add wifi support.
Can it run a ~GNU screen text console attached to a serial port on pins on the other side of the Pico, and maybe a BLE keyboard/controller? (And how, while I'm at it; Thonny has an AST parse tree menu item)
I don't see why not. Pico has a few dedicated UART pins. Bluetooth support is pretty new, though.
2023-02-10; bluekitchen/btstack: https://github.com/raspberrypi/pico-sdk/commit/c8ccefb9726b8...
> - RS232
RS232: https://en.wikipedia.org/wiki/RS-232
Serial Port: https://en.wikipedia.org/wiki/Serial_port
> - UART
UART: Universal Asynchronous Receiver Transmitter: https://en.wikipedia.org/wiki/Universal_asynchronous_receive...
> - USB-TTL w/ configurable (3.3v/5v) voltage levels
USB-to-serial adapter: https://en.wikipedia.org/wiki/USB-to-serial_adapter
> Most commonly the USB data signals are converted to either RS-232, RS-485, RS-422, or TTL-level UART serial data.
- SunFounder Thales Pi Pico Kit > Components > Slide Switch, Resistor: https://docs.sunfounder.com/projects/thales-kit/en/latest/co... ; the Kepler Kit has the 2040W: https://docs.sunfounder.com/projects/kepler-kit/en/latest/
/? how to limit voltage to 3.3v: https://www.google.com/search?q=how+to+limit+voltage+to+3.3v
- Zener diode > Voltage shifter: https://en.wikipedia.org/wiki/Zener_diode#Voltage_shifter
- Voltage regulator > DC voltage stabilizer: https://en.wikipedia.org/wiki/Voltage_regulator#DC_voltage_s...
- TIL electronics.stackexchange has CircuitLab built in: https://electronics.stackexchange.com/questions/241537/3-3v-... . Autodesk Tinkercad is a neat, free, web-based circuit simulator, too; though it supports Arduino and the BBC micro:bit but not (yet?) the Pi Pico like Wokwi does.
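For the 5V-to-3.3V case above, the usual back-of-envelope is a resistor divider; a sketch of the arithmetic (values are illustrative, and as the links note, a Zener or regulator is more robust for anything beyond a slow signal):

```python
# Voltage divider: vin across r1 + r2 in series; the output is taken
# across r2. With r1 = 1k and r2 = 2k, a 5 V input yields ~3.33 V,
# close enough to 3.3 V logic for low-speed UART lines.
def divider_out(vin: float, r1: float, r2: float) -> float:
    """Voltage across r2 when r1 and r2 divide vin to ground."""
    return vin * r2 / (r1 + r2)

vout = divider_out(5.0, 1_000, 2_000)
```

The divider only works for inputs (5 V device TX into a 3.3 V pin); driving the other direction usually needs a level-shifter or just works because 3.3 V clears the 5 V device's logic-high threshold.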
There is at least one RP2040 JS simulator:
Wokwi appears to support {MicroPython, CircuitPython,} on {Pi Pico} and also Rust on ESP32:
Wokwi > New Pi Pico project: https://wokwi.com/projects/new/pi-pico
Wokwi > New Pi Pico + MicroPython project: https://wokwi.com/projects/new/micropython-pi-pico
wokwi/rp2040js : https://github.com/wokwi/rp2040js
https://www.hackster.io/Hack-star-Arduino/raspberry-pi-pico-...
> - (optional) WiFi/Bluetooth (RP2040W Pi Pico W)
/? "pi pico" bluetooth 5.2 [BLE] https://www.google.com/search?q=%22pi+pico%22+bluetooth+5.2 :
- From https://www.cnx-software.com/2023/02/11/raspberry-pi-pico-w-... :
>> The Raspberry Pi Pico W board was launched with a WiFi 4 and Bluetooth 5.2 module based on the Infineon CYW43439 wireless chip in June 2022
bluekitchen/btstack is the basis for the Bluetooth support in Pi Pico SDK 1.5.0+: https://github.com/bluekitchen/btstack
> MicroPython
awesome-micropython: https://github.com/mcauser/awesome-micropython
https://github.com/pfalcon/awesome-micropython :
- There is a web-based MicroPython REPL, but it only works over HTTP (not HTTPS) due to WebSockets WSS limitations, so it's best to clone that file locally: http://micropython.org/webrepl/ https://github.com/micropython/webrepl/
awesome-circuitpython: https://github.com/adafruit/awesome-circuitpython
- microsoft/vscode-python-devicesimulator lacks maintainers and Pi Pico Support: https://github.com/microsoft/vscode-python-devicesimulator/w...
https://vscode.dev/ is the official hosted copy of VSCode as WASM in a browser tab. YMMV with which extensions work after compilation to WASM and without WASI (node/fs) in a browser tab.
From "MicroPython officially becomes part of the Arduino ecosystem" (2022) https://news.ycombinator.com/item?id=33699666 :
> [in addition to the Arduino IDE support for Pi Pico now] For VSCode, there are a number of extensions for CircuitPython and MicroPython:
> joedevivo.vscode-circuitpython: https://marketplace.visualstudio.com/items?itemName=joedeviv...
> Pymakr https://github.com/pycom/pymakr-vsc/blob/next/GET_STARTED.md
> Pico-Go: https://github.com/cpwood/Pico-Go
https://scratch.mit.edu/ has BBC micro:bit (and LEGO Boost) extensions when you click the plus at the lower left; but not (yet?) Pi Pico support: https://github.com/LLK https://github.com/LLK/scratch-gui/commit/421d673e714a367ff2... https://github.com/microbit-more/mbit-more-v2
FDIC – SVB FAQ
> The FDIC will pay uninsured depositors an advance dividend within the next week.
Depending on the amount, this can possibly make things a lot better for the startups banking there.
Where does the money come from? I mean that non-sarcastically, confess I've not figured out the way the banking system in the large works.
edit : thanks all for coherent clear responses.
A lot of SVB's assets can be sold immediately in robust markets (e.g. treasuries)
If that was true the bank wouldn't be insolvent.
And yet: https://dfpi.ca.gov/wp-content/uploads/sites/337/2023/03/DFP...
If you have $160B in deposits but only $150B in liquid assets, you're still insolvent. Doesn't mean the assets aren't liquid.
You are only insolvent if you have $160B in withdrawals. If the deposits sat until the bonds matured, the $150B in currently liquid assets would be worth more than $150B.
How would the strict separation between savings deposits and investment banking from Glass-Steagull (1933) (which was repealed by GLBA in 1999) banking regulations have prevented this?
From "1999 Repeal of Glass-Steagall was the worst deregulation enacted in US history" (2022) https://news.ycombinator.com/item?id=30206570 :
> Yeah what was the deal with that dotcom correction in the early 2000s? Did banks invest differently after GLBA said that they can gamble against peoples' savings deposits (because they created a 'sociallist' $100b credit line, called it FDIC, and things like that don't happen anymore)
> Decline of the Glass-Steagall Act: https://en.wikipedia.org/wiki/Decline_of_the_Glass%E2%80%93S...
> Dot-com bubble: https://en.wikipedia.org/wiki/Dot-com_bubble
An Update on USDC and Silicon Valley Bank
23% of the reserve, or $9.7B, is held in cash.
I think on Thursday, SVB saw cash outflow requests of over $40B in one day. The USDC peg is so far below 1 right now that there will definitely be a lot of par arbitrage on Monday trying to withdraw.
Hopefully it doesn’t take longer than a day for them to settle treasury trades, and maybe some people have lost the keys to their crypto. They could also possibly have a line of credit with instant settlement assuming they’re truly solvent with marketable securities at market prices.
3 month (or less) Treasury Bills are damn close to cash. I don't believe they lose value in practice, not even in the rapidly rising interest rate environment of the last year.
The problem at Silvergate and Silicon Valley Bank is that they bought 10 year or 30 year bonds, not 3 month bills. It's a completely different composition, with completely different properties.
Why lock up the money for so long in such an unstable time as this? Better interest rates?
> Better interest rates?
You betcha.
Here's the rates for March 2021. 3M would have gotten you 0.05% APY, while 10Y gets you 1.45% APY.
https://home.treasury.gov/resource-center/data-chart-center/...
Yeah, it sounded like a perfect plan if no customers wanted to get their funds within the next 10 years. ¯\_(ツ)_/¯
No, that's not it. You can sell the bonds any time. But now they're worth considerably less, because interest rates are much higher now. They locked themselves in at rates that are now terrible for 10 years.
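The rate-lock loss can be sketched by discounting the bond's fixed cash flows at the new market rate (illustrative numbers, not SVB's actual book; the ~1.45% coupon matches the March 2021 10Y rate quoted above):

```python
# Price a fixed-coupon bond as the present value of its cash flows,
# discounted at the prevailing market rate.
def bond_price(face: float, coupon_rate: float, market_rate: float, years: int) -> float:
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

p_bought = bond_price(100, 0.0145, 0.0145, 10)  # bought at par (~100)
p_after = bond_price(100, 0.0145, 0.04, 10)     # rates rise to 4%: ~79
```

A ~2.5-point rise in rates marks a 10-year 1.45% bond down roughly 20%, which is the unrealized-loss hole that forced selling turns into a realized one.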
So weird to think that near-zero interest rates are the best you can do for the next 10 years. I'm pretty sure everyone was saying "lock in this low interest rate before they go up again!"
On Treasuries over what windowed series? https://fred.stlouisfed.org/tags/series?t=treasury%3Byield+c...
Is this the one?
"10-Year Treasury Constant Maturity Minus 3-Month Treasury Constant Maturity" https://fred.stlouisfed.org/series/T10Y3M
Urgent: Sign the petition now
Trump & Republicans cut banking regulations and increased the capital threshold from $50 to $250 billion for banks to comply with an FDIC stress test as mandated by Dodd-Frank (which was enacted after the mortgage crisis). SVB would not have gone bankrupt had it been required to pass this stress test. This change was widely supported by tech companies and banks, and here we are again, with a bank engaging in risky behavior holding out its hand for taxpayers to bail it out: NO.
Stress test (financial) > Bank stress test, Payment and settlement systems stress test: https://en.wikipedia.org/wiki/Stress_test_(financial)#Bank_s...
List of bank stress tests > Americas https://en.wikipedia.org/wiki/List_of_bank_stress_tests#Amer...
https://www.google.com/search?q=increased+capital+thresholds...
From "Transparency & Accountability - EGRRCPA (S. 2155) Rulemakings" https://www.fdic.gov/transparency/egrrcpa.html :
> The FDIC is responsible for a number of rulemakings under the Economic Growth, Regulatory Relief, and Consumer Protection Act (EGRRCPA). This page provides links to proposed and final rules and related documents.
"FDIC Releases Economic Scenarios for 2022 Stress Testing" (2022) https://www.fdic.gov/news/press-releases/2022/pr22019.html
How can the scenarios and policies be improved?
129-year-old vessel still tethered to lifeboat found on floor of Lake Huron
I'd love to dive there someday.
The NOAA page on the wreck (https://thunderbay.noaa.gov/shipwrecks/ironton.html) just has depth listed as "TBA", but the Ohio that it collided with is in >100m.
Assuming it's in a similar range, that's serious depth. Way into hypoxic trimix zone (i.e. you'll be breathing a gas that would kill you if you breathed it at the surface).
Looks fascinating, but almost certainly out of reach for all but the most extreme technical divers.
Crazy how 100m is such a short distance in most walks of life except for underwater.
The whole pressure thing is seriously unintuitive to a layman.
https://en.wikipedia.org/wiki/Decompression_sickness#Ascent_... :
> DCS [Decompression Sickness] is best known as a diving disorder that affects divers having breathed gas that is at a higher pressure than the surface pressure, owing to the pressure of the surrounding water. The risk of DCS increases when diving for extended periods or at greater depth, without ascending gradually and making the decompression stops needed to slowly reduce the excess pressure of inert gases dissolved in the body.
DCS > Prevention > Underwater diving: https://en.wikipedia.org/wiki/Decompression_sickness#Underwa... :
> Decompression time can be significantly shortened by breathing mixtures containing much less inert gas during the decompression phase of the dive (or pure oxygen at stops in 6 metres (20 ft) of water or less). The reason is that the inert gas outgases at a rate proportional to the difference between the partial pressure of inert gas in the diver's body and its partial pressure in the breathing gas; whereas the likelihood of bubble formation depends on the difference between the inert gas partial pressure in the diver's body and the ambient pressure. Reduction in decompression requirements can also be gained by breathing a nitrox mix during the dive, since less nitrogen will be taken into the body than during the same dive done on air. [85]
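The partial-pressure arithmetic behind "hypoxic trimix" can be sketched as follows (using the rough rule of 1 bar at the surface plus ~1 bar per 10 m of water; illustrative only, not dive planning):

```python
# Ambient pressure grows ~1 bar per 10 m of water depth; a gas component's
# partial pressure is its fraction times ambient pressure (Dalton's law).
def ambient_bar(depth_m: float) -> float:
    return 1.0 + depth_m / 10.0

def pp(gas_fraction: float, depth_m: float) -> float:
    return gas_fraction * ambient_bar(depth_m)

air_at_100m = pp(0.21, 100)        # ~2.3 bar O2: acutely toxic at depth
trimix10_at_100m = pp(0.10, 100)   # 10% O2 trimix: ~1.1 bar, workable
trimix10_surface = pp(0.10, 0)     # same mix at the surface: 0.10 bar, hypoxic
```

This is why a ~100 m wreck dive needs a mix that is dangerously oxygen-poor at the surface, plus staged decompression gases for the ascent.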
Google Groups has been left to die
Google Groups has survived 22 years. How many self-hosted Discourse forums can say the same? If you're working on a new, small side-project like a new formal evaluator, are you going to want to spend your limited administrative bandwidth onboarding new contributors, responding to feature suggestions, talking to users, etc., or are you going to want to spend it trying to wrangle self-hosted forum administration and figure out email delivery?

The article has a one-line tossed-off aside at the end saying "Certainly self-hosted FOSS communities can die, but these are functions of community activity itself rather than the service they’re hosted on", without considering that the absolute most crucial time for community resources to survive is through periods of lackluster or nonexistent community involvement. If you have a Google Group full of, say, formal methods experts, then it can lie fallow for 5 years and still be chock-full of absolutely vital historical resources, and even spring back into life as people start using it again for discussion. That sort of longevity just can't exist if someone stopped paying the hosting bills for their self-hosted TLA+ Discourse forum 3 years in.
AtariAge.com[1] - first created in 1998, or 25 years ago. Run by some bloke on the internet.
The problem for Google Groups is that no-one can perf-farm it any more, which is the sole reason for projects to survive at Google...
perf-farm? I think I might know what you mean, but could you unpack?
Not the person who used the term, but I kind of wish I had been. It's excellent. At FAANG-ish companies where "impact" is the key to promotion, there's an inevitable tendency for engineers to prefer working on things that are highly visible like starting new projects or adding new features, vs. things that are highly useful like stability, testing, or code quality. Most internally promoted E6s or above got that way by initiating multiple projects, then dumping them on others when they've served their perf-review-enhancing purpose. "Perf farming" is as good a term as I've seen for it.
> Most internally promoted E6s or above got that way by initiating multiple projects, then dumping them on others
Well, Jeff and Sanjay got to DE/Fellows this way, so this is a good role model. Unfortunately doesn’t scale.
Zero energy ready homes are coming
Contrary to what I am seeing in comments on HN, this is not just for single family detached housing. If you go to the ZERH website, under Program Requirements it says:
Versions 1 and 2, Single Family, Multifamily, and Manufactured Homes
https://www.energy.gov/eere/buildings/zero-energy-ready-home...
Additionally, one of the photos in the featured article has the following caption:
Lopez Community Land Trust built this 561-square-foot affordable home on Lopez Island, Washington, to the high-performance criteria of DOE's Zero Energy Ready Home program that delivers a $20-per-month average monthly energy bill.
I've said for years we need to go back to more passive solar design as our baseline standard because it is both more energy efficient and more comfortable for humans. It protects the environment while improving quality of life.
It's a myth that you can't see gains in both areas.
I'm also happy to see the caption because we need to do a better job of supporting high quality of life in small towns, not just big cities. Trends in recent decades mean small towns suck because they lack services and big cities suck if you aren't one of the wealthy.
That wasn't always true and programs like this can improve and strengthen both small town and urban environments while mitigating the burden suburbs currently represent.
Passive solar building design : https://en.wikipedia.org/wiki/Passive_solar_building_design#...
Low-energy house: https://en.wikipedia.org/wiki/Low-energy_house
List of low-energy building techniques: https://en.wikipedia.org/wiki/List_of_low-energy_building_te...
/? Passive solar home (tbm=isch image search) https://www.google.com/search?q=passive+solar+home&tbm=isch
/? Passive solar house: https://youtube.com/results?search_query=passive+solar+house
/? passive solar house: https://www.pinterest.com/search/pins/?q=passive%20solar%20h...
(Edit)
Maximum solar energy is on the equatorial side of the house.
Full-sun plants prefer maximum solar energy.
10 ft (3 m) underground, the temperature stays near the local mean annual air temperature (roughly 50–60°F / 10–16°C across much of the US) all year. Geothermal systems leverage this. (Passive) Walipini greenhouses are partially or fully underground or built into a hillside, but must also manage groundwater seepage and flooding; e.g. with a DC solar sump pump and/or drainage channels filled with rock.
Passive solar greenhouses (especially in China and now Canada) have a natural or mounded earthen-wall thermal mass on one side; at night, and when it's too warm, wool blankets are lowered over the upside-down-airfoil-like transparent side (with a ~4 HP motor).
TIL an aquarium heater can heat a tank of water working as a thermal mass in a geodesic growing dome; which can be partially-buried or half-walled with preformed hempcrete block.
Round structures are typically more resilient to wind:
Shear stress > Beam shear: https://en.wikipedia.org/wiki/Shear_stress
Deformation (physics) > Strain > Shear strain: https://en.wikipedia.org/wiki/Deformation_(physics)#Shear_st...
GH topic: finite-element-analysis https://github.com/topics/finite-element-analysis
GH topic: structural-analysis: https://github.com/topics/structural-engineering https://github.com/topics/structural-analysis
What open source software is there for passive home design and zero-energy home design?
Round house: https://www.pinterest.com/search/pins/?q=round%20house
The shearing force due to wind on structures with corners (and passive rooflines) causes racking and compromise of structural integrity; round homes apparently fare best in hurricanes.
Walipini passive solar green houses: https://en.wikipedia.org/wiki/Walipini
Earthship passive solar homes: https://en.wikipedia.org/wiki/Earthship
Underground living: https://en.wikipedia.org/wiki/Underground_living
Root cellar passive refrigeration: https://en.wikipedia.org/wiki/Root_cellar
Ground source heat pump (Geothermal heat pump) https://en.wikipedia.org/wiki/Ground_source_heat_pump
Solar-assisted heat pump: https://en.wikipedia.org/wiki/Solar-assisted_heat_pump
(Geothermal Power = Geothermal Electricity) != (Geothermal Heating)
Geothermal power: https://en.wikipedia.org/wiki/Geothermal_power
Geothermal Heating is so old, https://en.wikipedia.org/wiki/Geothermal_heating
...
AC-to-DC (rectifier; e.g. GaN, GaNPrime) and DC-to-AC (inverter) conversions are lossy: they waste some electricity as heat.
Residential and commercial AC electrical systems have a grounding conductor (the ground pin on standard AC plugs) and GFCI outlets for ground-fault protection.
WAV: Watts = Amps * Volts
# USB power specs (DC)
7.5w = 1.5amp * 5volts # USB
15w = 3a * 5v # USB-C
100w = 5a * 20v # USB-C PD
240w = 5a * 48v # USB-C PD 3.1
# 110v/120v AC: 15amp; Standard Residential AC in North America:
1500w = 15a * 100v # Microwave oven
1650w = 15a * 110v
1440w = 12a * 120v # AC EV charger
# 110/120 AC: 20amp
2400w = 20a * 120v # Level 1 EV charger
# 240v AC plug: Dryer, Oven, Stove, EV
4800w = 20a * 240v
7200w = 30a * 240v # Public charging station
9600w = 40a * 240v # Level 2 EV charger
14400w = 60a * 240v
120000w = 300a * 400v # Supercharger v2
120kW = 300a * 400v
1000000w = 1000a * 1000v # Megacharger
1000kW = 1000a * 1000v
1MW = 1000a * 1000v
USB > Power-related standards: https://en.wikipedia.org/wiki/USB#Power-related_standards
Charging station > Charging time > charger specs table: https://en.wikipedia.org/wiki/Charging_station#Charging_time
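The W = A × V relations in the list above can be sketched in Python. The helper names are my own, and the charge-time estimate is deliberately naive (real chargers taper near full charge and lose some power in conversion):

```python
def watts(amps: float, volts: float) -> float:
    """Electrical power: W = A * V."""
    return amps * volts

def hours_to_charge(battery_kwh: float, amps: float, volts: float) -> float:
    """Naive charge-time estimate; ignores charging-curve taper and conversion losses."""
    return battery_kwh * 1000 / watts(amps, volts)

# Spot-check a few rows from the table
assert watts(5, 48) == 240          # USB-C PD 3.1
assert watts(40, 240) == 9600       # Level 2 EV charger
assert watts(300, 400) == 120000    # Supercharger v2, i.e. 120 kW

print(round(hours_to_charge(75, 40, 240), 1))  # ~7.8 h for a hypothetical 75 kWh pack
```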
(Trifuel) generators do not have catalytic converters.
Wood stoves must be sufficiently efficient; they can be made so with a catalytic combustor or a secondary-combustion (re-burn) apparatus (and/or thermoelectrics to convert waste heat to electricity).
/? Catalytic combustor (wood stove) https://www.google.com/search?q=%22catalytic+combustor%22
Wood-burning stove > Safety and pollution considerations > US pollution control requirements: https://en.wikipedia.org/wiki/Wood-burning_stove#US_pollutio...
Gravitational potential energy storage is less lossy than CAES (Compressed Air Energy Storage), which is less lossy than molten-salt thermal storage, which is less lossy than chemical batteries.
'Jean Pain mound' low-tech compost-driven water heaters are neat too
This is a pile of vaguely relevant facts...
I think in my HN comments I'd prefer a couple of facts and a conclusion, with citations for more detail.
Stochastic gradient descent written in SQL
The title here is wrong. The title and headings in the article are right: ONLINE gradient descent
It's specifically not stochastic. From the article:
Online gradient descent
Finally, we have enough experience to implement online gradient descent. To keep things simple, we will use a very vanilla version:
- Constant learning rate, as opposed to a schedule.
- Single epoch, we only do one pass on the data.
- Not stochastic: the rows are not shuffled. ⇠ ⇠ ⇠ ⇠
- Squared loss, which is the standard loss for regression.
- No gradient clipping.
- No weight regularisation.
- No intercept term.
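For reference, the vanilla online gradient descent the article describes (constant learning rate, single unshuffled pass, squared loss, no clipping, regularisation, or intercept) fits in a few lines of Python; the names below are my own sketch, not the article's SQL:

```python
def online_gd(rows, lr=0.01):
    """One unshuffled pass; squared loss L = (y - w.x)^2, gradient dL/dw = -2*(y - w.x)*x."""
    n_features = len(rows[0][0])
    w = [0.0] * n_features
    for x, y in rows:                      # single epoch, rows in given order (not stochastic)
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        err = y - y_hat
        for i in range(n_features):        # step against the gradient: w += lr * 2 * err * x
            w[i] += lr * 2 * err * x[i]
    return w

# Fit y = 2x from a tiny stream of (features, target) rows
w = online_gd([([x], 2.0 * x) for x in [1.0, 2.0, 3.0, 1.5, 2.5] * 50], lr=0.05)
```

With this toy stream the single weight converges to 2.0, illustrating that even one pass with a constant learning rate can suffice on easy data.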
Hehe I was wondering if someone would catch that. Rest assured, I know the difference between online and stochastic gradient descent. I admit I used stochastic on Hacker News because I thought it would generate more engagement.
What are some adversarial cases for gradient descent, and/or what sort of e.g. DVC.org or W3C PROV provenance information should be tracked for a production ML workflow?
Gradient descent: https://en.wikipedia.org/wiki/Gradient_descent
Stochastic gradient descent: https://en.wikipedia.org/wiki/Stochastic_gradient_descent
Online machine learning: https://en.wikipedia.org/wiki/Online_machine_learning
adversarial gradient descent site:github.com inurl:awesome : https://www.google.com/search?q=awesome+adversarial+gradient...
https://github.com/EthicalML/awesome-production-machine-lear...
Robust machine learning: https://en.wikipedia.org/wiki/Robustness_(computer_science)#...
Robust gradient descent
We built model & data provenance into our open source ML library, though it's admittedly not the W3C PROV standard. There were a few gaps in it until we built an automated reproducibility system on top of it, but now it's pretty solid for all the algorithms we implement. Unfortunately some of the things we wrap (notably TensorFlow) aren't reproducible enough due to some unfixed bugs. There's an overview of the provenance system in this reprise of the JavaOne talk I gave here https://www.youtube.com/watch?v=GXOMjq2OS_c. The library is on GitHub - https://github.com/oracle/tribuo.
U.S. corn-based ethanol worse for the climate than gasoline, study finds
As others mentioned, this is not a new finding. There is a lot of academic and industrial/economic analysis on this subject. Ethanol from corn was never a good idea economically for anyone other than corn farmers. It is also bad environmentally and places additional competitive demand on our food supply (choice and competition between corn going into food or fuel supply).
Similar analysis has been done on other crop feed-stocks (switch grass, etc.) and none of the analysis has shown that crop derived ethanol is a good idea.
I'm not a leading expert on this subject by any means, but I am a PhD chemical engineer and did a lot of research into alternative fuels science and economics about ten years ago. I'm not up on the latest research and analysis but the fundamental thermodynamics and carbon cycles of crop-based ethanol have not changed much in the past ten years or so.
Corn farmers from Iowa. Where they hold the first caucus of the interminable election season.
No American politician or political party can afford to offend corn farmers. We should count ourselves lucky that our gasoline is only 10% corn.
Can Ethanol be sustainably produced from corn?
Which other crops have sufficient margin given commodity prices?
Can solar and goats and wind and IDK algae+co2 make up the difference?
Is solar laser weeding a more sustainable approach?
What rotations improve the compost and soil situation?
> Can solar and goats and wind and IDK algae+co2 make up the difference?
Can row covers etc be made from corn; from cellulose and what other sustainable inputs (ideally that are waste outputs)?
Compostable thermoformable biopolymers for food packaging
FWIU transparent wood "windows" are made out of treated cellulose? How do they compare with glass and plastic in terms of transparency for full spectrum UV for e.g. plant growth, sanitization, and vitamin D?
> Is solar laser weeding a more sustainable approach?
From "Solar-Powered Plant Protection Equipment: Perspective and Prospects" (2022) https://www.mdpi.com/1996-1073/15/19/7379 :
>> The unmanned system (robot) is well suited for weeding operations, and it helps to minimize the required workforce and herbicide usage while weeding. Two solar-powered weeders, EcoRobot and AVO robot models, are developed by Ecorobotix, Switzerland (Figure 2). These models work more effectively in row crops based on the detection of weeds (>85%), and a herbicides is applied precisely on the weeds to destroy them. The solar power used in EcoRobot and AVO models is 380 W and 1150 W, respectively, and they have a working time of 8 and 12 h once fully charged by solar panels [64].
> What rotations improve the compost and soil situation?
Crop rotation: https://en.wikipedia.org/wiki/Crop_rotation
SymPy makes math fun again
I like sagemath. They have integration with sympy, and they also have many features making python simpler when you want to do mathematical things.
There's a LaTeX display feature in notebooks. It looks good; you can even have symbolic matrices and display them as you'd expect.
It's on par with a Wolfram notebook, but you get to program in Python instead of some proprietary language.
If you have a really long formula, it's easier to check that you entered it correctly by displaying it symbolically in LaTeX.
They have other things to smooth out the parts of python which are annoying in math, like exponentiation works with ^, and dividing integers returns a rational number instead of floating point or an integer.
> They have other things to smooth out the parts of python which are annoying in math, like exponentiation works with ^, and dividing integers returns a rational number instead of floating point or an integer.
SageMath also has a multivariate inequality solver IIRC. In plain Python, exponentiation is `**`, not `^` (which is bitwise XOR). If you require preprocessing to translate ^ to **, you don't have valid Python code that'll run with any other interpreter.
Is it easier to import SageMath from a plain Python script now; with conda repackaging?
Type promotion in Python, using the stdlib fractions module:
from fractions import Fraction
assert Fraction(1, 4) / 2 == Fraction(1, 8)
Python 2 had a __future__ import to make `/` do true division instead of floor division:
from __future__ import division
assert 3 / 2 == 1.5
assert 3 // 2 == 1
assert 4 / 2 == 2.0
Python 3's `/` always does true division, returning a float even from two ints; `//` is floor division:
assert 3 / 2 == 1.5
assert 3.0 / 2 == 1.5
assert 3 // 2 == 1
assert 4 / 2 == 2.0
> Is it easier to import SageMath from a plain Python script now; with conda repackaging?
Yes, it's easy. It was always possible, since fortunately none of the Sage library implementation uses the Sage preprocessor. You could always do "sage -python" and you get the Python interpreter that Sage uses, and can import Sage via "import sage.all". Somebody asked me exactly this question at a colloquium talk I gave on Sage yesterday, and here's my answer/demo:
https://youtu.be/dy792FEh1ns?t=1140
In the demo, I use a Jupyter kernel that runs the Python included in Sage-9.8.
Disclaimer: I created the Sage preparser long ago, in order to make Sage more palatable to mathematicians. In particular, it was inspired by audience feedback during a live demo I did at PyCon 2005.
> They have other things to smooth out the parts of python
I should add that this is an interesting chunk of Python code, and we've always planned to separate it out from Sage as a separate library that could be used with SymPy, etc.
https://github.com/sagemath/sage/blob/52a81cbd161ef4d5895325...
It does a surprising number of little clever things we realized would be a good idea over the years, and it has some neat ideas from various Python PEPs, like [1..10], that were rejected from Python but are very useful when expressing mathematics. It's also annoying when you just want to format your code using prettier (say), and you can't because Sage isn't Python. :-(
Recently I've been working with the %%ipytest magic command for running pytest in notebooks.
Is there a %%sage IPython magic method that passes code through the preparse.py input transform?
Please tell us what features you’d like in news.ycombinator (2007)
Client-side encryption for Gmail in Google Workspace is now generally available
Not available to users with personal Google Accounts.
Why not?
I’m speculating about how this works, but it probably doesn’t make sense for general-purpose email.
For emails between members of one workspace/domain, you can conceptually perform key exchange to implement client-side encryption. You also don’t have spam concerns within members of a workspace.
There’s no general way to do key exchange between different email domains, and you do have spam concerns (which to address requires scanning the content).
There’s no protocol for doing such things over different email domains in a way that’s compatible and interoperable across software stacks. This technology is proprietary, and would make spam challenging to deal with (which again is not a problem within one workspace).
It’s also not clear to me whether this is really client-side encryption, so much as encryption at a layer higher than the email stack. (It’s pretty hard to do client-side encryption in the browser, in a way that matches user expectations. How do you store/retrieve the encryption key?)
OpenPGP.js is open source and developed by ProtonMail: https://openpgpjs.org/ https://github.com/openpgpjs/openpgpjs
A number of Chrome (and I think also Firefox) extensions include their own local copy of OpenPGP.js for use with various webmail services, including GMail.
WKD (and HKP) depends upon HTTPS without cert pinning, FWIU: https://wiki.gnupg.org/WKD
How does an email client use WKD?
1. A user selects a recipient for an email.
2. The email client uses the domain part of the email address to construct which server to ask.
3. HTTPS is used to get the current public key.
The email client is ready to encrypt and send now.
An example:
https://intevation.de/.well-known/openpgpkey/hu/it5sewh54rxz33fwmr8u6dy4bbz8itz4 is the direct-method URL for "bernhard.reiter@intevation.de"
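The direct-method URL above can be reconstructed programmatically: WKD hashes the lowercased local part with SHA-1 and encodes the digest with z-base-32. A minimal sketch (my own helper names; the zbase32 encoder below assumes the input bit length is a multiple of 5, which holds for SHA-1's 160 bits):

```python
import hashlib

ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32 alphabet

def zbase32(data: bytes) -> str:
    """Encode MSB-first, 5 bits per character; assumes len(data) * 8 is a multiple of 5."""
    bits = int.from_bytes(data, "big")
    return "".join(ZB32[(bits >> s) & 31] for s in range(len(data) * 8 - 5, -1, -5))

def wkd_direct_url(email: str) -> str:
    local, domain = email.split("@")
    h = zbase32(hashlib.sha1(local.lower().encode()).digest())
    return f"https://{domain}/.well-known/openpgpkey/hu/{h}"

print(wkd_direct_url("bernhard.reiter@intevation.de"))
```

Running this reproduces the intevation.de URL quoted above. (The current WKD draft also appends an `?l=<localpart>` query parameter, omitted here to match the example.)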
How/where exactly does it store the user's private keys?
The initial reaction that I have to client-side encryption with a web browser is: how does the client fetch/obtain the encryption key? (For signing outbound messages, or decrypting/verifying inbound.) Local storage wouldn't be appropriate as the sole storage for something like an asymmetric keypair, and it wouldn't provide the behavior that users expect: being able to log in and use a service from any web browser.
No one interacts with a web browser in such a way that it would be appropriate for the browser's local state to be the sole storage of something important like a private key. So this necessitates a service that the client interacts with during login to fetch its private key.
If you can log in with a browser, then that implies that you're fetching the key from somewhere. In the context of hosted Gmail, then that means that Gmail will be storing and providing your key. In which case it's nominally client-side encryption -- and the system that stores and provides your key might be totally separate from the email systems (and tightly controlled) -- but it's still not what I'd think of as client-side encryption generally. It's not really client-side encryption if the service that you're using to store data is the same service (from your perspective) as the one that stores/provides your key (those responsibilities might be separated from an implementation perspective, but they aren't from a user's perspective).
The service can employ various techniques like encrypting its copy of my private key using a symmetric key derived from my passphrase (and then discarded) -- and this decryption could perhaps be done client-side in the browser -- but ultimately the service still has the ability to obtain my key (at time of next login, if not any time).
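The "encrypt the private key under a passphrase-derived symmetric key" scheme described above can be sketched with stdlib primitives. PBKDF2 does the derivation; the XOR "cipher" below is a stand-in for a real AEAD such as AES-GCM (which isn't in the Python stdlib), so treat this as a shape of the design, not production crypto:

```python
import hashlib, hmac, os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # The server stores only salt + wrapped key; the passphrase (and thus this key) stays client-side.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000, dklen=32)

def xor_wrap(key: bytes, secret: bytes) -> bytes:
    # Placeholder for a real AEAD (e.g. AES-GCM): XOR against an HMAC-SHA256 counter keystream.
    stream = b""
    counter = 0
    while len(stream) < len(secret):
        stream += hmac.new(key, counter.to_bytes(4, "big"), "sha256").digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(secret, stream))

salt = os.urandom(16)
private_key = b"-----BEGIN PGP PRIVATE KEY BLOCK----- ..."
wrapped = xor_wrap(derive_key("correct horse battery staple", salt), private_key)

# Only the correct passphrase recovers the key; the server never sees it in the clear.
assert xor_wrap(derive_key("correct horse battery staple", salt), wrapped) == private_key
```

The point the parent makes still stands: whoever serves the JavaScript that runs this derivation can also exfiltrate the passphrase at next login.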
If the service that provides a client-side encryption key could be divorced from a particular use-case -- e.g., if I could use my own pluggable key-providing service with Gmail, which my browser uses to fetch my key on login -- then the model would make a bit more sense. (You'd still have to trust a service like Gmail not to steal your key, but at least you don't have to trust it to store the key as well.)
Would it be possible to implement a standardized pluggable key server model? It seems plausible. Imagine that there was a standard HTTP protocol for this purpose, kind of like how WebAuthN is standardized. I log into Gmail, link it to my key server <https://keys.jcrites.example.com>, verify the connection, and then when logging into Gmail, then the web page is allowed to make HTTPS calls to <https://keys.jcrites.example.com> to either fetch my asymmetric key, or alternatively make calls to decrypt or encrypt specific content.
In the former case, it implements client-side encryption using a key server I control, while trusting Gmail not to misuse the key; in the latter case, it implements encryption using a keyserver that I control, and makes remote calls for each encryption operation. In that case Gmail would never have my key, although while browsing Gmail.com it could probably make arbitrary requests to the key server to perform operations using the key -- but you could log what operations are performed.
Looks like you are correct: WebUSB, WebBluetooth, and WebAuthn don't already cover HSM use cases?
/? secure enclave browser
But WebCrypto: "PROPOSAL: Add support for general (hardware backed) cryptographic signatures and key exchange #263" https://github.com/w3c/webcrypto/issues/263
Show HN: Classic FPS Wolfenstein 3D brought in the browser via Emscripten
TuxMath and TuxTyping are FOSS games written with SDL.
Given your experience with porting Wolfenstein 3D SDL to Emscripten, would it be easier to rewrite TuxMath given the exercise XML files, or to port it to WASM/Emscripten (and emscripten-forge)?
Notably, the TuxMath RPM currently segfaults on recent Fedora, but the Flatpak (which presumably statically ships its own copy of SDL) works fine. https://flathub.org/apps/details/com.tux4kids.tuxmath https://github.com/tux4kids/tuxmath
(Other someday priorities: Looking at SensorCraft, wanting to port it to (JupyterLite WASM) notebooks w/ jupyter-book)
The latest Wolfenstein, Wolfenstein: Youngblood, where you play as sisters who respawn only via tag-team, is good too. https://en.wikipedia.org/wiki/Wolfenstein
Porting SDL(2) games with Emscripten to get an actual WASM output is pretty straightforward. Getting the game to run properly is another story. If the game has additional dependencies, you have to build them separately with Emscripten and link them in at the end. Heavy use of the filesystem has to be commented out as well (a browser tab is a sandbox).
emscripten-forge builds a ~condaenv of recipes with empack; like conda-forge: https://github.com/emscripten-forge/empack
There's no SDL recipe yet: https://github.com/emscripten-forge/recipes/tree/main/recipe...
Re: emscripten fs implementations: https://github.com/emscripten-core/emscripten/issues/15041#i... https://github.com/jupyterlite/jupyterlite/issues/315
Show HN: Mathesar – open-source collaborative UI for Postgres databases
Hi HN! We just released the public alpha version of Mathesar (https://mathesar.org/, code: https://github.com/centerofci/mathesar).
Mathesar is an open source tool that provides a spreadsheet-like interface to a PostgreSQL database.
I was originally inspired by wanting to build something like Dabble DB. I was in awe of their user experience for working with relational data. There’s plenty of “relational spreadsheet” software out there, but I haven’t been able to find anything with a comparable UX since Twitter shut Dabble DB down.
We're a non-profit project. The core team is based out of a US 501(c)(3).
Features:
* Built on Postgres: Connect to an existing Postgres database or set one up from scratch.
* Utilizes Postgres Features: Mathesar’s UI uses Postgres features. e.g. "Links" in the UI are foreign keys in the database.
* Set up Data Models: Easily create and update Postgres schemas and tables.
* Data Entry: Use our spreadsheet-like interface to view, create, update, and delete table records.
* Data Explorer: Use our Data Explorer to build queries without knowing anything about SQL or joins.
* Schema Migrations: Transfer columns between tables in two clicks in the UI.
* Custom Data Types: Custom data types for emails and URLs (more coming soon), validated at the database level.
Links:
CODE: https://github.com/centerofci/mathesar
LIVE DEMO: https://demo.mathesar.org/
DOCS: https://docs.mathesar.org/
COMMUNITY: https://wiki.mathesar.org/en/community
WEBSITE: https://mathesar.org/
SPONSOR US: https://github.com/sponsors/centerofci or https://opencollective.com/mathesar
So happy to realize your stack includes Python, Postgres, Typescript and Svelte, because I once flirted with such an idea for a personal project. I'm inspired.
If I may ask, have you found using a Typed language such as Typescript has changed your tendency to use Python for other tasks, given that it isn't typed? [0]
[0] - I understand that types can be added on top, but I never found the integration to work that well with mypy when I last tried it many years ago.
I haven't because I'm Python biased. ;) However, for Web I've found myself leaning to TypeScript for type-safety and DX reasons.
Have you ever tried [Pydantic](https://docs.pydantic.dev)!? It may be what you need for type safety / data validation.
JSON-LD types are specified with @type, and the range of a @type attribute includes rdfs:Class.
icontract and pycontracts (Design-by-Contract programming) have runtime type and constraint checking; data validation. Preconditions, Command, Postconditions (assertions, assertions of invariance after command C_funcname executed) https://github.com/Parquery/icontract
pydantic_schemaorg: https://github.com/lexiq-legal/pydantic_schemaorg
> Pydantic_schemaorg contains all the models defined by schema.org. The pydantic classes are auto-generated from the schema.org model definitions that can be found on https://schema.org/version/latest/schemaorg-current-https.js... [ https://github.com/schemaorg/schemaorg/tree/main/data/releas... ]
Portable low-field MRI scanners could revolutionize medical imaging
How does this MRI neuroimaging capability differ from Openwater's Phase Wave approach? https://www.openwater.cc/technology
Is MRI-level neuroimaging possible with just NIRS Near-Infrared Spectroscopy?
First Law of Thermodynamics Breakthrough Upends Equilibrium Theory in Physics
Laws of Thermodynamics (in 2023) https://en.wikipedia.org/wiki/Laws_of_thermodynamics :
> The zeroth law of thermodynamics defines thermal equilibrium and forms a basis for the definition of temperature: If two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
> The first law of thermodynamics states that, when energy passes into or out of a system (as work, heat, or matter), the system's internal energy changes in accordance with the law of conservation of energy.
> The second law of thermodynamics states that in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems never decreases. A common corollary of the statement is that heat does not spontaneously pass from a colder body to a warmer body.
> The third law of thermodynamics states that a system's entropy approaches a constant value as the temperature approaches absolute zero. With the exception of non-crystalline solids (glasses), the entropy of a system at absolute zero is typically close to zero. [2]
First law of thermodynamics https://en.wikipedia.org/wiki/First_law_of_thermodynamics
Quantum thermodynamics https://en.wikipedia.org/wiki/Quantum_thermodynamics
Thermal quantum field theory https://en.wikipedia.org/wiki/Thermal_quantum_field_theory :
> In theoretical physics, thermal quantum field theory (thermal field theory for short) or finite temperature field theory is a set of methods to calculate expectation values of physical observables of a quantum field theory at finite temperature.
From the article:
> [170 years ago] the technology of the time dictated the gases or fluids that people would have studied are in equilibrium at the densities and temperatures that they were using back then.”
"Quantifying Energy Conversion in Higher-Order Phase Space Density Moments in Plasmas" (2023) https://link.aps.org/doi/10.1103/PhysRevLett.130.085201
- "Thermodynamics of Computation Wiki" https://news.ycombinator.com/item?id=18146854
- Phase diagram > Types; where is plasma in this diagram? https://en.wikipedia.org/wiki/Phase_diagram#Types
Potential applications?:
- "Thin film" and compact pulsed fusion plasma confinement reactor thermoelectric efficiency ; https://en.m.wikipedia.org/wiki/Helion_Energy
- What do plasmas compute, as a computation medium per Constructor Theory and/or an information medium in deep space?
- Can laser transmutation be achieved more efficiently with the heat of a sustained compact fusion plasma reactor? 4He and 3He are useful outputs with an equilibrium price in recent years, apparently.
There should probably be a facility with omnidirectional conveyors for automated sample testing that runs samples next to the heat?
The Missing Semester of Your CS Education
The course videos: "Missing Semester IAP 2020" https://youtube.com/playlist?list=PLyzOVJj3bHQuloKGG59rS43e2...
Software Carpentry has some similar, OER lessons: https://software-carpentry.org/lessons/
DeepMind has open-sourced the heart of AlphaGo and AlphaZero
Worth noting that while AlphaGo and AlphaZero are incredible achievements, the amount of actual code to implement them isn't very much.
If you have the research paper, someone in the field could reimplement them in a few days.
Then there is the large compute cost for training them to produce the trained weights.
So, open-sourcing these bits of work without the weights isn't as major a thing as you might imagine.
> If you have the research paper, someone in the field could reimplement them in a few days.
Hi I did this while I was at Google Brain and it took our team of three more like a year. The "reimplementation" part took 3 months or so and the rest of the time was literally trying to debug and figure out all of the subtleties that were not quite mentioned in the paper. See https://openreview.net/forum?id=H1eerhIpLV
Replication crisis: https://en.wikipedia.org/wiki/Replication_crisis :
> The replication crisis (also called the replicability crisis and the reproducibility crisis) is an ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method,[2] such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.
People should publish automated tests. How does a performance optimizer know they haven't changed the output if there are no known-good inputs and outputs documented as executable tests? pytest with Hypothesis seems like a nice, compact way to specify such tests.
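A property-style equivalence test for an optimized function against a known-good reference looks like this; sketched with stdlib random rather than the hypothesis library so it stays dependency-free, and with hypothetical function names:

```python
import random

def slow_reference_sum(xs):
    """Known-good but slow reference implementation."""
    total = 0.0
    for x in xs:
        total += x
    return total

def fast_optimized_sum(xs):
    """The 'optimized' version under test."""
    return sum(xs)

def check_equivalence(trials=200, seed=0):
    rng = random.Random(seed)  # fixed seed: the test itself is reproducible
    for _ in range(trials):
        xs = [rng.uniform(-1e6, 1e6) for _ in range(rng.randrange(0, 50))]
        assert abs(slow_reference_sum(xs) - fast_optimized_sum(xs)) < 1e-6
    return True
```

With Hypothesis proper, the random-input loop is replaced by `@given(...)` strategies, which also shrink failing inputs to minimal counterexamples.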
AlphaZero: https://en.wikipedia.org/wiki/AlphaZero
GH topic "AlphaZero" https://github.com/topics/alphazero
I believe there are one or more JAX implementations of AlphaZero?
Though there's not yet a quantum-inference-based self-play (AlphaZero) algorithm?
TIL the snow plow problem is a variation on the TSP, and there are already quantum algorithms proposed for solving the TSP.
I think I agree with everything you've said here, but just want to note that while we absolutely should (where relevant) expect published code including automated tests, we should not typically consider reproduction that reuses that code to be "replication" per se. As I understand it, replication isn't merely a test for fraud (which rerunning should typically detect) and mistakes (which rerunning might sometimes detect) but also a test that the paper successfully communicates the ideas such that other human minds can work with them.
Sources of variance: experimental design, hardware, software, irrelevant environmental conditions/state, data (samples), analysis
Can you run the notebook again with the exact same data sample (input) and get the same charts and summary statistics (output)? Is there a way to test the stability of those outputs over time?
Can you run the same experiment (the same 'experimental design'), ceteris paribus (everything else being equal) and a different sample (input) and get a very similar output? Is it stable, differentiable, independent, nonlinear, reversible; Does it converge?
Now I have to go look up the definitions for Replication, Repeatability, Reproducibility
Replication: https://en.wikipedia.org/wiki/Replication_(disambiguation)
Replication_(scientific_method) -> Reproducibility https://en.wikipedia.org/wiki/Reproducibility :
> Measures of reproducibility and repeatability: In chemistry, the terms reproducibility and repeatability are used with a specific quantitative meaning. [7] In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. Then, the standard deviation of the difference between two values obtained within the same laboratory is called repeatability. The standard deviation for the difference between two measurements from different laboratories is called reproducibility. [8] These measures are related to the more general concept of variance components in metrology.
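Those chemistry definitions can be computed directly from inter-laboratory data (the lab names and numbers below are made up for illustration):

```python
import itertools
import statistics

# Hypothetical inter-laboratory measurements of the same quantity.
measurements = {
    "lab_A": [10.1, 10.3, 9.9, 10.2],
    "lab_B": [10.6, 10.4, 10.7, 10.5],
    "lab_C": [9.8, 10.0, 9.7, 9.9],
}

# Repeatability: sd of the difference between two values from the SAME lab.
within = [a - b
          for vals in measurements.values()
          for a, b in itertools.combinations(vals, 2)]
repeatability = statistics.stdev(within)

# Reproducibility: sd of the difference between two values from DIFFERENT labs.
between = [a - b
           for (l1, v1), (l2, v2) in itertools.combinations(measurements.items(), 2)
           for a in v1 for b in v2]
reproducibility = statistics.stdev(between)

print(repeatability, reproducibility)
```

As expected, the between-lab spread exceeds the within-lab spread.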
Replication (statistics) https://en.wikipedia.org/wiki/Replication_(statistics) :
> In engineering, science, and statistics, replication is the repetition of an experimental condition so that the variability associated with the phenomenon can be estimated. ASTM, in standard E1847, defines replication as "... the repetition of the set of all the treatment combinations to be compared in an experiment. Each of the repetitions is called a replicate."
> Replication is not the same as repeated measurements of the same item: they are dealt with differently in statistical experimental design and data analysis.
> For proper sampling, a process or batch of products should be in reasonable statistical control; inherent random variation is present but variation due to assignable (special) causes is not. Evaluation or testing of a single item does not allow for item-to-item variation and may not represent the batch or process. Replication is needed to account for this variation among items and treatments.
Accuracy and precision: https://en.m.wikipedia.org/wiki/Accuracy_and_precision :
> In simpler terms, given a statistical sample or set of data points from repeated measurements of the same quantity, the sample or set can be said to be accurate if their average is close to the true value of the quantity being measured, while the set can be said to be precise if their standard deviation is relatively small.
Reproducible builds; to isolate and minimize software variance: https://en.wikipedia.org/wiki/Reproducible_builds
Re: reproducibility, containers, Jupyter books, REES, repo2docker: https://news.ycombinator.com/item?id=32965961 https://westurner.github.io/hnlog/#comment-32965961 (Ctrl-F #linkedreproducibility)
Emacs is on F-Droid
This F-Droid emacs appears to have been built straight from the emacs repo, which is cool! I was hoping for a screenshot but I couldn't find one.
I have emacs installed in Termux on Android. Termux provides an extra button bar above the main keyboard for things like Ctrl, arrow keys, Esc, Meta which makes emacs just about usable in Termux. Without those things it would be unusable!
Without the magic button bar you'd need the hackers keyboard which works very well on tablets but it is just a bit squashed for phones unless you use them in landscape.
https://play.google.com/store/apps/details?id=org.pocketwork...
For good measure, let's share the link to Hacker's Keyboard on F-droid while we are at it :-)
https://f-droid.org/packages/org.pocketworkstation.pckeyboar...
Here's how to install conda, mamba, and pip via MambaForge in Termux (w/ proot), from F-Droid because there are no official Termux APKs on the Play Store now that it's necessary to bless binaries with context labels: https://github.com/westurner/dotfiles/blob/develop/scripts/s...
Physicists Use Quantum Mechanics to Pull Energy Out of Nothing
Energy teleportation in 2023:
> Now in the past year, researchers have teleported energy across microscopic distances in two separate quantum devices, vindicating Hotta’s theory. The research leaves little room for doubt that energy teleportation is a genuine quantum phenomenon.
Quantum foam: https://en.wikipedia.org/wiki/Quantum_foam
Vikings went to Mediterranean for ‘summer jobs’ as mercenaries, left graffiti
Would you put graffiti on your own vehicle?
Careers for graphic artists with BA degrees
Prehistoric art: https://en.wikipedia.org/wiki/Prehistoric_art
1. "Encino Man" (1992) https://en.wikipedia.org/wiki/Encino_Man https://www.google.com/search?kgmid=/m/05fc8m&hl=en-US&q=Enc...
Social media is a cause, not a correlate, of mental illness in teen girls
Perhaps Facebook should again require an email address at an approving college or university for sign up.
But what about supporting the NCMEC (National Center for Missing & Exploited Children), without biometrics due to new laws?
Can you teach your daughters to be considerate of others on the internet?
They used to tell us not to put any personal information online as kids; no real names, etc.
Perhaps social media is a mirror from which we can determine parenting approach?
Are violent video games less bad than social media for which unhealthily-competitive strata?
"Please do not bully people on the internet because:"
"Please be considerate and helpful on the internet because:"
StopBullying.gov: https://www.stopbullying.gov/
"How to Make a Family Media Use Plan" (HealthyChildten.org (AAP)) https://www.healthychildren.org/English/family-life/Media/Pa...
Internet safety: https://en.wikipedia.org/wiki/Internet_safety
[after school] emotional restraint collapse: https://www.google.com/search?q=emotional+restraint+collapse :
It's exhausting for many of us to withhold nonverbal emotions all day; so after school, a minute to just chill without the typical questioning may or may not help prevent bullying on the internet.
TIL Iceland provides vouchers for after-school activities: taxes pay for sports and other programs.
The fundamental thermodynamic costs of communication
I like how "Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems." makes it sound like biology is not the real world
Took me a while to understand this comment. I'm not a biologist.
A nerve, is a whole big bundle of axons[1]. Looks like each bundle carries hundreds or thousands of connections down to a muscle.
1 http://www.medicine.mcgill.ca/physio/vlab/other_exps/CAP/ner...
I am a neurobiologist and still don't understand the comment. The authors acknowledge biology in the first sentence.
wrt. claims in the full article however I have an issue with this observation...
> Another feature common to many biological and artificial communication systems is that they split a stream of information over multiple channels, i.e., they inverse multiplex. Often this occurs even when a single one of the channels could handle the entire communication load. For example, multiple synapses tend to connect adjacent neurons and multiple neuronal pathways tend to connect brain regions.
I dont see how a "single channel" i.e. a single synapse would suffice as the entire information output unit of a neuron. Making multiple synaptic connections is fundamental to the way neural networks perform computations.
Furthermore, representational drift shows that a biological neuron's output for a given activation is not stable over time, which implies that there is greater emergent complexity than is modeled by ANNs (which have stable outputs given their training, NN topology parameters, and activation functions that effectively weight the training samples, which usually also contain noise).
/? Representational drift brain https://www.google.com/search?q=representational+drift+brain ...
"Causes and consequences of representational drift" (2019) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7385530/
> The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
"The geometry of representational drift in natural and artificial neural networks" (2022) https://journals.plos.org/ploscompbiol/article?id=10.1371/jo... :
> [...] We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed “representational drift”. In this study we geometrically characterize the properties of representational drift in the primary visual cortex [...]
> The features we observe in the neural data are similar to properties of artificial neural networks where representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. Therefore, we conclude that a potential reason for the representational drift in biological networks is driven by an underlying dropout-like noise while continuously learning and that such a mechanism may be computational advantageous for the brain in the same way it is for artificial neural networks, e.g. preventing overfitting.
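A toy sketch of that dropout interpretation (an illustration, not the paper's model): a fixed linear readout whose units are randomly masked each session drifts at the single-unit level while the population average stays comparatively stable.

```python
import random

rng = random.Random(42)
weights = [1.0] * 10
stimulus = [0.5] * 10

def session_response(weights, stimulus, drop_p=0.3):
    # Dropout-like noise: each unit is randomly silenced this "session".
    mask = [0.0 if rng.random() < drop_p else 1.0 for _ in weights]
    return [w * m * s for w, m, s in zip(weights, mask, stimulus)]

sessions = [session_response(weights, stimulus) for _ in range(50)]

# A single unit's response flickers across sessions ("drift") ...
unit0 = {r[0] for r in sessions}
# ... while the mean population output stays near (1 - drop_p) * 0.5 = 0.35.
mean_out = sum(sum(r) / len(r) for r in sessions) / len(sessions)
print(sorted(unit0), round(mean_out, 2))
```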
"Neurons are fickle: Electric fields are more reliable for information" (2022) https://www.sciencedaily.com/releases/2022/03/220311115326.h... :
> [...] And when the scientists trained software called a "decoder" to guess which direction the animals were holding in mind, the decoder was relatively better able to do it based on the electric fields than based on the neural activity.
> This is not to say that the variations among individual neurons are meaningless noise, Miller said. The thoughts and sensations people and animals experience, even as they repeat the same tasks, can change minute by minute, leading to different neurons behaving differently than they just did. The important thing for the sake of accomplishing the memory task is that the overall field remains consistent in its representation.
> "This stuff that we call representational drift or noise may be real computations the brain is doing, but the point is that at that next level up of electric fields, you can get rid of that drift and just have the signal," Miller said.
> The researchers hypothesize that the field even appears to be a means the brain can employ to sculpt information flow to ensure the desired result. By imposing that a particular field emerge, it directs the activity of the participating neurons.
> Indeed, that's one of the next questions the scientists are investigating: Could electric fields be a means of controlling neurons?
/? representational drift site:github.com https://www.google.com/search?q=representational+drift+site%...
Computational neuroscience: https://en.wikipedia.org/wiki/Computational_neuroscience :
> Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments.
Are you making some sort of point, or just dumping content?
Perhaps I was too polite. The collapsed entropy (absent real world noise per observation) of the binary relations in the brain is a useful metric.
Interesting take on being impolite - making a basic statement using overtly pretentious and opaque language?
> [...] Here we present the first study that rigorously combines such a framework, stochastic thermodynamics, with Shannon information theory. We develop a minimal model that captures the fundamental features common to a wide variety of communication systems. We find that the thermodynamic cost in this model is a convex function of the channel capacity, the canonical measure of the communication capability of a channel. We also find that this function is not always monotonic, in contrast to previous results not derived from first principles physics. These results clarify when and how to split a single communication stream across multiple channels. In particular, we present Pareto fronts that reveal the trade-off between thermodynamic costs and channel capacity when inverse multiplexing. Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems.
https://arxiv.org/abs/2302.04320
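One way to see the inverse-multiplexing claim (a toy illustration, not the paper's model): with a convex per-channel cost such as cost(c) = c**2, splitting a fixed load across more parallel channels lowers the total cost.

```python
# k channels, each carrying R/k of the load; convexity makes splitting cheaper.
def total_cost(R, k, cost=lambda c: c ** 2):
    return k * cost(R / k)

R = 8.0
costs = {k: total_cost(R, k) for k in (1, 2, 4, 8)}
print(costs)  # cost falls as the stream is split: 64.0, 32.0, 16.0, 8.0
```

A non-monotonic cost function (as the paper finds) would make the optimal split finite rather than "as many channels as possible".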
What is the Shannon entropy interpretation of e.g. (quantum wave function) amplitude encoding?
"Quantum discord" https://en.wikipedia.org/wiki/Quantum_discord
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
Isn't there more entropy if we consider all possible nonlocal relations between bits; or, which entropy metric is independent of redundant coding schemes between points in spacetime?
New neural network architecture inspired by neural system of a worm
It makes a good headline, but reading over the paper (https://www.nature.com/articles/s42256-022-00556-7.pdf) it doesn’t seem biologically-inspired. It seems like they found a way to solve nonlinear equations in constant time via an approximation, then turned that into a neural net.
More generally, I’m skeptical that biological systems will ever serve as a basis for ML nets in practice. But saying that out loud feels like daring history to make a fool of me.
My view is that biology just happened to evolve how it did, so there’s no point in copying it; it worked because it worked. If we have to train networks from scratch, then we have to find our own solutions, which will necessarily be different than nature’s. I find analogies useful; dividing a model into short term memory vs long term memory, for example. But it’s best not to take it too seriously, like we’re somehow cloning a brain.
Not to mention that ML nets still don’t control their own loss functions, so we’re a poor shadow of nature. ML circa 2023 is still in the intelligent design phase, since we have to very intelligently design our networks. I await the day that ML networks can say “Ok, add more parameters here” or “Use this activation instead” (or learn an activation altogether — why isn’t that a thing?).
"I’m skeptical that biological systems will ever serve as a basis for ML nets in practice"
First of all, ML engineers need to stop being such brainphiliacs, caring only about the 'neural networks' of the brain or brain-like systems. Lacrymaria olor has more intelligence, in terms of adapting to exploring/exploiting a given environment, than all our artificial neural networks combined, and it has no neurons because it is merely a single-cell organism [1]. Once you stop caring about the brain and neurons and you find out that almost every cell in the body has gap junctions and voltage-gated ion channels which for all intents and purposes implement boolean logic and act as transistors for cell-to-cell communication, biology appears less as something which has been overcome and more as something towards which we must strive with our primitive technologies: for instance, we can only dream of designing rotary engines as small, powerful, and resilient as the ATP synthase protein [2].
[1] Michael Levin: Intelligence Beyond the Brain, https://youtu.be/RwEKg5cjkKQ?t=202
[2] Masasuke Yoshida, ATP Synthase. A Marvellous Rotary Engine of the Cell, https://pubmed.ncbi.nlm.nih.gov/11533724
> Once you stop caring about the brain and neurons and you find out that almost every cell in the body has gap junctions and voltage-gated ion channels which for all intents and purposes implement boolean logic and act as transistors for cell-to-cell communication, biology appears less as something which has been overcome and more something towards which we must strive with our primitive technologies: for instance, we can only dream of designing rotary engines as small, powerful, and resilient as the ATP synthase protein [2].
But what of wave function(s); and quantum chemistry at the cellular level? https://github.com/tequilahub/tequila#quantumchemistry
Is emergent cognition more complex than boolean entropy, and are quantum primitives necessary to emulate apparently consistently emergent human cognition for whatever it's worth?
[Church-Turing-Deutsch, Deutsch's Constructor theory]
Is ATP the product of evolutionary algorithms like mutation and selection? Heat/Entropy/Pressure, Titration/Vibration/Oscillation, Time
From the article:
> The next step, Lechner said, “is to figure out how many, or how few, neurons we actually need to perform a given task.”
Notes regarding representational drift and remarkable resilience to noise in BNNs, from "The fundamental thermodynamic costs of communication": https://news.ycombinator.com/item?id=34770235
It's never just one neuron.
And furthermore, FWIU, human brains are not directed graphs of literally only binary relations.
In a human brain, there are cyclic activation paths (given cardiac electro-oscillations) and an imposed (partially extracerebral) field which nonlinearly noises the almost-discrete activation pathways and probably serves a feed-forward function; and in those paths through the graph, how many of the neuronal synapses are simple binary relations (between just nodes A and B)?
> The group also wants to devise an optimal way of connecting neurons. Currently, every neuron links to every other neuron, but that’s not how it works in C. elegans, where synaptic connections are more selective. Through further studies of the roundworm’s wiring system, they hope to determine which neurons in their system should be coupled together.
Is there an information metric which expresses maximal nonlocal connectivity between bits in a bitstring; that takes all possible (nonlocal, discontiguous) paths into account?
`n_nodes**2` only describes all of the binary, pairwise possible relations between the bits or qubits in a bitstring?
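A counting sketch of why a pairwise metric undercounts: the number of pairwise relations grows polynomially in the number of bits, while the number of possible (nonlocal, discontiguous) groupings grows exponentially.

```python
from math import comb

def relation_counts(n):
    ordered_pairs = n ** 2           # includes self-pairs and both orderings
    unordered_pairs = comb(n, 2)     # distinct pairs
    all_subsets = 2 ** n             # every possible grouping of the n bits
    return ordered_pairs, unordered_pairs, all_subsets

print(relation_counts(8))   # (64, 28, 256)
```

So for even 8 bits, most possible multi-way relations are invisible to any purely pairwise metric.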
"But what is a convolution" https://www.3blue1brown.com/lessons/convolutions
Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord
Scientists find first evidence that black holes are the source of dark energy
Introduction to Datalog
In the last few months, mentions of Datalog have increased; I wondered how it differs from graph databases and found a clear answer on SO [1]. I'm no expert, but I found graph databases and clause-based approaches interesting.
[1] https://stackoverflow.com/questions/29192927/a-graph-db-vs-a... (2015)
I did an interview [1] with Kevin Feeney, one of the founders (no longer active) of TerminusDb, which goes into some depth about the difference between RDF stores and (property) graph databases, where the former is more closely aligned with datalog and logic programming. There are links to a really excellent series of blog posts by Kevin on this topic in the show notes.
[1] https://thesearch.space/episodes/5-kevin-feeney-on-terminusd...
With RDF* and SPARQL* ("RDF-star" and "SPARQL-star") how are triple (or quad) stores still distinct from property graphs?
RDFS and SHACL (and OWL) are optional in a triple store, which expects the subject and predicate to be string URIs, and there is an object datatype and optional language:
(?s ?p ?o <datatype> [lang])
(?subject:URI, ?predicate:URI, ?object:datatype, object_datatype, [object_language])
RDFS introduces rdfs:domain and rdfs:range type restrictions for Properties, and rdfs:Class and rdfs:subClassOf.`a` means `rdf:type`; which does not require RDFS:
("#xyz", a, "https://schema.org/Thing")
("#xyz", rdf:type, "https://schema.org/Thing")
Quad stores have a graph_id string URI "?g" for Named Graphs: (?g ?s ?p ?o)
("https://example.org/ns/graphs/0", "#xyz", a, "https://schema.org/Thing")
("https://example.org/ns/graphs/1", "#xyz", a, "https://schema.org/ScholarlyArticle")
There's a W3C CG (Community Group) revising very many of the W3C Linked Data specs to support RDF-star: https://www.w3.org/groups/wg/rdf-star
Looks like they ended up needing to update most of the current specs: https://www.w3.org/groups/wg/rdf-star/tools
"RDF-star and SPARQL-star" (Draft Community Group Report; 08 December 2022) https://w3c.github.io/rdf-star/cg-spec/editors_draft.html
GH topics: rdf-star, rdfstar: https://github.com/topics/rdf-star, https://github.com/topics/rdfstar
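The triple, quad, and RDF-star shapes above can be sketched with plain tuples (an illustration, not a real store; the `assertedBy` predicate is made up):

```python
RDF_TYPE = "rdf:type"   # what the Turtle keyword `a` abbreviates

# Triple: (subject URI, predicate URI, object).
triple = ("#xyz", RDF_TYPE, "https://schema.org/Thing")

# Quad: a named-graph URI prepended to the triple.
quad = ("https://example.org/ns/graphs/0",
        "#xyz", RDF_TYPE, "https://schema.org/ScholarlyArticle")

# RDF-star: a quoted triple may itself be the subject (or object) of a
# statement, which is what lets a triple store attach properties to an
# edge, the way a property graph does.
edge_annotation = (triple, "https://example.org/ns/assertedBy", "#alice")

print(edge_annotation)
```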
pyDatalog does datalog with SQLAlchemy and e.g. just the SQLite database: https://github.com/pcarbonn/pyDatalog ; and it is apparently superseded by IDP-Z3: https://gitlab.com/krr/IDP-Z3/
From https://twitter.com/westurner/status/1000516851984723968 :
> A feature comparison of SQL w/ EAV, SPARQL/SPARUL, [SPARQL12 SPARQL-star, [T-SPARQL, SPARQLMT,]], Cypher, Gremlin, GraphQL, and Datalog would be a useful resource for evaluating graph query languages.
> I'd probably use unstructured text search to identify the relevant resources first.
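For concreteness, the classic Datalog ancestor program can be evaluated bottom-up to a fixpoint in a few lines of plain Python (a naive evaluator, not an optimized engine; the facts are made up):

```python
# ancestor(X, Y) :- parent(X, Y).
# ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

ancestor = set(parent)                      # base rule
while True:
    derived = {(x, z)
               for (x, y) in parent
               for (y2, z) in ancestor
               if y == y2}
    if derived <= ancestor:                 # fixpoint reached
        break
    ancestor |= derived

print(sorted(ancestor))
```

This bottom-up, set-at-a-time evaluation to a fixpoint is the key operational difference from a graph database's per-query traversals.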
Show HN: Polymath: Convert any music-library into a sample-library with ML
Polymath is an open-source tool that converts any music library into a sample library with machine learning. It separates songs into stems, quantizes them to the same BPM, detects key, and much more. A game-changing workflow for music producers & DJs.
Other cool #aiart things:
- "MusicLM: Generating music from text" https://news.ycombinator.com/item?id=34543748
- /? awesome Generative AI site:GitHub.com https://www.google.com/search?q=awesome+generative+ai+site%3...
- #GenerativeArt #GenerativeMusic
- BespokeSynth DAW: https://github.com/BespokeSynth/BespokeSynth :
> [...] live-patchable environment, so you can build while the music is playing; VST, VST3, LV2 hosting; Python livecoding; MIDI, [...]
> Using Polymath's search capability to discover related tracks, it is a breeze to create a polished, hour-long mash-up DJ set
https://github.com/samim23/polymath#how-does-it-work :
> How does it work?
> - Music Source Separation is performed with the [facebook/demucs] neural network
> - Music Structure Segmentation/Labeling is performed with the [wayne391/sf_segmenter] neural network
> - Music Pitch Tracking and Key Detection are performed with [marl/Crepe] neural network
> - Music Quantization and Alignment are performed with [bmcfee/pyrubberband]
> - Music Info retrieval and processing is performed with [librosa/librosa]
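The quantization step reduces to computing a time-stretch rate per track (a sketch; the track names and BPMs are made up, and the rate is what one would pass to a time-stretcher such as pyrubberband):

```python
# rate > 1 speeds a track up; rate < 1 slows it down.
def stretch_rate(source_bpm, target_bpm=120.0):
    return target_bpm / source_bpm

library = {"track_a": 98.0, "track_b": 128.0, "track_c": 120.0}
rates = {name: stretch_rate(bpm) for name, bpm in library.items()}
print(rates)
```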
Coffee won’t give you extra energy, just borrow a bit that you’ll pay for later
From the article:
> This is because the caffeine won’t bind forever, and the adenosine that it blocks doesn’t go away. So eventually the caffeine breaks down, lets go of the receptors and all that adenosine that has been waiting and building up latches on and the drowsy feeling comes back – sometimes all at once.
> So, the debt you owe the caffeine always eventually needs to be repaid, and the only real way to repay it is to sleep.
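The "debt" framing maps onto simple first-order elimination; a back-of-envelope sketch assuming a ~5-hour half-life (a commonly cited average; individual half-lives vary widely):

```python
HALF_LIFE_H = 5.0  # assumed average caffeine half-life, in hours

def caffeine_remaining(t_hours, dose_mg=100.0):
    # First-order decay: half the dose is eliminated every half-life.
    return dose_mg * 0.5 ** (t_hours / HALF_LIFE_H)

print(caffeine_remaining(5))   # 50.0 mg after one half-life
print(caffeine_remaining(15))  # 12.5 mg after three
```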
Does drinking water offset the exertion-related dehydration that caffeine and other stimulants tend to cause?
Let Teenagers Sleep
As a teenager I was diagnosed with "ADD". Very intelligent, but unable to focus or complete assignments.
My habits then were carb-heavy, unhealthy food (from the cafeteria), soda, lack of sleep due to a long school commute, and not much exercise.
As an adult, I eat no carbs, all meat and vegetables, I work from home, and I sleep in as late as I need every night. My thinking is laser sharp.
They tried to medicate me with all sorts of anti-depression drugs and amphetamines. Turns out I was just very unhealthy, from a basic lifestyle perspective. And the school was pushing that lifestyle. My guidance counselor suggested I not attend college, just a community college (despite the fact that I got admitted to a decent state school), or maybe go into the "trades".
These large scale school systems treat students like cattle, with zero regard for the long term effects. Many mental health concerns would disappear if people were actually healthy in the most basic sense.
/? ADHD and sleep; REM / non-REM: https://www.google.com/search?q=adhd+and+sleep
Sleep hygiene: https://en.wikipedia.org/wiki/Sleep_hygiene https://www.google.com/search?q=sleep+hygiene
- Enough exercise and water
- Otherwise, limit calories and/or protein in the preceding hours
Sleep induction: https://en.wikipedia.org/wiki/Sleep_induction
Pranayama; breathing yoga: https://en.wikipedia.org/wiki/Pranayama
4-7-8 breathing: https://www.google.com/search?q=4-7-8+breathing
Attributed both to Army and Navy: https://www.fastcompany.com/90253444/what-happened-when-i-tr... :
> The Independent says the technique was first described in a book from 1981 called "Relax and Win: Championship Performance" by Lloyd Bud Winter.
/? "Relax and Win: Championship Performance" https://www.google.com/search?q=%22Relax+and+Win%3A+Champion... https://archive.org/details/Relax-and-Win_Championship-Perfo...
Why is there so much useless and unreliable software?
Linear logic has been known since 1987. The first release of Coq (dependent types for functional programming and writing proofs) was in 1989. The HoTTBook came out in 2013. Ada/SPARK 2014 came out the same year as Java 8 did. We also witnessed the Software Foundations series, the CompCert C compiler, the Sel4 microkernel, and the SPARKNaCl cryptographic library.
Instead of learning about those achievements and aiming to program for the same reliability, clarity, and sophistication, we see an abundance of software that cannot clearly describe their own behavior nor misbehavior.
Instead of incorporating the full functionality of XML/HTML/CSS/SVG/JS/WebGL into the development experience and providing ways to control them at the fundamental level, we reinvent crude approximations like the various web frameworks.
YAML and JSON often trump XML/XSD until things get out of control, and even then people still don't learn the lesson. Protobuf, FlatBuffers, Cap'n Proto, and the like keep reinventing ASN.1.
Naive microservices partially reimplement Erlang's BEAM VM while ignoring all the hard parts that the BEAM VM got right. Many people riding the microservice bandwagon have never even heard of Paxos, not to mention TLA+.
Many programmers keep learning new shining frameworks but are reluctant to learn about the crucial fundamentals, e.g., Introduction to Parallel Algorithms and Architectures, nor how to think clearly and unambiguously in the spirit of Coq/Agda/Lean.
No wonder ChatGPT exposes how shallow most of programming is and how lacking most programmers are in actual understanding. Linear logic and dependent types are there to help us design and think with clarity at a high level, but people would rather fumble around with OOP class hierarchies (participate in the pointless is-a/has-a arguments) and "architecture" design that only complicate things.
What is this madness? This doesn't sound like engineering.
The adoption curve of advanced technologies that solve it all. Is it just the cognitive burden of the tooling or the concepts, are the training methods different, a dearth of already trained talent and sufficient post-graduate or post-doctoral instructors, unclear signals between demand and a supply funnel?
The limited availability of Formal Methods and Formal Verification training.
The growing demand for safety critical software for things that are heavy and that move fast and that fly over our heads and our homes.
In order to train, you need to present URLs: CreativeWork(s) in an outline (a tree graph), identify competencies (typically according to existing curricula) and test for comprehension, and what else is necessary to boost retention for and of identified competencies?
There are many online courses, but so few on Computer Science Education. You can teach with autograded Jupyter notebooks and your own edX instance all hosted in OCI containers in your own Kubernetes cloud, for example. Containers and Ansible Playbooks help minimize cost and variance in that part of the learning stack.
We should all learn Coq and TLA+ and Lean, especially. What resources and traversal do you recommend for these possibly indeed core competencies? For which domains are no-code tools safe?
If we were to instead have our LLM (,ChatGPT,Codex,) filter expression trees in order to autocomplete from only Formally Verified code with associated Automated Tests, and e.g. Lean Mathlib, how would our output differ from that of an LLM training on code that may or may not have any tests?
Could that also implement POSIX and which other interfaces, please?
> The adoption curve of advanced technologies that solve it all. Is it just the cognitive burden of the tooling or the concepts, are the training methods different, a dearth of already trained talent and sufficient post-graduate or post-doctoral instructors, unclear signals between demand and a supply funnel?
> The limited availability of Formal Methods and Formal Verification training.
From "Are software engineering “best practices” just developer preferences?" https://news.ycombinator.com/item?id=28709239 :
>>> From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :
>>>> Which universities teach formal methods?
>>>> - q=formal+verification https://www.class-central.com/search?q=formal+verification
>>>> - q=formal+methods https://www.class-central.com/search?q=formal+methods
>>>> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs? https://news.ycombinator.com/item?id=28513922
Formal methods should be required course or curriculum competency for various Computer Science and Software Engineering credentials.
> The growing demand for safety critical software for things that are heavy and that move fast and that fly over our heads and our homes.
"The Case of the Killer Robot"
> In order to train, you need to present URLs: CreativeWork(s) in an outline (a tree graph), identify competencies (typically according to existing curricula) and test for comprehension, and what else is necessary to boost retention for and of identified competencies?
A multi-track "ThingSequence" which covers the identified competencies; and there are URIs for the competencies and for the exercises that cover them.
> There are many online courses, but so few on Computer Science Education. You can teach with autograded Jupyter notebooks and your own edX instance all hosted in OCI containers in your own Kubernetes cloud, for example. Containers and Ansible Playbooks help minimize cost and variance in that part of the learning stack.
Automation with tooling is necessary to efficiently compete. In order to support credentialing workflows that qualify point-in-time performance, it is advisable to adopt tools that support automation of grading, for example.
> We should all learn Coq and TLA+ and Lean, especially. What resources and traversal do you recommend for these possibly indeed core competencies?
Links above. When should Formal Methods be introduced in a post-secondary or undergraduate program? Is there any reason that a curriculum can't go from math proofs, to logical proofs, to logical argumentation?
> For which domains are no-code tools safe?
Unfortunately, no-code tools in even low-risk applications can become critical vulnerabilities if people do not understand their limitations. For example: "we used to keep already-printed [offline] forms on hand [before the new computerized workflows [enabled by no-code, no-review development workflows]]".
Should there be 2 or 3 redundant processes with an additional component that discards low-level outputs if there is no consensus? Would that, plus no-code tools, be safe?
> If we were to instead have our LLM (,ChatGPT,Codex,) filter expression trees in order to autocomplete from only Formally Verified code with associated Automated Tests, and e.g. Lean Mathlib, how would our output differ from that of an LLM training on code that may or may not have any tests?
You should not be autocompleting from a training corpus of code without tests.
> Could that also implement POSIX and which other interfaces, please?
Eventually, AI will produce an OS kernel that is interface compatible with what we need to run existing code.
How can experts assess whether there has been sufficient review of an [AI-generated] alternative interface implementation?
Not everybody should learn TLA+. It's a pretty niche tool and it's not helpful for a lot of kinds of software!
What are some of the limits of TLA+ for dynamic analysis? Which other tools model variable-latency distributed systems?
I don't think that most software is a variable-latency distributed system. I wouldn't use it for a simulation, for example, or anything which depends at a high level on non-integer numbers (TLA+ can't model-check floats or reals).
The Rust Implementation of GNU Coreutils Is Becoming Remarkably Robust
I wonder why not reimplement coreutils as library functions, to be used within an ad-hoc REPL. It would be much more flexible and extensible.
AFAIK, originally the reason why they were made as programs was that the available programming languages were too cumbersome for such use. Now we have plenty of experience making lightweight scripting languages that are pleasant to use in a live environment (vs premade script), so why give up the flexibility of ad-hoc scripting?
Isn't what you're describing basically a unix "shell" like bash, csh, zsh, etc?
I agree that something like bash leaves a lot to be desired in REPL functionality. Its ubiquity is convenient, however.
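A minimal sketch of the coreutils-as-library idea in Python (the function names `ls`, `cat`, `wc_l`, and `grep` here are illustrative, not part of any real library): because the results are real lists and strings, they compose with the host language directly instead of through byte pipes.

```python
# Sketch: a few coreutils-style operations exposed as plain functions,
# usable interactively from any REPL instead of spawning child processes.
import re
from pathlib import Path

def ls(path="."):
    """Return directory entries as a sorted list (like `ls`)."""
    return sorted(p.name for p in Path(path).iterdir())

def cat(*files):
    """Concatenate file contents into one string (like `cat`)."""
    return "".join(Path(f).read_text() for f in files)

def wc_l(path):
    """Count newline-terminated lines (like `wc -l`)."""
    return Path(path).read_text().count("\n")

def grep(pattern, path):
    """Return lines matching a regex (like `grep`)."""
    return [line for line in Path(path).read_text().splitlines()
            if re.search(pattern, line)]
```

From a live session you could then write e.g. `[f for f in ls() if wc_l(f) > 100]`, which is exactly the ad-hoc composability the shell's text pipes approximate.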
DIY 1,500W solar power electric bike (2022)
It's RadWagon 4 with white paint job, upgraded brakes/power controller, and solar panel at the top of the stock roof.
Not quite what I expected when I read the title.
Btw, the panel is 50W, and I assume the battery is stock, 672 Wh. That's 13+ hours of best-case sunlight to recharge fully. I wonder how practical that is.
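The back-of-envelope math, assuming the stock 672 Wh pack, the 50 W panel at rated output, and an assumed charger/controller efficiency (the 0.85 figure is a guess, not from the article):

```python
# Rough solar-recharge estimate; real-world panel output is below rating.
battery_wh = 672          # stock RadWagon 4 pack capacity
panel_w = 50              # rated panel output
charge_efficiency = 0.85  # assumed charger/controller losses

hours_ideal = battery_wh / panel_w
hours_realistic = battery_wh / (panel_w * charge_efficiency)

print(round(hours_ideal, 1))      # ~13.4 h at full rated output
print(round(hours_realistic, 1))  # ~15.8 h with the assumed losses
```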
Yeah, makes more sense to leave the solar panel in your backyard (saving some weight) and have spare battery always sitting on the solar charger.
Then you can bring your bike inside, in the shade, etc. — not having to always find sunlight to park in.
The only way bringing the solar panel with the bike makes sense is a multi-day ride. But as is being pointed out, you're going to be spending a lot more time sunbathing rather than biking under power.
<wagon in tow> Yeah, Good call boss.
The cost figures in this article about [rooftop] wind do not take into account the latest-gen [Dyneema] ultralight rooftop solar:
"Rooftop wind energy innovation claims 50% more energy than solar at same cost" (2022) https://pv-magazine-usa.com/2022/10/14/rooftop-wind-energy-i...
> The scalable, “#motionless” #WindEnergy unit can produce 50% more energy than rooftop solar at the same cost, said the company.
> The technology leverages aerodynamics similar to #airfoils in a race car to capture and amplify each building’s airflow. The unit requires about 10% of the space required by solar panels and generates round-the-clock energy. Aeromine said unlike conventional wind turbines that are noisy, visually intrusive, and dangerous to migratory birds, the patented system is motionless and virtually silent.
> An #Aeromine system typically consists of 20 to 40 units installed on the edge of a building facing the predominant wind direction. The company said the unit can minimize energy storage capacity needed to meet a building’s energy needs, producing energy in all weather conditions. With a small footprint on the roof, the unit can be combined with rooftop solar, providing a new tool in the toolkit for decarbonization and energy independence.
"18 Times More Power: MIT Researchers Have Developed Ultrathin Lightweight Solar Cells" (2022) https://scitechdaily.com/18-times-more-power-mit-researchers... :
> When they tested the device, the MIT researchers found it could generate 730 watts of power per kilogram when freestanding and about 370 watts-per-kilogram if deployed on the high-strength Dyneema fabric, which is about 18 times more power-per-kilogram than conventional solar cells.
> “A typical rooftop solar installation in Massachusetts is about 8,000 watts. To generate that same amount of power, our fabric photovoltaics would only add about 20 kilograms (44 pounds) to the roof of a house,” he says.
> They also tested the durability of their devices and found that, even after rolling and unrolling a fabric solar panel more than 500 times, the cells still retained more than 90 percent of their initial power generation capabilities.
E.g. Hyperlite Mountain Gear sells Dyneema ultralight backpacking packs and coats. There are Dyneema Patch Kits that work for various types of gear.
Wise to look at ultralight backpacking gear before buying regular camping gear. Solarcore Aerogel is warm and light, and also encased in PVA foam rubber, which is like a new wetsuit. https://twitter.com/westurner/status/1600820322567041024 Kayaking bags are waterproof, but are there yet Dyneema ones?
You mightn't have understood?
730-370 watts/kilogram is the number to beat (for DIY electric bicycle applications)
And rooftop wind is competitive (for charging offline batteries)
Presumably, bicycling is like ultralight hiking: Wh/kg is the (or a) limit https://en.wikipedia.org/wiki/Kilowatt-hour
A pedaling electric bicycler could tow a solar wagon, eh
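A rough panel-mass comparison using the MIT W/kg figures quoted above (the 200 W continuous-charging target for a bike is an assumption for illustration, not from either article):

```python
# Mass of photovoltaic material needed at different specific powers (W/kg).
target_w = 200                    # assumed continuous charging target for an e-bike
freestanding_w_per_kg = 730       # MIT cell, freestanding
on_dyneema_w_per_kg = 370         # MIT cell on Dyneema fabric
conventional_w_per_kg = 370 / 18  # ~18x less than the fabric cells

for w_per_kg in (freestanding_w_per_kg, on_dyneema_w_per_kg,
                 conventional_w_per_kg):
    print(round(target_w / w_per_kg, 2), "kg")
```

At these numbers the fabric cells are roughly half a kilogram versus nearly ten kilograms of conventional panel, which is what makes a towed or bike-mounted array plausible at all.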
Actors Say They’re Being Asked to Sign Away Their Voice to AI
This is starting to sound eerily similar to that BoJack Horseman plotline, where he won an Oscar for a movie he never acted in, because he signed away his likeness and the director just used a 3d model of him because he was always too drunk to be on set.
In all honesty though, this could be great news - imagine not needing to do reshoots because the dialogue was bad, and being able to fill in a few sentences with this technology instead. Of course, it will be abused and used to avoid compensating actors properly for their work.
But if it develops rapidly I could imagine games and animation using "stock voices" in their productions, just like designers often use stock imagery. I'm sure someone is happy to sign away their voice for a few grand, and provide it to a stock voice library.
S1M0̸NE (2002) https://en.wikipedia.org/wiki/Simone_(2002_film) Frankenstein.
Her (2013) https://en.wikipedia.org/wiki/Her_(film) She's too smart for you anyway.
Upload (2020, 2022) https://en.wikipedia.org/wiki/Upload_(TV_series) She buys in game credits for his existence.
Faust / The Little Mermaid: https://en.wikipedia.org/wiki/Faust#Cinematic_adaptations
"Final cut, Suit!" - Billy Walsh, Entourage https://en.wikipedia.org/wiki/List_of_recurring_Entourage_ch...
Final cut privilege: https://en.wikipedia.org/wiki/Final_cut_privilege :
> typically reluctant
Why do we create modern desktop GUI apps using HTML/CSS/JavaScript? (2022)
Because Web Standards are Portable and Accessible.
You can recreate the accessible GUI widget tree universe in ASM or WASM, but it probably won't be as accessible as standard HTML form elements unless you spend more time than you have for that component on it.
For example, video game devs tend to do this: their very own text widget (and hopefully a ui scaling factor) with a backgroundColor attribute - instead of CSS's background-color - and then it doesn't support tabindex or screen readers or high contrast mode or font scaling.
It's a widget tree with events either way, but Web Standards are Portable and Accessible (with a comparative performance cost that's probably worth it).
> Because Web Standards are Portable and Accessible.
A good http-friendly GUI standard would be also (as I describe nearby). But do note that businesses still use mostly desktops for everyday CRUD and don't want to pay a large "mobile tax" if given a choice. YAGNI has been ignored, as the standards over-focused on social media and e-commerce at the expense of typical CRUD. CRUD ain't sexy but necessary and common.
It's not easy to do both desktop and mobile UI's well and inexpensively in one shot. I believe there are either inherent trade-offs between them, or that the grand unification UI framework has yet to be invented. (At least one that doesn't require UI rocket science.)
Flutter > See also is a good list: https://en.wikipedia.org/wiki/Flutter_(software)
/? flutter python ... TIL about flet: https://github.com/flet-dev/flet https://flet.dev/docs/guides/python/getting-started
Mobile-first development says develop the mobile app first and the desktop version can get special features later; one responsive layout for phone, tablet, and desktop.
Phosh (GTK) and KDE Plasma Mobile are alternatives to iOS and Android (which do have a terminal, bash, git, and a way to install CPython (w/ MambaForge ARM64 packages) and IPython)
Ask HN: Advice from people who strength train from home
I have a pullup bar and a Lebert Equalizer kind of thing. I live in a small room at my university. I am planning on training bodyweight or calisthenics as it is called popularly.
HNers who train from home using minimal weights or equipments, can you suggest a path for me.
I am looking for some hints on:
1. What is the bare minimum balanced routine I can start with?
2. How long should I stick to it before I see or feel actual results from it?
3. Diet? Eggs are easily available for me.
4. A bit about your journey. How you started and how have you progressed on parameters of strength, routine, size, energy, etc.
P.S: I came across this youtube channel: https://www.youtube.com/@Kboges where he suggests that to gain strength and general fitness you can train daily with 3 movements but not to failure. Is it possible?
My goals are to have enough muscle and strength so that I don't get tired doing chores lifting something for my household. I want this to go far into my old age so that I don't fall and spend my final years in a nursing home bed.
(Secondhand) Total Gym XLS & 4x 20lb 5gal bucket concrete weights on 1.25" fittings, Basketball filled with sand, occasional Yoga, lately very occasional Inline Skating up and down a hill with wrist guards and a MIPS helmet with a visor
(Soccer, then former middle-of-the-pack distance running; then also weight training / resistance training in a Weider cage with a 45lb bar and adjustable spotter bars, high pulley, low pulley, leg extension; then Bowflex; and now TotalGym, and I prefer it. Lol, you watch the infomercial and you see the testimonials and you think "nobody's actually that happy with their equipment", and still I really do enjoy this equipment. (I have never been paid to endorse any fitness product or book.))
Calisthenics says that "time under tension" is more relevant than number of reps.
The TotalGym is a decent to good partner stretcher. "Don't slam the stack, and don't waste an opportunity to let it stretch you out"
Dance music has a higher tempo. Various apps will generate workout playlists with AFAIU a BPM ladder
The "high protein foods" part of Keto.
My understanding of Keto: if you eat too much sugar (including starchy carbs) without protein, the body learns to preferentially burn sugar and wastes the protein; so eat protein all day. We do need carbs to efficiently process protein. (And we do need fat for our brains: there is a reference DV daily value for fat for a reason. Ranch, Whole Milk, and Peanut Butter have fat.)
Omega-3s and Omega-6s would be listed under "Polyunsaturated Fats" if it were allowed to list them on the standard Nutrition Facts label instead of elsewhere on the packaging.
Omega 6:3 balance apparently affects endocannabinoid levels. Endocannabinoids help regulate diet and inflammation.
Antioxidants: Vitamins A, C, E,
Electrolytes: Water + Salt + Potassium (H20 + NaCl + K)
There are many lists of foods to eat for inflammation and inflammatory conditions.
Foods rich in anthocyanins tend to be high in nutrients; for example, blueberries, chard, and other dark leafy vegetables and fruits. Blueberry smoothie; premixed, frozen, fresh and washed with sprayed water+vinegar in a clip-on colander.
Pressure cooking is a relatively healthy way to prepare protein.
You can make a dozen hard-boiled or soft-boiled eggs with one instant pot pressure cooker in ~20 minutes and with less water than conventional boiling.
50g/day of Protein is 100% of the DV for a 2000 calorie diet, when you're not trying to gain muscle mass. Bodybuilders consume at least 100g or 150g of protein a day.
Tired of e.g. tuna, eggs, beef; I learned of "The No Meat Athlete Cookbook: Whole Food, Plant-Based Recipes to Fuel Your Workouts—and the Rest of Your Life", which has a bunch of ideas for vegan and vegetarian protein and nutritionally-balanced meals.
FWIU, vegan and vegetarian diets tend to have some common issues like e.g. magnesium deficiency.
I can't be mad at the lion for being omnivorous; but frustrated at the lion for being greedy and selfish in regards to ecology and smell. /? S tool reading infographic
Protein bars (20g: Huel, Aldi), Protein bread (10g/slice), Muscle Milk Pro (50g), Huel, Filtered water: Wide-mouth water bottles, Fish Oil (Omega-3s DHA & EPA), Olive Oil (<~380°F), spray Avocado Oil (Aldi), Peanut Butter, Whole Milk, Multigrain Cheerios over Total (in the 100% DV cereals category),
Supergrains: flax, chia, shelled hemp seed. Super grains mixed into peanut butter = $25 health food store peanut butter.
Healthy Eating Plate: Water, Fruits, Grains, Veggies, Protein. USDA MyPlate: people need* Milk/Dairy (which is indeed basically impossible to create a synthetic analogue of, in terms of formula or otherwise).
Some greens have little more nutritional content than water and fiber. AFAIU, Chard is as nutritious as Spinach, which has iron (which is what Popeye eats to hopefully eventually woo Olive Oil)
Ice water diminishes appetite. Bread has filling carbs that you can eat with protein* (to stay closer to "ketosis", for example)
HIIT says don't rest for more than 15 seconds / 45 seconds between exercises / sets
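A minimal sketch of that HIIT rest rule as an interval schedule (the exercise names and the 40-second work duration are illustrative assumptions; only the 15 s / 45 s rest caps come from the comment above):

```python
# Build a HIIT-style schedule: short rests between sets,
# longer rests only between exercises, per the 15s/45s rule.
def hiit_schedule(exercises, sets_per_exercise=3, work_s=40,
                  rest_between_sets_s=15, rest_between_exercises_s=45):
    schedule = []
    for i, name in enumerate(exercises):
        for s in range(sets_per_exercise):
            schedule.append((name, work_s))
            if s < sets_per_exercise - 1:
                schedule.append(("rest", rest_between_sets_s))
        if i < len(exercises) - 1:
            schedule.append(("rest", rest_between_exercises_s))
    return schedule

total_s = sum(d for _, d in hiit_schedule(["push", "pull", "squat"]))
print(total_s)  # 540 seconds, i.e. a 9-minute circuit
```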
My TotalGym workouts now are much more aerobic than when I started getting back to healthy and counted reps. I've put off adding more weight to the carriage bar (which certain TG models literally support) for like a year, and I've rounded-out in areas I mightn't have as a result.
I don't miss free weights, stacks / universal machines. After trying rings with nobody else around in the backyard, the TotalGym wins.
Certain (my parents') Bowflex units can't be upgraded with more or heavier tension rods; though the weighted non-inclined rowing is cool too.
Watched a few "Bodybuilding on a budget" [how to buy protein at a discount grocery store] yt videos. I usually eat cold, but Instant Pot for the win; TIL grill char is carcinogenic. With a second Instant Pot, you can do veggies separately from protein, which takes much longer to cook.
Show HN: DocsGPT, open-source documentation assistant, fully aware of libraries
Hi, This is a very early preview of a new project, I think it could be very useful. Would love to hear some feedback/comments
From https://news.ycombinator.com/item?id=34659668 :
>> How do the responses compare to auto-summarization in terms of Big E notation and usefulness?
> Automatic summarization: https://en.wikipedia.org/wiki/Automatic_summarization
> "Automatic summarization" GH topic: https://github.com/topics/automatic-summarization
Though now archived,
> Microsoft/nlp-recipes lists current NLP tasks that would be helpful for a docs bot: https://github.com/microsoft/nlp-recipes#content
NLP Tasks: Text Classification, Named Entity Recognition, Text Summarization, Entailment, Question Answering, Sentence Similarity, Embeddings, Sentiment Analysis, Model Explainability, and Auto-Annotation
On further review, there are more GitHub projects labeled with https://github.com/topics/text-summarization than "automatic-summarization"; e.g. awesome-text-summarization: https://github.com/icoxfog417/awesome-text-summarization and https://github.com/luopeixiang/awesome-text-summarization , which links to what look like relatively current benchmarks for SOTA performance in text summarization from the gh source repo of https://nlpprogress.com/ : https://github.com/sebastianruder/NLP-progress/blob/master/e...
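For comparison, a minimal frequency-based extractive summarizer - the classic baseline the repos above improve on. This is a sketch of the general technique, not any particular library's API:

```python
# Naive extractive summarization: score each sentence by the corpus-wide
# frequency of its words, then return the top-k sentences in original order.
import re
from collections import Counter

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:k]
    # Re-sort the selected sentences by original position.
    return " ".join(s for _, i, s in sorted(top, key=lambda t: t[1]))
```

Neural abstractive models generate new wording instead of selecting sentences, which is why their outputs need the quality benchmarks (e.g. ROUGE) tracked on nlpprogress.com.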
Show HN: I turned my microeconomics textbook into a chatbot with GPT-3
I never really read my micro-econ textbook.
Looking up concepts from the book with Google yields SEO-y results.
So I used GPT-3 to make a custom chatbot I can query at any time.
How do the responses compare to auto-summarization in terms of Big E notation and usefulness?
> How do the responses compare to auto-summarization in terms of Big E notation and usefulness?
Automatic summarization: https://en.wikipedia.org/wiki/Automatic_summarization
"Automatic summarization" GH topic: https://github.com/topics/automatic-summarization
Microsoft/nlp-recipes lists current NLP tasks that would be helpful for a docs bot: https://github.com/microsoft/nlp-recipes#content
ChatGPT is a bullshit generator but it can still be amazingly useful
Prompt engineering: https://en.wikipedia.org/wiki/Prompt_engineering
/? inurl:awesome prompt engineering "llm" site:github.com https://www.google.com/search?q=inurl%3Aawesome+prompt+engin...
XAI: Explainable Artificial Intelligence & epistemology https://en.wikipedia.org/wiki/Explainable_artificial_intelli... :
> Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), [1] is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI. [2] It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. [3][4] By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. [5] XAI may be an implementation of the social _right to explanation_. [6] XAI is relevant even if there is no legal right or regulatory requirement. For example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. This way the aim of XAI is to explain what has been done, what is done right now, what will be done next and unveil the information the actions are based on. [7] These characteristics make it possible (i) to confirm existing knowledge (ii) to challenge existing knowledge and (iii) to generate new assumptions. [8]
Right to explanation: https://en.wikipedia.org/wiki/Right_to_explanation
(Edit; all human)
/? awesome "explainable ai" https://www.google.com/search?q=awesome+%22explainable+ai%22
- (Many other great resources)
- https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master... :
> Post model-creation analysis, ML interpretation/explainability
/? awesome "explainable ai" "XAI" https://www.google.com/search?q=awesome+%22explainable+ai%22...
I for one rue this commandeering of the word "engineering". No, most activities involving stuff are not "engineering". Especially flailing weakly from a distance at an impenetrable tangle of statistical correlations. What a disservice we are doing ourselves.
A more logged approach with IDK all previous queries in a notebook and their output over time would be more scientific-like and thus closer to "Engineering": https://en.wikipedia.org/wiki/Engineering
> Engineering is the use of scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings.[1] The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application.
"How and why (NN, LLM, AI,) prompt outputs change over time; with different models, and with different training data" {@type=[schema:Review || schema:ScholarlyArticle], schema:dateModified=}
Query:
(@id, "prompt", XML:lang, w3c:prov-enance)
QueryResult:
(@id, query.id, datetime, api_service_uri, "prompt_ouput")
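A sketch of that logged approach: append every prompt and response as a timestamped JSONL record so that output drift across models and dates can be diffed later. The field names mirror the Query/QueryResult sketch above and are illustrative:

```python
# Append-only prompt/response log for comparing model outputs over time.
import datetime
import json
import uuid

def log_prompt(path, prompt, output, api_service_uri, lang="en"):
    """Append one prompt/response record to a JSONL log; return its @id."""
    record = {
        "@id": str(uuid.uuid4()),
        "prompt": prompt,
        "xml:lang": lang,
        "prompt_output": output,
        "api_service_uri": api_service_uri,
        "datetime": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["@id"]
```

Re-running the same prompts against a new model version and diffing the `prompt_output` fields by `prompt` is then a plain data-analysis task rather than guesswork.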
#aiart folks might know
Sh1mmer – An exploit capable of unenrolling enterprise-managed Chromebooks
I wouldn't have a career in IT if I hadn't spent many hours at ages 11 to 15 trying to get round my schools network security. My logon was frequently disabled for misuse and I was even suspended for a couple of days once but I learnt more that way than in any class I've ever taken.
I relate to this. As someone currently in high school, messing around with web proxies, code deployment sites, and web-based IDEs trying to run Dwarf Fortress in my school browser has taught me more about computers and networks than just about anything else. It is painfully easy to get around school filters these days. I've never really messed with unenrollment because you do need enrollment to access the testing websites, but I've been trying to get into Developer Mode to get Linux apps; the IT guys must have thought ahead on that one.
Chromebooks don't even have a Terminal for the kids. Vim's great, but VScode with Jupyter Notebook support would make the computers we bought for them into great offline calculators, too.
VSCode on a Chromebook requires VMs and Containers which require "Developer Tools" and "Powerwash"; or the APK repack of VSCodium that you can't even sideload and manually update sometimes (because it's not on the 15-30% cut, and must use their payment solution, app store with static analysis and code signing at upload).
AFAIU, Chromebooks with Family Link and Chromebooks for Education do not have a Terminal, bash, git, VMs (KVM), Containers (Docker/Podman/LXC/LXD/gvisor), third-party repos with regular security updates, or even Python; which isn't really Linux (and Windows, Mac, and Linux do already at present support such STEM for Education use cases).
From https://news.ycombinator.com/item?id=30168491 :
> Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"? The current pyodide CPython Jupyter kernel takes like ~25s to start at present, and can load Python packages precompiled to WASM or unmodified Python packages with micropip: https://pyodide.org/en/latest/usage/loading-packages.html#lo...
There's also MambaLite, which is part of the emscripten-forge project; along with BinderLite. https://github.com/emscripten-forge/recipes (Edit: Micropip or Mambalite or picomamba or Zig. : "A 116kb WASM of Blink that lets you run x86_64 Linux binaries in the browser" https://news.ycombinator.com/item?id=34376094 )
It looks like there are now tests for VScode in the default Powerwash-able 'penguin' Debian VM that you get with Chromebook Developer Tools; but still the kids are denied VMs and Containers or local accounts (with kid-safe DoH/DoT at least) and so they can't run VScode locally on the Chromebooks that we bought for them.
Why do I need "Developer Tools" access to run VScode and containers on a Chromebook; but not on a Windows, Mac or Linux computer? If containers are good enough for our workloads hosted in the cloud, they should be good enough for local coding and calculating in e.g. Python. https://github.com/quobit/awesome-python-in-education#jupyte...
Good point. Wasn't aware of the Family Link restrictions. Will see what can be done here.
Disclaimer: I work on ChromeOS.
VSCode + containers + the powerwash feature would enable kids to STEM.
Are flatpaks out of the question? Used to be "Gnome and Chrome" on ~Gentoo.
Shouldn't the ChromiumOS host be running SELinux, if the ARC support requires extended filesystem attributes for `ls -alZ` and `ps auxZ` to work?
Chromium and Chrome appear to be running unconfined? AppArmor for Firefox worked years ago?
https://www.google.com/search?q=chromium+selinux ; chrome_selinux ?
It seems foolish to have SELinux in a guest VM but not the host.
Task: "Reprovision" the default VMs and Containers after "Powerwash" `rm -rf`s everything
`adb shell pm list packages` and `adb install` a list of APKs and CRXs.
Here's chromebook_ansible: https://github.com/seangreathouse/chromebook-ansible/blob/ma...
Systemd-homed is portable. Still, "Reprovision" the broken userspace for the user.
Local k8s like microshift that does container-selinux like RH / Fedora, with Gnome and Waydroid would be cool to have for the kids.
Podman-desktop (~Docker Desktop) does k8s now.
K8s defaults to blocking containers that run as root now, and there's no mounting the --privileged docker socket w/ k8s either. Gitea + DroneCI/ACT/ci_runner w/ rootless containers. gVisor is considered good enough for shared server workloads.
Repo2docker + caching is probably close to "kid proof" or "reproducible".
VScode has "devcontainer.json". Scipy stacks ( https://jupyter-docker-stacks.readthedocs.io/en/latest/using... ) and Kaggle/docker-python (Google) take how many GB to run locally, for users under 13 for whom we don't afford cloud shells with SSH (Colab with SSH, JupyterHub (TLJH w/ k8s)) either.
Task: Learn automated testing, bash, git, and python (for Q12 K12CS STEM)
>> *Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"?"
"ENH: Terminal and Shell: BusyBox, bash/zsh, git; WebVM," https://github.com/jupyterlite/jupyterlite/issues/949
I actually use a Web Assembly port of VIM on my school computer.
Nice. TIL about vim.wasm: https://github.com/rhysd/vim.wasm
Jupyter Notebook and Jupyter Lab have a web terminal that's good enough to do SSH and Vim. Mosh Mobile Shell is more resilient to internet connection failure.
Again though, Running everything in application-sandboxed WASM all as the current user is a security regression from the workload isolation features built into VMs and Containers (which Windows, Mac, and Linux computers support in the interests of STEM education and portable component reuse).
When Will Fusion Energy Light Our Homes?
It’s lighting my home right now. Looks like clear skies today.
We get free EM radiation from the free nuclear fusion reaction at the center of our solar system; and all of the other creatures find that sufficient for survival.
None of the other creatures require energy to heat their homes, grow food, travel, keep the lights on, or ship cargo in a large inter-connected grid of supply that feeds billions of people.
Everything is also manufactured out of petroleum derivatives. Without it, we go back to making literally everything out of wood and metal, or not making it. 90% of the items you have contact with everyday is made with some kind of petroleum derivative.
EV vehicles are impossible to manufacture without petroleum, so this is certainly a lot more nuanced than just free energy from space...
Given the petroleum is made from creatures[0] which were entirely powered by sunlight, it should be clear that petroleum can be produced by sunlight.
[0] mostly plants, IIRC
Well, not all plastics are made from petroleum and not all that are made from petroleum need to be made from petroleum, it’s just cheaper because there’s a ton left over. Without a petroleum based economy we would either use petroleum for plastics or use other sources of monomers. Plastic would certainly be a lot more expensive, though, especially during a transition phase to other monomer sources. But it’s simply untrue to say petroleum is required to build EVs, it’s just that today that’s what is used as the basis monomers for the plastics.
It’s also factually true, despite a seeming loss of recognition in the last decade, that fossil fuels are not limitless. We might as well not wait for exhaustion before planning about what to do next, no?
Those are "biopolymers", "biocomposites", but like Soy Dream not "bioplastics"?
FWIU, algae, cellulose, and flax or hemp are strong candidates for sustainable eco-friendly products and packaging.
No I was thinking of monomers like bio plastics. But sure, biopolymers etc can work too as replacements. Silicon based monomer/polymers are also very promising. Carbon is a small amount of the earth, while silicon is extraordinarily plentiful. Silicone rubbers are probably the most common example. They use silicon instead of carbon for the backbone.
uh huh, uh huh, and WHY do you suppose EVERYTHING involves the use of petroleum?
It's a classic capitalist evasion of externalities.
Every single drill site is toxic waste disaster, that no one ever has to pay for (except of course, the impoverised who live down stream).
To ignore the many gigawatts of free energy beamed in from space because petroleum will still be used in some way is evading the question.
The fact remains: free fusion power is heating and lighting our homes every time we open a window shade.
And if the combined mafia influence of the construction-mafia and the petro-mafia didn't have us building houses with NO relation to how they point at the sun, we would use that energy to MUCH greater efficiency...
have you ever been to a drill site? tons where I live. I'd have a picnic out there today if it weren't so cold. Gotta watch for h2s I suppose.
Yes, in fact I worked on them. Enjoy your picnic on top of all the sludge buried next to the wellhead...
Fusion energy is 30 years away and always will be. https://www.youtube.com/watch?v=JurplDfPi3U
This was before ignition was confirmed in a lab setting though so now the countdown has actually begun
"Ignition" was as much a marketing term as anything. It relied on a very specific accounting of energy-in vs energy-out, which no one doubted could be done.
The fact is that they used 50 kWh of energy and produced 0.7 kWh of energy. The fact that at some tiny part of the flow diagram we achieved > 1:1 energy doesn't change the fact that the actual fraction of energy out compared to energy in has barely changed in ten years.
The latest experiment produced a Q-total of 0.014, while before it was something like 0.012.
We can't just hand-wave away the energy in.
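The distinction in numbers, using the figures as quoted in this thread (NIF's own accounting is in megajoules, but the ratio is the same):

```python
# Q_total counts wall-plug energy in, not just laser energy delivered
# to the target ("scientific breakeven" vs "engineering breakeven").
energy_in_kwh = 50    # approximate wall-plug energy for the laser shot
energy_out_kwh = 0.7  # fusion yield

q_total = energy_out_kwh / energy_in_kwh
print(round(q_total, 3))  # 0.014
```

A power plant would need Q_total comfortably above 1 after conversion losses, so "ignition" (target gain > 1) is a milestone for the physics, not for the economics.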
Watched the Helion Learn Engineering video, too: https://www.youtube.com/watch?v=_bDXXWQxK38
Has that net-positive finding been reproduced yet in any other Tokamaks?
How does this compare to Helion's (non-Tokamak, non-Stellarator fusion plasma confinement reactor) published stats for the Trenta and Polaris products?
Could SYLOS or other CPA Chirped Pulse Amplification lasers be useful for this problem with or without the high heat of a preexisting plasma reaction to laser pulse next to? https://www.google.com/search?q=nuclear+waste+cpa
Tell HN: GitHub will delete your private repo if you lose access to the original
I was surprised to see this in my email today:
> Your private repository baobabKoodaa/laaketutka-scripts (forked from futurice/how-to-get-healthy) has been deleted because you are no longer a collaborator on futurice/how-to-get-healthy.
That was an MIT-licensed open source project I worked on years ago. We published the source code for everyone to use, so I certainly did not expect to lose access to it just because someone at my previous company has been doing spring cleaning at GitHub! I had a 100% legal fork of the project, and now it's gone... why?
Turns out I don't even have a local copy of it anymore, so this actually caused me data loss. I'm fine with losing access to this particular codebase, I'm not using HN as customer support to regain access. I just wanted everyone to be aware that GitHub does this.
I have a number of git repos that the original developers deleted - because I sync’d them to a usb stick with gitea. I think that is how you have to do it - never entrust a service, especially a free one, with your only copy of anything you value.
If the YouTube algorithm nukes your account and all your videos, you should be ready to upload them to a new account. Same with anything else digital.
My current standard is one copy in AWS S3, which is super reliable but too pricey for daily use, and one copy in Cloudflare R2 or Backblaze B2, which may or may not be reliable (time will tell) but is hella cheap for daily use.
> because I sync’d them to a usb stick with gitea
Just a tip: no need to use gitea if you want to replicate a git repository to somewhere else on disk/other disk.
Just do something like this:
mkdir /media/run/usb-drive/my-backup-repo
(cd /media/run/usb-drive/my-backup-repo && git init)
git remote add backup /media/run/usb-drive/my-backup-repo
git push backup master
And now you have a new repository at /media/run/usb-drive/my-backup-repo with a master branch :) It's just a normal git repository that you can also push to over just the filesystem.

Even better with:
cd /media/run/usb-drive/my-backup-repo && git init --bare
Bare repositories don't have a working directory. You can still git clone / git pull from them to get the contents. You can also git push to them without clobbering any "local changes" (there aren't any). More detail here:
https://www.atlassian.com/git/tutorials/setting-up-a-reposit...
Yeah, better in terms of saving space, but I think it confuses some people, hence I didn't use it in my above example. The previous time I recommended the `push to a directory` way of copying a git repository to a co-worker, I had them create a bare repository, and they ended up going into the directory to verify it worked and not seeing what they expected. Cue me having to explain the difference between a normal repository and a bare one. It also confused them into thinking that a bare repository isn't just another git repository but a "special" one you can sync to, while the normal one you couldn't.
So in the end, simple is simple :) Unless you're creating remote repositories at scale, you probably won't notice a difference in storage usage.
I hear all that, but --bare is necessary in this case because git (by default) won't let you push to a non-bare filesystem branch:
~/temp/a:master $ git push backup
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 212 bytes | 212.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: is denied, because it will make the index and work tree inconsistent
remote: with what you pushed, and will require 'git reset --hard' to match
remote: the work tree to HEAD.
...
To ../b
! [remote rejected] master -> master (branch is currently checked out)
error: failed to push some refs to '../b'
git clone --mirror
git clone --bare
git push --mirror
git push --all
"Is `git push --mirror` sufficient for backing up my repository?"
https://stackoverflow.com/questions/3333102/is-git-push-mirr... :

> So it's usually best to use --mirror for one-time copies, and just use normal push (maybe with --all) for normal uses.
git push: https://git-scm.com/docs/git-push
git clone: https://git-scm.com/docs/git-clone
The Qubit Game (2022)
"World Quantum Day: Meet our researchers and play The Qubit Game" https://blog.google/technology/research/world-quantum-day-me... :
> In celebration of World Quantum Day, the Google Quantum AI team wanted to try a different way to introduce people to the world of quantum computing. So we teamed up with Doublespeak Games to make The Qubit Game – a playful journey to building a quantum computer, one qubit at a time, while solving challenges that quantum engineers face in their daily work. If you succeed, you’ll discover new upgrades for your in-game quantum computer, complete big research projects, and hopefully become a little more curious about how we’re building quantum computers.
Additional Q12 (K12 QIS Quantum Information Science) ideas?:
- Exercise: Port QuantumQ quantum puzzle game exercises to a quantum circuit modeling and simulation library like Cirq (SymPy) or qiskit or tequila: https://github.com/ray-pH/quantumQ
- Exercise: Model fair random coin flips with qubit basis encoding in a quantum circuit simulator in a notebook
- Exercise: Model fair (uniformly distributed) 6-sided die rolls with basis state embedding or amplitude embedding or better (in a quantum circuit simulator in a notebook)
- QIS K-12 Framework (for K12 STEM, HS Computer Science, HS Physics) https://q12education.org/learning-materials-framework
- tequilahub/tequila-tutorials: https://github.com/tequilahub/tequila-tutorials
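The fair-coin-flip exercise above can even be sketched without a circuit library: below is a pure-Python statevector sketch (a real solution would use Cirq or Qiskit as suggested) that prepares |0>, applies a Hadamard gate, and reads out Born-rule probabilities.

```python
import math
import random

inv_sqrt2 = 1 / math.sqrt(2)
ket0 = (1.0, 0.0)                    # amplitudes of |0> and |1>
# Hadamard gate as a 2x2 matrix
H = ((inv_sqrt2, inv_sqrt2),
     (inv_sqrt2, -inv_sqrt2))
# Apply H to |0>: the result is (|0> + |1>) / sqrt(2)
state = (H[0][0] * ket0[0] + H[0][1] * ket0[1],
         H[1][0] * ket0[0] + H[1][1] * ket0[1])
# Born rule: measurement probabilities are the squared amplitudes
probs = (state[0] ** 2, state[1] ** 2)   # both 0.5: a fair coin
flip = random.choices((0, 1), weights=probs)[0]  # one simulated measurement
```

The 6-sided-die exercise is the same idea with three qubits in basis-state encoding (rejecting the two unused outcomes) or amplitude embedding.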
Calculators now emulated at Internet Archive
MAME: https://en.wikipedia.org/wiki/MAME
"TI-83 Plus Calculator Emulation" https://archive.org/details/ti83p-calculator
TI-83 series: https://en.wikipedia.org/wiki/TI-83_series :
> Symbolic manipulation (differentiation, algebra) is not built into the TI-83 Plus. It can be programmed using a language called TI-BASIC, which is similar to the BASIC computer language. Programming may also be done in TI Assembly, made up of Z80 assembly and a collection of TI provided system calls. Assembly programs run much faster, but are more difficult to write. Thus, the writing of Assembly programs is often done on a computer.
I had a TI-83 Plus in middle school, and then bought a TI-83 Plus Silver edition for high school. The TI-83 Plus was the best calculator allowed for use by the program back then. FWIU these days it's the TI-84 Plus, which has USB but no CAS Computer Algebra System.
The JupyterLite build of JupyterLab - and https://NumPy.org/ - include the SymPy CAS Computer Algebra System and a number of other libraries; and there's an `assert` statement in Python; but you'd need to build your own JupyterLab WASM bundle to host as static HTML if you want to include something controversial like pytest-hypothesis. https://jupyterlite.rtfd.io/
Better than a TI-83 Plus emulator? Install MambaForge in a container to get the `conda` and `mamba` package managers (and LLVM-optimized CPython on Win, Mac, Lin) and then `mamba install -y jupyterlab tabulate pandas matplotlib sympy`; or login to e.g. Google Colab, Cocalc, or https://Kaggle.com/learn ( https://GitHub.com/Kaggle/docker-python ) .
To install packages every time a notebook runs:
!python -m pip install <pkgs>  # or
%pip install <pkgs>  # or
!conda install -y <pkgs>  # or
!mamba install -y <pkgs>
But NumPy.org, JupyterLite, Colab, and Kaggle Learn all already have a version of SymPy installed (per their reproducible software version dependency files: requirements.txt, environment.yml (Jupyter REES; repo2docker)).

Like MAME, which is the emulator for the TI-83 Plus and other calculators hosted by this new Internet Archive project, Emscripten-forge builds WASM (WebAssembly) that runs in an application-sandboxed browser tab as the same user as other browser tab subprocesses.
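As a flavor of the symbolic manipulation the TI-83 Plus lacks and a CAS provides, here is a toy power-rule differentiator in pure Python (SymPy's `diff` does this and vastly more; the dict representation is only for illustration):

```python
# A polynomial sum(c * x**e) represented as a dict {exponent: coefficient}.
def diff_poly(poly):
    """Differentiate term by term via the power rule: d/dx c*x^e = c*e*x^(e-1)."""
    return {e - 1: c * e for e, c in poly.items() if e != 0}

# d/dx (3x^2 + 5x + 7) = 6x + 5
p = {2: 3, 1: 5, 0: 7}
dp = diff_poly(p)
```

With SymPy the equivalent is `sympy.diff(3*x**2 + 5*x + 7, x)` for a Symbol `x`.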
TI-83 apps:
ACT Math Section app; /? TI-83 ACT app: https://www.google.com/search?q=ti83+act+app
Commodity markets with volatility on your monochrome LCD calculator with no WiFi. SimCity BuildIt has an online commodity marketplace and sims as part of the simulation game. "Category:TI-83&4 series Zilog Z80 games" https://en.wikipedia.org/wiki/Category:TI-83%264_series_Zilo...
Computer Algebra System > Use in education: https://en.wikipedia.org/wiki/Computer_algebra_system#Use_in... :
> CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms[15] though it may be permitted on all of College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests and the AP Calculus, Chemistry, Physics, and Statistics exams.
Machine Learning for Fluid Dynamics Playlist
[Machine Learning for] "Fluid Dynamics" YouTube playlist. Steve Brunton (UW) https://youtube.com/playlist?list=PLMrJAkhIeNNQWO3ESiccZmPss...
Intercepting t.co links using DNS rewrites
That 9-hop shortening example is disgusting, I wonder if that could be alleviated by introducing some protocol:
1. Make all shortening services append a `This-is-a-shortening-service: true` header to all the responses they send.
2. When a link is added to a shortening service, check if the response from the link has the header above and resolve the destination, recursively.
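The two steps above can be sketched offline; the header name `This-Is-A-Shortening-Service` and the table of responses below are hypothetical (a real resolver would issue HTTP HEAD requests):

```python
# url -> (response headers, redirect target); a stand-in for live HTTP lookups
SHORTENERS = {
    "https://t.co/a": ({"This-Is-A-Shortening-Service": "true"}, "https://bit.ly/b"),
    "https://bit.ly/b": ({"This-Is-A-Shortening-Service": "true"}, "https://example.org/article"),
    "https://example.org/article": ({}, None),
}

def resolve(url, max_hops=10):
    """Follow shortener hops until a non-shortener responds, guarding against loops."""
    for _ in range(max_hops):
        headers, target = SHORTENERS[url]
        if headers.get("This-Is-A-Shortening-Service") != "true":
            return url  # reached the real destination
        url = target
    raise RuntimeError("too many hops (possible shortener loop)")
```

The `max_hops` guard also bounds pathological chains like the 9-hop example.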
awesome-url-shortener: https://github.com/738/awesome-url-shortener
/? shorturl api OpenAPI https://www.google.com/search?q=shorturl+api+openapi
- TinyURL OpenAPI: https://tinyurl.com/app/dev
- GH topic: url-shortener: https://github.com/topics/url-shortener
A https://schema.org/Thing may have zero or more https://schema.org/url and/or https://schema.org/identifier values; and then there's the ?s subject URI, which is specified with the `@id` property in JSON-LD RDF.
You can add string, schema:Thing, or URI tags/labels with the https://schema.org/about property.
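A sketch of those properties in JSON-LD (the URIs and identifier values here are made up for illustration):

```python
import json

thing = {
    "@context": "https://schema.org",
    "@type": "Thing",
    "@id": "https://example.org/things/42",      # the ?s subject URI
    "url": "https://example.org/things/42.html",  # schema.org/url
    "identifier": "thing-42",                     # schema.org/identifier
    # schema.org/about accepts strings, Things, or URI references:
    "about": ["url shorteners", {"@id": "https://example.org/topics/urls"}],
}
doc = json.dumps(thing, indent=2)
```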
MusicLM: Generating music from text
I don't really understand why this approach is pushed for music. You can overpaint an image, but you can't do that with a song. Cutting an image to reintroduce coherence is easy too. For a song you need MIDI, or another symbolic representation. That was the approach of pop2piano (unfortunately it is limited to covers, not generating from scratch). And even if a generated song is OK, listening to half an hour full of AI mistakes is really tiring. With a symbolic representation you could at least fix the mistakes if there is one good output.
I understand what you're saying, although it could be argued that at least for some types of image tasks one would prefer something like an SVG output with layers to make it easier to edit.
For music, I think it's partly an academic question of "can we do it" rather than trying to maximize immediate practical usefulness. There's already quite a bit of work on symbolic music generation (mostly MIDI), a lot of it quite competent, especially in more constrained domains like NES chiptunes or classical piano, so a full text-to-audio pipeline probably seemed a more interesting research problem.
And for a lot of use cases, where people might truly not care too much about tweaking the output to their liking, the generated audio might be good enough; the examples were pretty plausible to my ear, if somewhat lo-fi sounding (probably because it's operating at 24kHz, compared to the more standard 44-48kHz).
In the future a more hybrid approach probably makes sense for at least some applications, where MIDI is generated along with some way of specifying the timbre for each instrument (hopefully something better than General MIDI, though even that would be fun; not sure if it's been done). I'm sure that in the near future we'll see a lot more work in the DAW and plugin space to have these kinds of things built in, but in a way that they can be edited by the user.
awesome-sheet-music lists a number of sheet music archives https://github.com/ad-si/awesome-sheet-music
Other libraries of (Royalty Free, Public Domain) sheet music:
Explainable artificial intelligence: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
FWIU, current LLMs can't yet do explainable AI well enough to satisfy the optional Attribution clause of e.g. Creative Commons licenses?
"Sufficiently Transformative" is the current general copyright burden according to precedent; Transformative use and fair use: https://en.wikipedia.org/wiki/Transformative_use
SQLAlchemy 2.0 Released
I would urge people who have had issues with the documentation to give the 2.0 documentation a try. Many aspects of it have been completely rewritten, both to correctly describe things in terms of the new APIs as well as to modernize a lot of old documentation that was written many years ago.
First off, SQLAlchemy's docs are pretty easy to get to, for a direct link just go to:
It's an esoteric URL I know! ;)
from there, docs that are new include:
- the Quickstart, so one can see in one quick page what SQLAlchemy usually looks like: https://docs.sqlalchemy.org/en/20/orm/quickstart.html
- the Unified Tutorial, which is a dive into basically every important concept across Core / ORM : https://docs.sqlalchemy.org/en/20/tutorial/index.html
- the ORM Querying guide, which is a "how to" for a full range of SQL generation: https://docs.sqlalchemy.org/en/20/orm/queryguide/index.html
it's still a lot to read of course but part of the idea of SQLAlchemy 2.0 was to create a straighter and more consistent narrative, while it continues to take on a very broad-based problem space. If you compare the docs to those of like, PostgreSQL or MySQL, those docs have a lot of sections and text too (vastly more). It's a big library.
From https://docs.sqlalchemy.org/en/20/dialects/ :
> Currently maintained external dialect projects for SQLAlchemy include: [...]
Is there a list of async [SQLA] DB adapters?
The SQLAlchemy 2.0 Release Docs: https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.htm...
Show HN: A script to test whether a program breaks without network access
"Chaos engineering" https://en.wikipedia.org/wiki/Chaos_engineering
dastergon/awesome-chaos-engineering#notable-tools: https://github.com/dastergon/awesome-chaos-engineering#notab...
IIUC, MVVM apps can handle delayed messages - that sit in the outbox while waiting to reestablish network connectivity - better than apps without such layers.
Which mobile apps work during intermittent connectivity scenarios like disasters and disaster relief (where first priority typically is to get comms back online in order to support essential services (with GIF downloads and endless pull-to-refresh))?
Certified 100% AI-free organic content
> Published content will be later used to train subsequent models, and being able to distinguish AI from human input may be very valuable going forward
I find this to be a particularly interesting problem in this whole debacle.
Could we end up having AI quality trend downwards due to AI ingesting its own old outputs and reinforcing bad habits? I think it's a particular risk for text generation.
I've already run into scenarios where ChatGPT generated code that looked perfectly plausible, except for that the actual API used didn't really exist.
Now imagine a myriad fake blogs using ChatGPT under the hood to generate blog entries explaining how to solve often wanted problems, and that then being spidered and fed into ChatGPT 2.0. Such things could end up creating a downwards trend in quality, as more and more of such junk gets posted, absorbed into the model and amplified further.
I think image generation should be less vulnerable to this since all images need tagging to be useful, "ai generated" is a common tag that can be used to exclude reingesting old outputs, and also because with artwork precision doesn't matter so much. If people like the results, then it doesn't matter that much that something isn't drawn realistically.
As someone in SEO, I've been pretty disgusted by the desire for site owners to want to use AI-generated content. There are various opinions on this, of course, but I got into SEO out of interest in the "organic web" vs. everything being driven by ads.
Love the idea of having AI-Free declarations of content as it could / should help to differentiate organic content from generated content. It would be very interesting if companies and site owners wished to self-certify their site as organic with something like an /ai-free.txt.
I don't see the point. There's lots of old content out there that won't get tagged, so lacking the tag doesn't mean it's AI generated. Meanwhile people abusing AI for profit (eg, generating AI driven blogs to stick ads on them) wouldn't want to tag their sites in a way that might get them ignored.
And what are the consequences for lying?
Does use of a search engine violate the "No AI" covenant with oneself?
Variation on the Turing Test: prove that it's not a human claiming to be a computer.
Modeling premises and Meta-analysis are again necessary elements for critical reasoning about Sources and Methods and superpositions of Ignorance and Malice.
Maybe this could encourage the recreation of the original Yahoo! (If you don't remember, Yahoo! started out not as a search engine in the Google sense but as a collection of human curated links to websites about various topics)
I consider Wikipedia to be a massive curated set of information. It also includes a lot of references and links to additional good information / source materials. Companies try to get spin added and it's usually very well controlled. I worry that a lot of ai generated dreck will seep into Wikipedia, but I am hopeful the moderation will continue to function well.
List of Web directories: https://en.wikipedia.org/wiki/List_of_web_directories ; DMOZ FTW
Distributed Version Control > Work model > Pull Request: https://en.wikipedia.org/wiki/Distributed_version_control#Pu...
sindresorhus/awesome: https://github.com/sindresorhus/awesome#contents
bayandin/awesome-awesomeness: https://github.com/bayandin/awesome-awesomeness
"Help compare Comment and Annotation services: moderation, spam, notifications, configurability" https://github.com/executablebooks/meta/discussions/102
Re: fact checks, schema.org/ClaimReview, W3C Verifiable Claims, W3C Verifiable News & Epistemology: https://news.ycombinator.com/item?id=15529140
W3C Web Annotations could contain (cryptographically-signed (optionally with a W3C DID)) Verifiable Claims; comments with signed Linked Data
An incomplete guide to stealth addresses
> Basic stealth addresses can be implemented fairly quickly today, and could be a significant boost to practical user privacy on Ethereum. They do require some work on the wallet side to support them
So how easy is it realistically? I hope it's not going to be un-ergonomic like PGP, where novices are sometimes seen pasting their private key into e-mails and sending things in plaintext which should have been ciphertext, or otherwise leaking info.
I imagine you have to be really careful not to mess things up here.
Oh, there's WKD: Web Key Directory https://wiki.gnupg.org/WKD#How_does_an_email_client_use_WKD....
gpg --homedir "$(mktemp -d)" --verbose --locate-keys your.email@example.org
https://example.org/.well-known/openpgpkey/hu/0t5sewh54rxz33fwmr8u6dy4bbz8itz2
Is there a pinned certificate for `gpg --recv-keys` (that isn't possible with WKD)? https://en.wikipedia.org/wiki/Key_server_(cryptographic)#Pro...

WKD and HKP depend upon TLS and preshared CA certs (PKI or pinned certificates) in all forms AFAIU:
# HKP, HTTPS
gpg --recv-keys an.email@example.org
# WKD
gpg --locate-keys your.email@example.org
Who is trusted with read/write to all keys on the HTTP pubkey server?

W3C DIDs are encodable into QR codes, too. And key hierarchy is optional with DIDs.
(Edit)
https://www.w3.org/TR/did-core/#did-controller :
> DID Controller
> A DID controller is an entity that is authorized to make changes to a DID document. The process of authorizing a DID controller is defined by the DID method.
> The controller property is OPTIONAL. If present, the value MUST be a string or a set of strings that conform to the rules in 3.1 DID Syntax. The corresponding DID document(s) SHOULD contain verification relationships that explicitly permit the use of certain verification methods for specific purposes.
> When a controller property is present in a DID document, its value expresses one or more DIDs. Any verification methods contained in the DID documents for those DIDs SHOULD be accepted as authoritative, such that proofs that satisfy those verification methods are to be considered equivalent to proofs provided by the DID subject.
/? "Certificate Transparency" blockchain / dlt ... QKD, ... Web Of Trust and temp keys
What does the Interledger Protocol say about this in-band / in-channel signaling around transactions?
https://westurner.github.io/hnlog/ Ctrl-F "SPSP"
> https://github.com/interledger/rfcs/blob/master/0009-simple-... :
> Relation to Other Protocols: SPSP is used for exchanging connection information before an ILP payment or data transfer is initiated
RFC 8905 specifies "The 'payto:' URI Scheme for Payments" and does support ILP addresses https://www.rfc-editor.org/rfc/rfc8905.html#name-tracking-pa... https://datatracker.ietf.org/doc/rfc8905/ :
> 7. Tracking Payment Target Types
> A registry of "Payto Payment Target Types" is described in Section 10. The registration policy for this registry is "First Come First Served", as described in [RFC8126]. When requesting new entries, careful consideration of the following criteria [...]
DID URIs are probably also already payto: URI-scheme compatible but not yet so registered?
ILP Addresses - v2.0.0 https://interledger.org/rfcs/0015-ilp-addresses/ :
> ILP addresses provide a way to route ILP packets to their intended destination through a series of hops, including any number of ILP Connectors. (This happens after address lookup using a higher-level protocol such as SPSP.) Addresses are not meant to be user-facing, but allow several ASCII characters for easy debugging.
Do Large Language Models learn world models or just surface statistics?
If they don't search for Tensor path integrals, for example, can any NN or symbolic solution ever be universally sufficient?
A generalized solution term expression for complex quantum logarithmic relations:
e**(w*(I**x)*(Pi**z))
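For instance, with w = x = z = 1 the expression above reduces to Euler's identity e**(i*pi) = -1, which can be checked numerically:

```python
import cmath

# e**(w * (I**x) * (Pi**z)) with w = x = z = 1 is e**(i*pi)
w, x, z = 1, 1, 1
value = cmath.exp(w * (1j ** x) * (cmath.pi ** z))
# value is -1, up to a tiny floating-point residue in the imaginary part
```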
What sorts of relation expression term forms do LLMs synthesize from?

Can [LLM XYZ] answer prompts like:
"How far is the straight-line distance from (3red, 2blue, 5green) to (1red, 5blue, 7green)?"
> - What are "Truthiness", Confidence Intervals and Error Propagation?
> - What is Convergence?
> - What does it mean for algorithmic outputs to converge given additional parametric noise?
> - "How certain are you that that is the correct answer?"
> - How does [ChatGPT] handle known-to-be or presumed-to-be unsolved math and physics problems?
> - "How do we create room-temperature superconductivity?"
"A solution for room temperature superconductivity using materials and energy from and on Earth"
> - "How will planetary orbital trajectories change in the n-body gravity problem if another dense probably interstellar mass passes through our local system?"
Where will a tracer ball be after time t in a fluid simulation ((super-)fluid NDEs Non-Differential Equations) of e.g. a vortex turbine next to a stream?
How do General Relativity, Quantum Field Theory, Bernoulli's, Navier Stokes, and the Standard Model explain how to read and write to points in spacetime and how do we solve gravity?
Did I forget to cite myself (without a URL)? Notable enough for a citation it isn't.
"[Edu-sig] ChatGPT for py teaching" (2023) Editing Python Mailing List. (2023)
No, LLMs do not learn a sufficient world model for answering basic physics questions that aren't answered in the training corpus; and, AGI-strength AI is necessary for ethical reasoning given the liability in that application domain.
Hopefully, LLMs can at least fill in with possible terms like '4π' given other uncited training corpus data. LLMs are helpful for Evolutionary Algorithmic methods like mutation and crossover, but then straight-up ethical selection.
Ask [the LLM] to return a confidence estimate when it can't know the correct answer, as with hard and thus valuable e.g. physics problems. What tone of voice did Peabody take in explaining to Sherman, and what does an LLM emulate?
Thoughts on the Python packaging ecosystem
This is a great writeup by a central figure in Python packaging, and gets to the core of one of Python packaging's biggest strengths (and weaknesses): the PyPA is primarily an "open tent," with mostly independent (but somewhat standards-driven) development within it.
Pradyun's point about unnecessary competition rings especially true to me, and points to (IMO) a hard reality about where the ecosystem needs to go: at some point, there need to be some prescriptions about the one good tool to use for 99.9% of use cases, and with that will probably come some hurt feelings and disregarded technical opinions (including possibly mine!). But that's what needs to happen in order to produce a uniform tooling environment and UX.
The primary (probably build and) packaging system for a software application should probably support the maximum level of metadata sufficient for downstream repackaging tools.
Metadata for the ultimate software package should probably include a sufficient number of attributes in its declarative manifest:
- Package namespace and name,
- Per-file paths and checksums, and
- At least one cryptographic signature from the original publisher.
Whether the server has signed what was uploaded is irrelevant if it and the files within don't match a publisher signature at upload time?
And then there's the permissions metadata, the ACLs and context labels to support any or all of: SELinux, AppArmor, Flatpak, OpenSnitch, etc.. Neither Python packages nor conda packages nor RPM support specifying permissions and capabilities necessary for operation of downstream packages.
You can change the resolver, but the package metadata would need to include sufficient data elements for Python packaging to be the ideal uni-language package manager imho
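A hypothetical declarative manifest sketching those metadata fields; none of the field names below come from an existing packaging standard, and the values are illustrative only:

```python
manifest = {
    "namespace": "example-org",
    "name": "example-pkg",
    "version": "1.0.0",
    "files": [
        # per-file path plus checksum (this sha256 is of the empty string)
        {"path": "example_pkg/__init__.py",
         "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    ],
    # publisher signature(s) over the file list, checked at upload time
    "publisher_signatures": [{"keyid": "EXAMPLE", "sig": "..."}],
    # permissions/capabilities for SELinux, AppArmor, Flatpak, OpenSnitch, ...
    "permissions": {
        "network": False,
        "filesystem": ["read:$HOME/.config/example-pkg"],
    },
}
```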
Google Calls in Help from Larry Page and Sergey Brin for A.I. Fight
It’s interesting that Google—the inventor of the “T” in GPT—is on red alert due to ChatGPT. And Google has a bunch of research in AI (see Dean’s recent blogpost [1]), so what gives with their lack of AI product/execution/strategy? Too scared to upend their existing cash cow maybe, aka innovator’s dilemma?
Also, it doesn’t inspire confidence in Pichai that Page and Brin were consulted about something that should be at the forefront of Google’s strategy. Maybe that’s too harsh, but I just find all this surprising.
[1] https://ai.googleblog.com/2023/01/google-research-2022-beyon...
Why would you say they are on "red alert"? Sounds like lazy investors pumping or dumping to me.
Google already has very large LLMs online for search and other applications.
A similar take on similar spin: https://twitter.com/westurner/status/1614002846394892288
From April 2022, "Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance" https://ai.googleblog.com/2022/04/pathways-language-model-pa... :
> In recent years, large neural networks trained for language understanding and generation have achieved impressive results across a wide range of tasks. GPT-3 first showed that large language models (LLMs) can be used for few-shot learning and can achieve impressive results without large-scale task-specific data collection or model parameter updating. More recent LLMs, such as GLaM, LaMDA, Gopher, and Megatron-Turing NLG, achieved state-of-the-art few-shot results on many tasks by scaling model size, using sparsely activated modules, and training on larger datasets from more diverse sources. Yet much work remains in understanding the capabilities that emerge with few-shot learning as we push the limits of model scale.
> […] More recent LLMs, such as GLaM, LaMDA, Gopher, and Megatron-Turing NLG, achieved [SOTA] few-shot results on many tasks by scaling model size, using sparsely activated modules, and training on larger datasets from more diverse sources. Yet much work remains [for few-shot LLMs] /2
DeepMind Dreamerv3 is more advanced than all the LLMs.
I don’t think Google's issue is a lack of advanced AI technology but the transfer and realization of products. So many of Google’s products are half-baked, only half-heartedly developed, and then buried in the end. A giant like Google must be stuck in a rut, and to get free it needs all the help it can get. If they continue business as usual, I'm afraid they will experience a never-before-seen landslide. I believe calling back the founders at this critical time is the right move.
I disagree with your assessment of their comparative performance.
"Beyond Tabula Rasa: Reincarnating Reinforcement Learning" https://ai.googleblog.com/2022/11/beyond-tabula-rasa-reincar... :
> Furthermore, the inefficiency of tabula rasa RL research can exclude many researchers from tackling computationally-demanding problems. For example, the quintessential benchmark of training a deep RL agent on 50+ Atari 2600 games in ALE for 200M frames (the standard protocol) requires 1,000+ GPU days. As deep RL moves towards more complex and challenging problems, the computational barrier to entry in RL research will likely become even higher.
> To address the inefficiencies of tabula rasa RL, we present “Reincarnating Reinforcement Learning: Reusing Prior Computation To Accelerate Progress” at NeurIPS 2022. Here, we propose an alternative approach to RL research, where prior computational work, such as learned models, policies, logged data, etc., is reused or transferred between design iterations of an RL agent or from one agent to another. While some sub-areas of RL leverage prior computation, most RL agents are still largely trained from scratch. Until now, there has been no broader effort to leverage prior computational work for the training workflow in RL research. We have also released our code and trained agents to enable researchers to build on this work.
Feed-Forward with Prompt Engineering is like RL; which prompt elements should remain given objective or subjective error?
On-demand electrical control of spin qubits (2023)
"On-demand electrical control of spin qubits" (2023) http://dx.doi.org/10.1038/s41565-022-01280-4
> Once called a ‘classically non-describable two-valuedness’ by Pauli, the electron spin forms a qubit that is naturally robust to electric fluctuations. Paradoxically, a common control strategy is the integration of micromagnets to enhance the coupling between spins and electric fields, which, in turn, hampers noise immunity and adds architectural complexity. Here we exploit a switchable interaction between spins and orbital motion of electrons in silicon quantum dots, without a micromagnet. The weak effects of relativistic spin–orbit interaction in silicon are enhanced, leading to a speed up in Rabi frequency by a factor of up to 650 by controlling the energy quantization of electrons in the nanostructure. Fast electrical control is demonstrated in multiple devices and electronic configurations. Using the electrical drive, we achieve a coherence time T2,Hahn ≈ 50 μs, fast single-qubit gates with Tπ/2 = 3 ns and gate fidelities of 99.93%, probed by randomized benchmarking. High-performance all-electrical control improves the prospects for scalable silicon quantum computing. High-performance all-electrical control is a prerequisite for scalable silicon quantum computing. The switchable interaction between spins and orbital motion of electrons in silicon quantum dots now enables the electrical control of a spin qubit with high fidelity and speed, without the need for integrating a micromagnet.
Is this Quantum of Silicon; or Quantum Dots on Silicon?
Used to be that quantum dots were the next-level display tech beyond OLED, which doesn't require magnets either.
"Rowhammer for qubits: is it possible?" https://www.reddit.com/r/quantum/comments/7osud4/rowhammer_f... and its downstream mentions: https://news.ycombinator.com/item?id=27294577
"Bell's inequality violation with spins in silicon" (2015) https://arxiv.org/abs/1504.03112
I had heard that Bell's theorem actually means that there is a high error rate - 60%, I thought Wikipedia had said - in transmitting quantum states through entanglement relations with physical descriptions. Doesn't entangled satellite communication violate Bell's inequality, too?
Maybe call it and emissions a "Hot Tub Time Machine", eh?
Reverse engineering a neural network's clever solution to binary addition
The trick of performing binary addition by using analog voltages was used in the IAS family of computers (1952), designed by John von Neumann. It implemented a full adder by converting two input bits and a carry in bit into voltages that were summed. Vacuum tubes converted the analog voltage back into bits by using a threshold to generate the carry-out and more complex thresholds to generate the sum-out bit.
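A minimal Python sketch of that scheme (illustrative unit voltages and thresholds, not the IAS's actual levels): sum the three input bits as analog voltages, then threshold the total back into carry-out and sum-out bits.

```python
# Sketch (not IAS-accurate): a full adder that sums the three input bits
# as analog voltages, then thresholds the total back into digital bits.
def full_adder_analog(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum_out, carry_out) for one-bit inputs a, b, carry_in."""
    voltage = float(a + b + carry_in)        # analog summation of unit voltages
    carry_out = 1 if voltage >= 1.5 else 0   # simple threshold: two or more inputs high
    # The "more complex threshold" for sum-out: high when exactly 1 or all 3 are high.
    sum_out = 1 if (voltage >= 2.5 or 0.5 <= voltage < 1.5) else 0
    return sum_out, carry_out

# Exhaustive check against the digital truth table.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, co = full_adder_analog(a, b, c)
            assert co * 2 + s == a + b + c
```

The carry-out needs only one threshold; the sum-out needs a band plus a high threshold, which is the "more complex thresholds" the comment describes.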
Nifty, makes one wonder if logarithmic or sigmoid functions for ML could be done using this method. Especially as we approach the node size limit, perhaps dealing with fuzzy analog will become more valuable.
There are a few startups making analog ML compute. Mythic & Aspinity for example
https://www.eetimes.com/aspinity-puts-neural-networks-back-t...
Veritasium has a video on Mythic - Future Computers Will Be Radically Different (Analog Computing) - https://youtu.be/GVsUOuSjvcg
From "Faraday and Babbage: Semiconductors and Computing in 1833" https://news.ycombinator.com/item?id=32888210 and then "Qubit: Quantum register: Qudits and qutrits" https://news.ycombinator.com/item?id=31983110:
>>> The following is an incomplete list of physical implementations of qubits, and the choices of basis are by convention only: [...] Qubit#Physical_implementations: https://en.wikipedia.org/wiki/Qubit#Physical_implementations
> - note the "electrons" row of the table
According to this Table on wikipedia, it's possible to use electron charge (instead of 'spin') to do Quantum Logic with Qubits.
How is doing quantum logic computations with electron charge different from what e.g. Cirq or Tequila do (optionally with simulated noise to simulate the Quantum Computer Engineering hardware)?
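FWIW, at the simulation level there's no difference: simulators like Cirq apply unitary matrices to a statevector whichever physical implementation (spin, charge, photon) the basis labels stand for. A minimal NumPy sketch of that math, not Cirq's actual API:

```python
import numpy as np

# Minimal statevector sketch: the math a simulator performs is the same
# whether the physical qubit encodes spin or charge; the basis is a label.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
X = np.array([[0, 1], [1, 0]], dtype=complex)                # Pauli-X gate

state = H @ ket0                 # equal superposition of |0> and |1>
probs = np.abs(state) ** 2       # Born rule: measurement probabilities
assert np.allclose(probs, [0.5, 0.5])

flipped = X @ state              # X leaves |+> unchanged up to probabilities
assert np.allclose(np.abs(flipped) ** 2, [0.5, 0.5])
```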
FWIU, analog and digital component qualities are not within sufficient tolerance to do precise analog computation? (Though that's probably debatable for certain applications at least, but not for general-purpose computing architectures?) That is, while you can build adders out of voltage potentials quantized more finely than 0 or 1, you probably shouldn't without sufficiently tight component tolerances, because of noise and thus error.
IMHO, Turing Tumble and Spintronics are neat analog computer games.
(Are Qubits, by Church-Turing-Deutsch, sufficient to: 1) simulate arbitrary quantum physical systems; or 2) run quantum logical simulations as circuits with low error due to high coherence? https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93... )
>> See also: "Quantum logic gate" https://en.wikipedia.org/wiki/Quantum_logic_gate
Analog computers > Electronic analog computers aren't Electronic digital computers: https://en.wikipedia.org/wiki/Analog_computer#Electronic_ana...
Heat pumps of the 1800s are becoming the technology of the future
In a house I own in Melbourne, Australia, I just replaced an old gas central heating system with 3 new top of the line Daikin mini split heat pumps. The new units can heat the entire house for the same amount of electrical energy that was used to run the FAN in the old gas unit. They are crazy efficient.
Ducts are dead.
The Daikin Alira X is the gold-plated option and cost $8k AUD for 2x2.5kw and 1x7.1kw units including installation. Payback time is about 3 years. The system is oversized, but enables excellent zoning and of course provides cooling which is a must on 40C/104F days.
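The payback arithmetic can be sketched as follows; every figure below is an assumption for illustration, not taken from the installation above (which reports ~3 years for A$8k):

```python
# Illustrative payback arithmetic only; every figure below is an assumption.
system_cost_aud = 8000.0
old_heating_cost_per_year = 3300.0   # assumed annual gas + fan electricity bill
heat_demand_kwh = 8000.0             # assumed annual heating demand
heat_pump_cop = 4.0                  # assumed seasonal coefficient of performance
electricity_aud_per_kwh = 0.30       # assumed tariff

new_cost_per_year = (heat_demand_kwh / heat_pump_cop) * electricity_aud_per_kwh
payback_years = system_cost_aud / (old_heating_cost_per_year - new_cost_per_year)
print(round(new_cost_per_year), round(payback_years, 1))  # 600 3.0
```

The COP is the key lever: at COP 4, each kWh of electricity delivers 4 kWh of heat, which is why the running cost collapses relative to gas.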
Why do they seem to be so much more expensive in the US?
> Ducts are dead.
Ducts are still needed to circulate air, especially if you want to remove stale air (e.g., bathrooms, kitchen) and bring in (filtered) fresh air (to bedrooms).
Not being snarky but why don't you just open the window for that?
I have a CO2 detector that I believe is a reasonable proxy for stale air. When it goes above 1000 I simply open the windows. By the time I remember to close the windows the reading is almost always below 500.
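That airing-out behavior matches a simple first-order ventilation model; a toy sketch with assumed parameters:

```python
import math

# Toy first-order ventilation model; the air-change rate and outdoor CO2
# baseline are assumptions, not measurements from the comment above.
OUTDOOR_PPM = 420.0
ACH = 5.0  # assumed air changes per hour with windows wide open

def co2_after(start_ppm: float, hours: float) -> float:
    """CO2 decays exponentially toward the outdoor level at the air-change rate."""
    return OUTDOOR_PPM + (start_ppm - OUTDOOR_PPM) * math.exp(-ACH * hours)

print(round(co2_after(1000.0, 0.5)))  # after ~30 min of airing out -> 468
```

Under these assumptions a 1000 ppm reading drops below 500 in about half an hour, consistent with the experience described.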
Where I live, it can get very cold. Not always very efficient to open windows for 5 months out of the year.
A great option for keeping CO2 levels down in a house is with an HRV (or ERV) [1] that will heat the fresh air coming in to cycle it throughout the house.
From the ERV/HRV (Energy Recovery Ventilation / Heat Recovery Ventilation) wikipedia page: https://en.wikipedia.org/wiki/Energy_recovery_ventilation#Ty... :
> During the warmer seasons, an ERV system pre-cools and dehumidifies; During cooler seasons the system humidifies and pre-heats.[1] An ERV system helps HVAC design meet ventilation and energy standards (e.g., ASHRAE), improves indoor air quality and reduces total HVAC equipment capacity, thereby reducing energy consumption.
> ERV systems enable an HVAC system to maintain a 40-50% indoor relative humidity, essentially in all conditions. ERV's must use power for a blower to overcome the pressure drop in the system, hence incurring a slight energy demand.
As of Jan 2023, the ERV wikipedia article has a 'Table of Energy recovery devices by Types of transfer supported' (Total vs. Sensible):
> [ Total & Sensible transfer: Rotary enthalpy wheel, Fixed Plate ]
> [ Sensible transfer only: Heat pipe, Run around coil, Thermosiphon, Twin Towers ]
Latent heat: https://en.wikipedia.org/wiki/Latent_heat :
> In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Sensible heat: https://en.wikipedia.org/wiki/Sensible_heat
There's a broader Category:Energy_recovery page which includes heat pumps. https://en.wikipedia.org/wiki/Category:Energy_recovery
Are heat pumps more efficient than ERVs? Do heat pumps handle relative humidity in the same way as ERVs?:
IME you don't use an ERV alone. You'd use it _with_ a Heat Pump. The ERV is all about transferring heat from one airstream to another. It's _not_ a device that manages indoor temperatures though, just recovers some of the latent energy in the air. In the process it's also managing humidity mainly as a by-product. The humidification properties of the ERV allow you to run the Heat Pump in a more efficient manner. I have an HVAC geek friend who explained the whole process, but essentially (in non physics/fluid dynamics[?] terminology), if the Heat Pump doesn't need to dehumidify air it can operate more efficiently.
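The pre-heating an HRV/ERV core does can be sketched with the standard sensible-effectiveness relation (the 75% effectiveness value below is an assumed, illustrative figure, not a product spec):

```python
# Standard sensible-effectiveness relation for an HRV/ERV core.
def supply_temp(t_outdoor: float, t_indoor: float, effectiveness: float) -> float:
    """Fresh-air temperature after the core pre-heats (or pre-cools) it."""
    return t_outdoor + effectiveness * (t_indoor - t_outdoor)

# Winter example: -10 C outside, 21 C inside, 0.75 effective core (assumed).
print(supply_temp(-10.0, 21.0, 0.75))  # -> 13.25
```

So instead of heating -10 C outdoor air all the way up, the heat pump only has to lift it from ~13 C, which is the efficiency gain described above.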
How Nvidia’s CUDA Monopoly in Machine Learning Is Breaking
It is quite weird to talk about all the frameworks which are eventually built on top of CUDA, but not talk about ROCm or OpenCL.
OpenAI's Triton compiles down to CUDA atm (if I read their github right), and only supports Nvidia GPUs.
PyTorch 2.0's installation page only mentions CPU and CUDA targets; therefore it's effectively Nvidia GPUs only.
While all the frameworks and abstractions could offer other back-ends in theory the story of anything ML related on the other big name in the industry, AMD, is still poor.
If anybody loses business because of bad decisions, it is AMD, not Nvidia, which leads the whole industry. I am not convinced that anything will change in the near future.
From https://news.ycombinator.com/item?id=32904285 re: AMD ROCm and HIPIFY:
> AMD ROcm supports Pytorch, TensorFlow, MlOpen, rocBLAS on NVIDIA and AMD GPUs: https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni... [...]
> ROCm-Developer-Tools/HIPIFY https://github.com/ROCm-Developer-Tools/HIPIFY :
>> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced. [...]
From https://github.com/RadeonOpenCompute/clang-ocl :
> RadeonOpenCompute/OpenCL compilation with clang compiler
A better overview from the docs: "Machine Learning and High Performance Computing Software Stack for AMD GPU" https://rocmdocs.amd.com/en/latest/Installation_Guide/Softwa...
Large language models as simulated economic agents (2022) [pdf]
I love it. Instead of (a) running mathematical experiments that model human beings as utility-maximizing agents in a highly-simplified toy economy (easy and cheap, but unrealistic), or (b) running large-scale social experiments on actual human beings (more realistic, but hard and expensive), the authors propose (c) running large-scale experiments on large language models (LLMs) trained to respond, i.e., behave, like human beings. Recent LLMs seem to model human beings well enough for it!
Abstract:
> Newly-developed large language models (LLM)—because of how they are trained and designed—are implicit computational models of humans—a homo silicus. These models can be used the same way economists use homo economicus: they can be given endowments, information, preferences, and so on and then their behavior can be explored in scenarios via simulation. I demonstrate this approach using OpenAI’s GPT3 with experiments derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986) and Samuelson and Zeckhauser (1988). The findings are qualitatively similar to the original results, but it is also trivially easy to try variations that offer fresh insights. Departing from the traditional laboratory paradigm, I also create a hiring scenario where an employer faces applicants that differ in experience and wage ask and then analyze how a minimum wage affects realized wages and the extent of labor-labor substitution.
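The experimental loop the abstract describes can be sketched as follows; `ask_agent` is a stub standing in for a real LLM call, and its 80% response rate is an assumption for illustration. The scenario paraphrases the snow-shovel question from Kahneman, Knetsch & Thaler (1986), cited in the abstract.

```python
import random

# Sketch of the "homo silicus" loop: give each simulated agent a scenario
# prompt and tally choices. ask_agent is a stub standing in for an LLM call.
PROMPT = ("A hardware store raises snow-shovel prices from $15 to $20 the "
          "morning after a large snowstorm. Is this acceptable or unfair?")

def ask_agent(prompt: str, rng: random.Random) -> str:
    # Stub: a real experiment would send `prompt` to an LLM and parse the reply.
    # The 80% "unfair" rate here is an assumed stand-in, not a measured result.
    return "unfair" if rng.random() < 0.8 else "acceptable"

def run_experiment(n_agents: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    tally = {"unfair": 0, "acceptable": 0}
    for _ in range(n_agents):
        tally[ask_agent(PROMPT, rng)] += 1
    return tally

print(run_experiment(100))  # most simulated agents judge the price rise unfair
```

Varying the prompt (endowments, information, preferences) and re-running the tally is what makes it "trivially easy to try variations", per the abstract.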
Attempting to draw any kind of conclusions about the real world and human behaviour from a chatbot. Can't decide if this is hilarious or disturbing.
Is it so implausible that the training process that creates LLMs might learn features of human behavior that could then be uncovered via experimentation? I showed, empirically, that one can replicate several findings in behavioral economics with AI agents. Perhaps the model "knows" how to behave from these papers, but I think the more plausible interpretation is that it learned about human preferences (against price gouging, status quo bias, & so on) from its training. As such, it seems quite likely that there are other latent behaviors captured by LLMs and yet to be discovered.
> As such, it seems quite likely that there are other latent behaviors captured by LLMs and yet to be discovered.
>> What NN topology can learn a quantum harmonic model?
Can any LLM do n-body gravity? What does it say when it doesn't know; doesn't have confidence in estimates?
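As a baseline for that question, classical n-body gravity itself is only a few lines of code; a minimal sketch in G = 1 units:

```python
import numpy as np

# Minimal n-body gravity sketch (leapfrog integrator, G = 1 units); the kind
# of computation the question asks whether an LLM can carry out reliably.
def accelerations(pos, mass):
    """Pairwise Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so orbits stay stable."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# Circular two-body orbit: unit masses at separation 1, speed sqrt(0.5) each.
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
vel = np.array([[0.0, -0.7071], [0.0, 0.7071]])
mass = np.array([1.0, 1.0])
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=100)
print(np.linalg.norm(pos[0] - pos[1]))  # separation should stay near 1.0
```

A direct integrator has known error bounds; an LLM producing trajectories token-by-token has none, which is the confidence-reporting gap the question points at.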
>> Quantum harmonic oscillators have also found application in modeling financial markets. Quantum harmonic oscillator: https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
"Modeling stock return distributions with a quantum harmonic oscillator" (2018) https://iopscience.iop.org/article/10.1209/0295-5075/120/380...
... Nudge, nudge.
Behavioral economics: https://en.wikipedia.org/wiki/Behavioral_economics
https://twitter.com/westurner/status/1614123454642487296
Virtual economies do afford certain opportunities for economic experiments.
Homelab analog telephone exchange
You might want to consider looking at Asterisk, the open source PBX. They have a list of analog cards that they support.
Asterisk (PBX) > Derived products https://en.wikipedia.org/wiki/Asterisk_(PBX)
"What is the technology behind Sangoma Meet?" https://help.sangoma.com/community/s/article/What-is-the-tec...
> Sangoma Meet is based on WebRTC, which provides video conferencing and is supported by most of the major web browsers today.
> Our software stack is built upon several open source tools, including Jitsi Meet, FreeSWITCH™, HAProxy, Prometheus, Grafana, collectd, and other tools used for provisioning, deploying and managing the service.
FWIU, Sangoma Talk includes the Sangoma Meet functionality: https://www.sangoma.com/products/communications-services/tea...
> Mobile Soft Client: Take your company phone extension with you anywhere using a mobile soft client. Forward calls from the office, receive voicemails, start a video meeting, and much more! When you call your customers or clients through the Sangoma Talk app, they will see your office phone number, which allows you to maintain your personal device privacy. Available for iOS & Android devices.
GVoice (originally GrandCentral) can't do voice or video call transfer to the mobile soft client app, FWIU
A 116kb WASM of Blink that lets you run x86_64 Linux binaries in the browser
I believe it's for some Linux binaries, statically compiled. Having a portable subset of Linux is pretty cool though.
Maybe this turns into an anti-distro, a collection of portable apps not specific to a Linux distro? (Or Linux, even.)
From https://github.com/simonw/datasette-lite/issues/26 :
> Micropip or Mambalite or picomamba or Zig.
> "Better integration with conda/conda-forge for building packages" [pyodide/pyodide#795]( https://github.com/pyodide/pyodide/issues/795)
> Emscripten-forge > Adding packages: https://github.com/emscripten-forge/recipes#adding-packages
> - https://github.com/emscripten-forge/recipes/tree/main/recipe...
> -- emscripten-forge/recipes/blob/main/recipes/recipes_emscripten/picomamba/recipe.yaml: https://github.com/emscripten-forge/recipes/blob/main/recipe...
> --- mamba-org/picomamba: https://github.com/mamba-org/picomamba
From emscripten-forge/recipes https://github.com/emscripten-forge/recipes :
> Build wasm/emscripten packages with conda/mamba/boa. This repository consists of recipes for conda packages for emscripten. Most of the recipes have been ported from pyodide.
> While we already have a lot of packages built, this is still a big work in progress.
SQLite Wasm in the browser backed by the Origin Private File System
I think this will be great for extensions. Currently the only solid choice is to use the dead simple storage.local, which only allows retrieving things by ID.
There's one problem though: this new API is for the web, so the nature of this storage is temporary - obviously the user must be able to clear it when clearing site data - and this is what makes it currently an unviable solution for persistent extension data storage. https://bugs.chromium.org/p/chromium/issues/detail?id=138321...
I’m getting Flash flashbacks. Flash apps had filesystem access in a designated area, with some amount of user control, that is not easily wiped from the browser; Flash games used that for saves and data. Years after the demise of Flash, the web platform is still catching up.
It's different though, extensions are like local apps and deserve persistent storage, while you are talking about how Flash was being used by remote sources. Also my proposed approach is that this data would be wiped when the extension is removed from the browser - which is what happens for storage.local
Some way to indicate which "WASM contexts" (?) have utilized how much disk space would be great for the open source, multiple-implementation not-Flash (the web platform), too.
From https://news.ycombinator.com/item?id=32953286 https://westurner.github.io/hnlog/#story-32950199 :
> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tab tabs according to their relative resource utilization
And then that tabs are (sandboxed) subprocesses running as the same user though.
Containers may have unique SELinux MCS labels, and browser tab processes probably should too.
containers/container-selinux: https://github.com/containers/container-selinux
https://github.com/kai5263499/awesome-container-security/iss...
> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU, RAM, Disk, [GPU, TPU, QPU] (Linux: cgroups,)
Like Flash
The i3-gaps project has been merged with i3
Hi. I'm the maintainer of i3-gaps and also a maintainer for i3.
The story of this merge is not only several years long, but a true success story in OSS in my eyes.
I took on i3-gaps by taking an existing patch and rebasing it to the latest i3 HEAD. From there it became popular and I took on the maintainership, eventually contributing to i3 itself and finally becoming a maintainer there as well.
Whilst originally gaps were considered an "anti-feature" for i3, years ago we had already decided that we'd accept adding gaps into i3. Clearly the fork was popular, and as someone else pointed out here as well, the Wayland "port" of i3, sway, added gaps from the beginning, with great success.
However, the original gaps patch was focused on being small and easy to maintain. It caused a few issues and had some drawbacks. We made it a condition that porting gaps into i3 would have to resolve these issues. Alas, this could've meant a lot of work that no one took on for the years to follow.
Recently, however, the maintainers of i3 got together (a chance to meet arose randomly). During that meeting we decided that it'd be better to just merge the fork and improve it later. And as it happened, Michael, the author and main maintainer of i3, did all that work during the port as well.
What resulted is the end of almost a decade of i3-gaps, and a much better implementation thereof. I'm incredibly happy to see this happen after all this time, and a big shoutout to Michael here for all that work.
Edit: Hadn't realized Michael was commenting here already. I guess leaving the background and story from my side of things doesn't hurt regardless.
What do you think about getting alt-tab support in there? Here to say this: https://github.com/westurner/dotfiles/blob/develop/scripts/i...
You want to use i3 to collapse focus management into a single temporal dimension?
I'll die defending your right to do so, but dear god your taste is atrocious. It's like you finally got out of prison and decided to decorate your bedroom window with iron bars.
In every window manager that supports it, alt tab does two things.
Alt tab with a long hold on alt lets you select another window, albeit from a linear list as you describe, by cycling through with tab and shift-tab.
Quickly typing alt-tab now cycles between the window you came from and the window you just selected. That’s the super useful value of the feature.
Is there an i3 command to (a) leap to another window from a selection and (b) leap back and forth between the window you came from and the window you just chose?
> Is there an i3 command to (a) leap to another window from a selection and (b) leap back and forth between the window you came from and the window you just chose?
Not for windows as far as I know, but for workspaces, yes: https://i3wm.org/docs/userguide.html#back_and_forth https://i3wm.org/docs/userguide.html#workspace_auto_back_and...
For (a) I've been using rofi. I bound a key to open its window switcher which gives me a searchable list of all open windows. There are quite a few other options: https://wiki.archlinux.org/title/I3#Jump_to_open_window
For (b) I don't know what the best option is but I see some people posting their scripts in this thread: https://github.com/i3/i3/issues/838 I think I would bind some keys to mark and then focus by mark if I wanted to do this instead of having a toggle.
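One way to script that toggle at the window level is a small focus-history daemon; a sketch (the history logic is plain Python; the commented-out wiring assumes the real i3ipc-python library and a running i3 session):

```python
# Sketch of window-level "back and forth": keep a two-deep focus history and
# jump to the previously focused window.
class FocusHistory:
    def __init__(self):
        self.current = None
        self.previous = None

    def focused(self, window_id):
        """Record a focus change (ignore repeated focus on the same window)."""
        if window_id != self.current:
            self.previous, self.current = self.current, window_id

    def back_and_forth(self):
        """Return the window id to jump back to, or None."""
        return self.previous

hist = FocusHistory()
for win in (101, 102, 103):
    hist.focused(win)
assert hist.back_and_forth() == 102

# Wiring (roughly; requires a running i3 and the i3ipc-python package):
# import i3ipc
# conn = i3ipc.Connection()
# conn.on("window::focus", lambda c, e: hist.focused(e.container.id))
# # a keybinding would then run: conn.command(f"[con_id={hist.back_and_forth()}] focus")
```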
So far as I know, the only way to get that functionality is to put the windows next to each other and to navigate them spatially (or to write a script that does it and bind that script to alt+tab).
But grouping windows logically based on how they're used has always just felt right, so I've never really considered that you want to keep track of the order in which windows were previously selected (using alt-tab to navigate even just three or four windows can be quite a trick). It always seemed like a necessary evil since floating-only window managers can't handle the kind of spatial focus-navigation that i3 does.
Do you happen to know an alt tab that just uses the currently visible desktops ? ( I have two monitors, and want to alt-tab between windows without finding the old ones hidden)
---
Also, I feel bad using this thread for feature requests/questions. Lovely work guys, and I am very grateful!
Do you mean currently visible desktops or windows? https://github.com/sagb/alttab is the utility of my choice to get what I wanted.
Show HN: Futurecoder – A free interactive Python course for coding beginners
Some highlights:
- 100% free and open source (https://github.com/alexmojaki/futurecoder), no ads or paid content.
- No account required at any point. You can start instantly. (You can create an account if you want to save your progress online and across devices. Your email is only used for password resets)
- 3 integrated debuggers can be started with one click to show what your code is doing in different ways.
- Enhanced tracebacks make errors easy to understand.
- Useful for anyone: You can have the above without having to look at the course. IDE mode (https://futurecoder.io/course/#ide) gives you an instant scratchpad to write and debug code similar to repl.it.
- Completely interactive course: run code at every step which is checked automatically, keeping you engaged and learning by doing.
- Every exercise has many small optional hints to give you just the information you need to figure it out and no more.
- When the hints run out and you're still stuck, there are 2 ways to gradually reveal a solution so you can still apply your mind and make progress.
- Advice for common mistakes: customised linting for beginners and exercise-specific checks to keep you on track.
- Construct a question that will be well-received on sites like StackOverflow: https://futurecoder.io/course/#question
- Also available in French (https://fr.futurecoder.io/), Tamil (https://ta.futurecoder.io/), and Spanish (https://es-latam.futurecoder.io/). Note that these translations are slightly behind the English version, so the sites themselves are too as a result. If you're interested, help with translation would be greatly appreciated! Translation to Chinese and Portuguese is also half complete, and any other languages are welcome.
- Runs in the browser using Pyodide (https://pyodide.org/). No servers. Stores user data in firebase.
- Progressive Web App (PWA) that can be installed from the browser and used offline.
-----------
A frequent question is how does futurecoder compare to Codecademy? Codecademy has some drawbacks:
- No interactive shell/REPL/console
- No debuggers
- Basic error tracebacks not suitable for beginners
- No stdin, i.e. no input() so you can't write interactive programs, and no pdb.
- No gradual guidance when you're stuck. You can get one big hint, then the full solution in one go. This is not effective for learners having difficulty.
- Still on Python 3.6 (futurecoder is on 3.10)
I am obviously biased, but I truly believe futurecoder is the best resource for adult beginners. The focus on debugging tools, improved error messages, and hints empowers learners to tackle carefully balanced challenges. The experience of learning feels totally different from other courses, which is why I claim that if someone wants to start learning how to code, futurecoder is the best recommendation you can make.
This looks really good, going through the first couple of tasks it seems well considered.
I'm introducing my 8yo daughter to programming at the moment; she is beginning to play around with Scratch. I'm keeping my eye out for something closer to proper coding though. I think this may be a little too far at the moment, but I may try her out on it with me sitting next to her and see how she gets on!
Is there any way for users to construct their own multiple stage tutorials? (It looks like we can do single questions)
Currently you have the console output, have you considered having a canvas/bitmap output that could be targeted with the various Python drawing and image manipulation apis?
Incredibly generous of you to make it open source!
> Is there any way for users to construct their own multiple stage tutorials?
I really hope some kind of GUI to do that can exist one day, but it's definitely a complicated feature that I'd need help from contributors to build. Same for graphical output.
> (It looks like we can do single questions)
I think you're talking about the question wizard. That's for helping people to write good quality questions about their own struggles to post on StackOverflow and similar sites. It's not for making 'challenges' for others to solve.
> Incredibly generous of you to make it open source!
Thank you! I'm really trying to improve the state of education and make the world a better place. I hope that in addition to directly helping users, I can inspire other educators, raise the bar, and help them build similar products. To this end, futurecoder is powered by many open source libraries that I've created which are designed to also be useful in their own right:
Debuggers: these are integrated in the site but also usable in any environment:
- https://github.com/alexmojaki/birdseye
- https://github.com/alexmojaki/snoop
- https://github.com/alexmojaki/cheap_repr (not a debugger, but used by the above two as well as directly by futurecoder)
Tracebacks:
- https://github.com/alexmojaki/stack_data (this is also what powers the new IPython tracebacks)
- https://github.com/alexmojaki/executing (allows highlighting the exact spot where the error occurred, but also enables loads of other magical applications)
- https://github.com/alexmojaki/pure_eval
You can see a nicer presentation (with pictures) of the above as well as other projects of mine on my profile https://github.com/alexmojaki
Libraries which I specifically extracted from futurecoder to help build another similar educational site https://papyros.dodona.be/?locale=en (which does have a canvas output, at least for matplotlib):
- https://github.com/alexmojaki/sync-message (allows synchronous communication with web workers to make input() work properly)
- https://github.com/alexmojaki/comsync
Thanks! FWICS, futurecoder (and JupyterLite) may be the best way to run `print("hello world!")` in Python on Chromebooks for Education and Chromebooks with Family Link, which don't have VMs or containers; (!) those are what we rely upon on the server side to host container web shells like e.g. Google Colab and GitHub Codespaces (which aren't available for kids < 13), cocalc-docker, ml-tooling/ml-workspace, kaggle/docker-python, and https://kaggle.com/learn
Also looked at codesters. quobit/awesome-python-in-education: https://github.com/quobit/awesome-python-in-education
Looks like `Ctrl-Enter` works, just like jupyter/vscode.
iodide-project/iodide > "Compatibility with 'percent' notebook format" which works with VScode, Spyder, pycharm, https://github.com/iodide-project/iodide/issues/2942:
# %%
import sympy as sy
import numpy as np
import scipy as sp
import pandas as pd
# %%
print("hello")
# %%
print("world")
Does it work offline? jupyterlite/jupyterlite "Offline PWA access"
https://github.com/jupyterlite/jupyterlite/issues/941
Tell HN: Vim users, `:x` is like `:wq` but writes only when changes are made
`:x` leaves the modification time of files untouched if nothing was changed.
:help :x
Like ":wq", but write only when changes have been made.
I learned to do `:wq` after I learned that `:X` encrypts your file. When typing without really paying attention to the screen, I've twice encrypted my file with a password like `cd tmp`, then saved the config file, breaking my system.
After that, I switched to `:wq` (and sometimes `:w` `:q`) which is much safer against over-typing.
Thanks for bringing this up.
I'm toying with either disabling it,
cmap X <Nop>
or mapping it down to a lower x:
cmap X x
1. Does anyone see anything this could interfere with?
2. Does anyone know a better way to turn off the `:X` encryption option?
Sadly, having to remap definitely dulls the shine of `:x`.
Mapping to 'x' is dangerous since on a system where you don't have your vimrc you'll get the original behavior of 'X'. Ideally you'd map 'X' to deliver a small electric shock. ;-)
cmap X x
:help X
> Mapping to 'x' is dangerous since on a system where you don't have your vimrc you'll get the original behavior of 'X'.
Which is still to prompt? A person could take their chances.
Besides, wouldn't that form of intentionally aversive conditioning actually boost the learning rate and create hyperassociations to the behavior you're attempting to extinguish or just learn over?
Ask HN: Is there academic research on software fragility?
I keep finding articles that more or less talk about this, but not serious research on the topic. Does anyone have a few pointers?
Edit: to clarify what I mean by fragility: it's how complex software, when changed, is likely to break with unexpected bugs, i.e., fixing one bug causes more.
How software changes over time?
API versioning, API deprecation
Code bloat: https://en.wikipedia.org/wiki/Code_bloat#
"Category:Software maintenance" costs: https://en.wikipedia.org/wiki/Category:Software_maintenance
Pleasing a different audience with fewer, simpler features
Lack of acceptance tests to detect regressions
Regression testing https://en.wikipedia.org/wiki/Regression_testing :
> Regression testing (rarely, non-regression testing [1]) is re-running functional and non-functional tests to ensure that previously developed and tested software still performs as expected after a change. [2] If not, that would be called a regression.
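A minimal example of the practice: pin a bug fix with a test so the same regression cannot silently re-emerge (the function and bug here are illustrative, not from any cited codebase):

```python
# Minimal regression-test sketch: after fixing a bug, pin the fix with a test
# so the same bug cannot silently re-emerge after later changes.
def parse_port(value: str) -> int:
    """Bug fixed here: earlier versions crashed on surrounding whitespace."""
    return int(value.strip())

def test_parse_port_regression():
    assert parse_port("8080") == 8080        # original behavior still works
    assert parse_port(" 8080\n") == 8080     # the exact case from the bug report

test_parse_port_regression()
print("regression tests passed")
```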
Fragile -> Software brittleness https://en.wikipedia.org/wiki/Software_brittleness
Thanks, I will check out the references mentioned by Wikipedia and also include the term "software brittleness" in my searches. It seems related, even if maybe not exactly my goal.
Yeah IDK if that one usage on Wikipedia is consistent:
> [Regression testing > Background] Sometimes re-emergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Often, a fix for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Frequently, a fix for a problem in one area inadvertently causes a software bug in another area.
Seattle Public Schools sues TikTok, YouTube, Instagram over youth mental health
.. as a Seattle dad .. I fucking love this awkward yet clearly precedent-setting case. I know most of the board, and the majority of us are fed up and using the power we have to demand accountability. It's been a clearly focused project for over four years; one of my friends is on the board, and I knew it was coming Q1 2023 and am glad they stayed on task. A bevy of mental health experts are core to this - so, due diligence. I don't know the layout of the legal team, but was told they have put this through the wringer - the goal has always been a legal precedent, then actual legislation, then adoption by other cities and states. The Portland School District has a parallel lawsuit as well. The pushback will be WELL propagandized and vilified. I can only imagine what garbage FOX will spew.
The lawsuit is going to be dismissed due to lack of standing.
.. I suggest this light reading: https://app.leg.wa.gov/rcw/default.aspx?cite=7.48
Is there a specific law that these firms are supposed to have violated? (Does there need to be? I am not a lawyer.)
Does the school district have standing to sue on behalf of the students? (Is the injury that they pay more for mental health services for students?)
I’m genuinely curious. There’s a surface analogy to the cases against opioid manufacturers and distributors, but those are controlled substances whereas YouTube is clearly not.
There isn’t even a clear causal link from YouTube -> depression, IMO. Research on this has been quite low quality.
And what about the role of the parents who are the owners of the phones on which their children run TikTok?
Again genuine curiosity here, not trying to be a jerk.
The geek wire article mentions that they are referring to the public nuisance law.
Probably would have been more cost effective to have worked with Amazon in Seattle on the Kids launcher on the Fire OS fork of Android (the one that merges the App Store and Launcher for the kids).
It's not safe to allow school administrators to jam and deny students' (possibly distracting) communications at least on their personal devices, eh?
Perhaps students could voluntarily submit to an App Launcher for focusing on school that deprioritizes content streams that haven't been made educational while attending unpaid compulsory education programs under threat of prosecution for truancy, not nuisance.
Non-FireOS Android forks have the "Digital Wellbeing" tools for helping oneself focus despite persistent distractions that will always exist IRL.
Mechanical circuits: electronics without electricity [video]
Life is like a box of terrible analogies. - Oscar Wilde
This analogy is quite interesting.
Each element and connector is actually the equivalent of a circuit loop that gets spliced together when you connect them.
So a Spintronics diode works more like a loop of "diode wire" out of which you can make multiple diodes by splicing other connectors and elements into it.
That's why you can make a FULL BRIDGE RECTIFIER with just two Spintronics "diodes".
Electrons are described as fluids when there is superconductivity.
From https://en.wikipedia.org/wiki/Superconductivity :
> Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. [1][2] An electric current through a loop of superconducting wire can persist indefinitely with no power source.
FWIU, an electric current pattern described as EM hertz waves (e.g. as sinusoids) is practically persisted at Lagrangian points and in nonterminal, non-intersecting Lorentz curve paths at least?
IRL, electronic components waste energy as heat, like steaming, over-pressurized water towers. And erasing bits releases heat instead of dropping the 1 onto the negative or ground "return path".
I agree that Spintronics is a great game for mechanical circuits, which are in certain sufficient ways like electronic circuits, which can't persist qubits for any reasonable unit of time.
Python malware starting to employ anti-debug techniques
I wonder if there is room for a security model based around “escrow builds”.
Imagine if PyPi could take pure source code, and run a standardized wheel build for you. That pipeline would include running security linters on the source. Then you can install the escrow version of the artifact instead of the one produced by the project maintainers.
You can even have a capability model - most installers should not need to run onbuild/oninstall hooks. So by default don’t grant that.
This sidesteps a bunch of supply-chain attacks. The cost is that there is some labor required to maintain these escrow pipelines.
With modern build tools I think this might not be unworkable, particularly given that small libraries would be incentivized to adopt standardized structures if it means they get the “green padlock” equivalent.
Libraries that genuinely have special needs like numpy could always go outside this system, and have a “be careful where you install this package from” warning. But most libraries simply have no need for the machinery being exploited here.
Signed, Reproducible builds from source off a trusted build farm are possible with conda-forge, emscripten-forge, Fedora COPR, and OpenSUSE OBS Open Build System https://github.com/pyodide/pyodide/issues/795#issuecomment-1...
What does it mean for a package to have been signed with the key granted to the CI build server?
Does a Release Manager (or primary maintainer) again sign what the build farm produced once? What sort of consensus on PR approval and build output justifies use of the build artifact signing key granted to a CI build server?
How open are the build farm and signed package repo and pubkey server configurations? https://github.com/dev-sec https://pulpproject.org/content-plugins/
The Reproducible Builds project aims to make it possible to not need to trust your build machines, perhaps PyPI could use that approach.
"Did the tests pass" for that signed Reproducible build?
Conda > Adding packages > Running unit tests: https://conda-forge.org/docs/maintainer/adding_pkgs.html#run...
From https://github.com/thonny/thonny/issues/2181 :
> * https://conda-forge.org/docs/maintainer/updating_pkgs.html
> Pushing to regro-cf-autotick-bot branch¶ When a new version of a package is released on PyPI/CRAN/.., we have a bot that automatically creates version updates for the feedstock. In most cases you can simply merge this PR and it should include all changes. When certain things have changed upstream, e.g. the dependencies, you will still have to do changes to the created PR. As feedstock maintainer, you don’t have to create a new PR for that but can simply push to the branch the bot created. There are two alternatives […]
nektos/act is one way to run a github-actions.yml build definition locally; without CI (e.g. GitLab Runner, which requires ~--privileged access to the docker/Podman socket) to check whether you get the exact same build artifacts as the CI build farm https://github.com/nektos/act
A Multi-stage Dockerfile has multiple FROM instructions: you can build 1) a container for running the build which has build essentials like a compiler (GCC, LLVM) and packaging tools and keys; and 2) COPY the build artifact (probably one or more signed software packages) --from the build stage container to a container which appropriately lacks a compiler for production. https://www.google.com/search?q=multi+stage+Dockerfile
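A minimal sketch of that pattern (the base images, paths, and make targets below are illustrative only):

```dockerfile
# Stage 1: build environment with a compiler and packaging tools
FROM debian:bookworm AS build
RUN apt-get update && apt-get install -y --no-install-recommends gcc make
WORKDIR /src
COPY . .
# Pinning SOURCE_DATE_EPOCH keeps build timestamps out of the artifact,
# one of the entropy sources the Reproducible Builds project recommends
# eliminating so that hashes match across build machines.
ARG SOURCE_DATE_EPOCH=0
RUN make && make package   # hypothetical targets producing /src/dist/app.deb

# Stage 2: production image with no compiler, build tools, or signing keys
FROM debian:bookworm-slim
COPY --from=build /src/dist/app.deb /tmp/app.deb
RUN dpkg -i /tmp/app.deb && rm /tmp/app.deb
```

Only the second stage is shipped; the build stage (and anything it contained) is discarded after the `COPY --from=build`.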
Are there guidelines for excluding entropy like the commit hash and build time so that the artifact hashes are exactly the same; are reproducible on my machine, too?
Adding design-by-contract conditions to C++ via a GCC plugin
What's the advantage of invariants over unit testing? Seems like there must be lot of overhead at runtime.
>I was ending up with garbage, and not realizing it until I had visualized it with Graphviz!
>Imagine if we had invariants that we could assert after every property change to the tree.
Have you tried writing unit tests? What didn't work about them that you decided to try invariants?
icontract is one implementation of Design by Contract for Python, in the tradition of Eiffel, which is considered ~the origin of DbC. icontract is fancier than compile-time macros can be. In addition to invariant checking at runtime, icontract supports inheritance-aware runtime preconditions and postconditions to, for example, check types and value constraints. Here are the icontract Usage docs: https://icontract.readthedocs.io/en/latest/usage.html#invari...
For unit testing, there's icontract-hypothesis; with the Preconditions and Postconditions delineated by e.g. decorators, it's possible to generate many of the fuzz tests from the additional Design by Contract structure of the source.
From https://github.com/mristin/icontract-hypothesis :
> icontract-hypothesis combines design-by-contract with automatic testing.
> It is an integration between icontract library for design-by-contract and Hypothesis library for property-based testing.
> The result is a powerful combination that allows you to automatically test your code. Instead of writing manually the Hypothesis search strategies for a function, icontract-hypothesis infers them based on the function’s [sic] precondition
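A minimal hand-rolled sketch of the precondition/postcondition idea (this is not icontract's actual API, which uses introspected lambdas over named arguments and adds inheritance and class invariants):

```python
import functools

def require(predicate, description=""):
    """Precondition: check the arguments before the call runs."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(f"precondition failed: {description}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def ensure(predicate, description=""):
    """Postcondition: check the result after the call returns."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if not predicate(result):
                raise ValueError(f"postcondition failed: {description}")
            return result
        return wrapper
    return decorator

@require(lambda x: x >= 0, "x must be non-negative")
@ensure(lambda result: result >= 0, "result is non-negative")
def isqrt_floor(x: int) -> int:
    return int(x ** 0.5)
```

Because the contracts are attached to the function as data rather than buried in its body, a tool in the spirit of icontract-hypothesis can read them back to derive search strategies instead of requiring hand-written ones.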
Paper-thin solar cell can turn any surface into a power source
> These durable, flexible solar cells, which are much thinner than a human hair, are glued to a strong, lightweight fabric, making them easy to install on a fixed surface. They can provide energy on the go as a wearable power fabric or be transported and rapidly deployed in remote locations for assistance in emergencies. They are one-hundredth the weight of conventional solar panels, generate 18 times more power-per-kilogram, and are made from semiconducting inks using printing processes that can be scaled in the future to large-area manufacturing.
> Because they are so thin and lightweight, these solar cells can be laminated onto many different surfaces. For instance, they could be integrated onto the sails of a boat to provide power while at sea, adhered onto tents and tarps that are deployed in disaster recovery operations, or applied onto the wings of drones to extend their flying range. This lightweight solar technology can be easily integrated into built environments with minimal installation needs.
> [...] They found an ideal material — a composite fabric that weighs only 13 grams per square meter, commercially known as Dyneema
They also make ultralight backpacking backpacks out of Dyneema, but without the UV-curable glue.
> […] “A typical rooftop solar installation in Massachusetts is about 8,000 watts. To generate that same amount of power, our fabric photovoltaics would only add about 20 kilograms (44 pounds) to the roof of a house,” he says
Paper: “Printed Organic Photovoltaic Modules on Transferable Ultra-thin Substrates as Additive Power Sources” (2022) https://doi.org/10.1002/smtd.202200940
Solar energy can now be stored for up to 18 years, say scientists
> Long-term storage of the energy they generate is another matter. The solar energy system created at Chalmers back in 2017 is known as ‘MOST’: Molecular Solar Thermal Energy Storage Systems.
/? MOST Molecular Solar Thermal Energy Storage https://www.google.com/search?q=MOST%3A+Molecular+Solar+Ther... https://scholar.google.com/scholar?hl=en&as_sdt=0%2C44&q=MOS...
> The technology is based on a specially designed molecule of carbon, hydrogen and nitrogen that changes shape when it comes into contact with sunlight.
> It shape-shifts into an ‘energy-rich isomer’ - a molecule made up of the same atoms but arranged together in a different way. The isomer can then be stored in liquid form for later use when needed, such as at night or in the depths of winter.
> A catalyst releases the saved energy as heat while returning the molecule to its original shape, ready to be used again.
> Over the years, researchers have refined the system to the point that it is now possible to store the energy for an incredible 18 years
"Chip-scale solar thermal electrical power generation" (2022) https://doi.org/10.1016/j.xcrp.2022.100789
What prevents this from being scaled up?
Previous submissions (2022): https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Solar panels open crop lands to farming energy
This article promotes false assumptions.
Ordinary solar can work fine in fields, because most plants use only a few hours' worth of light per day and merely endure the heat afterward.
A practical arrangement is bifacial panels in vertical fencerows running north-south, to pick up morning and afternoon sun, spaced widely enough for equipment to run between. Blocking morning sun preserves dew, and blocking afternoon cuts heat stress.
Certain cereal crops get slightly lower yield from reduced light, but reduced water loss can make up the difference.
FWIU, also sheep can graze under the shade of solar panels, thus eliminating the need to robo-mow beneath solar panels.
Livestock production improves from reduced weather stress. Lower evaporation loss cuts irrigation load.
California pulls the plug on rooftop solar
This was always a subsidy for rich people by poor people. The power generated by rooftop solar isn’t worth 30 cents.
Yep. I've always considered net metering immoral. Rich folks using poor folks for a free nighttime battery.
The fix is super simple. Just meter by the minute and pay out current wholesale electric rates as you send to the grid. When you buy, you pay retail for delivery depending on the instantaneous market.
Don't want to buy at potentially high rates during evening peak? Sounds like you need to invest in a battery system.
If you want to pretend you are a micro power plant, you should get paid as one.
With Buy-All Sell-All, you buy all you use at retail rates and sell all you produce at wholesale rates. Buy-All Sell-All has a later breakeven point than Net Metering, where you buy what you can't produce at retail and sell back the rest at wholesale or better; Net Metering is thus an indirect subsidy for a resilient power grid with residential renewables and their external benefits.
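The difference between the two tariffs can be sketched numerically (the rates and kWh figures below are made up purely for illustration):

```python
RETAIL = 0.30     # $/kWh paid to the utility (illustrative rate)
WHOLESALE = 0.05  # $/kWh paid by the utility (illustrative rate)

def net_metering_bill(consumed_kwh, produced_kwh, self_used_kwh):
    """Net-metering style: use your own production first, buy the
    deficit at retail, sell the surplus at wholesale."""
    imported = consumed_kwh - self_used_kwh
    exported = produced_kwh - self_used_kwh
    return imported * RETAIL - exported * WHOLESALE

def buy_all_sell_all_bill(consumed_kwh, produced_kwh):
    """Buy-All Sell-All: every kWh consumed is bought at retail and
    every kWh produced is sold at wholesale; no self-consumption."""
    return consumed_kwh * RETAIL - produced_kwh * WHOLESALE

# A month with 900 kWh consumed and 600 kWh produced,
# 400 kWh of which was used as it was being produced:
nm = net_metering_bill(900, 600, 400)   # about $140
bas = buy_all_sell_all_bill(900, 600)   # about $240
```

The larger bill under Buy-All Sell-All for the same production is the "later breakeven point" described above.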
What you describe sounds like Buy-All Sell-All, except you're allowed to use and store what you produce before paying retail rates for electricity purchased from the service provider.
Is it anti-competitive to deny residential renewable energy producers the right to use the clean energy they invested in producing if they want to purchase electricity?
Another exclusive monopoly contract: if you buy water from me, you can't use the water you capture yourself.
Net metering in the United States: https://en.wikipedia.org/wiki/Net_metering_in_the_United_Sta...
Net metering > "Post-net metering" successor tariffs: https://en.wikipedia.org/wiki/Net_metering#Post-net_metering...
We want there to be residential renewable energy, and subsidizing it hastens adoption; so if we want more renewable energy, we should subsidize residential renewables.
If we make the break-even point later in time, residential renewable energy will be less lucrative.
I’m so confused: is there something that prevents “use what you produce and sell the excess at wholesale”? That seems like it would be the sanest policy. If you can’t use your own power, then I think getting paid retail rates is the only fair thing, since that’s how much the power is worth to you.
FWIU, Buy-All Sell-All contracts have a "termination of agreement to provide service clause" if the residential renewables are not directly attached to the grid; it's against their TOS to use your own renewable energy and sell the rest, which is probably monopolistic and anti-competitive.
Is it legal to have a cutover so that it's possible to use one's own renewable energy when the power's out, given an exclusive Buy-All Sell-All agreement?
Perhaps there's an opportunity for a solution here: at the junction of batteries, renewables, and local [government-granted-monopoly with exclusive first-mover rights of way over and under other infrastructure] electrical-utility junction; there could be a controller that knows at least:
- 1a) when the grid is down
- 1b) when the grid wants the customer to slowly increase load e.g. after the power has been out
- 1c) when it's safe to send more electricity to the grid e.g. at retail or wholesale or intraday rates
- 2a) how full are the local batteries
- 2b) the current and projected local load && how much of that can be throttled down
- 2ba) how full and heated the hot water tank(s) are
- 2bb) the current and projected external and internal air temperature and humidity
- 2bba) the current and projected internal air temperature and humidity, per e.g. bath fans and attic fans with or without in-wall-controllers with humidistats
- 2bc) projected electrical needs for cooking, baking, microwaving (typically 100V × 15A = 1500W or more)
- 2c) how many volts at how many amps the local renewables are producing
But IIUC, Buy-All Sell-All service provision agreements threaten termination of service if the customer/competitor does anything but sell all locally produced electricity to the grid by direct connection, so an emergency cut-over that charges your batteries off your solar panels instead of the grid (e.g. when the grid is down) is forbidden.
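The priority logic such a controller might run can be sketched as follows (every name and threshold here is hypothetical, not from any real product or tariff):

```python
def dispatch(grid_up: bool, battery_soc: float, pv_watts: float, load_watts: float) -> str:
    """Decide where locally produced power should flow right now.

    grid_up     -- 1a) whether the utility connection is live
    battery_soc -- 2a) battery state of charge, 0.0..1.0
    pv_watts    -- 2c) current renewable production
    load_watts  -- 2b) current local load
    """
    surplus = pv_watts - load_watts
    if not grid_up:
        return "island"              # never backfeed a dead line
    if surplus > 0 and battery_soc < 0.9:
        return "charge_battery"      # store local production first
    if surplus > 0:
        return "export_to_grid"      # batteries full; sell at the current rate
    return "import_from_grid"        # deficit; buy retail
```

Under the Buy-All Sell-All terms described above, the "island" and "charge_battery" branches are exactly what the service agreement forbids.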
New Docker Desktop: Run WASM Applications Alongside Linux Containers in Docker
This feels huge.
I haven't messed with WASM yet but I do love the idea of being able to build something that targets wasm/wasi with Docker like I'm already doing today and still package all of its dependencies into a single Docker image.
I also love the fact that code that targets wasm can run in a browser or on the machine in an isolate without hard forking.
It used to be called EAR file.
HTTP SXG and Web Bundles (and SRI) - components of W3C Web Packaging - may be useful for a signed WASM package format: https://github.com/WICG/webpackage#packaging-tools
How is WASM distinct from an unsigned binary blob?
Nothing other than trying to reboot the ecosystem for VCs.
https://docs.oracle.com/javase/7/docs/technotes/tools/window...
Sigstore is a CNCF project for centralized asset signatures for packages, containers, software artifacts; Cosign, Gitsign: https://docs.sigstore.dev/#how-to-use-sigstore
Re: TUF, Sigstore, W3C DIDs, CT Certificate Transparency logs, W3C Web Bundles; and reinventing the signed artifact wheel: https://news.ycombinator.com/item?id=30682329 ("Podman can transfer container images without a registry")
From "HTTP Messages Signatures" (~SXG) https://news.ycombinator.com/item?id=29281449 :
> blockcerts/cert-verifier-js ?
blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates/cert-verifier-js
When will emscripten output java byte code?
Just because something existed before something else doesn't mean a competitor can't spring up and have more momentum. It feels like the JVM is falling badly behind, and it loses out massively on things like memory efficiency.
Even on that front, WASM is a follower:
"NestedVM provides binary translation for Java Bytecode. This is done by having GCC compile to a MIPS binary which is then translated to a Java class file. Hence any application written in C, C++, Fortran, or any other language supported by GCC can be run in 100% pure Java with no source changes."
With greetings from 2006, http://nestedvm.ibex.org/
How does the memory usage change? Does Java still require initial RAM reservation? /? Java specify how much RAM https://www.google.com/search?q=java+specify+how+much+ram
VOC transpiles Python to Java bytecode. Py2many transpiles Python to many languages but not yet Java.
Apache Arrow can do IPC to share memory references to structs with schema without modification between many languages now; including JS and WASM. https://arrow.apache.org/
FWIU Service Workers and Task Workers and Web Locks are the browser APIs available for concurrency in browsers and thus WASM. https://github.com/jupyterlab/jupyterlab/issues/1639#issueco...
"WebVM" https://news.ycombinator.com/item?id=30168491 :
> Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"? [Or Git; though there's already isomorphic-git in JS]
"WebGPU" https://news.ycombinator.com/item?id=30601415
Emscripten-compiled WASM can be packaged with ~conda packages and built and hosted by emscripten-forge ( which works like conda-forge, which has Python, R, Julia, Rust) to be imported from JS and WASM. Here's the picomamba recipe.yml on emscripten-forge: https://github.com/emscripten-forge/recipes/blob/main/recipe... and for CPython: https://github.com/emscripten-forge/recipes/blob/main/recipe...
Browsers could run WASM containers, too. How does the browser sandbox + WASM runtime sandbox (that lacks WASI) compare to the security features of Linux containers?
How do the docstrings look after transpilation?
Are there relative performance benchmarks that help estimate the overhead of the WASM-recompilation and runtime? How much slower is it to run the same operations with the same code in a runtime with WASI support?
Are there cgroups and other container features for WASM applications?
Is there any way to tell whether an unsigned WASM bundle is taking 110% of CPU in a browser tab process?
Do browser tabs yet use cgroups functionality to limit resource exhaustion risks?
Should we be as confident in unsigned WASM in a WASM runtime as with TUF-signed containers?
Ask HN: Which books have made you a better thinker and problem solver?
Your choices needn't be only math books. They can come from any discipline or genre.
When you mention any book please add a line or two as to why it made you a better thinker and problem solver.
This suggestion is humorous, but absolutely true: Potty Training In 3 Days.
Before having children, I thought I was fairly empathetic and introspective, but raising a child helped me realize how superficial those traits in myself were.
I'm being completely honest when I say this book made me a better leader and project manager - having a better understanding of the motivations of others, incentivizing those looking to you for guidance based on their own goals/desires, providing those with tools they need to succeed, and taking a macro view of a problem and allowing those under me to flourish and find creative ways to solve problems that take advantage of their strengths and idiosyncrasies.
I'm in no way suggesting that you infantilize those around you, just that teaching my toddler to shit opened my eyes to the way I approached problems, and Brandi Brucks' book helped me approach things differently with great success!
"The Everyday Parenting Toolkit: The Kazdin Method for Easy, Step-by-Step, Lasting Change for You and Your Child" https://www.google.com/search?kgmid=/g/11h7dr5mm6&hl=en-US&q...
"Everyday Parenting: The ABCs of Child Rearing" (Kazdin, Yale,) https://www.coursera.org/learn/everyday-parenting :
> The course will also shed light on many parenting misconceptions and ineffective strategies that are routinely used.
Re: Effective praise and Validating parenting
https://wrdrd.github.io/docs/consulting/kids
I love Kazdin, great additional suggestion. I didn't know there was a course on Coursera though, thanks for sharing!
Thanks for these suggestions, bought the books & signed up for the course!
Building arbitrary Life patterns in 15 gliders
Interestingly enough, the concept of placing gliders at a distance away seems to touch on the relativity of space and time. Here, with space, we are also encoding the time at which a certain pattern (a glider) appears where it's needed. In a rigid system like the GoL, we can't trade space with time easily, since everything happens at a constant speed, but it makes one wonder...
...if there's a GoL version where time varies somehow¹ with something²
¹ directly?
² amount of activity? mass?
There have been a lot of GoL variants over the years, but I don't remember running into any attempts to vary the speed of evolution in different locations on the same grid.
The idea that all neighbors move to the next tick simultaneously is a fundamental assumption in cellular automata in general. If you try changing that, the optimizations that allow us to simulate CAs at any kind of reasonable speed ... all stop working, pretty much. It's kind of painful even to think about.
Which means there are probably very interesting rules out there somewhere, where CAs run faster/slower depending on pattern density -- it's just going to be very tricky to explore that particular search space.
The "superstep" that we practically impose upon simulations of entropy and emergence is out of accord with our modern understanding of non-regularly-quantizable spacetime. The debuggable Von Neumann instruction pipeline precludes "in-RAM computing" which conceivably does converge if consensus-level error correction is necessary.
The term 'superstep' reminds me of the HashLife algorithm https://en.wikipedia.org/wiki/Hashlife for computing the Game of Life. It computes multiple generations at the same time, and runs at different speeds in different parts of the universe, but only with the purpose of computing CGoL faster, not to introduce any relativity.
How does it "touch on relativity of space and time"?
I think that was just saying "more space between initial gliders implies a longer time needed to complete construction". There's no Einsteinian relativity to be found here.
(A Doppler effect does show up in Conway's Life sometimes, but that's about as far as we get with analogies to the physical universe...!)
How nonlocal are the entanglements in Conway's game of cellular automata, if they're entanglements with symmetry; conservation but emergence? TIL about the effect of two Hadamard gates upon a zero.
Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord :
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
From "Convolution Is Fancy Multiplication" https://news.ycombinator.com/item?id=25194658 :
> FWIW, (bounded) Conway's Game of Life can be efficiently implemented as a convolution of the board state: https://gist.github.com/mikelane/89c580b7764f04cf73b32bf4e94...
Conway's Game is a 2D convolution; without complex phase or constructive superposition.
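The linked gist uses SciPy's convolve2d; here is a dependency-free sketch of the same idea, where the 3x3 neighbour count is the convolution:

```python
def life_step(board):
    """One Conway's Life generation. The neighbour sum below is a 2D
    convolution of the board with a 3x3 all-ones kernel (centre zero),
    evaluated on a toroidal (wrap-around) grid."""
    h, w = len(board), len(board[0])
    def neighbours(y, x):
        return sum(board[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))
    return [[1 if neighbours(y, x) == 3 or (board[y][x] and neighbours(y, x) == 2) else 0
             for x in range(w)]
            for y in range(h)]

# A blinker oscillates with period 2:
blinker = [[0] * 5 for _ in range(5)]
for x in (1, 2, 3):
    blinker[2][x] = 1
assert life_step(life_step(blinker)) == blinker
```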
Convolution theorem: https://en.wikipedia.org/wiki/Convolution_theorem :
> In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
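The quoted statement is easy to check numerically with a naive pure-Python DFT (a sketch; the two sequences below are arbitrary):

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def circular_convolve(a, b):
    """Circular (periodic) convolution of two equal-length sequences."""
    n = len(a)
    return [sum(a[k] * b[(j - k) % n] for k in range(n)) for j in range(n)]

a, b = [1, 2, 3, 4], [0.5, 0, -1, 2]
lhs = dft(circular_convolve(a, b))             # transform of the convolution
rhs = [x * y for x, y in zip(dft(a), dft(b))]  # pointwise product of transforms
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```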
From Quantum Fourier transform: https://en.wikipedia.org/wiki/Quantum_Fourier_transform :
> The quantum Fourier transform can be performed efficiently on a quantum computer with a decomposition into the product of simpler unitary matrices. The discrete Fourier transform on 2^{n} amplitudes can be implemented as a quantum circuit consisting of only O(n^2) Hadamard gates and controlled phase shift gates, where n is the number of qubits.[2] This can be compared with the classical discrete Fourier transform, which takes O(n*(2^n)) gates (where n is the number of bits), which is exponentially more than O(n^2).
MicroPython officially becomes part of the Arduino ecosystem
Thonny has MicroPython support.
"BLD: Install thonny with conda and/or mamba" https://github.com/thonny/thonny/issues/2181
Mu editor has MicroPython support: https://codewith.mu/
For VSCode, there are a number of extensions for CircuitPython and MicroPython:
joedevivo.vscode-circuitpython https://marketplace.visualstudio.com/items?itemName=joedeviv...
Pymakr https://github.com/pycom/pymakr-vsc/blob/next/GET_STARTED.md
Pico-Go: https://github.com/cpwood/Pico-Go
/? CircuitPython MicroPython: https://www.google.com/search?q=circuitpython+micropython
Arduino IDE now has support for Raspberry Pi Pico.
arduino-pico: https://arduino-pico.readthedocs.io/en/latest/index.html
Rshell and ampy are CLI tools for MicroPython:
rshell: https://github.com/dhylands/rshell
ampy: https://github.com/scientifichackers/ampy
Fedora MicroPython docs: https://developer.fedoraproject.org/tech/languages/python/mi...
awesome-micropython: https://github.com/mcauser/awesome-micropython#ides
awesome-arduino: https://github.com/Lembed/Awesome-arduino
KiCad (ngspice) is an open source tool for circuit simulation. Tinkercad is another.
TIL about Mecanum wheels.
wokwi/rp2040js: https://github.com/wokwi/rp2040js:
> Raspberry Pi Pico Emulator for the Wokwi Simulation Platform. It blinks, runs Arduino code, and even the MicroPython REPL!
What are some advantages of Arduino IDE? (which is cross-platform and now supports MicroPython and Pi Pico W (a $6 IC with MicroUSB and a pinout spec))
ELIZA is Turing Complete
It's referring to the ELIZA scripting language, not the original "therapist" interface. A programming language being Turing-complete isn't really news, it's... the main point, unless you are intentionally trying to avoid being Turing-complete.
> A programming language being Turing-complete isn't really news, it's... the main point, unless you are intentionally trying to avoid being Turing-complete.
For example, Bitcoin "smart contracts" are intentionally not Turing-complete, and there are not per-opcode costs like there are for EVM and eWASM (which are embedded in other programs than Ethereum)
"The Cha Cha Slide Is Turing Complete" https://news.ycombinator.com/item?id=32477593 :
> Turing completeness: https://en.wikipedia.org/wiki/Turing_completeness
> Church-Turing thesis: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
Halting problem: https://en.wikipedia.org/wiki/Halting_problem :
> A key part of the proof is a mathematical definition of a computer and program, which is known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first cases of decision problems proven to be unsolvable. This proof is significant to practical computing efforts, defining a class of applications which no programming invention can possibly perform perfectly.
Quantum Turing machine > History: https://en.wikipedia.org/wiki/Quantum_Turing_machine :
> [...] any quantum algorithm can be expressed formally as a particular quantum Turing machine. However, the computationally equivalent quantum circuit is a more common model.[1][2]
> History: [2005] A quantum Turing machine with postselection was defined by Scott Aaronson, who showed that the class of polynomial time on such a machine (PostBQP) is equal to the classical complexity class PP.
Complexity Zoo > Petting Zoo > {P, NP, PP,}, Modeling Computation > Deterministic Turing Machine https://complexityzoo.net/Petting_Zoo#Deterministic_Turing_M...
-
When I read the title, I too assumed it was about dialectical chatbots and - having just read turtledemo.chaos - wondered whether there's divergence and potentially infinite monkeys and then emergence of a reverse shell to another layer of indirection; turtles all the way down w/ emergence.
Draft RFC: Cryptographic Hyperlinks
This hashlink spec seems to duplicate the existing Named Information (ni:) URI scheme (RFC6920), as well as widely deployed non-standard solutions, namely magnet: URIs. These don't seem to currently support the Multihash format that's recommended here, but could easily be extended/amended to that effect. Not seeing the point of this.
Neither RFC6920 nor magnet: URIs appear to support the "just add a url parameter with the hash to the existing URL" use case, FWICS.
CAS Content-addressable storage: https://en.wikipedia.org/wiki/Content-addressable_storage#Op... : IPFS <CID>/path
RFC6920: https://www.rfc-editor.org/rfc/rfc6920.html
Magnet URI scheme: https://en.wikipedia.org/wiki/Magnet_URI_scheme
draft-sporny-hashlink-07 2021: https://datatracker.ietf.org/doc/html/draft-sporny-hashlink-... :
<url>?hl=<resource-hash>
The URL-parameter method is application-specific though, not sure it’s very useful to have a standard for that. Applications can just use whatever parameter name makes sense for them plus an existing hash/CAS encoding for the value.
How else can browsers check the hash of a file downloaded over HTTPS?
The main focus of the draft RFC is to define a new "hl" hashlink URL scheme. This doesn't necessarily target browsers, though they could certainly choose to support that scheme. The URL-parameter encoding, which is not the recommended encoding, is merely for "existing applications utilizing historical URL schemes", which may or may not include browsers. The draft explicitly states: "Implementers should take note that the URL parameter-based encoding mechanism is application specific and SHOULD NOT be used unless the URL resolver for the application cannot be upgraded to support the RECOMMENDED encoding."
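For illustration, the query-parameter idea can be approximated in a few lines (a simplified sketch: bare SHA-256 hex, rather than the multibase/multihash encoding the draft actually specifies):

```python
import hashlib

def hashlinked_url(url: str, content: bytes) -> str:
    """Append a content hash as an 'hl' URL parameter so a resolver
    can later verify that the dereferenced bytes match."""
    digest = hashlib.sha256(content).hexdigest()
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}hl={digest}"

def verify(url_with_hl: str, content: bytes) -> bool:
    """Check dereferenced content against the hash in the URL."""
    expected = url_with_hl.rsplit("hl=", 1)[1]
    return hashlib.sha256(content).hexdigest() == expected
```

The point is that the hash travels with the link itself, so integrity checking needs no extra channel beyond the URL.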
HTTP Subresource Integrity (SRI) allows specifying which cryptographic hash algorithm is presented, much as ld-proofs has a "future-proof" signatureSuite attribute: https://developer.mozilla.org/en-US/docs/Web/Security/Subres...
> Subresource Integrity with the <script> element
> You can use the following <script> element to tell a browser that before executing the https://example.com/example-framework.js script, the browser must first compare the script to the expected hash, and verify that there's a match.
<script
  src="https://example.com/example-framework.js"
  integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
  crossorigin="anonymous"></script>
noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" target="_blank" rel="nofollow noopener">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" 
target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" 
target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js" rel="nofollow noopener" target="_blank">https://example.com/example-framework.js"
integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
crossorigin="anonymous"></script>
[...]
Note: An integrity value may contain multiple hashes separated by whitespace. A resource will be loaded if it matches one of those hashes.
To create a SHA-384 SRI hash for the integrity= attribute: openssl dgst -sha384 -binary FILENAME.js | openssl base64 -A
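The same digest can be computed with Python's standard library; a minimal sketch equivalent to the openssl pipeline above:

```python
import base64
import hashlib

def sri_sha384(data: bytes) -> str:
    """Return an SRI integrity value ("sha384-<base64 digest>") for raw
    bytes; equivalent to: openssl dgst -sha384 -binary | openssl base64 -A
    with the "sha384-" prefix prepended."""
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")
```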
### 1.1. Multiple Encodings
A hashlink can be encoded in two different ways, the RECOMMENDED way
to express a hashlink is:
hl:<resource-hash>:<optional-metadata>
To enable existing applications utilizing historical URL schemes to
provide content integrity protection, hashlinks may also be encoded
using URL parameters:
<url>?hl=<resource-hash>
Implementers should take note that the URL parameter-based encoding
mechanism is application specific and SHOULD NOT be used unless the
URL resolver for the application cannot be upgraded to support the
RECOMMENDED encoding.
[...]
#### 3.2.1. Hashlink as a Parameterized URL Example
The example below demonstrates a simple hashlink that provides
content integrity protection for the "http://example.org/hw.txt"
file, which has a content type of "text/plain":
http://example.org/hw.txt?hl=
zQmWvQxTqbG2Z9HPJgG57jjwR154cKhbtJenbyYTWkjgF3e
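The parameterized-URL encoding is just an `hl=` query parameter appended to an existing URL; a sketch that takes the resource hash as given (producing the multibase-encoded multihash itself is out of scope here):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_hashlink(url: str, resource_hash: str) -> str:
    """Append an hl=<resource-hash> query parameter to a URL, preserving
    any existing query parameters."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("hl", resource_hash))
    return urlunparse(parts._replace(query=urlencode(query)))
```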
Hydrogen-producing rooftop solar panels nearing commercialization
Is this more or less flammable than rooftop solar?
Could adjacent H2O and CO2 capture and storage help mitigate hydrogen fire risk?
I really cringe at the thought of H2 at home for multiple reasons.
- a wide explosive concentration range (LEL to UEL). The wider the range, the more opportunity you have for encountering an explosive mix if there's a leak.
- a low ignition energy. A small static discharge has more than enough energy to ignite H2.
- It's a functionally difficult gas to work with. It's an escape artist. You have to use the right materials. You have issues like embrittlement, hydrogen stress cracking. It's not great from a volumetric energy density perspective - more watts are required for pumping X units of energy than a fuel with a higher energy density. To bump up the energy density, you compress it and or liquefy it.. which costs energy, you've got high working pressures, and are possibly dealing with cryogenics as well.
FWIU, green hydrogen is actually where the total "path to value" is considered to be.
Charge the grid with it; whatever it is, if it's "economical": charge the grid with it.
> [ Hydrogen Safety: https://en.wikipedia.org/wiki/Hydrogen_safety ; videos: [ ]]
>> Contents: Prevention, Inerting and purging, Ignition source management (two rocks, HVAC, lighting), Mechanical integrity and reactive chemistry, Leaks and flame detection systems, Ventilation and flaring (all facilities that process hydrogen must have anti-static ventilation systems), Inventory management and facility spacing, Cryogenics, Human factors, Incidents, *Hydrogen codes and standards*, Guidelines
Would capturing CO2 and water with the same or adjacent PV/TPV+ panels help mitigate Hydrogen Hazard? FWIU, Aerogels and hydrogels can be made from CO2.
A possibility would be to put the H2 into a different storage medium like ammonia or methanol, rather than storing it as gaseous hydrogen.
"With a plan to decarbonize heating systems with hydrogen, Modern Electron raises $30M" (2022) https://techcrunch.com/2022/02/03/with-a-plan-to-decarbonize... :
> The second, which is still under development but about to make its debut, is what they’re calling the Modern Electron Reserve, which rather than burning natural gas — which is mostly CH4, or methane — reduces it to solid carbon (in the form of graphite) and hydrogen gas. The gas is passed on to the furnace to be burned, and converted to both heat and energy, while the graphite is collected for disposal or reuse.
And there's a picture of what's left after they extract just the Hydrogen from PNG/LNG for one day of home heat.
Letting the grass grow longer is one way to absorb carbon locally; longer grass is more efficient at absorbing carbon (e.g., the carbon emitted by comparatively inefficient burning of natural gas for heat, an exothermic combustion reaction).
Show HN: I built my own PM tool after trying Trello, Asana, ClickUp, etc.
Hey HN,
Over the past two years, I've been building Upbase, an all-in-one PM tool.
I've tried so many project management tools over the years (Trello, Asana, ClickUp, Teamwork, Wrike, Monday, etc.) but they've all fallen short. Most of them are overly complicated and painful to use. Some others, like Trello, are too limited for my needs.
Most importantly, most of these tools tend to be focused on team collaboration and completely ignore personal productivity.
They are useful for organizing my work, but not great at helping me stay focused to get things done.
That's why I decided to build Upbase.
I try to make it clean and simple, without all the bells and whistles. Apart from team collaboration, I added many personal productivity features, including Weekly/Daily planner, Time blocking, Pomodoro Timer, Daily Journal, etc. so I don't need another to-do list app.
Now I can use Upbase to collaborate with my team AND manage my personal stuff at the same time, without all the bloat.
If this resonates with you, give Upbase a try. It has a Free Forever plan, too.
Let me know if you have any feedback or questions!
One thing I've always wanted in a tool like this is the ability to map out probabilities, i.e., we do A, then B, but after that we do either C, D, or E. Each of these has an associated probability (C: 0.4, D: 0.5, E: 0.1) and an associated estimate (C: 10 +- 5 normal, D: 12 +- 3 uniform, E: 3 +- 1 normal).
The UI would look like a graph and like ms project it could include resource levelling in order to show bottlenecks.
I know this probably sounds complicated, but I think it maps to reality fairly well and is thus actually simple (fighting reality is hard).
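The branch-probability idea can be prototyped as a small Monte Carlo simulation. A minimal sketch, assuming made-up durations for A and B (5 and 7) and the C/D/E probabilities and distributions given above; a real tool would also do resource leveling:

```python
import random

random.seed(0)  # deterministic for the example

# After A and B finish, exactly one of C, D, or E runs, chosen with the
# probabilities from the comment above.
BRANCHES = [
    (0.4, lambda: random.gauss(10, 5)),    # C: 10 +- 5, normal
    (0.5, lambda: random.uniform(9, 15)),  # D: 12 +- 3, uniform
    (0.1, lambda: random.gauss(3, 1)),     # E: 3 +- 1, normal
]

def simulate_once(a=5.0, b=7.0):
    """One Monte Carlo draw of the total duration for A -> B -> (C|D|E)."""
    r, cum = random.random(), 0.0
    for p, sample in BRANCHES:
        cum += p
        if r < cum:
            return a + b + max(sample(), 0.0)  # durations can't go negative
    p, sample = BRANCHES[-1]  # guard against float rounding in cum
    return a + b + max(sample(), 0.0)

durations = [simulate_once() for _ in range(10_000)]
mean = sum(durations) / len(durations)
```

Plotting the `durations` histogram would show the overall completion-time distribution rather than a single point estimate.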
Gantt charts can be made in MS Project, Google Sheets, etc.: https://en.wikipedia.org/wiki/Gantt_chart
Critical path method > Basic techniques: https://en.wikipedia.org/wiki/Critical_path_method#Basic_tec... :
> Components: The essential technique for using CPM [8][9] is to construct a model of the project that includes the following:
> (1) A list of all activities required to complete the project (typically categorized within a work breakdown structure), (2) The time (duration) that each activity will take to complete, (3) The dependencies between the activities and, (4) Logical end points such as milestones or deliverable items.
> Using these values, CPM calculates the *longest path* of planned activities to logical end points or to the end of the project, and *the earliest and latest that each activity can start and finish without making the project longer.* This process determines which activities are "critical" (i.e., on the longest path) and which have "total float" (i.e., can be delayed without making the project longer). In project management, a critical path is the sequence of project network activities which add up to the longest overall duration, regardless if that longest duration has float or not. This determines the *shortest time possible to complete the project.*
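The longest-path calculation described above fits in a few lines; a sketch over an invented task graph (task names and durations are hypothetical):

```python
# Hypothetical task graph: name -> (duration, predecessors)
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

def critical_path(tasks):
    """Compute each task's earliest finish via the longest path through the
    dependency DAG, then walk back along the predecessors that determined
    each earliest finish to recover the critical path."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, preds = tasks[name]
            finish[name] = dur + max((ef(p) for p in preds), default=0)
        return finish[name]
    for t in tasks:
        ef(t)
    end = max(finish, key=finish.get)
    path = [end]
    while tasks[path[-1]][1]:
        path.append(max(tasks[path[-1]][1], key=lambda p: finish[p]))
    return list(reversed(path)), finish[end]

path, total = critical_path(tasks)  # -> (["A", "C", "D"], 8)
```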
Re: [Hilbert curve, Pyschedule, CSP,] Scheduling of [OS, Conference Room,] and other Resources https://news.ycombinator.com/item?id=31777451 https://westurner.github.io/hnlog/#comment-31777451
Complexity and/or Time estimates can be stuffed into nonexclusive namespaced label names on GitHub/GitLab/Gitea:
#ComplexityEstimate:
C:1
C: Fibonacci[n]
C: (A), J, Q, K, (A)
#TimeEstimate:
T:2d
T:5m
#Good First Issue
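Such label conventions are easy to consume mechanically; a minimal sketch, assuming the `C:`/`T:` namespaces proposed above (the regex and function are illustrative, not any platform's API):

```python
import re

def parse_estimate_label(label: str):
    """Parse a namespaced estimate label like 'C:1' or 'T:2d' into a
    (namespace, value) tuple; return None for labels that don't match."""
    m = re.fullmatch(r"([CT]):\s*(.+)", label)
    return (m.group(1), m.group(2)) if m else None
```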
GitLab EE and Gitea have time tracking on Issues and Pull Requests. Gitea has untyped Issue dependency edges, but another column could probably easily be added to the many-to-many Issue edges through table to support typed edges with URIs, i.e. JSON-LD RDF.
GitLab Free supports the "relates to" Linked Issue relation; EE also supports "blocks"/"is blocked by".
Planning poker: https://en.wikipedia.org/wiki/Planning_poker
Agile estimation: https://www.google.com/search?q=agile+estimation
"Agile Estimating and Planning" (2005) https://g.co/kgs/kDScM7
CPM sounds a lot like a PERT chart, which was invented in the 1950s to help design and build the Polaris nuclear submarines during the Cold War[0]. PERT has been part of Microsoft Project for decades, so it's readily available.
When you really need it (like tens of thousands of people trying to build and ship a single project at lightning speed), PERT is an extremely powerful and effective project management methodology. If, on the other hand, your "project management division" is you, it's a dangerously seductive time sink that will consume huge amounts of your time building and tuning and gathering and updating data, for arbitrarily close to zero direct real benefit and a huge net negative. The increase in effectiveness you gain from all that modeling is, in software development projects, negligible, and the cost of doing all that modeling is much higher than you think it will be if you've never done it (that's why we don't do waterfall planning in software: it's not that no one's thought of it, it's that it's not effective on projects of any real complexity). As with any approach to planning, PERT works best at a particular scale and project type, and that's typically a quite large-scale non-software project.
In my personal opinion, from a software development standpoint, the valuable part of building a PERT chart is doing the work and thinking required to draw a dependency map for your tasks. Drawing all those lines to show what has to be done before what is an incredibly effective tool for helping you flesh out and find dependencies (tasks) you hadn't realized needed to be on your list. Use something like MS Project to build that dependency diagram, then force yourself to stop using Project because it's too seductive at making you feel like the data is giving you power when it's really just consuming your brainpower ineffectively. Use dependency mapping to build a waterfall caliber understanding of what you need to build, then set it aside and use more appropriate agile style approaches to actually work through the project in an optimum manner (which often means not building it in exactly the way you mapped out originally).
[0]https://en.m.wikipedia.org/wiki/Program_evaluation_and_revie...
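For a single activity, the classic PERT three-point estimate combines optimistic (O), most likely (M), and pessimistic (P) durations as E = (O + 4M + P) / 6, with standard deviation (P - O) / 6; a one-function sketch:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT three-point (beta-distribution) estimate for one activity:
    returns (expected duration, standard deviation)."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return expected, stdev

pert_estimate(4, 6, 14)  # -> (7.0, ~1.67)
```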
WBS: Work Breakdown Structure: https://en.wikipedia.org/wiki/Work_breakdown_structure
PERT -> see also ->
"Project network" https://en.wikipedia.org/wiki/Project_network :
> Other techniques: The condition for a valid project network is that it doesn't contain any circular references.
> Project dependencies can also be depicted by a predecessor table. Although such a form is very inconvenient for human analysis, project management software often offers such a view for data entry.
> An alternative way of showing and analyzing the sequence of project work is the design structure matrix or dependency structure matrix.
design structure matrix or dependency structure matrix: https://en.wikipedia.org/wiki/Design_structure_matrix
READMEs, Issues, Pull Requests, and Project Board Cards may contain Nested Markdown Task Lists with Issue (and actual Pull Request) # references:
- [ ] Objective
- [x] Task 1 +tag
- [ ] #237 (GitHub fills in the Title and Open/Closed/Merged state and adds a *hover card*)
- [x] Multiline Markdown list item indentation
<URL|text|>
- ID#:
- Title:
- Labels: [ ]
- Description: |
- https://URL#yaml-yamlld
- [x] Multiline Markdown list item indentation w/ --- YAML front matter delimiters
---
- id:
- title:
- labels: [ ]
---
- https://URL#yaml-yamlld
Time management > Setting priorities and goals > The Eisenhower Method: https://en.wikipedia.org/wiki/Time_management#The_Eisenhower... :
|            | Important | Not important |
| Urgent     |           |               |
| Not Urgent |           |               |
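The matrix reduces to a two-boolean lookup; a trivial sketch, using the conventional quadrant actions (do / schedule / delegate / eliminate) rather than anything from the linked article:

```python
def eisenhower_quadrant(urgent: bool, important: bool) -> str:
    """Map a task's urgency/importance to its Eisenhower-matrix action."""
    if urgent and important:
        return "do"
    if not urgent and important:
        return "schedule"
    if urgent and not important:
        return "delegate"
    return "eliminate"
```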
From "Ask HN: Any well funded tech companies tackling big, meaningful problems?" https://news.ycombinator.com/item?id=24412493 :
> https://en.wikipedia.org/wiki/Strategic_alignment ... "Schema.org: Mission, Project, Goal, Objective, Task" https://news.ycombinator.com/item?id=12525141
NSA urges orgs to use memory-safe programming languages
# Infosec Memory Safety
## Hardware
- Memory protection: https://en.wikipedia.org/wiki/Memory_protection
- NX Bit: https://en.wikipedia.org/wiki/NX_bit
- Can non-compiled languages (e.g. those with mutable code objects like Python) utilize the NX bit that the processor supports?
- Can TLA+ find side-channels (which bypass all software memory protection features other than encryption-in-RAM)?
- How do DMA and IOMMU hardware features impact software memory safety controls? https://news.ycombinator.com/item?id=23993763
- DMA: Direct Memory Access
- DMA attack > Mitigations: https://en.wikipedia.org/wiki/DMA_attack
- IOMMU: I-O Memory Management Unit; GPUs, Virtualization, https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_ma...
- Kernel IOMMU parameters: Ctrl-F "iommu": https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...
- RDMA: Remote direct memory access https://en.wikipedia.org/wiki/Remote_direct_memory_access
## Software
- Type safety > Memory management and type safety: https://en.wikipedia.org/wiki/Type_safety#Memory_management_...
- Memory safety > Types of memory errors: https://en.wikipedia.org/wiki/Memory_safety#Types_of_memory_...
- Template:Memory management https://en.wikipedia.org/wiki/Template:Memory_management
- Category:Memory_management https://en.wikipedia.org/wiki/Category:Memory_management
- Reference (computer science) https://en.wikipedia.org/wiki/Reference_(computer_science)
- Pointer (computer programming) https://en.wikipedia.org/wiki/Pointer_(computer_programming)
- Smart pointer (computer programming) in C++: unique_ptr, shared_ptr and weak_ptr; Python: weakref, Arrow Plasma IPC, https://en.wikipedia.org/wiki/Smart_pointer
- Manual Memory Management > Resource Acquisition Is Initialization https://en.wikipedia.org/wiki/Manual_memory_management#Resou...
- Resource acquisition is initialization (C++ (1980s), D, Ada, Vala, Rust), #Reference_counting (Perl, Python (CPython,), PHP,) https://en.wikipedia.org/wiki/Resource_acquisition_is_initia...
- Ada > Language constructs > Concurrency https://en.wikipedia.org/wiki/Ada_(programming_language)#Con...
- C_dynamic_memory_allocation#Common_errors: https://en.wikipedia.org/wiki/C_dynamic_memory_allocation#Co...
- Python 3 > C-API > Memory Management: https://docs.python.org/3/c-api/memory.html
- The Rust Programming Language > 4. Understanding Ownership > 4.1. What is Ownership? https://doc.rust-lang.org/book/ch04-00-understanding-ownersh...
- The Rust Programming Language > 6. Fearless Concurrency > Using Message Passing to Transfer Data Between Threads https://doc.rust-lang.org/book/ch16-02-message-passing.html#...
> One increasingly popular approach to ensuring safe concurrency is message passing, where threads or actors communicate by sending each other messages containing data. Here’s the idea in a slogan from the Go language documentation: “Do not communicate by sharing memory; instead, share memory by communicating.”
> To accomplish message-sending concurrency, Rust's standard library provides an implementation of channels. A channel is a general programming concept by which data is sent from one thread to another.
> You can imagine a channel in programming as being like a directional channel of water, such as a stream or a river. If you put something like a rubber duck into a river, it will travel downstream to the end of the waterway.
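The channel idea quoted above maps directly onto Python's stdlib `queue.Queue`; a minimal analogue of the Rust example (the messages and sentinel convention are made up for illustration):

```python
import queue
import threading

# One thread sends, the main thread receives; the Queue is the "channel".
ch = queue.Queue()

def producer():
    for msg in ["hi", "from", "the", "thread"]:
        ch.put(msg)
    ch.put(None)  # sentinel: no more messages

t = threading.Thread(target=producer)
t.start()
received = []
while (msg := ch.get()) is not None:
    received.append(msg)
t.join()
```

As in the Rust version, no memory is shared directly between the threads; ownership of each message effectively transfers through the channel.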
- The Rust Programming Language > 15. Smart Pointers > Smart Pointers: https://doc.rust-lang.org/book/ch15-00-smart-pointers.html
- The Rust Programming Language > 19. Advanced Features > Unsafe Rust: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html
- Secure Rust Guidelines > Memory management, > Checklist > Memory management: https://anssi-fr.github.io/rust-guide/05_memory.html
- Go 101 > "Type-Unsafe Pointers" https://go101.org/article/unsafe.html https://pkg.go.dev/unsafe
- https://github.com/rust-secure-code/projects#side-channel-vu...
- Segmentation fault > Causes, Examples, : https://en.wikipedia.org/wiki/Segmentation_fault
- "CWE CATEGORY: Pointer Issues" https://cwe.mitre.org/data/definitions/465.html
- "CWE CATEGORY: Memory Buffer Errors" https://cwe.mitre.org/data/definitions/1218.html
- "CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer" https://cwe.mitre.org/data/definitions/119.html
- "CWE CATEGORY: SEI CERT C Coding Standard - Guidelines 08. Memory Management (MEM)" https://cwe.mitre.org/data/definitions/1162.html
- "CWE CATEGORY: CERT C++ Secure Coding Section 08 - Memory Management (MEM)" https://cwe.mitre.org/data/definitions/876.html
- SEI CERT C Coding Standard > "Rule 08. Memory Management (MEM)" https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pa...
- SEI CERT C Coding Standard > "Rec. 08. Memory Management (MEM)" https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pa...
- Invariance (computer science) https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invari...
- TLA+ Model checker https://en.wikipedia.org/wiki/TLA%2B#Model_checker > The TLC model checker builds a finite state model of TLA+ specifications for checking invariance properties.
- Data remanence; after the process fails or is ended, RAM is not zeroed: https://en.wikipedia.org/wiki/Data_remanence
- Memory debugger; valgrind, https://en.wikipedia.org/wiki/Memory_debugger
- awesome-safety-critical https://awesome-safety-critical.readthedocs.io/en/latest/#so... ; Software Safety Standards, Handbooks; Formal Verification; backup/ https://github.com/stanislaw/awesome-safety-critical/tree/ma...
- > Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools: https://news.ycombinator.com/item?id=24511280
TEE Trusted Execution Environment > Hardware support, TEE Operating Systems: https://en.wikipedia.org/wiki/Trusted_execution_environment#...
List of [SGX,] vulnerabilities: https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...
Protection Ring: https://en.wikipedia.org/wiki/Protection_ring ... Memory Segmentation: https://en.wikipedia.org/wiki/Memory_segmentation
.data segment: https://en.wikipedia.org/wiki/Data_segment
.code segment: https://en.wikipedia.org/wiki/Code_segment
NX bit: https://en.wikipedia.org/wiki/No-execute_bit
Arbitrary code execution: https://en.wikipedia.org/wiki/Arbitrary_code_execution :
> This type of attack exploits the fact that most computers (which use a Von Neumann architecture) do not make a general distinction between code and data,[6][7] so that malicious code can be camouflaged as harmless input data. Many newer CPUs have mechanisms to make this harder, such as a no-execute bit. [8][9]
> - Memory debugger; valgrind, https://en.wikipedia.org/wiki/Memory_debugger
"The GDB developer's GNU Debugger tutorial, Part 1: Getting started with the debugger" (2021) https://developers.redhat.com/blog/2021/04/30/the-gdb-develo...
"Debugging Python C extensions with GDB" (2021) https://developers.redhat.com/articles/2021/09/08/debugging-... & "Python Devguide" > "GDB support" https://devguide.python.org/advanced-tools/gdb/ :
run, where, frame, p(rint),
py-list, py-up/py-down, py-bt, py-locals, py-print
/? site:github.com inurl:awesome inurl:gdb
https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
/? vscode debugger: https://www.google.com/search?q=vscode+debugger
/? jupyterlab debugger: https://www.google.com/search?q=jupyterlab+debugger
Ghidra: https://en.wikipedia.org/wiki/Ghidra
> Ghidra can be used as a debugger since Ghidra 10.0. Ghidra's debugger supports debugging user-mode Windows programs via WinDbg, and Linux programs via GDB. [11]
Ghidra 10.0 (2021) Release Notes: https://ghidra-sre.org/releaseNotes_10.0beta.html
"A first look at Ghidra's Debugger - Game Boy Advance Edition" (2022) https://wrongbaud.github.io/posts/ghidra-debugger/ :
> - Debugging a program with Ghidra using the GDB stub
> - Use the debugging capability to help us learn about how passwords are processed for a GBA game
/? site:github.com inurl:awesome ollydbg ghidra memory https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
Memory forensics: https://en.wikipedia.org/wiki/Memory_forensics
awesome-malware-analysis > memory-forensics: https://github.com/rshipp/awesome-malware-analysis/blob/main...
github.com/topics/memory-forensics: https://github.com/topics/memory-forensics :
- microsoft/avml: https://github.com/microsoft/avml :
/dev/crash
/proc/kcore
/dev/mem
> NOTE: If the kernel feature `kernel_lockdown` is enabled, AVML will not be able to acquire memory.
Aluminum formate Al(HCOO)3: Earth-abundant, scalable, & material for CO2 capture
"Aluminum formate, Al(HCOO)3: An earth-abundant, scalable, and highly selective material for CO2 capture" (2022) https://www.science.org/doi/10.1126/sciadv.ade1473
> Abstract: A combination of gas adsorption and gas breakthrough measurements show that the metal-organic framework, Al(HCOO)3 (ALF), which can be made inexpensively from commodity chemicals, exhibits excellent CO2 adsorption capacities and outstanding CO2/N2 selectivity that enable it to remove CO2 from dried CO2-containing gas streams at elevated temperatures (323 kelvin). Notably, ALF is scalable, readily pelletized, stable to SO2 and NO, and simple to regenerate. Density functional theory calculations and in situ neutron diffraction studies reveal that the preferential adsorption of CO2 is a size-selective separation that depends on the subtle difference between the kinetic diameters of CO2 and N2. The findings are supported by additional measurements, including Fourier transform infrared spectroscopy, thermogravimetric analysis, and variable temperature powder and single-crystal x-ray diffraction.
"NIST Breakthrough: Simple Material Could Scrub Carbon Dioxide From Power Plant Smokestacks" (2022) https://scitechdaily.com/nist-breakthrough-simple-material-c... :
> [What to do with all of the captured CO2?]
>> - Generate more formic acid to capture more CO2
- Make protein powder (#Solein,). And nutrients and flavors?
- Feed algae
- Feed plants, greenhouses
- CAES Compressed Air Energy Storage
- Firefighting: https://twitter.com/westurner/status/1572664456210948104
- Make aerogels. Aerogels are useful for firefighter protective clothing, extremely lightweight insulation, extremely lightweight packing materials, aerospace; and no longer require supercritical drying to produce: https://twitter.com/westurner/status/1572662622423584770?t=A...
- Make hydrogels. Hydrogels are useful for: firefighting: https://www.google.com/search?q=hydrogel+firefighting https://twitter.com/westurner/status/1572664456210948104
- Make diamonds, buckyballs, fullerenes, graphene
- Water filtration: activated carbon, nanoporous graphene
Is there a similar process (a MOF?) for capturing methane from flue gas? Is that before or after the CO2 capture?
- Methane is worse for Earth than CO2 FWIU. And there are many uncapped wells leaking methane, as now visible from space: https://news.ycombinator.com/item?id=33431427
- It's possible to make CBG Cleaner Burning Gasoline from methane (natural gas)
How does the #CopenHill recycling and flue capture facility currently handle CO2 and methane capture?
- "Transforming carbon dioxide into jet fuel using an organic combustion-synthesized Fe-Mn-K catalyst." (2020) https://doi.org/10.1038/s41467-020-20214-z https://news.ycombinator.com/item?id=25559414
Electrons turn piece of wire into laser-like light source
Full paper is here https://europepmc.org/article/ppr/ppr485152
"Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2022) Ye Tian, Dongdong Zhang, Yushan Zeng, Yafeng Bai, Zhongpeng Li, and 1 more https://doi.org/10.21203/rs.3.rs-1572967/v1
> Abstract: Surface plasmonic with its unique confinement of light is expected to be a cornerstone for future compact radiation sources and integrated photonics devices. The energy transfer between light and matter is a defining aspect that underlies recent studies on optical surface-wave-mediated spontaneous emissions. But coherent stimulated emission, being omnipresent in every laser system, remains to be realized and revealed in the optical near fields unambiguously and dynamically. Here, we present the coherent amplification of Terahertz surface plasmon polaritons via free electron stimulated emission. We demonstrate the evolutionary amplification process with a frequency redshift and lasts over 1-mm interaction length. The complementary theoretical analysis predicts a 100-order surface wave growth when a properly phase-matched electron bunch is used, which lays the ground for a stimulated surface wave light source and may facilitate capable means for matter manipulation, especially in the Terahertz band.
Polariton: https://en.wikipedia.org/wiki/Polariton
Surface plasmon polaritons : https://en.wikipedia.org/wiki/Surface_plasmon_polaritons :
> [...] Application of SPPs enables subwavelength optics in microscopy and photolithography beyond the diffraction limit. It also enables the first steady-state micro-mechanical measurement of a fundamental property of light itself: the momentum of a photon in a dielectric medium. Other applications are photonic data storage, light generation, and bio-photonics.[2][3][4][5]
Near-infrared window in biological tissue: https://en.wikipedia.org/wiki/Near-infrared_window_in_biolog...
NIRS Near-infrared spectroscopy > Applications: https://en.wikipedia.org/wiki/Near-infrared_spectroscopy#App...
This looks ripe for some cutting edge table-top physics!
TabPFN: Transformer Solves Small Tabular Classification in a Second
From https://twitter.com/FrankRHutter/status/1583410845307977733 :
> This may revolutionize data science: we introduce TabPFN, a new tabular data classification method that takes 1 second & yields SOTA performance (better than hyperparameter-optimized gradient boosting in 1h). Current limits: up to 1k data points, 100 features, 10 classes. 1/6
[Faster and more accurate than gradient boosting for tabular data: Catboost, LightGBM, XGBoost]
Mathics: A free, open-source alternative to Mathematica
Is there a reasonably neutral comparison of Mathics vs Mathematica anywhere?
Based on an amazing showcase[1] Mathematica is right at the top of my list of languages to learn if it (and at least some of the surrounding tooling) ever becomes open source. I wonder how many of those examples would give useful results in Mathics, or what their equivalents would be.
The thing about Mathematica / the Wolfram language is that it's quite a bit harder to create an open source interpreter than it was for e.g. R (which is actually a FOSS interpreter for the commercial package S) or for Matlab (not sure what the status of Octave, the FOSS interpreter for Matlab code, is; I read an entry on a mailing list a long time ago that its sole dev was giving up). A lot of the symbolic solvers that are used under the hood are Wolfram's IP, and it would be a monumental effort to recreate something similarly powerful from scratch.
You've heard of SageMath right?
The biggest thing missing from SageMath is a step by step solver. (Edit to add a caveat, I'm sure many professionals depend on minutiae of one or the other)
Feature-wise I'd say Sage has more Mathematica functionality than Octave does for MATLAB. Sage is not trying to be compatible; however, presumably it wouldn't be that hard if the functionality is there?
Sage is a bit of a Frankenstein though
SageMath (and the cocalc-docker image, and JupyterLite, and mambaforge) include SymPy, which can be called with `evaluate=False`:
Advanced Expression Manipulation > Prevent expression evaluation: https://docs.sympy.org/latest/tutorials/intro-tutorial/manip...
> There are generally two ways to prevent the evaluation, either pass an evaluate=False parameter while constructing the expression, or create an evaluation stopper by wrapping the expression with UnevaluatedExpr.
From "disabling automatic simplification in sympy" https://stackoverflow.com/a/48847102 :
> A simpler way to disable automatic evaluation is to use context manager evaluate. For example,
from sympy.core.evaluate import evaluate
from sympy.abc import x,y,z
with evaluate(False):
    print(x/x)
sage.symbolic.expression.Expression.unhold() and `hold=True`:
https://doc.sagemath.org/html/en/reference/calculus/sage/sym...
IIRC there is a Wolfram Jupyter kernel?
WolframResearch/WolframLanguageForJupyter: https://github.com/WolframResearch/WolframLanguageForJupyter
mathics/IMathics is the Jupyter kernel for mathics: https://github.com/mathics/IMathics@main#egg=imathics
#pip install jupyter_console imathics
#conda install -c conda-forge -y jupyter_console jupyterlab
mamba install -y jupyter_console jupyterlab
jupyter console
jupyter kernelspec list
pip install -e git+https://github.com/mathics/imathics@main#egg=mathics
jupyter console --kernel=
%?
%logstart?
%logstart -o demo.log.py
There are Jupyter kernels for Python, Mathics, Wolfram, R, Octave, Matlab, xeus-cling, allthekernels (the polyglot kernel). https://github.com/jupyter/jupyter/wiki/Jupyter-kernels
https://github.com/ml-tooling/best-of-jupyter#jupyter-kernel...
The Python Jupyter kernel checks IPython.display.display()'d objects for methods in order to represent an object in a command line shell, graphical shell (qtconsole), notebook (.ipynb), or a LaTeX document: _repr_mimebundle_(), _repr_html_(), _repr_json_(), _repr_latex_(), ..., __repr__(), __str__()
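A minimal sketch of that rich-repr protocol (the Frac class here is purely illustrative, not from IPython's docs):

```python
# Sketch of the rich-repr methods the Jupyter/IPython display machinery
# looks for on an object; frontends pick the richest format they support.
class Frac:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def _repr_html_(self):      # picked by HTML frontends (notebook)
        return f"<sup>{self.num}</sup>&frasl;<sub>{self.den}</sub>"

    def _repr_latex_(self):     # picked when exporting to LaTeX
        return rf"$\frac{{{self.num}}}{{{self.den}}}$"

    def __repr__(self):         # plain-text fallback (terminal)
        return f"{self.num}/{self.den}"

f = Frac(1, 2)
assert repr(f) == "1/2"
assert f._repr_latex_() == r"$\frac{1}{2}$"
```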
The last expression in an input cell of a notebook is implicitly displayed:
from IPython.display import display
%display? # argspec, docstring
%display?? # ' & source code
display(last_expression)
Symbolic CAS mobile apps with tabling and charting and varying levels of support for complex numbers and quaternions, for example: Wolfram Mathematica, Desmos, Geogebra, JupyterLite, Jupyter on mobile
Astronomers Discover Closest Black Hole to Earth
Micro black hole > Black holes in quantum theories of gravity https://en.wikipedia.org/wiki/Micro_black_hole#Black_holes_i...
Virtual black hole https://en.wikipedia.org/wiki/Virtual_black_hole
Timeline of gravitational physics and relativity https://en.wikipedia.org/wiki/Timeline_of_gravitational_phys...
- [ ] Superfluid Quantum Gravity
-- [ ] GR + Bernoulli's re: Dark Matter/Energy: Fedi (2017),
"What If (Tiny) Black Holes Are Everywhere?" https://youtu.be/srVKjWn26AQ
> Just one Planck relic per 30km cube, and that’s enough to make up most of the mass in the universe
Quantum foam: https://en.wikipedia.org/wiki/Quantum_foam
Sudo: Heap-based overflow with small passwords
Another day, another CVE in a tool that we rely on every day
The first question that we all want to ask
Could it be mitigated by safer, modern tech?
Yes.
One does not even need to reach for Rust. Literally any other language (in common use) other than very badly used C++ would not have had this problem.
C delenda est.
but if sudo was written in java we'd have other problems ;)
Do you know one of the reasons why Multics had a better security score than UNIX in the DoD assessment?
PL/I does bounds checking by default.
For reals. The fact that C toolchains have never even offered a bullet-proof bounds-checked (no UB) mode, no matter what the slowdown, boggles the mind. For something like sudo, literally running 100x slower would not be an issue. Its highest priority should be security.
100x slower would definitely be an issue. I am prompted for my sudo password probably 30x daily.
How long does sudo take to load on your system? Multiply that by 3000; is it really a noticeable number?
Seems to float around 0.005s (using "time sudo -k" to avoid timing user input), so yeah, x3000 = 15 seconds. Very noticeable.
Or even the x100 from (G)GP, that's a half second. Sometimes spiking to a full second.
I once saw a proposal to make any sudo call wait for 10 seconds before doing anything, so that the person running it has a moment to think about what they just told it to do (and have ~10 seconds to ^C cancel it).
The person arguing also brought up that normally anything needing sudo should be automated, so that should be fine.
I'm not doing enough system administration to judge if that is a sane idea or not ;=)
That might be a reasonable idea but the time probably shouldn't be imposed by performance constraints.
Does it say somewhere that non-network-IO PAM modules are supposed to be constant time?
import random
import time

def add_noise(t=10):
    time.sleep(t - 1)
    time.sleep(random.uniform(0, 1))
Constant time: https://en.wikipedia.org/wiki/Time_complexity#Constant_time
(Re: Short, DES passwords https://en.wikipedia.org/wiki/Triple_DES :
> A CVE released in 2016, CVE-2016-2183 disclosed a major security vulnerability in DES and 3DES encryption algorithms. This CVE, combined with the inadequate key size of DES and 3DES, NIST has deprecated DES and 3DES for new applications in 2017, and for all applications by the end of 2023.[1] It has been replaced with the more secure, more robust AES.
Except for PQ. For PQ in 2022: https://news.ycombinator.com/item?id=32760170 :
> NIST PQ algos are only just now announced: https://news.ycombinator.com/item?id=32281357 : Kyber, NTRU, {FIPS-140-3}? [TLS1.4/2.0?]
I'm not sure I follow this comment.
Adding a random amount of time seems like a reasonable thing to do.
Not sure what the links are all about, or the discussion of time complexity... I mean, there isn't an "input size" to talk about big-O scaling anyway, in the case of sudo.
Should the time to complete the (single-character) password-hashing/key-strengthening routine vary in relation to any aspect of the input?
Timing attacks > Avoidance https://en.wikipedia.org/wiki/Timing_attack
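The standard avoidance technique is a comparison whose duration does not depend on where the inputs first differ. A minimal stdlib sketch (illustrative only; not sudo's or PAM's actual code):

```python
import hmac

def digests_match(expected: bytes, candidate: bytes) -> bool:
    # hmac.compare_digest runs in time dependent only on the lengths,
    # not on the position of the first differing byte, which defeats
    # byte-by-byte timing probes against a naive == comparison.
    return hmac.compare_digest(expected, candidate)

assert digests_match(b"abcd", b"abcd")
assert not digests_match(b"abcd", b"abce")
```

Note this compares fixed-length hash digests, not raw passwords, so the length itself leaks nothing useful.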
Cree releases LEDs designed for horticulture
IIRC chlorophyll has 2 very distinct absorption peaks [1], so it should be possible to design lights that target those frequencies. But knowing how LEDs work, and the hacks with wavelength adjustment we use with phosphors and the like, I'm sure it's not easy.
[1] https://upload.wikimedia.org/wikipedia/commons/f/f6/Chloroph...
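A rough sketch of that targeting idea: check whether candidate LED emission bands cover chlorophyll a's two in-vitro absorption peaks (~430 nm blue, ~662 nm red). Both the peak values and the band edges here are approximate assumptions, not measured data:

```python
# Approximate in-vitro absorption maxima of chlorophyll a (assumption).
CHLOROPHYLL_A_PEAKS_NM = (430, 662)

def band_covers(band, peak):
    lo, hi = band
    return lo <= peak <= hi

# Hypothetical royal-blue and deep-red horticulture emitters:
blue_band, red_band = (420, 460), (650, 670)

assert all(band_covers(blue_band, p) or band_covers(red_band, p)
           for p in CHLOROPHYLL_A_PEAKS_NM)
```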
I mix 6000K/6500K (daylight) LED strips with 2700K (warm white) ones and plants seem very happy about it and it is OK for humans, too.
In fact I previously used only 6000K LED strips (in an insulated box so no other light at all) on cycad and other seeds and got very healthy plants.
I think the key for commercial applications like vertical farms is to optimise energy use by trying not to waste electricity on wavelengths that won't do much to boost production.
UV Ultraviolet light (UV-A, UV-B, and UV-C) and near-UV "antimicrobial violet light" are sanitizing radiation.
Natural sunlight sanitizes because the radiation from the sun includes UV-* band radiation that is sufficient to irradiate organic cells even at this distance.
EM light/radiation intensity decreases as a function of the square of the distance from the light source (though what about superfluidic wave functions, accretion discs, microscopic black hole interiors (one per 30km cube by one estimate), and Lagrangian points).
"Inverse-square law" https://en.wikipedia.org/wiki/Inverse-square_law
Ultraviolet: https://en.m.wikipedia.org/wiki/Ultraviolet
> Short-wave ultraviolet light damages DNA and sterilizes surfaces with which it comes into contact. For humans, suntan and sunburn are familiar effects of exposure of the skin to UV light, along with an increased risk of skin cancer. The amount of UV light produced by the Sun means that the Earth would not be able to sustain life on dry land if most of that light were not filtered out by the atmosphere. [2] More energetic, shorter-wavelength "extreme" UV below 121 nm ionizes air so strongly that it is absorbed before it reaches the ground. [3] However, ultraviolet light (specifically, UVB) is also responsible for the formation of vitamin D in most land vertebrates, including humans. [4] The UV spectrum, thus, has effects both beneficial and harmful to life.
Ultraviolet > Solar ultraviolet: https://en.wikipedia.org/wiki/Ultraviolet#Solar_ultraviolet
> The atmosphere blocks about 77% of the Sun's UV, when the Sun is highest in the sky (at zenith), with absorption increasing at shorter UV wavelengths. At ground level with the sun at zenith, sunlight is 44% visible light, 3% ultraviolet, and the remainder infrared. [23][24] Of the ultraviolet radiation that reaches the Earth's surface, more than 95% is the longer wavelengths of UVA, with the small remainder UVB. Almost no UVC reaches the Earth's surface. [25] The fraction of UVB which remains in UV radiation after passing through the atmosphere is heavily dependent on cloud cover and atmospheric conditions. On "partly cloudy" days, [...]
(Infrared light ~is heat.
There's usually plenty of waste heat from mechanical friction, exothermic chemical processes with gravity, lossy AC-DC conversion, and electronic components that waste energy (electron-field disturbance / thermions / heat) like steaming water towers, sans superconducting channels wide enough for electrons to not start tunneling out without a ground or a negative.)
Ultraviolet > Human health-related effects > harmful effects https://en.wikipedia.org/wiki/Ultraviolet#Harmful_effects
UV-A and UV-B protective eyewear (U6 polycarbonate) can be purchased in bulk. https://www.amazon.com/s?k=uvb+ansi+z87+glasses+20+pairs
Phlare: open-source database for continuous profiling at scale
What about maintenance on the existing projects? Open issues on Github:
Grafana: 2.6k issues, 275 PRs
Loki: 531 issues, 113 PRs
Mimir: 305 issues, 40 PRs
Tempo: 159 issues, 19 PRs
While they are open-source projects, I bet their software is still driven by customer feedback. I wouldn't put it past them to prioritize paying customer requests which takes time away from implementing features, doing bugfixes or code review.
I don't think it's fair to say that paying customer feedback is prioritised at the expense of the open-source community.
The thing with the core Grafana product being open source is that there's not that much dissimilarity between paying/enterprise Grafana users, and open-source users. Feedback from one set will almost always work in favor for the other.
NASA finds super-emitters of methane
How could a 2 mile long methane plume in New Mexico have been undetected for any significant amount of time?
From what I understand basic environmental monitoring is done in 2022 around all major industrial facilities in the U.S.
What's interesting is there is nothing there. The only thing there is gas wells, which the whole area is dotted with, so my only guess is leaky gas wells? https://www.google.com/maps/@32.3761968,-104.0819087,4787m/d...
Looks like Marathon Oil is the company
True story. I have worked oil and gas for a while and am familiar with this area. It can easily be a well not properly P&A'd (plugged and abandoned) - quite common down there. It is most likely a Marathon well (HARROUN COM #002), or it could be Eastland Oil (HARROUN A #007). If you want to get a better idea for yourself, you can check the EMNRD - https://ocd-hub-nm-emnrd.hub.arcgis.com/
Is there a way to IDK 3d earthen print a geodesic dome over the site and capture the waste methane (natural gas) into local tanks?
TIL about CBG: Cleaner Burning Gasoline
From https://twitter.com/westurner/status/1564443689623195650 :
> In September 2021 we covered a new "green gasoline" concept from @NaceroCo [in Penwell, TX] that involves constructing gasoline hydrocarbons by assembling smaller #methane molecules from natural gas
From https://www.houstonchronicle.com/opinion/outlook/article/Opi... :
> The Inflation Reduction Act imposes a fee of [$900/ton] of methane starting in 2024 — this is roughly twice the current record-high price of natural gas and five times the average price of natural gas in 2020.
> These high fees present a strong incentive
... "Argonne invents reusable [polyurethane] sponge that soaks up oil, could revolutionize oil spill and diesel cleanup" (2017) https://www.anl.gov/article/argonne-invents-reusable-sponge-...
FWIU, heat engines are useful with all thermal gradients: pipes, engines, probably solar panels and attics; "MIT’s new heat engine beats a steam turbine in efficiency" (2022) https://www.freethink.com/environment/heat-engine
I hear what you are saying, and there is a ton of room in the E&P space for improvements of all kinds. You would be shocked at how incredibly behind we are technologically (I remember just last year overhearing someone say, "We just figured out our cloud strategy.").

In terms of a dome over fields or units to collect stray methane, that may be an issue. We are loath to construct "enclosed spaces" for gases, as that can be a safety issue. It doesn't take much stray anything to kill you out there. We have all sorts of stories of people going into an enclosed space, passing out and dying, only to have more people die trying to get them out. Sounds bad, I know, but this is coming from someone who has come across a few dead bodies out in the field for various reasons - mostly just being stupid.

Fees are funny in oil and gas - we complain about how much money we don't have and then spend it frivolously elsewhere. That inflation act: at the state level there are all sorts of those out there, and some companies care and some don't.

If you want to see something crazy, check out the NDIC (North Dakota Industrial Commission). In terms of oil and gas data, theirs is the most centralized, easily accessed, and complete in the country (NM isn't bad, CA used to be better, TX is garbage - which is odd, LA is god awful, and PA is meh). The NDIC keeps really good track of flaring, so you can see how much natural gas is just burned up at the cost of getting the oil out (not such a great infrastructure for moving gas, and historically the price hasn't been a good inducement to build any). To get the well level data, it is $150/year, but well worth it if you are working that basin and also in comparison to all of the data services out there. https://www.dmr.nd.gov/oilgas/stats/statisticsvw.asp
So there only needs to be a bit of concrete in a smaller structure that exceeds bunker-busting bomb specs and 'funnels' (?) the natural gas to a tank or a bladder?
Are there existing methods for capturing methane from insufficiently-capped old wells?
Are the new incentives/fees/fines enough to motivate action thus far in this space?
OpenAPI is one way to specify integrable APIs. An RDFS vocabulary for this data is probably worthwhile; e.g. NASA Earth Science (?) may have a schema that all of the state APIs could voluntarily adopt?
Presumably the CopenHill facility handles waste methane? We should build waste-to-energy facilities in the US, too
FWIU Carbon Credits do not cover methane, which is worse than CO2 for #ActOnClimate
Natural gas isn't stored on site; it needs to be piped to the nearest plant to be processed and put into a sales line. Capturing methane from insufficiently capped old wells would not be economic in most cases. If a company was called out on it, they would just go dump more cement in it to make sure the gas is contained.

90% of the time, wells that are plugged are plugged well; the ones that aren't and are just abandoned maybe leak only 1-5 thousand cubic feet per day - nothing worth doing anything about (to the company financially). Fines typically mean nothing to E&P where they are now - though some have gotten clever about it - an example being North Dakota keeping flaring down by requiring that, in proportion to whatever you are flaring, you have to cut your oil production (oil is the more desired product) - though that was rescinded during the last big price downturn and I am not sure if it is back in effect.

To your statement about APIs - that is one thing the oil industry is terrrrrrible with. Our data collection and cleaning is abysmal. I agree with your statement, and I would be all for it, but E&P companies can't even get their own production numbers right - a good example is if you check out the fracfocus database where companies volunteer up their fracturing job compositions. Generally it is useful, but the people who input the data, similar to who would probably be handling this, can barely spell, and data cleaning would be a nightmare.

Waste-to-energy facilities are great, and there are some interesting things out in the oilfield, but, like everything else, there needs to be more financial incentive for companies to build them/use them.
So, in 2022, it's cheaper to dump concrete than to capture it, but the new fines this year aren't enough incentive to solve for: capture to a tank and haul, build unprocessed natural gas pipelines, or process onsite and/or fill tankers onsite?
Data quality: https://en.wikipedia.org/wiki/Data_quality
... Space-based imaging.
How long should they wait to up the methane fee if it's not enough to incentivize capping closed wells?
It is totally cheaper to dump some cement (it is mostly gel with a topping of cement). To P&A older wells maybe runs $12-25K (assuming, like, a 5-8k ft. depth conventional well)...and I may be running a little high on that number. That gets a small truck out there with a small crew to pull tubing and dump alternating layers of cement and gel (cement goes on the top and across formations that would be ground water bearing).

Fun fact: if you have to go back into an abandoned well and you come across red cement at the top, that is indicative of someone losing a nuclear based well tool in there, and to call someone before going further.

A typical 7.5-10k foot lateral unconventional well (horizontal wells) down that way will run about $7-8 million depending (and 6 wells on average on a well pad), but those aren't really the issue - just giving you some numbers to sort of show that fining someone $100K for something serious isn't that big an expense and not really a deterrent.

Natural gas lines are always a big deal to oil and gas companies - if you build it they will come. Most space in pipelines for operators is spoken for before they even dig the first trench.
Options:
A. Privately and/or Publicly grant to P&A wells
($25k+ * n_wells)
+ ($7-8m+ * m_wells)
B. Build natural gas pipelines that run past those well sites (approval)
C. Increase the incentives/fines/fees
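A hedged back-of-envelope using the figures quoted in this thread. The Mcf-to-tons conversion factor is an outside assumption (~19.2 kg CH4 per Mcf at standard conditions), and whether the IRA fee would even apply to small abandoned wells is itself an open question:

```python
# Figures quoted upthread (per-well upper bounds):
PA_COST_PER_WELL = 25_000   # one-time cost to P&A a conventional well, USD
LEAK_MCF_PER_DAY = 5        # leak rate of an abandoned well, thousand ft^3/day
FEE_PER_TON = 900           # IRA methane fee starting 2024, USD/ton

# Assumption: ~19.2 kg of methane per Mcf at standard conditions.
TONS_CH4_PER_MCF = 0.0192

annual_fee = LEAK_MCF_PER_DAY * 365 * TONS_CH4_PER_MCF * FEE_PER_TON
breakeven_years = PA_COST_PER_WELL / annual_fee

assert round(annual_fee) == 31536   # ~$31.5k/year in fees at the worst case
assert breakeven_years < 1          # fees would exceed the P&A cost within a year
```

Under these (generous) leak-rate assumptions, the fee alone would pay for plugging within a year; at the 1 Mcf/day end of the quoted range it would take roughly four years.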
Shouldn't it be pretty easy to find such tools with IDK neutron detection and/or imaging at what distance?
Show HN: Linen – Open-source Slack for communities
Hi HN, My name is Kam. I'm the founder of Linen.dev. Linen communities is a Slack/Discord alternative that is Google-searchable and customer-support friendly. Today we are open-sourcing Linen and launching Linen communities. You can now create a community on Linen.dev without syncing it from Slack and Discord!
I initially launched Linen as a tool to sync Slack and Discord conversations to a search engine-friendly website. As I talked to more community managers, I quickly realized that Slack and Discord communities don't scale well and that there needs to be a better tool, especially for open-source knowledge-based communities. Traditionally these communities have lived on forums that solved many of these problems. However, from talking to communities, I found most of them preferred chat because it feels more friendly and modern. We want to bring back a bunch of the advantages of forums while maintaining the look and feel of a chat-based community.
Slack and Discord are closed apps that are not indexable by the internet, so a lot of content gets lost. Traditional chat apps are not search engine friendly because most search engines have difficulty crawling JS-heavy sites. We built Linen to be search engine friendly, and our communities have over 30,000 pages/threads indexed by Google. Our communities that have synced their Slack and Discord conversations under their domain have an additional 40,000 pages indexed. We accomplish this by conditionally server rendering pages based on whether or not the browser client is a crawler bot. This way, we can bring dynamic features and a real-time feel to Linen and support search engines.
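That conditional server rendering might be sketched like this (an illustrative assumption, not Linen's actual code; the user-agent tokens and render functions are hypothetical):

```python
# Sketch: serve fully rendered HTML to crawlers, the JS app shell to humans.
CRAWLER_TOKENS = ("googlebot", "bingbot", "duckduckbot", "baiduspider")

def is_crawler(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token in ua for token in CRAWLER_TOKENS)

def render_static_html() -> str:          # full HTML, no JS required to read
    return "<html><body>thread content</body></html>"

def render_spa_shell() -> str:            # dynamic client-side app for humans
    return "<html><body><div id='app'></div><script src='/app.js'></script></body></html>"

def handle_request(user_agent: str) -> str:
    return render_static_html() if is_crawler(user_agent) else render_spa_shell()

assert is_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)")
assert not is_crawler("Mozilla/5.0 (X11; Linux x86_64) Firefox/106.0")
```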
Most communities become a support channel, and managing this many conversations is not what these tools are designed for. I've seen community admins hack together their own syncs and internal workflows to stay on top of the conversations. This is why we created a feed view, a single view for all the threads in all the channels you care about. We added an open and closed state to every thread so you can track them similarly to GitHub issues or a ticketing system. This way, you and your team won't miss messages and let them drop. We also allow you to filter conversations you are @mentioned in as a way of assigning tickets. I think this is a good starting point, but there is a lot more we can improve on.
How chat is designed today is inherently interrupt-driven and disrupts your team's flow state. Most of the time, when I am @mentioning a team member, I actually don't need them to respond immediately. But I do want to make sure that they do eventually see it. This is why we want to redesign how the notification system works. We are repurposing @mentions to show up in your feed and your conversation sections and adding a !mention. A @mention will appear in your feed but doesn't send any push notifications, whereas a !mention will send a notification when things need a real-time synchronous conversation. This lets you separate casual conversations from urgent conversations. When everything is urgent, nothing is. (credit: Incredibles) With this, along with the feed, you get a very forum-like experience for browsing the conversations.
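The @mention/!mention split could be routed like this (a sketch of the described behavior under my own assumptions, not Linen's implementation):

```python
import re

# @user -> appears in the feed only; !user -> feed plus a push notification.
MENTION_RE = re.compile(r"([@!])(\w+)")

def route_message(text: str):
    feed, push = set(), set()
    for sigil, user in MENTION_RE.findall(text):
        feed.add(user)          # both kinds land in the mentioned user's feed
        if sigil == "!":
            push.add(user)      # only !mentions interrupt with a notification
    return feed, push

feed, push = route_message("@alice please review when free; !bob prod is down")
assert feed == {"alice", "bob"}
assert push == {"bob"}
```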
Linen is free with unlimited history for public communities under https://linen.dev/community domain. We monetize by offering a paid version based on communities that want to host Linen under their subdomain and get the SEO benefits without managing their own self-hosted instance.
We are a small team of 3, and this is the first iteration, so we apologize for any missing features or bugs. There are many things we want to improve in terms of UX. In the near term, we want to improve search and add more deep integrations, DMs, and private channels. We would appreciate any feedback, and if you are curious about what the experience looks like, you can join us here at Linen.dev/s/linen
If I'm an organization choosing a chat platform, why would I want to use Linen versus Mattermost, which is also self-hostable and open-source, and much more mature? Or Matrix and Element (or any other Matrix client)?
This space is getting pretty crowded, and I'm not sure why I'd want to use Linen rather than one of the many alternatives.
I think it's misleading to call Mattermost "open source". Mattermost the for-profit company makes two products, one of which is open source, and a much larger one (which is approximately feature complete wrt modern Slack-likes) which is absolutely not.
The open source product they make does not work like a normal open source project: submitted improvements to it will presumably be denied if they add functionality that exists in the proprietary and closed source second product, also called Mattermost, or if they serve users (like removing the silent and nonconsensual phone-home spyware from segment.io that they embed in the released binaries). It's a normal and expected thing in the open source world for user communities to be able to participate in the software project. That's one of the defined goals of the idea of open source: you can fix and improve it, and share your fixes and improvements.
Mattermost the company is not really an "open source company", as they build and sell proprietary software as their main source of revenue. Their goal is increasing the use of proprietary software, and every user of the open source Mattermost is a target for them to convince to stop using open source and start using proprietary software. This is presumably why they make their open source version phone home to tell them about your usage of it, so that they can try to get you to pay them to start using the proprietary one if you get big/rich enough.
Furthermore, to even contribute to the "open source" project of Mattermost, you are required to sign a CLA, because they want to be able to resell your work commercially under nonfree licenses. You'll note there is no CLA required for most real open source work, because they don't care if you retain copyright - the open source license is all they need. There is no CLA required to contribute to Linux or gcc.
Mattermost the software is not really an "open source project" because you can't meaningfully improve the software as the maintainer has a vested financial interest in keeping basic features (like message expiry/retention periods/SSO) out of it to direct the community to become paid customers of their proprietary software.
Mattermost wants to use the work of contributors to an open source project to further their proprietary software goals, while maintaining their open source bait as a neutered stub.
It's open source in name only. The real Mattermost product is proprietary and the open source version only exists as a fake open source project to serve as an onramp for selling their closed source proprietary one.
> [Open-core Software Firm X] wants to use the work of contributors to an open source project to further their proprietary software goals, while maintaining their open source bait as a neutered stub.
Source-available software: https://en.wikipedia.org/wiki/Source-available_software
Open-core model > Examples: https://en.wikipedia.org/wiki/Open-core_model#Examples
Who can merge which pull requests to the most-tested and most-maintained branch of a fork?
CLAs are advisable regardless of software license.
Re: Free and Open Source governance models: https://twitter.com/westurner/status/1308465144863903744
Protobuf-ES: Protocol Buffers TypeScript/JavaScript runtime
Arrow Flight RPC (and Arrow Flight SQL, a faster alternative to ODBC/JDBC) are based on gRPC and protobufs: https://arrow.apache.org/docs/format/Flight.html :
> Arrow Flight is an RPC framework for high-performance data services based on Arrow data, and is built on top of gRPC and the IPC format.
> Flight is organized around streams of Arrow record batches, being either downloaded from or uploaded to another service. A set of metadata methods offers discovery and introspection of streams, as well as the ability to implement application-specific methods.
> Methods and message wire formats are defined by Protobuf, enabling interoperability with clients that may support gRPC and Arrow separately, but not Flight. However, Flight implementations include further optimizations to avoid overhead in usage of Protobuf (mostly around avoiding excessive memory copies).
"Powered By Apache Arrow in JS" https://arrow.apache.org/docs/js/index.html#powered-by-apach...
What's with your slightly on-topic but mostly off-topic comments? Your post history almost looks like something written by GPT-3, following the same format and always linking to a lot of external resources that only briefly touch the subject.
It looks like a bot which scans a comment, identifies some buzz phrases, and quotes those lines, replying with generic linked information (e.g. wikipedia) about those phrases.
No.
Read: https://westurner.github.io/hnlog/
If you have a question about what I just took the time to share with you here, that would be great. Otherwise, I'm going to need you on Saturday.
[deleted]
We need a replacement for TCP in the datacenter [pdf]
FWIU, barring FTL (superluminal) communication breakthroughs and controls, Deep Space Networking needs a new TCP as well:
From https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_t... :
void tcp_retransmit_timer(struct sock *sk) {
/* Increase the timeout each time we retransmit. Note that
* we do not increase the rtt estimate. rto is initialized
* from rtt, but increases here. Jacobson (SIGCOMM 88) suggests
* that doubling rto each time is the least we can get away with.
* In KA9Q, Karn uses this for the first few times, and then
* goes to quadratic. netBSD doubles, but only goes up to *64,
* and clamps at 1 to 64 sec afterwards. Note that 120 sec is
* defined in the protocol as the maximum possible RTT. I guess
* we'll have to use something other than TCP to talk to the
* University of Mars.
*
* PAWS allows us longer timeouts and large windows, so once
* implemented ftp to mars will work nicely. We will have to fix
* the 120 second clamps though!
*/
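The doubling-with-clamp behavior that kernel comment describes can be sketched in a few lines. This is a simplified model, not the kernel's actual RTO logic (which also updates RTT estimates per Jacobson/Karn); the 120-second clamp is the "maximum possible RTT" the comment mentions:

```python
def next_rto(rto: float, rto_max: float = 120.0) -> float:
    """Double the retransmission timeout on each retry, clamped to
    the protocol's assumed maximum RTT (120 s for TCP)."""
    return min(rto * 2, rto_max)

# Starting from a 1-second RTO, successive retransmits back off
# exponentially until they hit the clamp:
rto, schedule = 1.0, []
for _ in range(9):
    schedule.append(rto)
    rto = next_rto(rto)

print(schedule)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 120.0, 120.0]
```

A Mars-distance RTT (minutes) would sit far above that clamp, which is the comment's point about needing "something other than TCP to talk to the University of Mars."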
/? "tp-planet" "tcp-planet"
https://www.google.com/search?q=%22tp-planet%22+%22tcp-plane... https://scholar.google.com/scholar?q=%22tp-planet%22+%22tcp-...
A Message from Lunny on Gitea Ltd. and the Gitea Project
> I created Gitea
No mention of Gogs anywhere, really classy.
(Gitea is a fork of Gogs)
> Gitea was created by Lunny Xiao, who was also a founder of the self-hosted Git service Gogs.
Gogs is a clone of GitHub and GitLab (both of which were originally written in Ruby with the Ruby on Rails Convention-over-Configuration (CoC) web framework), which were built because Trac didn't support Git or multiple projects, SourceForge didn't support Git or on-prem hosting, and git patchbombs as attachments over mailing lists needed Pull Requests; and because Issues and PRs should pull from the same sequence of autoincrementing integer keys.
- You can do ~GitHub Pages with Gitea and an idempotent git post-receive hook that builds static HTML from a repo revision, tests, deploys to revid/ and updates a latest/ symlink, and logs; or with HTTP webhooks.
- "Feature: Allow interacting with tickets via email" https://github.com/go-gitea/gitea/issues/2386#issuecomment-6...
- It's not safe to host Gitea on the same server as the CI host (e.g. DroneCI) if you grant the CI container access to the docker socket: you need at least another VM to run the CI controller and workers on_push() with Gitea. https://docs.drone.io/server/provider/gitea/ :
> Please note we strongly recommend installing Drone on a dedicated instance. We do not recommend installing Drone and Gitea on the same machine due to network complications, and we definitely do not recommend installing Drone and Gitea on the same machine using docker-compose.
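The post-receive deploy step in the first bullet (build to a revid/ directory, then flip a latest/ symlink) could be sketched like this; the `deploy` helper, the paths, and the rev id are all hypothetical, and the atomic-rename trick assumes POSIX semantics:

```python
import os
import tempfile

def deploy(site_root: str, rev_id: str, html: str) -> str:
    """Write built HTML under <site_root>/<rev_id>/ and atomically
    repoint the latest/ symlink at it (hypothetical helper)."""
    rev_dir = os.path.join(site_root, rev_id)
    os.makedirs(rev_dir, exist_ok=True)
    with open(os.path.join(rev_dir, "index.html"), "w") as f:
        f.write(html)
    tmp_link = os.path.join(site_root, ".latest.tmp")
    latest = os.path.join(site_root, "latest")
    os.symlink(rev_id, tmp_link)   # relative symlink into the tree
    os.replace(tmp_link, latest)   # atomic rename-over on POSIX
    return latest

root = tempfile.mkdtemp()
deploy(root, "abc123", "<h1>rev abc123</h1>")
print(os.readlink(os.path.join(root, "latest")))  # abc123
```

Because the symlink swap is a single rename, readers never see a half-deployed latest/; re-running the hook for the same revision is idempotent.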
GitHub and GitLab centralize git for project-based collaboration, which is itself a distributed system.
Linux System Call Table – Chromiumos
System call: https://en.wikipedia.org/wiki/System_call
Strace and similar tools can trace syscalls to see what kernel system calls are made by a process: https://en.wikipedia.org/wiki/Strace#Similar_tools
Google/syzkaller https://github.com/google/syzkaller :
> syzkaller ([siːzˈkɔːlə]) is an unsupervised coverage-guided kernel fuzzer. Supported OSes: Akaros, FreeBSD, Fuchsia, gVisor, Linux, NetBSD, OpenBSD, Windows
Fuchsia / Zircon syscalls: https://fuchsia.dev/fuchsia-src/reference/syscalls
"How does Go make system calls?" https://stackoverflow.com/questions/55735864/how-does-go-mak...
Variability, not repetition, is the key to mastery
Over how many generations?
Genetic algorithm: https://en.wikipedia.org/wiki/Genetic_algorithm
Mutation: https://en.wikipedia.org/wiki/Mutation_(genetic_algorithm)
Crossover: https://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)
Selection: https://en.wikipedia.org/wiki/Selection_(genetic_algorithm)
...
AlphaZero / MuZero: https://en.wikipedia.org/wiki/MuZero :
> MuZero was trained via self-play, with no access to rules, opening books, or endgame tablebases.
Self-play algorithms essentially mutate and select according to the game rules. For a generally-defined mastery objective, are there subjective and/or objective game rules, and is there a distance metric for ranking candidate solutions?
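A minimal sketch of those three operators (mutation, crossover, selection) against a toy "mastery" objective, with a character-match count as the distance metric; the target string, population size, and rates are all illustrative:

```python
import random

random.seed(0)

TARGET = "mastery"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
POP_SIZE, ELITE = 100, 20

def fitness(s: str) -> int:
    # Distance metric: count of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def crossover(a: str, b: str) -> str:
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
       for _ in range(POP_SIZE)]
for gen in range(2000):
    pop.sort(key=fitness, reverse=True)  # selection
    if pop[0] == TARGET:
        break
    elites = pop[:ELITE]                 # elites survive unchanged
    pop = elites + [mutate(crossover(random.choice(elites),
                                     random.choice(elites)))
                    for _ in range(POP_SIZE - ELITE)]

print(gen, pop[0])
```

With an objective fitness function this converges quickly; the open question in the comment above is what plays that role when the "mastery" objective is subjective.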
The Docker+WASM Technical Preview
Michael Irwin from Docker here (and author of the blog post too). Happy to answer your questions, hear feedback, and more!
maybe a naive question: is there a way to run some form of docker in the browser? It could be a great education / demo tool
Great question! There isn't a way to run Docker directly in the browser. But, there are tools (like Play with Docker at play-with-docker.com) that lets you interact with a CLI in the browser to run commands against a remote cloud instance. I personally use this a lot for demos and workshops!
But... certainly a neat idea to think about what Wasm-based applications could possibly look like/run in the browser!
Is it possible to sandbox the host system from the guests in WASM?
Are there namespaces and cgroups and SECCOMP and blocking for concurrent hardware access in WASM, or would those kernel protections be effective within a WASM runtime? Do WASM runtimes have subprocess isolation?
/? subprocess isolation https://www.google.com/search?q=subprocess+isolation on a PC:
- TIL about the Endokernel: "The Endokernel: Fast, Secure, and Programmable Subprocess Virtualization" (2021) https://arxiv.org/abs/2108.03705#
> The Endokernel introduces a new virtual machine abstraction for representing subprocess authority, which is enforced by an efficient self-isolating monitor that maps the abstraction to system level objects (processes, threads, files, and signals). We show how the Endokernel can be used to develop specialized separation abstractions using an exokernel-like organization to provide virtual privilege rings, which we use to reorganize and secure NGINX. Our prototype, includes a new syscall monitor, the nexpoline, and explores the tradeoffs of implementing it with diverse mechanisms, including Intel Control Enhancement Technology. Overall, we believe sub-process isolation is a must and that the Endokernel exposes an essential set of abstractions for realizing this in a simple and feasible way.
Sandbox (computer security) > Implementations https://en.wikipedia.org/wiki/Sandbox_(computer_security)
- [x] Linux containers
- [ ] WASM with or without WASI
eWASM has costed (gas-metered) opcodes; loosely analogous to dynamic tracing in CPython.
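As a rough analogy only (not real eWASM gas accounting): CPython's `sys.settrace` can meter execution by charging one unit per traced line event and aborting when a budget is exhausted. The `metered` helper and its budget are hypothetical:

```python
import sys

def metered(func, budget: int):
    """Run func under a crude 'gas meter': each traced line event
    costs 1 unit; exceeding the budget aborts execution."""
    spent = 0
    def tracer(frame, event, arg):
        nonlocal spent
        if event == "line":
            spent += 1
            if spent > budget:
                raise RuntimeError("out of gas")
        return tracer
    sys.settrace(tracer)
    try:
        return func(), spent
    finally:
        sys.settrace(None)

def work():
    total = 0
    for i in range(10):
        total += i
    return total

result, gas = metered(work, budget=1000)
print(result, gas)  # result == 45, plus the line-event count
```

Costed WASM opcodes do this at the instruction level with statically known per-opcode prices, which is far cheaper than dynamic tracing.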
Are there side channels for many or most of these sandboxing methods; even at the CPU level?
google/gvisor could be useful for this? https://github.com/google/gvisor :
> gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system surface. It includes an Open Container Initiative (OCI) runtime called runsc that provides an isolation boundary between the application and the host kernel.
Python 3.11.0 Released
"What’s New In Python 3.11" https://docs.python.org/3.11/whatsnew/3.11.html
Tomorrow the Unix timestamp will get to 1,666,666,666
Approximations of Pi: https://en.wikipedia.org/wiki/Approximations_of_π
> In the 3rd century BCE, Archimedes proved the sharp inequalities 223⁄71 < π < 22⁄7, by means of regular 96-gons (accuracies of 2·10⁻⁴ and 4·10⁻⁴, respectively).
223/71 = 3.1408450704225
666/212 = 3.1415094339622
π = 3.14159265359
22/7 = 3.1428571428571
Is π a good radix for what types of math, in addition to trigonometry? And then what about e for natural systems; the natural log?
"Why do colliding blocks compute pi?" https://youtu.be/jsYwFizhncE https://www.3blue1brown.com/lessons/clacks-solution https://www.reddit.com/r/3Blue1Brown/comments/r29vm5/rationa...
Geogebra: https://www.geogebra.org/m/BhxyBJUZ :
> The applet shows the old method used to approximate the value of π. Archimedes used a 96-sided polygons to find that the value of π is 223/71 < π < 22/7 (3.1408 < π < 3.1429). In 1630, an Austrian astronomer Christoph Grienberger found a 38-digit approximation by using 10^40-sided polygons. This is the most accurate approximation achieved by this method.
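Archimedes' polygon method can be reproduced with Pfaff's recurrence on the semi-perimeters of the inscribed and circumscribed polygons of a unit circle, doubling from hexagons up to 96-gons:

```python
import math

# a = semi-perimeter of the circumscribed n-gon (n*tan(pi/n)),
# b = semi-perimeter of the inscribed n-gon   (n*sin(pi/n)).
# Doubling the side count: a' = 2ab/(a+b), b' = sqrt(a'*b).
a, b, n = 2 * math.sqrt(3), 3.0, 6   # start from hexagons
while n < 96:
    a = 2 * a * b / (a + b)
    b = math.sqrt(a * b)
    n *= 2

print(f"{b:.4f} < pi < {a:.4f}")  # 3.1410 < pi < 3.1427
```

The 96-gon bounds here (≈3.14103, ≈3.14271) sit just inside Archimedes' rational bounds 223/71 ≈ 3.1408 and 22/7 ≈ 3.1429.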
Bringing Modern Authentication APIs (FIDO2 WebAuthn, Passkeys) to Linux Desktop
WebAuthN: https://en.wikipedia.org/wiki/WebAuthn
FIDO2: https://en.wikipedia.org/wiki/FIDO2_Project
Arch/WebAuthN: https://wiki.archlinux.org/title/WebAuthn
U2F/FIDO2: https://wiki.archlinux.org/title/Universal_2nd_Factor
TPM: https://wiki.archlinux.org/title/Trusted_Platform_Module#Oth...
Seahorse: https://en.wikipedia.org/wiki/Seahorse_(software)
GNOME Keyring: https://en.wikipedia.org/wiki/GNOME_Keyring
tpmfido: https://github.com/psanford/tpm-fido
"PEP 543 – A Unified TLS API for Python" (withdrawn*) https://peps.python.org/pep-0543/#interfaces
> Specifying which trust database should be used to validate certificates presented by a remote peer.
certifi-system-store https://github.com/tiran/certifi-system-store/blob/main/src/...
truststore/_openssl.py: https://github.com/sethmlarson/truststore/blob/main/src/trus...
"Help us test system trust stores in Python" (2022) https://sethmlarson.dev/blog/help-test-system-trust-stores-i... :
python -m pip install \
    --use-feature=truststore Flask
Science, technology and innovation isn’t addressing world’s most urgent problems
> Changing directions: Steering science, technology and innovation for the Sustainable Development found that research and innovation around the world is not focused on meeting the UN’s Sustainable Development Goals, which are a framework set up to address and drive change across all areas of social justice and environmental issues.
https://globalgoals.org/ #GlobalGoals #SDGs #Goal17
Each country (UN Member State) prepares an annual country-level report - an annual SDG report - on their voluntary progress toward their self-defined Targets (which are based upon Indicators; stats).
Businesses that voluntarily prepare a sustainability report necessarily review their SDG-aligned operations' successes and failures. The GRI Corporate Sustainability report is SDG aligned; so if you prepare an annual Sustainability report, it should be easy to review aligned and essential operations.
GRI Global Reporting Initiative is also in NL: https://en.wikipedia.org/wiki/Global_Reporting_Initiative
> Critically, the report finds that research in high-income and middle-income countries contributes disproportionally to a disconnect with the SDGs. Most published research (60%-80%) and innovation activity (95%-98%) is not related to the SDGs.
Strategic alignment: https://en.wikipedia.org/wiki/Strategic_alignment
https://USAspending.gov resulted from tracking State-level grants in IL: the Federal Funding Accountability and Transparency Act: https://en.wikipedia.org/wiki/Federal_Funding_Accountability...
Unfortunately, https://performance.gov/ and https://USAspending.gov/ do not have any way to - before or after funding decisions - specify that a funded thing is SDG-aligned.
IMHO, we can easily focus on domestic priorities and also determine where our spending is impactful in regards to the SDGs.
> Illustrating the imbalance, the report found that 80 percent of SDG-related inventions in high-income countries were concentrated in just six of the 73 countries
Lots of important problems worth money to folks:
#Goal1 #NoPoverty
#Goal2 #ZeroHunger
#Goal3 #GoodHealth
#Goal4 #QualityEducation
#Goal5 #GenderEquality
#Goal6 #CleanWater
#Goal7 #CleanEnergy
#Goal8 #DecentJobs
#Goal9 #Infrastructure
#Goal10 #ReduceInequality
#Goal11 #Sustainable
#Goal12 #ResponsibleConsumption
#Goal13 #ClimateAction
#Goal14 #LifeBelowWater
#Goal15 #LifeOnLand
#Goal16 #PEACE #Justice
#Goal17 #Partnership #Teamwork
If you label things with #GlobalGoal hashtags, others can find solutions to the very same problems.
Quantum Monism Could Save the Soul of Physics
It reminds me of Sean Carroll's Hilbert Space Fundamentalism.
Sean M. Carroll: Reality as a Vector in Hilbert Space
Hilbert space https://en.wikipedia.org/wiki/Hilbert_space :
From sympy.physics.quantum.hilbert https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :
__all__ = [
'HilbertSpaceError',
'HilbertSpace',
'TensorProductHilbertSpace',
'TensorPowerHilbertSpace',
'DirectSumHilbertSpace',
'ComplexSpace',
'L2',
'FockSpace'
]
From sympy.physics.quantum.operator https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :
__all__ = [
'Operator',
'HermitianOperator',
'UnitaryOperator',
'IdentityOperator',
'OuterProduct',
'DifferentialOperator'
]
From sympy.physics.quantum.operatorset https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :
""" A module for mapping operators to their corresponding eigenstates and vice versa
It contains a global dictionary with eigenstate-operator pairings.
If a new state-operator pair is created,
this dictionary should be updated as well.
It also contains functions operators_to_state and state_to_operators for mapping between the two. These can handle both classes and instances of operators and states.
See the individual function descriptions for details.
TODO List:
- Update the dictionary with a complete list of state-operator pairs
"""
From sympy.physics.quantum.represent https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :
"""Logic for representing operators in state in various bases.
TODO:
* Get represent working with continuous hilbert spaces.
* Document default basis functionality.
"""
# ...
__all__ = [
'represent',
'rep_innerproduct',
'rep_expectation',
'integrate_result',
'get_basis',
'enumerate_states'
]
# ...
def represent(expr, **options):
"""Represent the quantum expression in the given basis.
"I am one with the universe"From tequila/simulators/simulator_cirq https://github.com/tequilahub/tequila/blob/master/src/tequil... :
from tequila.wavefunction.qubit_wavefunction import QubitWaveFunction
From tequila.circuit.qasm https://github.com/tequilahub/tequila/blob/master/src/tequil... :
> """ Export QCircuits as qasm code. OPENQASM version 2.0 specification from:
> A. W. Cross, L. S. Bishop, J. A. Smolin, and J. M. Gambetta, e-print arXiv:1707.03429v2 [quant-ph] (2017). https://arxiv.org/pdf/1707.03429v2.pdf
Why are you posting this?
Because there is a functional, executable symbolic-algebra implementation of such Hilbert spaces and their practical representations (and qubit applications), and it's approachable because it isn't ambiguous MathTeX without automated tests and test assertions.
Because it's easier to learn math things by preparing a notebook with MathTeX and/or SymPy expressions (which have a MathTeX representation) and then making test assertions about the symbolic expression, and/or `assert np.allclose()` with real-valued parameters after symbolic construction and derivation.
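For example, that workflow (symbolic construction, then a numeric `np.allclose` check with real-valued parameters) looks roughly like this, assuming SymPy and NumPy are installed; the identity chosen is just an illustration:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x)**2 + sp.cos(x)**2

# Symbolic assertion: the expression simplifies to 1.
assert sp.simplify(expr) == 1

# Numeric assertion with real-valued parameters after the
# symbolic construction:
f = sp.lambdify(x, expr, "numpy")
xs = np.linspace(-3, 3, 101)
assert np.allclose(f(xs), 1.0)
print("symbolic and numeric checks passed")
```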
Geothermal may beat batteries for energy storage
Many of the greenhouse Youtube channels I watch do a smaller DIY version of this. They have either dark barrels of water in the greenhouse with the windows facing south to the sun, or a thin metal wall filled with clay and pipes acting as a sun-heat-battery. They pump the water through the barrels or clay battery into pipes that are under ground to store the heat. In winter time they extract the heat from the ground keeping the greenhouse warm using a combination of solar and commercial power for the pumps. Heat is also extracted from compost bins at each end of the greenhouse. This works well in extremely cold climates in Canada and Alaska. A few of these folks do all of this without using any electricity at all and somehow manage to get water moving through the pipes using heat convection alone.
I've been thinking about doing something like this but connecting it to pipes under the foundation of my home so I can get rid of the wallboard heaters or just leave them off. Electricity is the only commercial utility near me.
This reminds me a lot of the Earthship community in the middle of the desert, which always focusses on smart ways of reusing water for different purposes, and by not wasting water as much as possible. The houses usually have a clean/grey/blackwater + a rainwormbox system to process fecals and reuse it for growing plants. Their air conditioning system is basically just a pipe in the ground where the hot air flows through, cools down, and automatically gets pushed through the house when they open the windows on the roof.
I was always wondering why there are no systems converting the unused electricity in potential energy by moving water to a higher ground. And more importantly: Why there are no water storages on the roof.
> I was always wondering why there are no systems converting the unused electricity in potential energy by moving water to a higher ground
There are a bunch of pumped storage facilities around [1]. But they work best at massive scale, so suitable locations are somewhat limited. Plus they are expensive to build and often face environmental protests (similar to building dams). Still, it's a solution I'm a fan of.
[1] https://en.wikipedia.org/wiki/List_of_pumped-storage_hydroel...
emphasis on massive scale.
Moving 500,000 kg (over 1 million pounds) 7.5 meters (~25 feet aka the height of a house) will give you about 10 kWh of energy. This is equivalent to running a 425W device all day, like a small air conditioner. The relationship is linear. Double the weight or the distance to double the energy. All of the metal at a scrap yard I know of amounts to less than half that weight, for reference.
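The arithmetic above is just gravitational potential energy, E = mgh:

```python
# E = m*g*h for the numbers in the comment above
m = 500_000        # kg (over 1 million pounds)
g = 9.81           # m/s^2
h = 7.5            # m (~25 feet)
E_joules = m * g * h
E_kwh = E_joules / 3.6e6            # 1 kWh = 3.6 MJ
watts_all_day = E_kwh * 1000 / 24   # continuous draw over 24 h

print(f"{E_kwh:.1f} kWh, ~{watts_all_day:.0f} W for a day")
# 10.2 kWh, ~426 W
```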
I'm also a fan because pumped storage is a really interesting storage method, but it is beyond niche. It is very tough to move that kind of weight around efficiently for what you get back. Pumping water to great heights is not easy either. (see also: moving rail-carts up a mountain)
They are building a lot of them in China
https://en.wikipedia.org/wiki/List_of_pumped-storage_hydroel...
Makes sense, it is definitely a useful tool. I just think it is insufficient to act as storage. It can be good at producing variable amounts of Watts on demand but not so good at storing enough Watt-hours to keep things running for very long. I can see a great appeal for it to help with load-balancing for a significant amount of choppiness between supply and demand on the hour timescale.
For something like solar, where we will want to store over half our daily energy production at peak storage (ideally 2-3 days' worth, I think) - I don't think it holds up. Additionally, it doesn't seem like a good bet as a primary mechanism for either storage or on-demand generation if energy consumption continues to increase, due to the rather large coefficients involved in scaling it up.
"The United States generated 4,116 terawatt hours of electricity in 2021"[1]
4,116 TWh/year ≈ 11.3 TWh/day
The storage capacities for the largest items listed on the wiki are on the order of GWh. The scale goes kilo-, mega-, giga-, then tera-. So we are talking about a need on the order of a thousand pumped storage facilities per country. The US would need over 50 of them per state (on average) in order to keep everything running without production for 24 hours. It doesn't matter how many solar panels we have; if we get one dark day then we would run out of power. If we tried to rely on solar entirely, we'd also still need very roughly half that amount of storage just to get through the night.
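The back-of-envelope above, with the per-facility capacity as an explicit assumption (the comment implies a few GWh per large plant; 4 GWh is the illustrative value used here):

```python
us_annual_twh = 4116            # 2021 US generation (from the comment)
daily_twh = us_annual_twh / 365
facility_gwh = 4                # assumed capacity of one large plant
facilities = daily_twh * 1000 / facility_gwh   # TWh -> GWh
per_state = facilities / 50

print(f"{daily_twh:.1f} TWh/day; ~{facilities:.0f} plants; "
      f"~{per_state:.0f} per state")
```

Doubling the assumed plant size halves the count, but the conclusion (thousands of plants for one day of storage) is robust to the assumption.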
lithium batteries are obviously much better suited for overnight storage, but I have no idea what the numbers are on how much lithium is physically available to use as such storage.
If we want to get on the order of monthly to yearly storage to allow, for example, solar panels in Alaska to provide enough energy for a resident to get through months of darkness - I have no idea what the leading storage options are, probably lithium still
[1] https://www.statista.com/statistics/188521/total-us-electric...
Sodium ion is expected to sharply take over cost-limited applications some time in the next couple of years. There are pilot mass-production programs designed to avoid scarce materials that drop into existing processes. Natron has products on the market (at presumably high cost) targeting datacenters for high-safety applications.
For longer-scale storage it's a toss-up between opportunistic pumped hydro, CAES where geology makes it easy, hydrogen in similar areas with caverns, ammonia, synthetic hydrocarbons, sodium ion, and one of the emerging molten salt or redox flow battery technologies. Lithium isn't really in the running due to resource limits.
Wires also have a lot of value for decreasing the need for storage. Joining wind and solar 1000s of km apart can greatly reduce downtime. Replacing as much coal and oil with those, and maintaining the OCGT and CCGT fleet is the fastest and most economic way to target x grams of CO2e per kWh where x is some number much smaller than the 400 of pure fossil fuels but bigger than around 50. Surplus renewable power (as adding 3 net watts of solar is presently cheaper than the week of storage to get an isolated area through that one week where capacity is 1/3rd the average) will subsidize initial investments into better storage and electrolysis with no further interventions needed.
Awesome response. I've come across the molten salt option but haven't researched it in depth. I saw it referenced as something a lot of scientists are hyping up, but I am not sure what kind of engineering challenges exist for implementation and maintenance.
Second paragraph is a bit too information-dense; I had trouble following some of it. Renewable energy deficiencies will be localized, so I understand how wires help here. A larger connected area produces more stability; makes sense. Agreed with the carbon-reduction priority to tackle coal and oil first. Surplus renewable power acting as a subsidy checks out, but that is skirting around the energy storage problem IMO. Sounds like you are saying "instead of storing renewable energy, get more than you need and sell it back to the grid, and then use those funds to buy the energy back later". This would certainly work for local consumers, but it doesn't do much to help the power grid itself manage the surplus energy. Sell it to neighboring power grids? That ties into the first point about connecting a larger area - but what are the limits here? Can we physically connect the sunny side of Earth to the dark side? (ignoring that it seems logistically/legally prohibitive)
the question really comes down to what should we be spending money on to get "better storage"? What are the best solutions for long-term local storage?
> the question really comes down to what should we be spending money on to get "better storage"? What are the best solutions for long-term local storage?
The solution I'm proposing is basically 'the best place to spend your money on storage is to not spend it on storage yet'
If the goal is to reduce emissions asap, then focusing on the strategy that removes x% of 100% of the emissions rather than 100% of y% of the emissions makes sense unless there are enough resources/money that y% is more than x%. And storage is currently expensive enough that you need many times as much money for this to be true to 99.9% confidence.
Getting a wind + solar system that has at least y watts at least eg. 90% of the time is remarkably affordable already and still going down.
In excellent climates new solar costs less per MWh than fuel for a gas turbine (and is not far off fuel for a nuclear reactor). Wind is not much more. Distribution, dealing with less than ideal sites and oversupply increase the cost, but an ideal mix has very little storage (4-12 hours) which can be delivered by lithium batteries.
By relying on the existing fossil fuel/hydro/nuclear/whatever to pick up the last 10% for now, you can replace more coal/oil more quickly than other strategies. During this build all storage technologies where they make the most sense so that when that last 10% is needed, prices will have dropped. I'm fairly sure some mix of green hydrogen and green ammonia burning in those same turbines will be one of the winners (ammonia in particular has negligible marginal cost of capacity allowing for a strategic reserve, and will be needed to replace fossil fuel derived fertilizer anyway).
In the unlikely case that there's an overnight $2 trillion investment in new wind/solar/powerlines and production capacity to match in the US then choosing a dispatchable power source from some or all of: expensive green hydrogen, expensive abundant existing batteries, expensive pumped hydro, and expensive nuclear or immediately going all in on commercialising every vaguely promising electrolyser tech becomes the priority.
Completely agree with the hybrid approach wrt reducing emissions. I am talking more towards work that would be done concurrently with that.
> During this build all storage technologies where they make the most sense so that when that last 10% is needed, prices will have dropped
this is kind of the point of what I'm getting at. Without any investment, none of the storage technologies are going to make much progress. If not financial investment, then at least a time investment from research/science teams. then again, maybe opportunism/free market will take care of this and we can assume any progress that can be made will be made by people trying to make a name for themselves or be first to market. I'm still curious to size up what that progress might look like for discussion/entertainment purposes in any case
Good storage solutions would immediately pay dividends through arbitrage, which would keep electric prices stable, and then anywhere renewable energy generation is more than demand and storage is sufficient, that stable price point could come down below the cost of using coal/oil as well as any other continuous production method. We would be able to consolidate power generation over time, not just space, and realize gains from that. As in, use massive bursts of energy production to top off storage and use them to exactly meet demand. Maybe this opens the door for more alternative energy production methods as well (that are better suited for burst than steady)
In terms of promising technologies, they're broadly categorisable as thermal, kinetic, battery/fuel cell, and thermochemical. Most of the promising ones are far enough along the learning curve that other markets (such as green hydrogen/ammonia for fertiliser driving electrolysers and small scale/more efficient chemical reactors) will drive the learning curve.
Thermal storage concepts include:
Molten salt thermal. short/medium for high grade heat. Most high grade heat is dispatchable (fire) and so doesn't make sense to store, or expensive (solar thermal, nuclear) and so isn't worth pursuing.
Sand thermal batteries. Low grade heat for medium/long term. Only useful for heating and some industrial purposes. Has a minimum size (neighborhood). Literally dirt cheap.
Thermochemical. I guess this is kind of a fuel? Use case is for low grade heat so it can go here. Phase change materials like sodium acetate or reversible solution like NaOH seem really appealing for heating. Back of envelope says it's close to competitive with electric heating, so I'd expect more attention as it's cheaper than any technology that stores work. No idea why it isn't being rolled out. You could even charge it with heat pumps for extremely high efficiency if needed.
Kinetic:
Lifting stuff. Only really works for water without large subsidies and only if you already have at least one handy reservoir like a watershed or cavern. No reason to expect it would suddenly get cheaper as digging holes and moving big things is already something lots of industries try to do cheaply. Great addition to existing hydro.
Sinking stuff (using buoys to store energy). I can't comprehend how this can be viable. I have seen it espoused, but it doesn't pass back of the envelope test unless I did a dumb.
Squashing stuff. Compressed air energy storage. Tanks are just barely competitive with last-gen batteries capacity-wise; efficiency isn't great. There are concepts for underwater bladders (let the water do the holding) or cavern-based storage that seem viable at current rates. Achievable with abundant materials, so worst-case scenario we nut up and spend $500/kWh. Keywords: CAES, cavern or underwater energy storage
Battery/fuel cell:
Lithium ion: One of the best options currently. Will be heavily subsidised by car buyers. Has hit the limits of current mining production, which puts a floor on price, and is ecologically devastating.
X ion where x is probably sodium: Great slot in replacement. Barring large surprises will expect it to replace LiFePO4 very soon for most uses. Expect the learning rate of lithium ion manufacturing to continue resulting in a sharp jump to $60/kWh in 2021 dollars and eventual batteries around $30/kWh. Key word natron (have just brought their first product to market and are working with other parts of the supply chain to scale up)
Flow batteries, air batteries and fuel cells. These are almost the same concept. You have a chemical reaction that makes electricity with a circular resource like hydrogen, methane, ammonia, or electrolyte. Downside is most versions require a prohibitive amount of some metal like ruthenium or vanadium or something. Not a fundamental limit, but not sure it will be a great avenue as research goes back a fair ways. Aluminum-air batteries are one interesting concept, essentially turning Al smelters into fuel production facilities. Keywords: iron-air, aluminum-air, redox-flow, direct methane fuel cell, ammonia fuel cell, ammonia cracking, nickel fuel cell.
Molten salt batteries. Incredibly simple, cheap and scalable concept that has no problems with dendrites (and so theoretically no cycle limit) with one limitation on portability (they must be hot, sloshing is bad) and one as yet insurmountable deal breaking flaw (incredibly corrosive material next to an airtight insulating seal). Look up Ambri for details of an attempt which has presumably failed by now. There is a more recent attempt using a much lower temperature salt and sodium sulfur which shows promise. Keywords ambri, sodium sulfur battery.
Thermochemical:
Any variation on burning stuff you didn't dig up.
Hydrogen is hard to store for more than a few days' worth, but underground caverns could help. I expect a massive scandal about fugitive hydrogen, toxicity and greenhouse effect sometime in the 2030s. It's borderline competitive to make now. The main limitations are the cost of energy (solved by more wind and solar and more 4-hour storage) and the cost of capital (platinum/palladium/ruthenium/nickel are usually required). Lots of work going on to reduce the latter and to increase power density and efficiency. If you were directing a billion dollars of public funds, this would probably be the place to put it. Keywords: $200/kW electrolyser, hysata 95% efficient.
Methane, ammonia, dimethyl ether, methanol, etc. These are all far easier to store than hydrogen. Production needs large scale but is borderline viable already if you have cheap hydrogen. Keywords ammonia energy storage, synthetic fuels, efuels, green ammonia, direct ammonia electrolysis.
Then there's virtual batteries.
Many loads like aluminum smelting can be much more variable than they are now. Rearranging workflows such that they can scale up or down by 50% and change worker tasks to suit has the same function as storage during any period where consumption isn't zero. EV's can kinda fit here too and kinda fit actual storage (especially if they power other things)
Biofuels. Not technically storage, more dispatchable, but it serves a similar function. Bagasse is an option for a few percent of power. Waste-stream methane is a possibility for a couple percent of power. Limited by the extremely low efficiency of photosynthesis, so something PV-based will likely be a better way of making hydrocarbons from air and sunlight. Most other 'biofuels' are either fossil fuels with extra steps or ways of getting paid green-energy credits for burning native forests. Some grad student might surprise us by creating a super-algae that's 10% efficient and doesn't all get eaten if there's a single bacterium in the room. Detangling it all is hard, but I wouldn't be surprised if wind + solar + biofuels + reining in the waste was enough -- it certainly works for some people doing off-grid.
I'd expect a system based on sodium ion (or even lithium) batteries and synthetic fuels to render any fossil fuel mix unviable in the next decade or two. More scalable batteries or scalable fuel cells would hasten this somewhat.
CAES (Compressed Air Energy Storage)
"Compressed air storage vs. lead-acid batteries" (2022) https://www.pv-magazine.com/2022/07/21/compressed-air-storag... :
> Researchers in the United Arab Emirates have compared the performance of compressed air storage and lead-acid batteries in terms of energy stored per cubic meter, costs, and payback period. They found the former has a considerably lower CAPEX and a payback time of only two years.
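Payback period here is just CAPEX divided by annual net revenue. A minimal sketch with hypothetical placeholder figures (these are not the numbers from the UAE study):

```python
# Simple payback period: CAPEX divided by annual net revenue.
# All figures are hypothetical placeholders, not from the cited study.
capex = 200_000           # $ installed cost
annual_revenue = 120_000  # $ from arbitrage / peak shaving
annual_opex = 20_000      # $ operations and maintenance

payback_years = capex / (annual_revenue - annual_opex)
print(f"payback: {payback_years:.1f} years")  # 2.0 years
```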
FWIU China has the first 100 MW CAES plant; and it uses some external energy - not a trompe or geothermal (?) - to help compress air, in what is FWIU currently a ~one-floor facility.
Couldn't CAES tanks be filled with CO2/air to fight battery fires?
A local CO2 capture unit should be able to fill the tanks with extra CO2 if that's safe?
Should there be a poured concrete/hempcrete cask to set over burning batteries? Maybe a preassembled scaffold and "grid crane"?
How much CO2 is it safe to flood a battery farm with - with and without oxygen tanks - after the buzzer due to a detected fire/leak? There could be infrared on posts and drones surrounding the facility.
Would it be cost-advisable to have many smaller tanks and compressors; each in a forkable, stackable, individually-maintainable IDK 40ft shipping container? Due to: pump curves for many smaller pumps, resilience to node failure?
If CAES is cheaper than the cheapest existing batteries, it can probably be made better with new-gen ultralight hydrogen tanks for aviation, but for air ballast instead?
Do submarines already generate electricity from releasing ballast?
(FWIW, all modern locomotives - which are already diesel-electric generators - do not yet have regenerative braking.)
PostgresML is 8-40x faster than Python HTTP microservices
Python is slow for ML. People will take time to realize it. The claim that most of the work is done in GPU -- covers only a small fraction of cases.
For example, in NLP a huge amount of pre and post processing of data is needed outside of the GPU.
spaCy is much faster on the GPU. Many folks don't know that cuDF (a pandas implementation for GPUs) parallelizes string operations (these are notoriously slow in pandas)... shrug...
Apache Ballista and Polars do Apache Arrow and SIMD.
The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF*, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin,} but not yet PostgresML? https://h2oai.github.io/db-benchmark/
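For context, the benchmark's tasks are database-style operations such as group-by aggregation. A pure-Python baseline of one such task - the slow per-row loop that Polars, cuDF, etc. vectorize over Arrow columns with SIMD:

```python
from collections import defaultdict

# One task from the "database-like ops" benchmark family, in plain Python:
# group by key and sum a value column. The per-row Python loop below is the
# slow baseline that columnar engines replace with vectorized kernels.
rows = [("a", 1), ("b", 2), ("a", 3), ("b", 4), ("c", 5)]

sums = defaultdict(int)
for key, value in rows:
    sums[key] += value

print(dict(sums))  # {'a': 4, 'b': 6, 'c': 5}
```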
Ask HN: How to become good at Emacs/Vim?
I've tried switching from IDEs like VSCode to Emacs (with evil mode) a few times now, but I always gave up after a while because my productivity decreases. Even after 1-2 weeks it's still not close to what it was with VSCode. That's frustrating. But when I watch proficient people using these editors I'm always amazed at what they can do, and they appear more productive than I am with VSCode. So with enough effort it should be a worthwhile investment.
I think my problem is the lack of a structured guide/tutorial focused on real-world project usage. I can do all basic operations, but I'm probably doing them in an inefficient way, which ends up being slower than a GUI. But I don't know what I don't know, so I don't know what commands and keybindings I should use instead or what my options are.
How did you become good at using these editors? Just using them doesn't really work because by myself I'd never discover most of the features and keybindings.
>How did you become good at using these editors? Just using them doesn't really work because by myself I'd never discover most of the features and keybindings.
I used Vim (GVim to be specific) as part of my job in the semiconductor industry. Everybody in our team used it and we'd help each other. My recollection is not 100% certain, but I think nobody knew how awesome the user manuals were (start with vimtutor and then go through `:h usr_toc.txt` https://vimhelp.org/usr_toc.txt.html a few times).
I've also collected a list of Vim resources here: https://learnbyexample.github.io/curated_resources/vim.html
For Emacs, check out https://www.emacswiki.org/emacs/SiteMap
A document with notes on the software tool https://westurner.github.io/tools/#vim :
- [x] SpaceVim, SpaceMacs
- [ ] awesome-vim > Learning Vim https://github.com/akrawchyk/awesome-vim#learning-vim
- [x] https://learnxinyminutes.com/docs/vim/
- [ ]
:help help
:h help
:h usr_toc.txt
:help noautoindent
- [ ] https://en.wikibooks.org/wiki/Learning_the_vi_Editor/Vim/Mod... : :help Command-line-mode
:help Ex-mode
- [ ] my dotvim with regexable comments: https://github.com/westurner/dotvim/blob/master/vimrc

The VSCode GitLab extension now supports getting code completions from FauxPilot
GitLab team member here.
This work was produced by our AI Assist SEG (Single-Engineer Group)[0]. The engineer behind this feature recently uploaded a quick update about this work and other things they are working on to YouTube[1].
[0] - https://about.gitlab.com/handbook/engineering/incubation/ai-...
Why do you call this a group? Why don't you say "one of our engineers did this"? I read the linked article [1] and that seems to be the accurate situation: not a group of people including one engineer, not one engineer at a time on a rotation, but literally one person. In what way is that a group?
[1]: https://about.gitlab.com/company/team/structure/#single-engi...
Great question. I'm not entirely clear on the origin of the name and it would probably be hard for me to find the folks behind this decision on Friday evening/Saturday so I'll share my interpretation.
At GitLab, we have a clearly defined organizational structure. Within that structure, we have Product Groups[0], which are groups of people aligned around a specific category. The name "Single-Engineer Groups" reflects that this single engineer owns the category which they're focusing on.
I'll be sure to surface your question to the leader of our Incubation Engineering org. Thanks.
[0] - https://about.gitlab.com/company/team/structure/#product-gro...
jesus christ john, i cant wrap my head around the concept of single-person group!! haha
Gitlab is 600Kloc of JS and 1.8Mloc of Ruby. Of course a SEG would make sense to them.
OpenAPI, tests, and {Ruby on Rails w/ Script.aculo.us built-in back in the day, JS, and rewrite in {Rust, Go,}}? There's dokku-scheduler-kubernetes; and Gitea, a fork of Gogs, which is a github clone in Go; but Gitea doesn't do inbound email to Issues, Service Desk Issues, or (Drone,) CI with deploy to k8s revid.staging and production DNS domains w/ Ingress.
You Can Now Google the Balances of Ethereum Addresses
Looks like the data is from Etherscan.io .
Ethereum in BigQuery: https://console.cloud.google.com/marketplace/product/ethereu... and the ETL scripts: https://github.com/blockchain-etl/ethereum-etl
cmorqs-public/cmorq-eth-data in BigQuery: https://console.cloud.google.com/marketplace/product/cmorqs-...
blockchain-etl/awesome-bigquery-views has example SQL queries for querying the BigTable copy of the Ethereum blockchain: https://github.com/blockchain-etl/awesome-bigquery-views
Jupyter Notebooks showing how to query the Ethereum BigQuery Public Dataset:
/? Ethereum Kaggle: https://www.google.com/search?q=ethereum+kaggle
Blender: Wayland Support on Linux
I don't know much about the whole space, but from what I've read so far about Wayland, it starts to feel a lot like yet another "perpetual next-gen" tech, such as IPv6, fuel cells, the Semantic Web or XHTML2 (before that one was officially declared dead at least).
Like, according to the linked article, the standard is out there since 2008 - so the adoption period is already 14 years! And people are still haggling about basic stuff like color management, mouse cursors and window decorations?
What exactly is the envisioned timeframe for Wayland to replace X11 as the dominant windowing system?
If the standard has been promoted as the obvious next step for linux desktop environments for 14 years but still hasn't actually caught on, are we sure it really is the right direction to go?
I cannot say for sure that it is true, but I read (about a year ago) that about 90% of Linux users are already on Wayland.
I tend to believe it because I've seen news reports as of 3 years ago saying that Xserver is no longer being maintained.
The change from Xserver to an arrangement using Wayland is transparent to the Linux user (and Linux admin) and I heard that most of the major distros made the transition a few years ago. A corollary of that, if it is true, is that some of the many Linux users appearing on this site to attack Wayland are in fact (unbeknownst to them) using Wayland.
Specifically, the "display server" (terminology? I mean the software that talks to the graphics driver and the graphics hardware) on most distros these days uses the Wayland protocol to talk to apps. An app that has not been modified to use the Wayland protocol to talk to the display server is automatically given a "connection" (terminology?) to XWayland, whose job is to translate the X protocol to the Wayland protocol.
I think `printenv XDG_SESSION_TYPE` will tell you whether you are running Wayland or the deprecated Xserver.
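The same check, wrapped as a small Python helper. (The WAYLAND_DISPLAY/DISPLAY fallbacks are my own assumption; logind normally sets XDG_SESSION_TYPE directly.)

```python
import os

def session_type(env=os.environ) -> str:
    """Best-effort guess at the display session type. Assumes logind sets
    XDG_SESSION_TYPE; falls back to the Wayland/X11 socket env vars."""
    s = env.get("XDG_SESSION_TYPE", "")
    if s:
        return s
    if env.get("WAYLAND_DISPLAY"):
        return "wayland"
    if env.get("DISPLAY"):
        return "x11"
    return "unknown"

print(session_type({"XDG_SESSION_TYPE": "wayland"}))  # wayland
```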
The OP begins, "Recently we have been working on native Wayland support on Linux." What that means is that the blender app no longer needs XWayland: it can talk directly to the display server (using the Wayland protocol). There are certain advantages to that: one advantage is that you can configure all the UI elements on your screen to be scaled by an arbitrary factor without everything getting blurry.
I'm using the latest MacOS to make this comment, but for over a year until a few weeks ago, I was using Linux for all my computing needs, and I went out of my way to run only apps that used the Wayland protocol to talk to the display server (because of the aforementioned ability to scale the UI without blurriness). Chrome had to be started with certain flags for it to use the Wayland protocol. To induce Emacs to speak Wayland, I had to use a special branch of the git source repo, called feature/pgtk.
Do you think Apple will ever contribute XQuartz back to the X11 / X.org open source community?
XQuartz has been a part of X.Org since forever
https://gitlab.freedesktop.org/xorg/xserver/-/tree/master/hw...
Xpra: Multi-platform screen and application forwarding system for x11
I do almost all my computing through Xpra these days. Being able to combine windows seamlessly from multiple VMs is a much more usable way to segregate workloads, and Xpra doesn't suffer from the same security (and increasingly compatibility) issues of X forwarding.
Same here. I wish I could do the same with a Windows VM and a remote MacOS host.
Have you heard of any such solutions?
There used to be a hack for getting integrated windows using Remote Desktop, but I can't remember the name of it anymore and Google isn't finding much :( Hopefully someone remembers (and it's still maintained).
Edit: Found it: https://github.com/rdesktop/seamlessrdp. Seems like there are probably more modern solutions now though.
IIRC WinSwitch + xpra could do seamless windows: http://winswitch.org/documentation/faq.html#protocols http://winswitch.org/about/ :
> Window Switch is a tool which allows you to display running applications on other computers than the one you start them on. Once an application has been started via a winswitch server, it can be displayed on other machines running winswitch client, as required.
> You no longer need to save and send documents to move them around, simply move the view of the application to the machine where you need to access it.
Wikipedia/Neatx links to https://en.wikipedia.org/wiki/Remmina (C) :
> It supports the Remote Desktop Protocol (RDP), VNC, NX, XDMCP, SPICE, X2Go and SSH protocols and uses FreeRDP as foundation.
But no xpra, for which Neatx has old python 2 scripts.
Retinoid restores eye-specific brain responses in mice with retinal degeneration
The retina is a very complex structure. I'm skeptical that, if major damage happens to the structure - as in wet macular degeneration or retinal diseases like PIC - it can ever function again.
Given that every body is capable of creating two working retinas, why would you think it can't re-create at least one working retina?
It clearly doesn’t know that the need exists. We have to find the right set of commands and hack the body to re-create one. But the procedure exists even if it involves retina removal and 5+ years of new retina growth (from infant to child).
Null hypothesis: A Nanotransfection (vasculogenic stromal reprogramming) intervention would not result in significant retinal or corneal regrowth
... With or without: a nerve growth factor, e.g. fluoxetine to induce plasticity in the adult visual cortex, combination therapy with cultured conjunctival IPS, laser mechanical scar tissue evisceration and removal, local anesthesia, robotic support, Retinoid
Nanotransfection: https://en.wikipedia.org/wiki/Tissue_nanotransfection :
> Most reprogramming methods have a heavy reliance on viral transfection. [22][23] TNT allows for implementation of a non-viral approach which is able to overcome issues of capsid size, increase safety, and increase deterministic reprogramming
How to turn waste polyethylene into something useful
From "Argonne invents reusable sponge that soaks up oil, could revolutionize oil spill and diesel cleanup" (2017) https://www.anl.gov/article/argonne-invents-reusable-sponge-... :
> [...] The scientists started out with common polyurethane foam, used in everything from furniture cushions to home insulation. This foam has lots of nooks and crannies, like an English muffin, which could provide ample surface area to grab oil; but they needed to give the foam a new surface chemistry in order to firmly attach the oil-loving molecules.
> Previously, Darling and fellow Argonne chemist Jeff Elam had developed a technique called sequential infiltration synthesis, or SIS, which can be used to infuse hard metal oxide atoms within complicated nanostructures.
> After some trial and error, they found a way to adapt the technique to grow an extremely thin layer of metal oxide “primer” near the foam’s interior surfaces. This serves as the perfect glue for attaching the oil-loving molecules, which are deposited in a second step; they hold onto the metal oxide layer with one end and reach out to grab oil molecules with the other.
> The result is Oleo Sponge, a block of foam that easily adsorbs oil from the water. The material, which looks a bit like an outdoor seat cushion, can be wrung out to be reused—and the oil itself recovered.
> At tests at a giant seawater tank in New Jersey called Ohmsett, the National Oil Spill Response Research & Renewable Energy Test Facility, the Oleo Sponge successfully collected diesel and crude oil from both below and on the water surface.
From "Reusable Sponge for Mitigating Oil Spills" https://www.energy.gov/science/bes/articles/reusable-sponge-... :
> A new foam called the Oleo Sponge was invented that not only easily adsorbs oil from water but is also reusable and can pull dispersed oil from an entire water column, not just the surface. Many materials can grab oil, but there hasn't been a way, until now, to permanently bind them into a useful structure. The scientists developed a technique to create a thin layer of metal oxide "primer" within the interior surfaces of polyurethane foam. Scientists then bound oil-loving molecules to the primer. The resulting block of foam can be wrung out to be used, and the oil itself recovered.
EU Passes Law to Switch iPhone to USB-C by End of 2024
Self-regulation only works if government regulation is a serious threat in case self-regulation fails. In this case, self-regulation failed, so government regulation stepped in to force industry to do what is right.
All the handwringing about stifling innovation is on its face ridiculous as mandates to use micro-usb didn't stop android phones from adopting the new and better standard as soon as it was viable.
> All the handwringing about stifling innovation is on its face ridiculous as mandates to use micro-usb didn't stop android phones from adopting the new and better standard as soon as it was viable.
I'm not sure I understand your logic here. Android phones were able to quickly move to micro USB because they weren't restricted from doing so by a government regulation.
If there was a government mandate requiring phones to have whatever connector came before micro USB, wouldn't that have prevented the Android phones from changing connectors/choosing the new innovation?
Or was there a location where micro USB was required by law before, and since I don't live there, I wasn't affected by it?
Edit: Strike that last sentence, since I see in other responses that there was, indeed, a location where micro USB was mandated. So my new question is: How does a company change to a new/better connector, if it's required by law to use an old connector?
Edit edit: Looks like other responses show there was no mandate, that's just something that people on HN assume. It was a recommendation, not a law, like it is now.
> If there was a government mandate requiring phones to have whatever connector came before micro USB, wouldn't that have prevented the Android phones from changing connectors/choosing the new innovation?
Dunno, we have a test case tho: how'd the Micro-B to USB-C transition go in EU markets?
Vulhub: Pre-Built Vulnerable Environments Based on Docker-Compose
Most of these compose files are pretty outdated AND they depend on non-standard builds of containers for each respective application.
What else would you expect for setups intentionally trying to preserve past versions of software?
Reproducibility in [infosec] software research requires DevSecOps, which requires: explicit data and code dependency specifications; and/or trusting hopefully-immutable software package archives; and/or securely storing and transmitting cryptographically-signed archival (container) images. Then upgrade all of the versions and run the integration tests with a git post-receive hook, or a webhook to an external service dependency not encapsulated within the {Dockerfile; environment.yml/requirements.txt/postBuild; REES} dependency constraint model.
With pip-tools, you update the python software versions in a requirements.txt from a requirements.in meta-dependency-spec-file: https://github.com/jazzband/pip-tools#updating-requirements
$ pip-compile --upgrade requirements.in
$ cat requirements.txt
Poetry has an "Expanded dependency specification syntax" but FWIU there's not a way to specify unsigned or signed cryptographic hashes, which e.g. Pipfile.lock supports: hashes for every variant of those versions of packages on {PyPI, and third-party package repos with TUF keys, too}.

From https://pipenv.pypa.io/en/latest/basics/#pipenv-lock :
$ pipenv lock
> pipenv lock is used to create a Pipfile.lock, which declares all dependencies (and sub-dependencies) of your project, their latest available versions, and the current hashes for the downloaded files. This ensures repeatable, and most importantly deterministic, builds.

"Reproducible builds" of a DVWA (Deliberately Vulnerable Web Application) are a funny thing: https://en.wikipedia.org/wiki/Reproducible_builds
Replication crisis https://en.wikipedia.org/wiki/Replication_crisis :
> The replication crisis (also called the replicability crisis and the reproducibility crisis) is an ongoing methodological crisis in which it has been found that the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method,[2] such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.
Just rebuilding or re-pulling a container image does not upgrade the versions of software installed within the container. See also: SBOM, CycloneDx, #LinkedReproducibility, #JupyterREES.
`podman-pull` https://docs.podman.io/en/latest/markdown/podman-pull.1.html... ~:
podman image pull busybox
podman pull busybox
docker pull busybox
podman pull busybox centos fedora ubuntu debian
"How to rebuild and update a container without downtime with docker-compose?"
https://stackoverflow.com/questions/42529211/how-to-rebuild-... : docker-compose up -d --no-deps --build #[servicename]
"Statistics-Based OWASP Top 10 2021 Proposal"
https://dzone.com/articles/statistics-based-owasp-top-10-202...

awesome-vulnerable-apps > OWASP Top 10 https://github.com/vavkamil/awesome-vulnerable-apps#owasp-to... :
> OWASP Juice Shop: Probably the most modern and sophisticated insecure web application
And there's a book, an Open Source Official Companion Guide book titled "Pwning Juice Shop": https://github.com/juice-shop/juice-shop#official-companion-...
If the versions installed in the book are outdated, you too can bump the version strings in the dependency specs in the git repo and send a PR (Pull Request) which also updates the Screenshots and Menu > Sequences and Keyboard Shortcuts in the book & docs; and then manually test that everything works with the updated "deps" (dependencies).
If it's an executablebooks/ project - a computational notebook (possibly in a Literate Computing style) - you can "Restart & Run All" from the notebook UI button or a script, then test that all automated test assertions pass, then "diff" (visually compare), and then manually read through the textual descriptions of commands to enter. (People who buy a book presumably have a reasonable expectation that if they copy the commands from the book to a script by hand to learn them, the commands as written should run; it should work like the day you bought it, for a projected term of many free word-of-mouth years.)
From https://github.com/juice-shop/juice-shop#docker-container :
docker pull bkimminich/juice-shop
docker run --rm -p 3000:3000 bkimminich/juice-shop
With podman [desktop], podman pull bkimminich/juice-shop
podman run --rm -p 3000:3000 --name juiceshop0 bkimminich/juice-shop
I have read this multiple times and can still not figure out what you are trying to say and how it is related to OPs comment...
> Most of these compose files are pretty outdated AND they depend on non-standard builds of containers for each respective application.
>> What else would you expect for setups intentionally trying to preserve past versions of software?
So, I wrote about reproducibility in software; and Software Supply Chain Security. Specifically, how to do containers and keep the software versions up to date.
Are you challenging the topicality of my comment on HN - containing original research - to be facetious?
Bash 5.2
@jhamby on Twitter is currently refactoring Bash to C++, and it's really interesting to read anecdotes about it and read about the progress. It's a really interesting codebase.
c2rust https://github.com/immunant/c2rust :
> C2Rust helps you migrate C99-compliant code to Rust. The translator (or transpiler), c2rust transpile, produces unsafe Rust code that closely mirrors the input C code. The primary goal of the translator is to preserve functionality; test suites should continue to pass after translation.
crust https://github.com/NishanthSpShetty/crust :
> C/C++ to Rust transpiler
"CRustS: A Transpiler from Unsafe C to Safer Rust" (2022) https://scholar.google.com/scholar?q=related:WIDYx_PvgNoJ:sc...
rust-bindgen https://github.com/rust-lang/rust-bindgen/ :
> Automatically generates Rust FFI bindings to C (and some C++) libraries
nushell/nushell looks like it has cool features and is written in rust.
awesome-rust > Applications > System Tools https://github.com/rust-unofficial/awesome-rust#system-tools
awesome-rust > Libraries > Command-line https://github.com/rust-unofficial/awesome-rust#command-line
rust-shell-script/rust_cmd_lib https://github.com/rust-shell-script/rust_cmd_lib :
> Common rust command-line macros and utilities, to write shell-script like tasks in a clean, natural and rusty way
hey thanks for this I didn't know it existed. I'm still kind of a rust noob and working my way through Rust in action and various examples.
Mozilla reaffirms that Firefox will continue to support current content blockers
- [ ] ENH,SEC,UBY: indicate that DNS is locally overridden by entries in /etc/hosts
- [ ] ENH,SEC,UBY: Browser UI: indicate that a domain does not have DNSSEC record signatures
- [ ] ENH,SEC,UBY: Browser UI: indicate whether DNS is over classic UDP or DoH, DoT, DoQ (DNS-over-QUIC)
- [ ] ENH,SEC,UBY: browser: indicate that a page is modified by extensions; show a "tamper bit"
- [ ] ENH,SEC: Devtools?: indicate whether there are (matching) HTTP SRI Subresource Integrity signatures for any or some of the page assets
- [ ] ENH,SEC,UBY: a "DNS Domain(s) Information" modal_tab/panel like the Certificate Information panel
Manifest V3, webRequest, and ad blockers
Google made a faux-pas with this one...
Their stated goal is to improve the performance of the web request blocking API.
Their (unstated but suspected) goal is to neuter adblocking chrome extensions.
They should have made extensions get auto-disabled if they 'slow down web page loading too much'. Set the threshold for that to be say more than a 20% increase in page load time, but make the threshold decrease with time - eg. 10% in 2023, 5% in 2024, 2% in 2025, to finally 1% in 2026 etc.
Eventually, that would achieve both of Google's goals - since adblockers would be forced to shorten their lists of regexes, neutering them, and performance would increase at the same time. Extension developers would have a hard time complaining, because critics will always argue they just have bloated, inefficient code.
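The proposed declining budget can be sketched as a simple lookup. The year/threshold pairs come straight from the comment above; holding at a 1% floor after 2026 is an assumption:

```python
# Declining page-load-overhead budget for extensions, as proposed above.
# Values from the comment: 20% now, then 10%/5%/2%/1% in 2023-2026.
# Holding at the 1% floor beyond 2026 is an assumption.
SCHEDULE = {2023: 0.10, 2024: 0.05, 2025: 0.02, 2026: 0.01}

def allowed_overhead(year: int) -> float:
    """Maximum allowed fractional increase in page load time."""
    if year < 2023:
        return 0.20
    return SCHEDULE.get(year, 0.01)

print(allowed_overhead(2024))  # 0.05
```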
eWASM opcodes each have a real cost. It's possible to compile {JS, TypeScript, C, Python} to WASM.
What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?
- [ ] UBY: Browsers: Strobe the tab tab or extension button when it's beyond (configurable) resource usage thresholds
- [ ] UBY: Browsers: Vary the {color, size, fill} of the tab tabs according to their relative resource utilization
- [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU, RAM, Disk, [GPU, TPU, QPU] (Linux: cgroups,)
> Their (unstated but suspected) goal is to neuter adblocking chrome extensions.
Except they didn't. There are already 3-4 adblockers that work perfectly as far as blocking ads is concerned. They do lose more advanced features, but 99.9% of people with adblockers installed never ever touch those features.
To claim that this neuters adblocking is truly ridiculous. It also ignores that Safari has the exact same restrictions yet no one complains that Apple wanted to neuter ad blocking.
> 99.9% of people with adblockers installed never ever touch those [advanced features]
Custom matching algorithms, and the ability to fine-tune or expand them to meet new content-blocking challenges, are actually a kind of advanced feature used by all those users without their ever realizing it, since their content blocker works seamlessly on their favorite sites without the need for intervention.
The declarativeNetRequest (DNR) API has been quite improved since it was first announced, and it's great, but since it's the only one we can use now, it's no longer possible to innovate by coming up with improved matching algorithms for network requests.
If the DNR had been designed 8 years ago according to the requirements of content blockers back then, it would be awfully equipped to deal with the challenges thrown at content blockers nowadays, so it's difficult to think the current one will be sufficient in the coming years.
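A toy illustration of the expressiveness gap between a fixed declarative rule list and arbitrary imperative matching code. This is only a sketch of the idea, not the real declarativeNetRequest rule syntax:

```python
# Toy contrast: a fixed declarative blocklist vs. arbitrary imperative
# matching logic. Illustrative only; real DNR rules have their own
# urlFilter syntax with anchors and wildcards.
declarative_rules = ["ads.example.com", "/tracker.js"]

def declarative_block(url: str) -> bool:
    """A static pattern list: the engine can precompile it, but it can
    only express what the rule format allows."""
    return any(pattern in url for pattern in declarative_rules)

def imperative_block(url: str) -> bool:
    """Arbitrary code can express conditions no static rule list can,
    e.g. 'block any URL with more than 3 utm_ tracking parameters'."""
    query = url.split("?", 1)[-1] if "?" in url else ""
    params = [p for p in query.split("&") if p.startswith("utm_")]
    return declarative_block(url) or len(params) > 3
```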
Nothing stops the API evolving... Anyone can make a build of Chromium with a better API, test it out with their own extension, and if it works better, then they can send a PR to get that API into Chrome.
Obviously, extending an API is a long term commitment, so I can understand the Chrome team wanting to only do it if there is a decent benefit - "it makes my one extension with 10 users work slightly better" probably doesn't cut it.
> Nothing stops the API evolving... Anyone can make a build of Chromium with a better API, test it out with their own extension, and if it works better, then they can send a PR to get that API into Chrome.
This is not at all how API proposals are handled. There are a lot more (time, financial, logical) barriers to a change like this. See the role of the W3C in this: https://www.eff.org/deeplinks/2021/11/manifest-v3-open-web-p...
It is reasonable to expect BPF or a BPF-like filter. https://en.wikipedia.org/wiki/Berkeley_Packet_Filter
bromite/build/patches/Bromite-AdBlockUpdaterService.patch: https://github.com/bromite/bromite/blob/master/build/patches...
bromite/build/patches/disable-AdsBlockedInfoBar.patch: https://github.com/bromite/bromite/blob/master/build/patches...
bromite/build/patches/Bromite-auto-updater.patch: https://github.com/bromite/bromite/blob/master/build/patches...
- [ ] ENH,SEC,UPD: Bromite,Chromium: is there a url syntax like /path.tar.gz#sha256=cba312 that chromium http filter downloader could use to check e.g. sha256 and maybe even GPG ASC signatures with? (See also: TUF, Sigstore, W3C Blockcerts+DIDs)
Bromite/build/patches/Re-introduce-*.patch: [...]
1Hz CPU made in Minecraft running Minecraft at 0.1fps [video]
Cool! Now build it in Turing Complete and export it to an FPGA
From KiCad https://en.wikipedia.org/wiki/KiCad :
> KiCad is a free software suite for electronic design automation (EDA). It facilitates the design and simulation of electronic hardware. It features an integrated environment for schematic capture, PCB layout, manufacturing file viewing, SPICE simulation, and engineering calculation. Tools exist within the package to create bill of materials, artwork, Gerber files, and 3D models of the PCB and its components.
https://www.kicad.org/discover/spice/ :
> KiCad integrates the open source spice simulator ngspice to provide simulation capability in graphical form through integration with the Schematic Editor.
PySpice > Examples: https://pyspice.fabrice-salvaire.fr/releases/v1.6/examples/i... :
+ Diode, Rectifier (AC to DC), Filter, Capacitor, Power Supply, Transformer, [Physical Relay Switch (Open/Closed) -> Vacuum Tube Transistor -> Solid-state [MOSFET,]] Transistor,
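As a worked example of the kind of quick calculation behind those filter examples, the cutoff frequency of a first-order RC low-pass filter, with illustrative component values:

```python
import math

# Cutoff frequency of a first-order RC low-pass filter:
#   f_c = 1 / (2 * pi * R * C)
# Component values below are illustrative.
R = 1_000   # ohms
C = 1e-6    # farads

f_c = 1 / (2 * math.pi * R * C)
print(f"{f_c:.1f} Hz")  # about 159.2 Hz
```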
From the Ngspice User's Manual https://ngspice.sourceforge.io/docs/ngspice-37-manual.pdf :
> Ngspice is a general-purpose circuit simulation program for nonlinear and linear analyses.
> Circuits may contain resistors, capacitors, inductors, mutual inductors, independent or dependent voltage and current sources, loss-less and lossy transmission lines, switches, uniform distributed RC lines, and the five most common semiconductor devices: diodes, BJTs, JFETs, MESFETs, and MOSFETs.
> [...] Ngspice has built-in models for the semiconductor devices, and the user need specify only the pertinent model parameter values. [...] New devices can be added to ngspice by several means: behavioral B-, E- or G-sources, the XSPICE code-model interface for C-like device coding, and the ADMS interface based on Verilog-A and XML.
Turing completeness: https://en.wikipedia.org/wiki/Turing_completeness :
> In colloquial usage, the terms "Turing-complete" and "Turing-equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language. In real life this leads to the practical concepts of computing virtualization and emulation. [citation needed]
> Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (the "tape" corresponding to their memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, a universal computer is defined as a device with a Turing-complete instruction set, infinite memory, and infinite available time.
Church–Turing thesis: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis ... Lamda calculus (Church): https://en.wikipedia.org/wiki/Lambda_calculus
HDL: Hardware Description Language > Examples: https://en.wikipedia.org/wiki/Hardware_description_language#...
HVL: Hardware Verification Language: https://en.wikipedia.org/wiki/Hardware_verification_language
awesome-electronics > Free EDA Packages: https://github.com/kitspace/awesome-electronics#free-eda-pac...
https://github.com/TM90/awesome-hwd-tools
EDA: Electronic Design Automation: https://en.wikipedia.org/wiki/Electronic_design_automation
More notes for #Q12:
Quantum complexity theory https://en.wikipedia.org/wiki/Quantum_complexity_theory#Back... :
> A complexity class is a collection of computational problems that can be solved by a computational model under certain resource constraints. For instance, the complexity class P is defined as the set of problems solvable by a Turing machine in polynomial time. Similarly, quantum complexity classes may be defined using quantum models of computation, such as the quantum circuit model or the equivalent quantum Turing machine. One of the main aims of quantum complexity theory is to find out how these classes relate to classical complexity classes such as P, NP, BPP, and PSPACE.
> One of the reasons quantum complexity theory is studied are the implications of quantum computing for the modern Church-Turing thesis. In short the modern Church-Turing thesis states that any computational model can be simulated in polynomial time with a probabilistic Turing machine. [1][2] However, questions around the Church-Turing thesis arise in the context of quantum computing. It is unclear whether the Church-Turing thesis holds for the quantum computation model. There is much evidence that the thesis does not hold. It may not be possible for a probabilistic Turing machine to simulate quantum computation models in polynomial time. [1]
> Both quantum computational complexity of functions and classical computational complexity of functions are often expressed with asymptotic notation. Some common forms of asymptotic notation for functions are O(T(n)), \Omega(T(n)), and \Theta(T(n)).
> O(T(n)) expresses that something is bounded above by cT(n) where c is a constant such that c>0 and T(n) is a function of n, \Omega(T(n)) expresses that something is bounded below by cT(n) where c is a constant such that c>0 and T(n) is a function of n, and \Theta(T(n)) expresses both O(T(n)) and \Omega(T(n)). [3] These notations also have their own names: O(T(n)) is called Big O notation, \Omega(T(n)) is called Big Omega notation, and \Theta(T(n)) is called Big Theta notation.
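Those definitions can be checked empirically; a minimal sketch with a made-up polynomial and witness constants:

```python
# f(n) = 3n^2 + 10n is Theta(n^2): bounded below by c1*n^2 and above by
# c2*n^2 for all n >= n0. (Polynomial and constants are illustrative.)
def f(n):
    return 3 * n * n + 10 * n

c1, c2, n0 = 3, 4, 10  # one valid choice of witness constants among many
ok = all(c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 1000))
print(ok)  # True
```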
Quantum complexity theory > Simulation of quantum circuits https://en.wikipedia.org/wiki/Quantum_complexity_theory#Simu... :
> There is no known way to efficiently simulate a quantum computational model with a classical computer. This means that a classical computer cannot simulate a quantum computational model in polynomial time [P]. However, a quantum circuit of S(n) qubits with T(n) quantum gates can be simulated by a classical circuit with O(2^{S(n)}T(n)^{3}) classical gates. [3] This number of classical gates is obtained by determining how many bit operations are necessary to simulate the quantum circuit. In order to do this, first the amplitudes associated with the S(n) qubits must be accounted for. Each of the states of the S(n) qubits can be described by a two-dimensional complex vector, or a state vector. These state vectors can also be described as a linear combination of their component vectors with coefficients called amplitudes. These amplitudes are complex numbers which are normalized to one, meaning the sum of the squares of the absolute values of the amplitudes must be one. [3] The entries of the state vector are these amplitudes.
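A pure-Python sketch of that state-vector bookkeeping (one qubit, Hadamard gate; S(n) qubits would need 2**S(n) amplitudes, which is where the exponential classical cost comes from):

```python
import math

# One qubit = a normalized 2-vector of complex amplitudes.
ket0 = [1 + 0j, 0 + 0j]

h = 1 / math.sqrt(2)
H = [[h, h],           # Hadamard gate as a 2x2 unitary matrix
     [h, -h]]

def apply(gate, state):
    # Matrix-vector product: how a classical simulator applies one gate.
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

plus = apply(H, ket0)                  # (|0> + |1>) / sqrt(2)
norm = sum(abs(a) ** 2 for a in plus)  # squared amplitudes sum to 1
print(round(norm, 10))                 # 1.0
```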
Quantum Turing machine: https://en.wikipedia.org/wiki/Quantum_Turing_machine
Quantum circuit: https://en.wikipedia.org/wiki/Quantum_circuit
Church-Turing-Deutsch principle: https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93...
Computational complexity > Quantum computing, Distributed computing: https://en.wikipedia.org/wiki/Computational_complexity#Quant...
Hash collisions and exploitations – Instant MD5 collision
MD5 > History, Security > Collision vulnerabilities: https://en.wikipedia.org/wiki/MD5#Collision_vulnerabilities
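For reference, computing an MD5 digest in Python; the empty-string digest is a well-known constant:

```python
import hashlib

# MD5 digests are trivial to compute; the vulnerability is that distinct
# inputs with equal digests (collisions) can now be generated in seconds.
digest = hashlib.md5(b"").hexdigest()
print(digest)  # d41d8cd98f00b204e9800998ecf8427e

# MD5 still "works" as a non-adversarial checksum, but it offers no
# collision resistance, so it must not be used for signatures or
# integrity checks against an attacker.
```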
AI Seamless Texture Generator Built-In to Blender
Is there a way to run things like this with an AMD graphics card? Every Stable Diffusion project I've seen seems to be CUDA focused.
That's because Stable Diffusion is built with PyTorch, which isn't optimized for anything but CUDA. Even the CPU is a second-class citizen there, let alone AMD or other GPUs.
Not saying PyTorch doesn't run on anything else; you can, but those paths will lag and some will be hackish.
Looks like Nvidia is on its way to being the next Intel.
From the Arch wiki, which has a list of GPU runtimes (but not TPU or QPU runtimes) and Arch package names (OpenCL, SYCL, ROCm, HIP): https://wiki.archlinux.org/title/GPGPU :
> GPGPU stands for General-purpose computing on graphics processing units.
- "PyTorch OpenCL Support" https://github.com/pytorch/pytorch/issues/488
- Blender re: removal of OpenCL support in 2021 :
> The combination of the limited Cycles split kernel implementation, driver bugs, and stalled OpenCL standard has made maintenance too difficult. We can only make the kinds of bigger changes we are working on now by starting from a clean slate. We are working with AMD and Intel to get the new kernels working on their GPUs, possibly using different APIs (such as SYCL, HIP, Metal, …).
- https://gitlab.com/illwieckz/i-love-compute
- https://github.com/vosen/ZLUDA
- https://github.com/RadeonOpenCompute/clang-ocl
AMD ROCm: https://en.wikipedia.org/wiki/ROCm
AMD ROCm supports PyTorch, TensorFlow, MIOpen, and rocBLAS on NVIDIA and AMD GPUs: https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...
RadeonOpenCompute/ROCm_Documentation: https://github.com/RadeonOpenCompute/ROCm_Documentation
ROCm-Developer-Tools/HIPIFY https://github.com/ROCm-Developer-Tools/HIPIFY :
> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced.
ROCmSoftwarePlatform/gpufort: https://github.com/ROCmSoftwarePlatform/gpufort :
> GPUFORT: S2S translation tool for CUDA Fortran and Fortran+X in the spirit of hipify
ROCm-Developer-Tools/HIP https://github.com/ROCm-Developer-Tools/HIP:
> HIP is a C++ Runtime API and Kernel Language that allows developers to create portable applications for AMD and NVIDIA GPUs from single source code. [...] Key features include:
> - HIP is very thin and has little or no performance impact over coding directly in CUDA mode.
> - HIP allows coding in a single-source C++ programming language including features such as templates, C++11 lambdas, classes, namespaces, and more.
> - HIP allows developers to use the "best" development environment and tools on each target platform.
> - The [HIPIFY] tools automatically convert source from CUDA to HIP.
> - Developers can specialize for the platform (CUDA or AMD) to tune for performance or handle tricky cases.
Faraday and Babbage: Semiconductors and Computing in 1833
Timeline of Quantum Computing (1960-) https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_...
1960-1833 = 127 years later
From https://news.ycombinator.com/item?id=31996743 https://westurner.github.io/hnlog/#comment-31996743 :
> Qubit#Physical_implementations: https://en.wikipedia.org/wiki/Qubit#Physical_implementations
> - note the "electrons" row of the table
>> See also: "Quantum logic gate" https://en.wikipedia.org/wiki/Quantum_logic_gate
macOS Subsystem for Linux
You can just use Vagrant for this kind of setup. The benefit is that you can provision it to run your application, commit the configuration to git, and share it with others. Pretty sure it has a qemu adapter too if you so choose.
brew install vagrant packer terraform
Podman Desktop is Apache 2.0 open source; supports Win, Mac, Lin; supports Docker Desktop plugins; and has plugins for Podman, Docker, Lima, and CRC/OpenShift Local (k8s) https://github.com/containers/podman-desktop : brew install podman-desktop
/? vagrant Kubernetes MacOS https://www.google.com/search?q=vagrant+Kubernetes+macos
You get all that put together one time on one box and realize you could have scripted the whole thing, but you need bash 4+ or Python 3+ so it all depends on `brew` first: https://github.com/geerlingguy/ansible-for-kubernetes/blob/m...
The Ansible homebrew module can install and upgrade brew and install and upgrade packages with brew: https://docs.ansible.com/ansible/latest/collections/communit...
And then write tests for the development environment too, or only for container specs in production: https://github.com/geerlingguy/ansible-for-kubernetes/tree/m... :
brew install kind docker
type -a python3; python3 -m site
python3 -m pip install molecule ansible-test yamllint
# molecule converge; ssh -- hostname
molecule test
# molecule destroy
westurner/dotfiles/scripts/upgrade_mac.sh: https://github.com/westurner/dotfiles/blob/develop/scripts/u...
Perhaps not that OT, but FWIW I just explained exactly this in a tweet:
> Mambaforge-pypy3 for Linux, OSX, Windows installs from conda-forge by default. (@condaforge builds packages with CI for you without having to install local xcode IIRC)
conda install -c conda-forge -y nodejs
mamba install -y nodejs
https://github.com/conda-forge/miniforge#mambaforge-pypy3
Global-Chem: A Free Dictionary from Common Chemical Names to Molecules
TIL about #cheminformatics and Linked Data (Semantic Web):
Cheminformatics: https://en.wikipedia.org/wiki/Cheminformatics
https://github.com/topics/cheminformatics :
- https://github.com/hsiaoyi0504/awesome-cheminformatics #Databases #See_also
- https://github.com/mcs07/PubChemPy :
> PubChemPy provides a way to interact with PubChem in Python. It allows chemical searches by name, substructure and similarity, chemical standardization, conversion between chemical file formats, depiction and retrieval of chemical properties.
http://chemicalsemantics.com/introduction-to-the-chemical-se... ... /? chemicalsemantics github ... https://github.com/semanticchemistry/semanticchemistry #See_also :
> The Chemical Information Ontology (CHEMINF) aims to establish a standard in representing chemical information. In particular, it aims to produce an ontology to represent chemical structure and to richly describe chemical properties, whether intrinsic or computed.
Looks like they developed the CHEMINF OWL ontology in Protege 4 (which is Open Source). /ontology/cheminf-core.owl: https://github.com/semanticchemistry/semanticchemistry/blob/...
- Does it -- the {sql/xml/json/graphql, RDFS Vocabulary, OWL Ontology} schema - have more (C)Classes and (P)Properties than other schema for modeling this domain?
- What namespaced strings and URIs does it specify for linking entities internally and externally?
LOV Linked Open Vocabularies maintains a database of many RDFS vocabularies and OWL ontologies (which are represented in RDF) https://lov.linkeddata.es/dataset/lov/terms?q=chemical
- "The Linking Open Data Cloud" (2007-) https://lod-cloud.net/
/? "cheminf" https://scholar.google.com/scholar?hl=en&as_sdt=0,43&qsp=1&q...
/? "cheminf" ontology https://scholar.google.com/scholar?hl=en&as_sdt=0,43&qsp=1&q...
"The ChEMBL database as linked open data" (2013) https://scholar.google.com/scholar?cites=1029919691588310633... ... citations:
"PubChem substance and compound databases" (2017) https://scholar.google.com/scholar?cites=7847099277060264658...
"5 Star Linked Data⬅" https://wrdrd.github.io/docs/consulting/knowledge-engineerin...
Thing > BioChemEntity https://schema.org/BioChemEntity
Thing > BioChemEntity > ChemicalSubstance https://schema.org/ChemicalSubstance
Thing > BioChemEntity > MolecularEntity https://schema.org/MolecularEntity
Thing > BioChemEntity > Protein https://schema.org/Protein
Thing > BioChemEntity > Gene https://schema.org/Gene
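For example, a MolecularEntity marked up in JSON-LD (values for caffeine, recalled from memory rather than taken from any source above; verify against PubChem before reuse):

```json
{
  "@context": "https://schema.org",
  "@type": "MolecularEntity",
  "name": "caffeine",
  "molecularFormula": "C8H10N4O2",
  "inChIKey": "RYYVLZVUVIJVGH-UHFFFAOYSA-N",
  "smiles": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"
}
```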
Some of the BioSchemas work [1] is proposed and pending inclusion in the Schema.org RDFS vocabulary [2].
[1] https://github.com/bioschemas
[2] https://github.com/schemaorg/schemaorg/issues/1028
Will newer Bioschema terms like BioSample, LabProtocol, SequenceAnnotation, and Phenotype be proposed for inclusion into the Schema.org vocabulary?: https://bioschemas.org/profiles/index#nav-draft
GCC's new fortification level: The gains and costs
Unfortunately not much in the way of performance measurements :(
It's on my TODO list. Watch out for Fedora change proposals for (hopefully) Fedora 38.
What if you develop inside of a Fedora 38 Docker container
FROM quay.io/fedora/fedora:38
RUN dnf install -y <bunch of tools>
Got any useful tips or flags to enable that're bleeding edge?
It's not a runtime flag, you'll have to patch redhat-rpm-config to use _FORTIFY_SOURCE=3 instead of 2 and then build packages with it.
Of course if you only want to build your application with _FORTIFY_SOURCE=3, you can do it right away even on Fedora 36. The Fedora change will be to build the distribution (or at least a subset of packages) with _FORTIFY_SOURCE=3.
Poor writing, not specialized concepts, drives difficulty with legal language
Which attributes of a person are necessary to answer a legal question?
Python:

    def has_legal_right(person: dict, right: str) -> bool:
        assert person
        assert right
        # ...
        raise NotImplementedError

    def have_equal_rights(persons: list) -> bool:
        raise NotImplementedError

JavaScript:

    function hasRight(person, right) {
        console.assert(person);
        console.assert(right);
        // return true || false;
    }

    function haveEqualRights(persons) {
        // return true || false;
    }
Maybe Lean Mathlib or Coq?
... Therefore you've failed at the Law of Reciprocity.
U.S. appeals court rejects big tech’s right to regulate online speech
Does this mean that newspaper Information Service Providers are now obligated to must-carry opinion pieces from political viewpoints that oppose those of the editors in the given district?
Does this mean that newspapers in Texas are now obligated to carry liberal opinion pieces? Equal time in Texas at last.
Must-carry provision of a contract for service: https://en.wikipedia.org/wiki/Must-carry
I imagine they'd have to accept arbitrary submissions first. If you just worked there, they'd probably be forced to put up anything you wrote.
no
How limited is the given district court of appeals case law precedent in regard to must-carry and Equal time rules for non-licensed spectrum Information Service providers? Do they now have common-carrier liability, too?
Equal time rules and American media history: https://en.wikipedia.org/wiki/Equal-time_rule
Who pays for all of this?
> "Give me my free water!"
From "FCC fairness doctrine" (1949-1987) https://en.wikipedia.org/wiki/FCC_fairness_doctrine :
> The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters. Stations were given wide latitude as to how to provide contrasting views: It could be done through news segments, public affairs shows, or editorials. The doctrine did not require equal time for opposing views but required that contrasting viewpoints be presented. The demise of this FCC rule has been cited as a contributing factor in the rising level of party polarization in the United States. [5][6]
Because the free flow of information is essential to democracy, it is in the Public Interest to support a market of new and established flourishing information service providers, not a market of exploited must-carry'ers subject to district-level criteria for ejection or free water for life. Shouldn't all publications, all information services be subject to any and all such Equal Time and Must-Carry interpretations?
Your newspaper may not regulate viewpoints: in its editorial section or otherwise. Must carry. Equal time.
The wall of one's business, perhaps.
You must keep that up there on your business's wall.
In this instance, is there a contract for future performance? How does Statute of Frauds apply to contracts worth over $500?
Transformers seem to mimic parts of the brain
Sincere question inspired purely by the headline: how many important ML architectures aren't in some way based on some proposed model of how something works in the brain?
(Not intended as a flippant remark, I know Quanta Magazine articles can generally safely be assumed to be quality content, and that this is about how a language model unexpectedly seems to have relevance for understanding spatial awareness)
I think the response to this has two prongs:
- Some families of ML techniques (SVMs, random forests, gaussian processes) got their inspiration elsewhere and never claimed to be really related to how brains do stuff.
- Among NNs, even if an idea takes loose inspiration from neuroscience (e.g. the visual system does have a bunch of layers, and the first ones really are pulling out 'simple' features like an edge near an area), I think it's relatively uncommon to go back and compare specifically what's happening in the brain with a given ML architecture. And a lot of the inspiration isn't about human-specific cognitive abilities (like language), but is really a generic description of neurons which is equally true of much less intelligent animals.
> I think it's relatively uncommon to go back and compare specifically what's happening in the brain with a given ML architecture.
Less common but not unheard of. Here's one example, primarily focused on vision: http://www.brain-score.org/
DeepMind has also published works comparing RL architectures like IQN to dopaminergic neurons.
The challenge is that it's very cross-disciplinary, and most DL labs don't have a reason to explore the neuroscience side while most neuro labs don't have the expertise in DL.
Is it necessary to simulate the quantum chemistry of a biological neural network in order to functionally approximate a BNN with an ANN?
A biological systems and fields model for cognition:
Spreading activation in a dynamic graph with cycles and magnitudes ("activation potentials") that change as neurally-regulated, heart-generated electric potentials reverberate fluidically along intersecting paths; and a partially extra-cerebral induced field which nonlinearly affects the original signal source through local feedback: representational shift.
Representational shift: "Neurons Are Fickle. Electric Fields Are More Reliable for Information" (2022) https://neurosciencenews.com/electric-field-neuroscience-201...
Spreading activation: https://en.wikipedia.org/wiki/Spreading_activation
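A toy sketch of spreading activation over a small weighted graph (node names, weights, and decay constant are made up):

```python
# Each step, every node passes a decayed fraction of its activation
# along its outgoing edges; activation accumulates at the targets.
graph = {             # node -> {neighbor: edge_weight}
    "dog":   {"cat": 0.5, "bone": 0.8},
    "cat":   {"mouse": 0.6},
    "bone":  {},
    "mouse": {},
}
activation = {n: 0.0 for n in graph}
activation["dog"] = 1.0   # seed node
decay = 0.5

for _ in range(2):        # two propagation steps
    nxt = dict(activation)
    for node, out in graph.items():
        for nbr, w in out.items():
            nxt[nbr] += activation[node] * w * decay
    activation = nxt

# Strongly-connected concepts end up more activated than distant ones.
print(activation["bone"] > activation["mouse"])  # True
```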
Re: 11D (11-Dimensional) biological network hyperparameters, ripples in (hippocampal, prefrontal,) association networks: https://news.ycombinator.com/item?id=18218504
M-theory (string theory) is also 11D, but IIUC they're not the same dimensions.
Diffusion suggests fluids, which in physics and chaos theory suggests Bernoulli's fluid models (and other non-differentiable compact descriptions like Navier-Stokes), which are part of SQG Superfluid Quantum Gravity postulates.
Can e.g. ONNX or RDF with or without bnodes represent a complete connectome image/map?
Connectome: https://en.wikipedia.org/wiki/Connectome
Wave Field recordings are probably the most complete known descriptions of the brain and its nonlinear fields?
How such fields relate to one or more Quantum Wave functions might entail near-necessity of QFT: Quantum Fourier Transform.
When you replace the Self-attention Network part of a Transformer algorithm with classical FFT Fast Fourier Transform: ... From https://medium.com/syncedreview/google-replaces-bert-self-at... :
> > New research from a Google team proposes replacing the self-attention sublayers with simple linear transformations that “mix” input tokens to significantly speed up the transformer encoder with limited accuracy cost. Even more surprisingly, the team discovers that replacing the self-attention sublayer with a standard, unparameterized Fourier Transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs."
> > Would Transformers (with self-attention) make what things better? Maybe QFT? There are quantum chemical interactions in the brain. Are they necessary or relevant for what fidelity of emulation of a non-discrete brain?
> Quantum Fourier Transform: https://en.wikipedia.org/wiki/Quantum_Fourier_transform
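The Fourier token mixing described in that quote can be sketched in pure Python (scalar "token features" and a 1D DFT along the sequence only; IIRC FNet applies the transform over both sequence and hidden dimensions and keeps the real part, so this is just the mixing idea with made-up numbers):

```python
import cmath

def dft(xs):
    # Naive discrete Fourier transform: O(n^2), unparameterized.
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(xs))
            for k in range(n)]

tokens = [1.0, 2.0, 3.0, 4.0]            # a "sequence" of 4 token features
mixed = [z.real for z in dft(tokens)]    # every output mixes every input
print([round(m, 6) for m in mixed])      # [10.0, -2.0, -2.0, -2.0]
```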
The QFT acronym annoyingly reminds me of Quantum Field Theory rather than the Quantum Fourier Transform ...
Yeah. And resolve QFT + { QG || SQG }
A more useful query, to Google dork:
/? QFT "field theory" "Superfluid" "quantum gravity" https://www.google.com/search?q=QFT+%22field+theory%22+%22Su... https://scholar.google.com/scholar?q=QFT+%22field+theory%22+...
/? QFT "field theory" "Superfluid quantum gravity" https://www.google.com/search?q=QFT+%22field+theory%22+%22Su... https://scholar.google.com/scholar?q=QFT+%22field+theory%22+...
Chaos researchers can now predict perilous points of no return
This sounds similar to work I did years ago to combine phase-space manifolds with a rule-based expert system to address problems diagnosing failures in mechanical systems exhibiting multi-modal operating regimes.
Hopefully the researchers found a simpler computational method than I did in trying to mate those two systems together. :)
What really caught my attention was the output of a probability curve showing how the system might operate in the never-before-seen regimes once the tipping point was reached. The ability to predict behavior outside the training set is a huge win. My method was only predictive while the system operated in the training regime; outside that regime it was useless.
Could this detect/predict/diagnose e.g. mechanical failures in engines and/or motors, and health conditions, given sensor fusion?
Sensor fusion https://en.wikipedia.org/wiki/Sensor_fusion
Steady state https://en.wikipedia.org/wiki/Steady_state :
> In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. [1]
https://github.com/topics/steady-state
Control systems https://en.wikipedia.org/wiki/Control_system
https://github.com/topics/control-theory
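A minimal sketch of those two ideas together, a transient start-up settling into steady state under a simple proportional controller (all gains and values are made up):

```python
# First-order plant driven by a proportional controller toward a setpoint.
setpoint, x, kp, dt = 10.0, 0.0, 2.0, 0.05

for step in range(200):
    error = setpoint - x
    u = kp * error        # control effort proportional to error
    x += u * dt           # simple first-order plant response

# After the start-up transient dies out, the state sits at the setpoint.
print(round(x, 3))  # 10.0
```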
Flap (disambiguation) > Computing and networks > "Flapping" (nagios alert fatigue,) https://en.wikipedia.org/wiki/Flap
Perceptual Control Theory (PCT) > Distinctions from engineering control theory https://en.wikipedia.org/wiki/Perceptual_control_theory :
> In the artificial systems that are specified by engineering control theory, the reference signal is considered to be an external input to the 'plant'.[7] In engineering control theory, the reference signal or set point is public; in PCT, it is not, but rather must be deduced from the results of the test for controlled variables, as described above in the methodology section. This is because in living systems a reference signal is not an externally accessible input, but instead originates within the system. In the hierarchical model, error output of higher-level control loops, as described in the next section below, evokes the reference signal r from synapse-local memory, and the strength of r is proportional to the (weighted) strength of the error signal or signals from one or more higher-level systems. [26]
> In engineering control systems, in the case where there are several such reference inputs, a 'Controller' is designed to manipulate those inputs so as to obtain the effect on the output of the system that is desired by the system's designer, and the task of a control theory (so conceived) is to calculate those manipulations so as to avoid instability and oscillation. The designer of a PCT model or simulation specifies no particular desired effect on the output of the system, except that it must be whatever is required to bring the input from the environment (the perceptual signal) into conformity with the reference. In Perceptual Control Theory, the input function for the reference signal is a weighted sum of internally generated signals (in the canonical case, higher-level error signals), and loop stability is determined locally for each loop in the manner sketched in the preceding section on the mathematics of PCT (and elaborated more fully in the referenced literature). The weighted sum is understood to result from reorganization.
> Engineering control theory is computationally demanding, but as the preceding section shows, PCT is not. For example, contrast the implementation of a model of an inverted pendulum in engineering control theory [27] with the PCT implementation as a hierarchy of five simple control systems. [28]
Structural Equation Modeling: https://en.wikipedia.org/wiki/Structural_equation_modeling https://github.com/topics/structural-equation-modeling
ros2_control https://control.ros.org/master/index.html
Limit cycle https://en.wikipedia.org/wiki/Limit_cycle
Finite Element Analysis https://en.wikipedia.org/wiki/Finite_element_method
> #FEM: Finite Element Method (for ~solving coupled PDEs Partial Differential Equations)
> #FEA: Finite Element Analysis (applied FEM)
awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea
GraphBLAS
> When applied to sparse adjacency matrices, these algebraic operations are equivalent to computations on graphs
Sparse matrix: https://en.wikipedia.org/wiki/Sparse_matrix :
> The concept of sparsity is useful in combinatorics and application areas such as network theory and numerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.
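A toy illustration of that sparsity, using a dict-of-sets adjacency for a path graph (sizes are arbitrary):

```python
# A 1000-node path graph has ~2n nonzero adjacency entries out of n^2
# possible, so a dense matrix would be almost entirely zeros.
n = 1000
nodes = set(range(n))
adjacency = {i: {i - 1, i + 1} & nodes for i in range(n)}

nonzeros = sum(len(nbrs) for nbrs in adjacency.values())
density = nonzeros / (n * n)
print(nonzeros, density)  # 1998 0.001998
```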
CuGraph has a NetworkX-like API, though only some of the NetworkX algorithms have been reimplemented (with possible CUDA optimizations) so far.
From https://github.com/rapidsai/cugraph :
> cuGraph operates, at the Python layer, on GPU DataFrames, thereby allowing for seamless passing of data between ETL tasks in cuDF and machine learning tasks in cuML. Data scientists familiar with Python will quickly pick up how cuGraph integrates with the Pandas-like API of cuDF. Likewise, users familiar with NetworkX will quickly recognize the NetworkX-like API provided in cuGraph, with the goal to allow existing code to be ported with minimal effort into RAPIDS.
> While the high-level cugraph python API provides an easy-to-use and familiar interface for data scientists that's consistent with other RAPIDS libraries in their workflow, some use cases require access to lower-level graph theory concepts. For these users, we provide an additional Python API called pylibcugraph, intended for applications that require a tighter integration with cuGraph at the Python layer with fewer dependencies. Users familiar with C/C++/CUDA and graph structures can access libcugraph and libcugraph_c for low level integration outside of python.
/? sparse https://github.com/rapidsai/cugraph/search?q=sparse
Pandas, SciPy, and IIRC NumPy have sparse methods (SparseArray and the .sparse accessor): https://pandas.pydata.org/docs/user_guide/sparse.html#sparse...
From https://pandas.pydata.org/docs/user_guide/sparse.html#intera... :
> Series.sparse.to_coo() is implemented for transforming a Series with sparse values indexed by a MultiIndex to a scipy.sparse.coo_matrix.
NetworkX graph algorithms reference docs https://networkx.org/documentation/stable/reference/algorith...
NetworkX Compatibility > Differences in Algorithms https://docs.rapids.ai/api/cugraph/stable/basics/nx_transiti...
List of algorithms > Combinatorial algorithms > Graph algorithms: https://en.wikipedia.org/wiki/List_of_algorithms#Graph_algor...
To give an example of real world sparse matrices, power grids can have thousands and thousands of nodes, but most of those nodes only connect to a few other nodes at most that are local neighbors. Systems are highly sparse as a result
Integer factor graphs are sparse. https://en.wikipedia.org/wiki/Factor_graph#Message_passing_o...
Compared to the Powerset graph that includes all possible operators and parameter values and parentheses in infix but not Reverse Polish Notation, a correlation graph is sparse: most conditional probabilities should be expected to tend toward the Central Limit Theorem, so if you subtract (or substitute) a constant noise scalar, a factor graph should be extra-sparse. https://en.wikipedia.org/wiki/Central_limit_theorem_for_dire...
What do you call a factor graph with probability distribution functions (PDFs) instead of float64s?
Are Path graphs and Path graphs with cycles extra sparse? An adjacency matrix for all possible paths through a graph is also mostly zeroes. https://en.wikipedia.org/wiki/Path_graph
Methods of feature reduction use and affect the sparsity of a sparse matrix (that does not have elements for confounding variables). For example, from "Exploratory factor analysis (EFA) versus principal components analysis (PCA)" https://en.wikipedia.org/wiki/Factor_analysis :
> For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix.
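As a toy check of that eigendecomposition claim, the symmetric 2x2 case has closed-form eigenvalues (covariance values are made up):

```python
import math

# Eigenvalues of a symmetric 2x2 covariance matrix [[a, b], [b, c]]:
# mean of the diagonal plus/minus sqrt(((a-c)/2)^2 + b^2).
a, b, c = 4.0, 2.0, 3.0
mean = (a + c) / 2
spread = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
eig_hi, eig_lo = mean + spread, mean - spread  # principal component variances

# Sanity check: the eigenvalues preserve the trace and determinant.
print(round(eig_hi + eig_lo, 6), round(eig_hi * eig_lo, 6))  # 7.0 8.0
```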
Common Lisp names all sixteen binary logic gates
From File:Logical_connectives_Hasse_diagram.svg https://commons.wikimedia.org/wiki/File:Logical_connectives_...:
> Description: The sixteen logical connectives ordered in a Hasse diagram. They are represented by:
> - logical formulas
> - the 16 elements of V4 = P^4({})
> - Venn diagrams
> The nodes are connected like the vertices of a 4 dimensional cube. The light blue edges form a rhombic dodecahedron - the convex hull of the tesseract's vertex-first shadow in 3 dimensions.
Hasse diagram: https://en.wikipedia.org/wiki/Hasse_diagram
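Those sixteen connectives can be enumerated by truth table, one output bit per (p, q) input row:

```python
from itertools import product

rows = list(product((False, True), repeat=2))         # the 4 (p, q) inputs
connectives = list(product((False, True), repeat=4))  # all output columns
print(len(connectives))  # 16

# AND and XOR are two of the sixteen:
AND = tuple(p and q for p, q in rows)
XOR = tuple(p != q for p, q in rows)
assert AND in connectives and XOR in connectives
```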
> A research question for a new school year: (2021, still TODO)
> The classical logical operators form a neat topology. Should we expect there to be such symmetry and structure amongst the quantum operators as well?
From Quantum Logic https://en.wikipedia.org/wiki/Quantum_logic :
> Quantum logic can be formulated either as a modified version of propositional logic or as a noncommutative and non-associative many-valued (MV) logic.[2][3][4][5][6]
> Quantum logic has been proposed as the correct logic for propositional inference generally, [...] group representations and symmetry.
> The more common view regarding quantum logic, however, is that it provides a formalism for relating observables, system preparation filters and states.[citation needed] In this view, the quantum logic approach resembles more closely the C*-algebraic approach to quantum mechanics. The similarities of the quantum logic formalism to a system of deductive logic may then be regarded more as a curiosity than as a fact of fundamental philosophical importance. A more modern approach to the structure of quantum logic is to assume that it is a diagram—in the sense of category theory—of classical logics
Quantum_logic#Differences_with_classical_logic: https://en.wikipedia.org/wiki/Quantum_logic#Differences_with...
Cirq > Gates and operations: https://quantumai.google/cirq/build/gates
Cirq > Operators and Observables: https://quantumai.google/cirq/build/operators
qiskit-terra/qiskit/circuit/operation.py Interface: https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/circ...
tequila/src/tequila/circuit/gates.py: https://github.com/tequilahub/tequila/blob/master/src/tequil...
Pauli matrices > Quantum information: https://en.wikipedia.org/wiki/Pauli_matrices#Quantum_informa...
From Quantum_information#Quantum_information_processing https://en.wikipedia.org/wiki/Quantum_information#Quantum_in... :
> The state of a qubit contains all of its information. This state is frequently expressed as a vector on the Bloch sphere. This state can be changed by applying linear transformations or quantum gates to them. These unitary transformations are described as rotations on the Bloch Sphere. While classical gates correspond to the familiar operations of Boolean logic, quantum gates are physical unitary operators.
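The quoted claim — quantum gates are unitary operators, i.e. Bloch-sphere rotations, unlike irreversible classical gates — checks out numerically for the Pauli matrices (a minimal sketch):

```python
import numpy as np

# The Pauli matrices X, Y, Z as single-qubit gates.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

for U in (X, Y, Z):
    assert np.allclose(U.conj().T @ U, np.eye(2))  # U†U = I: unitary, reversible

ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = X @ ket0                         # X is the quantum NOT: |0> -> |1>
```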
Google pays ‘enormous’ sums to maintain search-engine dominance, DOJ says
Google does this yet lies about their searches to maintain an illusion of competence.
For example, you can type in something like "purpose of life" and it will say it has 2 billion results, yet if you try to go to result 500, you can't, it will stop at 400, then change the number to 400 results only on the last page.
This happens for every query. Google lies about the astronomical number of search results, then only shows a few hundred at most.
I work for Google Search. The counts we show for results are estimated. They get more refined when you go deeper into the results. But yes, there are still likely to be millions of results for many things you query -- and most people are not going to be able to go through all millions of those. So we show usually up to around 40 pages / 400 of these. We have a help page about this here: https://support.google.com/websearch/answer/9603785
So when I google "cows" and it says "Page 2 of about 1,201,000,000 results" you know full well you aren't going to show anywhere near 1,201,000,000 results, yet you program it to display that? And in fact it only shows "about 231 results", which is 0.000019234% of 1,201,000,000 results.
That doesn't sound like an estimate to me. To me, that is intentional misleading.
How is it misleading? It's an estimate of the total results, it doesn't say it's going to display them
"I'll sell you about 1,201,000,000 paper clips."
"OK."
ships 231 paper clips
"Hey I only got 231 paper clips, not 1,201,000,000."
"That's right. 1,201,000,000 was an estimate."
"You said about. So you estimated 1,201,000,000 paper clips but you actually only had 231?"
"No, I had the full 1,201,000,000. I sold them to you but I didn't say I would ship all of them. What kind of idiot uses more than a few hundred paper clips anyway? Plus, it saves us money on shipping costs."
You haven't paid Google for search: there is no sale of product or service to you, the user using free services for free.
You haven't signed any agreement with Google for search services. Google hasn't signed any agreement for future performance with you.
Google is not obligated to count every search result of every free search query. You are not entitled to such resource-intensive queries.
How much does COUNT() on a full table scan of billions of rows - with snippets - cost you on BigQuery or a similar pay-for-query-resources service?
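A back-of-the-envelope answer to that question, assuming on-demand pricing of roughly $5 per TiB scanned (an assumed round figure, not current BigQuery pricing):

```python
# Cost of a full scan under assumed $-per-TiB on-demand pricing.
def scan_cost_usd(bytes_scanned: float, usd_per_tib: float = 5.0) -> float:
    return bytes_scanned / 2**40 * usd_per_tib

# e.g. a full scan of 100 TiB of rows with snippets:
cost = scan_cost_usd(100 * 2**40)
```

At those assumed rates, exact counts over billions of rows get expensive fast, which is the point of the comment above.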
>> you, the user using free services for free
Absolutely false, it is not free. I have provided them with my data which they will monetize.
It's the same as Hacker News not being free. I have provided Hacker News with my personal data.
For example, if you look through my post history just in the last day or so, you would know that Rufus Foreman owns a killer cis-gendered cat named Mr. Tiddlesworth, that Rufus Foreman is a Warren Buffett fan boy, and that when thinking of a generic search term to use as an example, the first thing to come to Rufus Foreman's mind is "cows".
Now imagine what sort of dark patterns an unscrupulous corporation like say, Hooli, could implement in order to target me with advertising tailored to my preferences!
If you tell the bartender your life story, they don't owe you free drinks (and they might as well sell a screenplay)
While it's true that they sell the data they collect, you can choose to not share such data and still receive the free services. "Bromite" is a fork of Chromium, for example.
If you spend time in their store and cause loss and order a bunch of free waters, do the Terms of Service even apply to you? What can they even do? What can LinkedIn do about scraping and resale of every public profile page?
Give me some free privacy on my free dsl line. (Note that ISPs can sell the entirety of a customer's internet PCAPs, for example, due to Pai's FCC rescinding a Wheeler FCC privacy rule https://www.theverge.com/2017/3/31/15138526/isp-privacy-bill... "Trump signs repeal of U.S. broadband privacy rules" (2017) https://www.reuters.com/article/us-usa-internet-trump/trump-... )
Bromite is not a Google service. It's a false premise that anything open source from a corporation is a free service and makes their anti-privacy stance good. That's like saying a criminal is a good guy because he did 50 hours of charity work after murdering two people.
You can use the Chromium source code that Google contributes to, to browse the internet with and without ads and trackers that use obvious domain names: Microsoft Edge, Opera, Vivaldi, Bromite, ungoogled-chromium, Brave, Chrome.
You choose whether to shop at Google.
Google buying the default search engine position in browsers does not prevent users from changing the - possibly OpenSearch - browser search engine to DuckDuckGo or Ecosia.
You can force an address bar entry to a.tld/search=?${query} search w/:
Ctrl-L
?${query}
?how to change the default search engine
?how to block ads & trackers in {browser name}
?how to provide free search queries on a free search engine and have positive revenue after years of debt obligations to fairly build market share
You can choose to take their free services and search elsewhere, eh? Why should they now get out of paying for Firefox development through that revenue model, too?
(Competitors can and do use e.g. google/bazel the open source clone of google/blaze, which is what Chromium builds were built with before gn. Here's Chromium/BUILD.bazel, for example: https://source.chromium.org/chromium/v8/v8.git/+/master:BUIL... )
Android (and /e/ and LineageOS) do allow you to install browsers other than the Chrome WebView and Chrome. Is it possible to install anything other than Safari (WebKit) on iOS devices? Maybe from another software repository like F-Droid? Hopefully with current downstream releases, signed manifests, and SafetyNet scanning of uploaded apps.
> You haven't signed any agreement with Google for search services.
Literally on absolutely every google search page: https://policies.google.com/terms
No one reads terms and conditions, yes?
Terms of Service: https://en.wikipedia.org/wiki/Terms_of_service
The Statute of Frauds applies to agreements regarding amounts over $500. Is this a conscionable agreement between which identified parties? And what satisfies chain-of-custody requirements for criminal or civil admissibility if the data comes not from a trustless system but from a centralized, trustful system?
"Victory! Ruling in hiQ v. Linkedin Protects Scraping of Public Data" (2019) https://www.eff.org/deeplinks/2019/09/victory-ruling-hiq-v-l...
And then the interplay between a "Right to be Forgotten" and the community legal obligation to retain for lawful investigative law enforcement purposes. They don't know what they want: easy investigations, compromisable investigations, privacy
Ask HN: Best empirical papers on software development?
There are some good empirical papers, but I only know very few. What is your best empirical paper on software development?
From https://en.wikipedia.org/wiki/Experimental_software_engineer... :
> Experimental software engineering involves running experiments on the processes and procedures involved in the creation of software systems, with the intent that the data be used as the basis of theories about the processes involved in software engineering (theory backed by data is a fundamental tenet of the scientific method). A number of research groups primarily use empirical and experimental techniques.
> The term empirical software engineering emphasizes the use of empirical studies of all kinds to accumulate knowledge. Methods used include experiments, case studies, surveys, and using whatever data is available.
(CS) Papers We Love > https://github.com/papers-we-love/papers-we-love#other-good-... :
- "Systematic Review in Software Engineering" (2005)
-- "The Developed Template for Systematic Reviews in Software Engineering"
- "Happiness and the productivity of software engineers" (2019)
DevTech Research Group (Kibo, Scratch Jr,) > Publications https://sites.bc.edu/devtech/publications/
> Empirical Research, instruments: https://sites.bc.edu/devtech/about-devtech/empirical-researc...
"SafeScrum: Agile Development of Safety-Critical Software" (2018) > A Summary of Research https://scholar.google.com/scholar?cites=9208467786713301421... (Gscholar features: cited by, Related Articles) https://link.springer.com/chapter/10.1007/978-3-319-99334-8_...
Re: Safety-Critical systems, awesome-safety-critical, and Formal Verification as the ultimate empirical study: https://news.ycombinator.com/item?id=28709239
Why public chats are better than direct messages
As I read this, I got the sinking feeling that I'd read it all before. But then I realised, it's just another case of someone thinking that their specific solution is best, solely on the basis that it worked for them, in their specific circumstances.
But here's the thing: every group is different, has different needs, and will respond differently to different styles of communication. Some people (especially people not using their first language) can find working in public stressful; some people get very stressed by the intensity of 1:1s. It takes all types, and as a manager, understanding how to get the best out of everyone is part of the job.
There is no single, correct way, and we know this because, if there was, then we wouldn't keep hearing about these interminable "solutions".
Yes, but isn't which is best for which situation still the question?
Presuming that information asymmetry will hold over time is a bad assumption, regardless of cost of information security controls.
Why have these new collaborative innovative services succeeded where NNTP and > > indented, text-wrapped email forwards for new onboards have not?
Instead of Chat or IM, hopefully working on Issues with checkbox Tasks and Edges; and Pull Requests composed of Commits, Comments, and Code Reviews; with conditional Branch modification rules; will produce Products: deliverables of value to the customer, per the schema:Organization's Mission.
What style of communication is appropriate for a team in which phase of development, regardless of communications channel?
> Why have these new collaborative innovative services succeeded where NNTP and > > indented, text-wrapped email forwards for new onboards have not?
The new tools we have at our disposal are amazing. Of course they are better. But they are just tools. They don’t solve any problems relating to interpersonal communication any more than a hammer solves building a house.
> What style of communication is appropriate for a team in which phase of development, regardless of communications channel?
It’s the job of a manager to work that out. There is no formula. It’s not even possible to write one down. That’s the point.
Well, our societies value these communication businesses as among the most valuable corporations on Earth, so I think that there's probably some value in the tools that people suffer ads on to get for free.
"Traits of good remote leaders" (2019) https://news.ycombinator.com/item?id=24432088 :
"From Comfort Zone to Performance Management" (2009) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%E2... :
> "Table 4 – Correlation of Development Phases, Coping Stages and Comfort Zone transitions and the Performance Model" in "From Comfort Zone to Performance Management" White (2008) tabularly correlates the Tuckman group development phases (Forming, Storming, Norming, Performing, Adjourning) with the Carnall coping cycle (Denial, Defense, Discarding, Adaptation, Internalization) and Comfort Zone Theory (First Performance Level, Transition Zone, Second Performance Level), and the White-Fairhurst TPR model (Transforming, Performing, Reforming). The ScholarlyArticle also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages.
Planting trees not always an effective way of binding carbon dioxide
"Hemp twice as effective at capturing carbon as trees, UK researcher says" (2021) https://hempindustrydaily.com/hemp-twice-as-effective-at-cap... :
> “Industrial hemp absorbs between 8 to 15 tonnes of CO2 per hectare (3 to 6 tonnes per acre) of cultivation.”
> Comparatively, forests capture 2 to 6 tonnes of carbon per hectare (0.8 to 2.4 tonnes per acre), depending on the region, number of years of growth, type of trees and other factors, Shah said.
> Shah, who studies engineered wood, bamboo, natural fiber composites and hemp [at Cambridge, UK], said hemp “offers an incredible scope to grow a better future” while producing fewer emissions than conventional crops and more usable fibers per hectare than forestry.
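The quoted unit conversion is easy to sanity-check: at about 2.47 acres per hectare, 8-15 t CO2/ha works out to roughly 3.2-6.1 t CO2/acre, matching the article's "3 to 6 tonnes per acre":

```python
ACRES_PER_HECTARE = 2.47105

# Convert the quoted per-hectare range to per-acre.
hemp_lo = 8 / ACRES_PER_HECTARE   # ~3.24 t CO2/acre
hemp_hi = 15 / ACRES_PER_HECTARE  # ~6.07 t CO2/acre
```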
"Cities of the future may be built with algae-grown limestone" (2022) https://www.colorado.edu/today/2022/06/23/cities-future-may-... :
> And limestone isn’t the only product microalgae can create: microalgae’s lipids, proteins, sugars and carbohydrates can be used to produce biofuels, food and cosmetics, meaning these microalgae could also be a source of other, more expensive co-products—helping to offset the costs of limestone production.
Carbon sequestration: https://en.wikipedia.org/wiki/Carbon_sequestration
tonnes of carbon per hectare is not relevant.
What you need is tonnes of carbon per hectare per year.
We don't have enough space to let these plants be there, we need to convert them into coal and throw it back into the mines.
The efficient thing to do is render them into charcoal, yes.
Biochar is a great soil amendment, and doesn't oxidize over decades or even centuries, depending. Putting it back in the mines is an option, if we ever need to stop rebuilding topsoil, which is itself getting urgent.
Biochar is also intensive and only converts a relatively small amount of that carbon into stable charcoal.
Hemp is compostable, though because it's so tough, shredding and waiting for it to compost trades (vertical) space & time for far less energy use than biocharification unless it's waste heat from a different process.
Bioenergy with carbon capture and storage (BECCS) > Biomass feedstocks doesn't have a pivot table of conversion efficiencies?: https://en.wikipedia.org/wiki/Bioenergy_with_carbon_capture_... :
> Biomass sources used in BECCS include agricultural residues & waste, forestry residue & waste, industrial & municipal wastes, and energy crops specifically grown for use as fuel. Current BECCS projects capture CO2 from ethanol bio-refinery plants and municipal solid waste (MSW) recycling center.
> A variety of challenges must be faced to ensure that biomass-based carbon capture is feasible and carbon neutral. Biomass stocks require availability of water and fertilizer inputs, which themselves exist at a nexus of environmental challenges in terms of resource disruption, conflict, and fertilizer runoff.
If you keep taking hemp off a field without leaving some down, you'll probably need fertilizer (see: KNF, JADAM,) and/or soil amendments to be able to rotate something else through; though it's true that hemp grows without fertilizer.
> A second major challenge is logistical: bulky biomass products require transportation to geographical features that enable sequestration. [27]
Or more local facilities
Composting oxidizes a lot of the carbon. And since you have to replant, harvest and fertilise every year, it is a lot more intensive than forestry, where you do that every 30 (15-100) years.
Do you know of any papers investigating how kelp or other seaweeds compare? I've heard some biologists informally claim that they would be even more effective because they can grow incredibly fast.
I thought trees were somewhat ideal because the carbon sequestered in them can be used as long-lived lumber. If the kelp sequesters carbon but then it gets released immediately when it decomposes or someone eats it, then it doesn't really solve the problem.
We need to be putting carbon back into the ground where we got it, or at least converting into forms where it lives a long time on the surface (decades or centuries).
> I thought trees were somewhat ideal because the carbon sequestered in them can be used as long-lived lumber.
This is why hempcrete is ideal. But hemp, by comparison, doesn't result in a root-bound tree farm for wind break and erosion control; hemp can be left down to return nutrients to the soil or for soil remediation as it's a very absorbent plant (that draws e.g. heavy metals out of soil and into the plant)
Why hempcrete when you could use wood?
> Hemp can be left down to return nutrients to the soil or for soil remediation as it's a very absorbent plant (that draws e.g. heavy metals out of soil and into the plant)
It won't remove heavy metals from the soil if you leave it there. You also can't turn it into a product if you leave it there.
Caddyhttp: Enable HTTP/3 by Default
lucaslorentz/caddy-docker-proxy works like Traefik, in that Container metadata labels are added to the reverse proxy configuration which is reloaded upon container events, which you can listen to when you subscribe to a Docker/Podman_v3 socket (which is unfortunately not read only)
So, with Caddy or Traefik, a container label can enable HTTP/3 (QUIC, over UDP port 443) for just that container.
"Labels to Caddyfile conversion" https://github.com/lucaslorentz/caddy-docker-proxy#labels-to...
From https://news.ycombinator.com/item?id=26127879 re: containersec :
> > - [docker-socket-proxy] Creates a HAproxy container that proxies limited access to the [docker] socket
The point of the link in OP is that in v2.6, Caddy enables HTTP/3 by default, so it no longer needs to be explicitly enabled by the user.
So I'm not exactly sure what point you're trying to make. But yes, CDP is an awesome project!
That is a good point. Is there any way to disable HTTP/3 support with just config?
The (unversioned?) docs have: https://caddyserver.com/docs/modules/http#servers/experiment... :
> servers/experimental_http3: Enable experimental HTTP/3 support. Note that HTTP/3 is not a finished standard and has extremely limited client support. This field is not subject to compatibility promises
TIL caddy has Prometheus metrics support (in addition to automatic LetsEncrypt X.509 Cert renewals)
Yeah those docs are for v2.5. The experimental_http3 option is removed in v2.6 (which is currently only in beta, stable release coming soon). To configure protocols and disable HTTP/3, you'll now use the "protocols" option (inside of the "servers" global option), and it would look like "protocols h1 h2" (the default is "h1 h2 h3").
Yes, the docs have been updated at https://github.com/caddyserver/website but haven't been deployed yet. There is a new protocols option:
protocols h1 h2
will disable HTTP/3 but leave HTTP/1.1 and HTTP/2 on.
For HTTP/3 support with Python clients:
- aioquic supports HTTP/3 only now https://github.com/aiortc/aioquic
- httpx is mostly requests-compatible, supports client-side caching, and HTTP/1.1 & HTTP/2, and here's the issue for HTTP/3 support: https://github.com/encode/httpx/issues/275
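For reference, the v2.6 `protocols` global option described in the comments above might look like this in a Caddyfile (a sketch based on those comments; verify against the current Caddy docs):

```
{
	servers {
		protocols h1 h2
	}
}
```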
Make better decisions with fewer online meetings
Hi! I am the cofounder of TopAgree. We created TopAgree to help teams make faster decisions with fewer meetings. My friend Linus and I are developing it together because we often don't make the important decisions until the last five minutes of a meeting. And then, unfortunately, we often make the wrong decisions. I have a big request for you: please comment if you would like to test the product and give us feedback. Thanks so much! Kind regards, Bastian
Does it integrate into MS Teams? Or some other established enterprise platform?
Otherwise it’s really hard to introduce it to a corporation, where it’s probably most needed.
Webhook integrations: Slack/Mattermost; Zulip; Zapier Platform; GitHub Pull Requests
Another issue/checkbox:
Re: collaboration engineering, Thinklets: "No Kings: How Do You Make Good Decisions Efficiently in a Flat Organization?" (2019) https://news.ycombinator.com/item?id=20157064
Thank you westurner for the link. Super helpful. Kind regards, Bastian
Great idea. IMHO, feedback is necessary for #EvidenceBasedPolicy; for objective progress.
Evidence-based policy: https://en.wikipedia.org/wiki/Evidence-based_policy (Jupyter, scikit-learn & Yellowbrick, Kaggle,)
Town hall meeting: https://en.wikipedia.org/wiki/Town_hall_meeting
awesome-ideation-tools: https://github.com/zazaalaza/awesome-ideation-tools :
> Awesome collection of brainstorming, problem solving, ideation and team building tools. From foresight to overcoming creative blocks, this list contains all the awesome boardgames, canvases and decks of cards that were designed to help you solve a certain problem.
Europe’s energy crisis hits science
How come nobody on the side of European regulators/politicians is held responsible for leading the continent into such dependency on fossil fuels with no backup supplier?
Basically, the entire European green and anti-global warming movement was either delusional or bought off and there was (and still is) not really much of a counterbalance to this within the mainstream. They pushed for renewables that required a huge amount of natural gas as backup for when the wind dropped or the sun went behind clouds and for the shutdown of alternatives like coal and nuclear, whilst simultaneously campaigning to block new natural gas exploration and investment in Europe. This made energy supplies completely dependent on imports from totalitarian dictatorships. In addition, Germany chose to make themselves completely dependent on Russia specifically despite attempts by the US and eastern European countries to stop them and somehow arm-twisted the EU into letting them go ahead with this - but even if it wasn't for that, the natural gas capacity just isn't there to cope with Russia refusing to supply and there hasn't been the willingness to invest in building more.
I don't think they've stopped this campaign even after it ended in catastrophe either. It's certainly still going on in the UK - for example just yesterday I saw a BBC article telling people that building more natural gas production here is pointless because wind and solar are cheaper, presumably based on the raw per-MWh number with no storage to fill in the gaps when it doesn't produce. That is not a useful replacement for natural gas at all.
Germany produced 48% of its electricity via renewables this year so far. The surplus energy will soon be used to electrolyze hydrogen that will be fed into our vast gas infrastructure.
What surplus? 48% is less than half, isn't it?
So, another way of looking at it is - Germany is still producing most of its electricity from non-renewable sources. Adding nuclear, it's still ~40% from coal and gas [0]. So again, what surplus?
In contrast, France is producing ~70% of its electricity just from nuclear, with another ~19% from renewables.
Note that either way, the major energy expenditure that ties all of these countries to Russian gas is not electricity, it is heating.
[0] https://ourworldindata.org/energy/country/romania?country=DE...
There often is a renewable energy surplus at night (wind /hydro) and more recently even around noon, thanks to solar. And renewables are constantly expanding.
The whole problem with renewables is the temporal variation and lack of large scale storage.
Producing hydrogen (and potentially converting that into more traditional gasses) is a perfectly viable solution, but of course such infrastructure can only be stood up over years and decades. It's not relevant for this winter.
TIL that apparently Compressed Air Energy Storage (CAES) has the lowest cost of any method of energy storage? "Techno-economic analysis of bulk-scale compressed air energy storage in power system decarbonisation" (2021) https://www.sciencedirect.com/science/article/pii/S030626192... https://www.pv-magazine.com/2022/07/21/compressed-air-storag... :
> They found the former has a considerably lower Capex and a payback time of only two years. [compared with Lead Acid batteries]
I would've thought that a gravity battery would be more efficient than compressed air: https://en.wikipedia.org/wiki/Gravity_battery
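For intuition on why gravity storage is hard to make cheap regardless of round-trip efficiency: the stored energy is just E = mgh, which is tiny per tonne lifted (a quick sketch):

```python
# Potential energy m*g*h stored by lifting a mass, converted to kWh.
def gravity_energy_kwh(mass_kg: float, height_m: float, g: float = 9.81) -> float:
    return mass_kg * g * height_m / 3.6e6  # 1 kWh = 3.6e6 J

# Lifting one tonne by 100 m stores only ~0.27 kWh (before losses).
per_tonne = gravity_energy_kwh(1000, 100)
```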
Also TIL about geothermal heat pump energy storage: https://techxplore.com/news/2022-09-geothermal-ideal-energy-...
The Risks of WebAssembly
Don't there need to be CPU/RAM/GPU quotas per WASM scope/tab? Or is preventing DoS with WASM out of scope for browsers?
IIRC, it's possible to check resource utilization in e.g. a browser Task Manager, but there's no way to do `nice` or `docker --cpu-quota` or `systemd-nspawn --cpu-affinity` to prevent one or more WASM tabs from DOS'ing a workstation with non-costed operations. FWIU, e.g. eWASM has opcode costs in particles/gas: https://github.com/ewasm/design/blob/master/determining_wasm...
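For comparison, this is the kind of per-process cap that exists outside the browser (a Unix-only sketch; nothing equivalent is exposed per WASM tab):

```python
import resource

# RLIMIT_CPU bounds total CPU-seconds for this process; exceeding the
# soft limit raises SIGXCPU. Browsers offer no analogous per-tab quota.
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
```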
Pypi.org is running a survey on the state of Python packaging
I didn't take the survey because I've never packaged anything for PyPI, but I wish all of the package managers would have an option for domain validated namespaces.
If I own example.com, I should be able to have 'pypi.org/example.com/package'. The domain can be tied back to my (domain verified) GitHub profile and it opens up the possibility of using something like 'example.com/.well-known/pypi/' for self-managed signing keys, etc..
I could be using the same namespace for every package manager in existence if domain validated namespaces were common.
Then, in my perfect world, something like Sigstore could support code signing with domain validated identities. Domain validated signatures make a lot of sense. Domains are relatively inexpensive, inexhaustible, and globally unique.
For code signing, I recognize a lot of project names and developer handles while knowing zero real names for the companies / developers involved. If those were sitting under a recognizable organizational domain name (example.com/ryan29) I can do a significantly better job of judging something's trustworthiness than if it's attributed to 'Ryan Smith Inc.', right?
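A minimal sketch of the domain-validated lookup proposed above (everything here is hypothetical — PyPI has no such namespace or `.well-known` convention today):

```python
from urllib.parse import quote

def wellknown_key_url(domain: str, package: str) -> str:
    """Where a domain-validated namespace could publish its signing keys
    under the hypothetical scheme described above."""
    return f"https://{domain}/.well-known/pypi/{quote(package)}"

def namespaced_name(domain: str, package: str) -> str:
    """A 'pypi.org/example.com/package'-style namespaced identifier."""
    return f"{domain}/{package}"
```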
That's a really interesting idea, but I worry about what happens when a domain name expires and is re-registered (potentially even maliciously) by someone else.
CT: Certificate Transparency logs log creation and revocation events.
The Google/trillian database which supports Google's CT logs uses Merkle trees but stores the records in a centralized data store - meaning there's at least one SPOF Single Point of Failure - which one party has root on and sole backup privileges for.
Keybase, for example, stores their root keys - at least - in a distributed, redundantly-backed-up blockchain that nobody has root on; and key creation and revocation events are publicly logged, similar to what are now called "CT logs".
You can link your Keybase identity with your other online identities by proving control by posting a cryptographic proof; thus adding an edge to a WoT Web of Trust.
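The Merkle-tree structure behind CT logs (and trillian) can be sketched in a few lines; this is illustrative only, not RFC 6962's exact leaf/node hashing:

```python
import hashlib

def merkle_root(leaves: list) -> bytes:
    """Fold a list of byte-string records into a single Merkle root."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # odd count: carry the last node up
            level.append(level[-1])
        level = [hashlib.sha256(a + b).digest()
                 for a, b in zip(level[0::2], level[1::2])]
    return level[0]
```

Any tampering with a logged record changes the root, which is what makes an append-only log auditable.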
While you can add DNS record types like CERT, OPENPGPKEY, SSHFP, CAA, RRSIG, NSEC3; DNSSEC and DoH/DoT/DoQ cannot be considered to be universally deployed across all TLDs. Should/do e.g. ACME DNS challenges fail when a TLD doesn't support DNSSEC, or hasn't secured root nameservers to a sufficient baseline, or? DNS is not a trustless system.
ENS (the Ethereum Name Service) is a trustless system. Reading ENS records does not cost ENS clients any gas/particles/opcodes/ops/money.
Blockcerts is designed to issue any sort of credential, and allow for signing of any RDF graph like JSON-LD.
List_of_DNS_record_types: https://en.wikipedia.org/wiki/List_of_DNS_record_types
Blockcerts: https://www.blockcerts.org/ https://github.com/blockchain-certificates :
> Blockcerts is an open standard for creating, issuing, viewing, and verifying blockchain-based certificates
W3C VC-DATA-MODEL: https://w3c.github.io/vc-data-model/ :
> Credentials are a part of our daily lives; driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. This specification provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable
W3C VC-DATA-INTEGRITY: "Verifiable Credential Data Integrity 1.0" https://w3c.github.io/vc-data-integrity/#introduction :
> This specification describes mechanisms for ensuring the authenticity and integrity of Verifiable Credentials and similar types of constrained digital documents using cryptography, especially through the use of digital signatures and related mathematical proofs. Cryptographic proofs enable functionality that is useful to implementors of distributed systems. For example, proofs can be used to: Make statements that can be shared without loss of trust,
W3C TR DID (Decentralized Identifiers) https://www.w3.org/TR/did-core/ :
> Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. A DID refers to any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) as determined by the controller of the DID. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities. Specifically, while other parties might be used to help enable the discovery of information related to a DID, the design enables the controller of a DID to prove control over it without requiring permission from any other party. DIDs are URIs that associate a DID subject with a DID document allowing trustable interactions associated with that subject.
> Each DID document can express cryptographic material, verification methods, or services, which provide a set of mechanisms enabling a DID controller to prove control of the DID. Services enable trusted interactions associated with the DID subject. A DID might provide the means to return the DID subject itself, if the DID subject is an information resource such as a data model.
For another example of how Ethereum might be useful for certificate transparency, there's a fascinating paper from 2016 called "EthIKS: Using Ethereum to audit a CONIKS key transparency log" which is probably way ahead of its time.
Abstract: https://link.springer.com/chapter/10.1007/978-3-662-53357-4_...
Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency
/? "Certificate Transparency" Blockchain https://scholar.google.com/scholar?q=%22Certificate+Transpar... https://scholar.google.com/scholar_alerts?view_op=list_alert...
- Some of these depend upon a private QKD [fiber,] line
- NIST PQ algos are only just now announced: https://news.ycombinator.com/item?id=32281357 : Kyber, NTRU, {FIPS-140-3}?
/? Ctrl-F "Certificate Transparency" https://westurner.github.io/hnlog/ :
"Google's Certificate Transparency Search page to be discontinued May 15th, 2022" https://news.ycombinator.com/item?id=30781698
- LetsEncrypt Oak is also powered by Google/trillian, which is a trustful centralized database
- e.g. Graph token (GRT) supports Indexing (search) and Curation of datasets
> And what about indexing and search queries at volume, again without replication?
My understanding is that the Sigstore folks are now more open to the idea of a trustless DLT. "W3C Verifiable Credentials" is a future-proof, standardized way to sign RDF (JSON-LD) documents with DIDs.
Verifiable Credentials: https://en.wikipedia.org/wiki/Verifiable_credentials
# Reproducible Science Publishing workflow procedures with Linked Data:
- Sign the git commits (GPG,)
- Sign the git tags (GPG+Sigstore, ORCID & DOI (-> W3C DIDs), FigShare, Zenodo,)
- Sign the package(s) and/or ScholarlyArticle & their metadata & manifest (Sigstore, pkg_tool_xyz, CodeMeta RDF/JSON-LD,),
- Sign the SBOM (CycloneDx, Sigstore,)
- Search for CVEs/vulns & Issues for everything in the SBOM (Dependabot, OSV,)
- Search for trusted package hashes for everything in the SBOM
- Sign the archive/VM/container image (Docker Notary TUF, Sigstore,)
- Archive & Upload & Restore & Verify (and then Upgrade Versions) from the dependency specifications, SBOM, and/or archive/VM/container image (VM/container tools, repo2docker (REES),)
- Upgrade Versions and run unit, functional, and integration tests ({pip-tools, pipenv, poetry, mamba}, pytest, CI, Dependabot,)
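As an illustrative sketch of the "trusted package hashes" verification step in the checklist above (hypothetical SBOM entry shape, not the actual Sigstore or CycloneDX API):

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()


# A trusted digest as it might be recorded in an SBOM entry (hypothetical).
artifact = b"example package contents"
sbom_entry = {"name": "example-pkg", "sha256": sha256_hex(artifact)}

# Verification: recompute the digest and compare to the SBOM's record.
assert sha256_hex(artifact) == sbom_entry["sha256"]
```

A real pipeline would fetch the trusted digest from a signed SBOM or transparency log rather than computing it from the artifact itself.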
All poverty is energy poverty
"The Limits to Growth" https://en.wikipedia.org/wiki/The_Limits_to_Growth
Carrying capacity https://en.wikipedia.org/wiki/Carrying_capacity
> The carrying capacity of an environment is the maximum population size of a biological species that can be sustained by that specific environment, given the food, habitat, water, and other resources available. The carrying capacity is defined as the environment's maximal load, which in population ecology corresponds to the population equilibrium, when the number of deaths in a population equals the number of births (as well as immigration and emigration). The effect of carrying capacity on population dynamics is modelled with a logistic function. Carrying capacity is applied to the maximum population an environment can support in ecology, agriculture and fisheries. The term carrying capacity has been applied to a few different processes in the past before finally being applied to population limits in the 1950s.[1] The notion of carrying capacity for humans is covered by the notion of sustainable population.
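The logistic model the quote refers to can be sketched numerically; the growth rate r, carrying capacity K, and Euler step below are illustrative values, not figures from the article:

```python
# Logistic growth: dN/dt = r*N*(1 - N/K), where K is the carrying
# capacity; integrated here with a simple (illustrative) Euler step.
def logistic_step(n: float, r: float, k: float, dt: float) -> float:
    return n + r * n * (1 - n / k) * dt


n, r, k, dt = 10.0, 0.5, 1000.0, 0.5
for _ in range(100):
    n = logistic_step(n, r, k, dt)

# The population approaches, but does not exceed, the carrying capacity.
assert 900.0 < n < k
```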
Sustainable population https://en.wikipedia.org/wiki/Sustainable_population :
> Talk of economic and population growth leading to the limits of Earth's carrying capacity for humans are popular in environmentalism.[16] The potential limiting factor for the human population might include water availability, energy availability, renewable resources, non-renewable resources, heat removal, photosynthetic capacity, and land availability for food production.[17] The applicability of carrying capacity as a measurement of the Earth's limits in terms of the human population has not been very useful, as the Verhulst equation does not allow an unequivocal calculation and prediction of the upper limits of population growth.[16]
> [...] The application of the concept of carrying capacity for the human population, which exists in a non-equilibrium, is criticized for not successfully being able to model the processes between humans and the environment.[16][20] In popular discourse the concept has largely left the domain of academic consideration, and is simply used vaguely in the sense of a "balance between nature and human populations".[20]
Practically, if we can find something sustainable to do with brine (NaCl; sodium chloride), and we manage to achieve cheap clean energy, and we can automate humanoid labor, then desalinating water and pumping it inland is feasible; so global water prices shouldn't be the limit to our carrying capacity. #Goal6 #CleanWater
Water trading > Alternatives to water trading markets (*) https://en.wikipedia.org/wiki/Water_trading
LCOE: Levelized Cost of Electricity https://en.wikipedia.org/wiki/Levelized_cost_of_electricity
LCOW: Levelized Cost of Water: https://en.wikipedia.org/wiki/Levelized_cost_of_water
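The standard LCOE definition from the linked article discounts lifetime costs over discounted lifetime output; the plant figures below are made up for illustration:

```python
# LCOE = sum_t(cost_t / (1+r)^t) / sum_t(energy_t / (1+r)^t)
def lcoe(costs, energy_mwh, discount_rate):
    """Levelized cost in $/MWh: discounted costs over discounted output."""
    num = sum(c / (1 + discount_rate) ** t
              for t, c in enumerate(costs, start=1))
    den = sum(e / (1 + discount_rate) ** t
              for t, e in enumerate(energy_mwh, start=1))
    return num / den


# Hypothetical 3-year plant: capital cost up front, then O&M only.
costs = [1_000_000, 50_000, 50_000]   # $/year
energy = [10_000, 10_000, 10_000]     # MWh/year
cost_per_mwh = lcoe(costs, energy, 0.05)
assert cost_per_mwh > 0
```

The same discounted-ratio structure gives LCOW when the denominator is water volume instead of energy.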
TIL on YouTube about modern methods for drilling water wells: with a hand drill; with a drive cap, a sledgehammer, and a pitcher pump after a T with valves for an optional (loud) electric pump; or with a solar electric water pump
Drinking water > Water Quality: https://en.wikipedia.org/wiki/Drinking_water#Water_quality :
> Nearly 4.2 billion people worldwide had access to tap water, while another 2.4 billion had access to wells or public taps.[3] The World Health Organization considers access to safe drinking-water a basic human right.
> About 1 to 2 billion people lack safe drinking water.[4] Water can carry vectors of disease. More people die from unsafe water than from war, then-U.N. secretary-general Ban Ki-moon said in 2010.[5] Third world countries are most affected by lack of water, flooding, and water quality. Up to 80 percent of illnesses in developing countries are the direct result of inadequate water and sanitation. [6]
A helpful risk hierarchy chart: "The risk hierarchy for water sources used in private drinking water supplies": From Lowest Risk to Highest Risk: Mains water, Rainwater, Deep groundwater, Shallow groundwater, Surface water
TIL it's possible to filter Rainwater with an unglazed terracotta pot and no electricity, too
Also, TIL about solid-state heat engines ("thermionic converters") with no moving parts, that only need a thermal gradient in order to generate electricity. The difference between #VantaBlack and #VantaWhite in the sun results in a thermal gradient, for example
Is a second loop and a heat exchange even necessary if solid-state heat engines are more efficient than gas turbines?
Any exothermic reaction?! FWIU, we only need 100°C to quickly purify water.
100°C will sterilise water but it won't purify it if there are inorganic pollutants in it.
One Serverless Principle to Rule Them All: Idempotency [video]
What does idempotency have to do with serverless?
That seems more like a general systems principle.
x ∘ x = x
Idempotence > Computer science meaning: https://en.wikipedia.org/wiki/Idempotence#Computer_science_m... :
> Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency).
"A comparison of idempotence and immutability" [immutable infrastructure] https://devops.stackexchange.com/questions/2484/a-comparison...
Ansible Glossary > Idempotency https://docs.ansible.com/ansible/latest/reference_appendices... :
> Idempotency: An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions
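A minimal Python sketch of the definition above (hypothetical function names, not from any of the linked docs):

```python
def ensure_setting(config: dict, key: str, value) -> dict:
    """Idempotent: applying it once or many times yields the same state."""
    config[key] = value
    return config


state = {"mode": "off"}
once = ensure_setting(dict(state), "mode", "on")
many = ensure_setting(ensure_setting(dict(state), "mode", "on"), "mode", "on")
assert once == many == {"mode": "on"}
```

This is why idempotent handlers suit serverless and configuration management: retries and repeated runs converge on the same state instead of compounding.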
Ask HN: IT Security Checklist for Startups?
Hi HN,
Does anyone have a list of IT security stuff that you should setup for your early stage startup?
Like for example DNSSEC, VPN, forcing employees to use 2-factor etc.
--- CIS Top 18: --- CIS have a top 18 critical security control checklist, which is pretty incredible.
https://www.cisecurity.org/controls/cis-controls-list
CIS Top 18 Version 8 works sequentially too, so you implement control 1 and it sets you up in a good place to then implement control 2, and so on.
--- Cyber Essentials: --- The UK's Cyber Essentials scheme is a certification standard (but you can ignore that and just use it as a checklist if you'd like).
It's designed for small to medium size organizations, and focused on getting the foundations right.
This would be a useful place to start but it won't cover some of the specific threats and risks associated with software/app development. See @snowstormsun's comment about OWASP Top 10 for that.
https://www.ncsc.gov.uk/cyberessentials/overview
--- There's loads of other standards like this out there, that are free to use and are each focused on different security challenges etc.
I've shared these two standards as they both setup a solid foundation.
https://dev-sec.io/baselines/ has executable implementations of CIS and other guides
From https://news.ycombinator.com/item?id=30906310 :
> "The SaaS CTO Security Checklist [Redux]" https://github.com/vikrum/SecurityChecklists
> "The Personal Infosec & Security Checklist" https://www.goldfiglabs.com/guide/personal-infosec-security-...
> "The DevOps Security Checklist Redux" https://www.goldfiglabs.com/guide/devops-security-checklist/
Fed expects to launch long-awaited Faster Payments System by 2023
I'm starting to feel more hopeful about the US's digital infrastructure.
If this goes well instant payments will be available with no ACH 2 day withdrawal delay.
The IRS is exploring a system for people to file taxes online.
The State Department is building a system to renew your passport totally online.
I think the last two items are a result of the Inflation Reduction Act. I hope this momentum continues and we can continue making as many government services available digitally. While some will complain how bad these systems will be - I'd rather at least have them than not and allow them to improve.
The sooner the better.
Extremely sad we need to give the IRS $80 billion to accomplish something they should have provided decades ago.
You’re way off. The money earmarked for that project is $15 million, and it’s for a study on how the IRS would implement such a system.
$15 Million to “study”… “how they would implement it”
How should I say it, “come on man”?
Hey, I don't disagree government spending numbers can be crazy, but building a system for 330,000,000 people across 50 states that works flawlessly (and is responsible for trillions of dollars) isn't easy. I'm personally okay with $15M... that's like salaries for 25 people for two years.
To put that number into perspective, TurboTax spent about that much money on lobbying alone from 2012-2018.
> building a system for 330,000,000 people across 50 states that works flawlessly (and is responsible for trillions of dollars) isn't easy
Which is why they should pay real experts to do it. For less.
If I were an expert and the government came to me and my team with this project, I'd definitely charge at least $15M.
What do you think has been spent on developing Blockchain/DLT from 2007 to present? Is it more than $15m? Mostly amateur open source or corporate open source contributions?
- Bitcoin has an MIT License, for example
- XRPL and Stellar can handle the payments volume (TPS: transactions per second); otherwise you're building another Layer 2 system without Interledger.
- Flare does EVM (Ethereum Virtual Machine) Smart Contracts with XRPL; $<0.01/tx, network tx fees are just burned, 5 second transaction/ledger close time.
Notes re: Interledger addresses, SPSP, WebMonetization: https://westurner.github.io/hnlog/#comment-32515531 ,
> From "NIST Special Publication 800-181 Revision 1: Workforce Framework for Cybersecurity (NICE Framework)" (2020) https://doi.org/10.6028/NIST.SP.800-181r1 :
>> 3.1 Using Existing Task, Knowledge, and Skill (TKS) Statements
> (Edit) FedNOW should - like mCBDC - really consider implementing Interledger Protocol (ILP) for RTGS "Real-Time Gross Settlement" https://interledger.org/developer-tools/get-started/overview...
> From https://interledger.org/rfcs/0032-peering-clearing-settlemen... :
> Peering, Clearing and Settling; The Interledger network is a graph of nodes (connectors) that have peered with one another by establishing a means of exchanging ILP packets and a means of paying one another for the successful forwarding and delivery of the packets.
> […] Accounts and Balances: The edge between any two nodes (peers) is a communication link (for exchanging ILP packets and other data), and an account held between the peers (the Interledger account). The Interledger account has a balance denominated in a mutually agreed upon asset (e.g. USD) at an agreed upon scale (e.g. 2). The balance on the account is the net total of the amounts on any packets “successfully” routed between the peers.
> The balance on the Interledger account can only change as a result of two events:
> 1. The “successful” routing of an ILP packet between the two peers
> 2. A payment made between the peers on the underlying payment network they have agreed to use to settle the Interledger account
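The two balance-changing events quoted above can be sketched as a toy ledger; a hypothetical illustration, not an implementation of the Interledger RFC:

```python
class InterledgerAccount:
    """Net balance held between two peers, in a mutually agreed asset."""

    def __init__(self):
        self.balance = 0

    def route_packet(self, amount: int) -> None:
        # Event 1: a "successfully" routed ILP packet adjusts the balance.
        self.balance += amount

    def settle(self, amount: int) -> None:
        # Event 2: a payment on the underlying settlement network
        # reduces the amount owed between the peers.
        self.balance -= amount


acct = InterledgerAccount()
acct.route_packet(100)
acct.route_packet(50)
acct.settle(150)
assert acct.balance == 0
```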
And then you realize you're sharing payment address information over a different but comparably-unsecured channel in a non-standardized way. From https://github.com/interledger/rfcs/blob/master/0009-simple-... :
> Relation to Other Protocols: SPSP is used for exchanging connection information before an ILP payment or data transfer is initiated
To do a complete business process there's also signaling around transactions (e.g. TradeLens, GSBN), which then necessarily depends upon another (hopefully also cryptographically-secured and HA: Highly Available) information system with API version(s) and database schema(s), unless there's something like Interledger SPSP (Simple Payment Setup Protocol) and Payment Pointers, which also solve for micropayments to accountably support creators; https://WebMonetization.org/ .
> What do you think has been spent on developing Blockchain/DLT from 2007 to present? Is it more than $15m? Mostly amateur open source or corporate open source contributions?
Are you kidding me? Hundreds of billions. a16z's last fund alone was $4.6B.
That money goes directly to developers.
What percentage (%) of the market cap, daily volume, or price gets contributed to, OTOH: (a) open source software development like code, tests, and docs; PRs: Pull Requests; (b) free information security review such as static and dynamic analysis; (c) marketing; (d) an open source software foundation; (e) offsetting long-term environmental costs through sustainable investment?
Are there other metrics for Software Quality & Infosec Assurances?
Honestly none of that is relevant.
The entirety of the investment made by VCs is given directly to developers. That's got nothing to do with daily volume, market cap, etc - that's cash in hand. Human dollars. What developers choose to spend that money on is up to them but they sure do seem to have found a way. Arbitrarily constraining the analysis to the 'free' part is disingenuous.
As for offsets, the entire model is bogus and I encourage you to look into it. It's generally just a way to prop up bad business models. You can't buy crime offsets from someone who promises not to do a crime they would have otherwise so that you can do it - and yet that's exactly how carbon offsetting works. That's to say nothing of that blockchain carbon offsetting company that set fire to the forest they were planting twice. [1]
[1] https://boingboing.net/2022/07/23/blockchain-based-carbon-of...
In your opinion, high quality Open Source Software is the result of VC money?
Chainalysis sells Blockchain/DLT analysis and investigation software and services to investigative agencies in multiple countries: https://en.wikipedia.org/wiki/Chainalysis
How do they estimate their potential market?
"Crypto Crime Trends for 2022: Illicit Transaction Activity Reaches All-Time High in Value, All-Time Low in Share of All Cryptocurrency Activity" https://blog.chainalysis.com/reports/2022-crypto-crime-repor...
> In fact, with the growth of legitimate cryptocurrency usage far outpacing the growth of criminal usage, illicit activity’s share of cryptocurrency transaction volume has never been lower.
> […] Transactions involving illicit addresses represented just 0.15% of cryptocurrency transaction volume in 2021 despite the raw value of illicit transaction volume reaching its highest level ever. As always, we have to caveat this figure and say that it is likely to rise as Chainalysis identifies more addresses associated with illicit activity and incorporates their transaction activity into our historical volumes. For instance, we found in our last Crypto Crime Report that 0.34% of 2020’s cryptocurrency transaction volume was associated with illicit activity — we’ve now raised that figure to 0.62%. Still, the yearly trends suggest that with the exception of 2019 — an extreme outlier year for cryptocurrency-based crime largely due to the […] scheme — crime is becoming a smaller and smaller part of the cryptocurrency ecosystem. Law enforcement’s ability to combat cryptocurrency-based crime is also evolving. We’ve seen several examples of this throughout 2021, […]
> However, we also have to balance the positives of the growth of legal cryptocurrency usage with the understanding that $14 billion worth of illicit activity represents a significant problem. Criminal abuse of cryptocurrency creates huge impediments for continued adoption, heightens the likelihood of restrictions being imposed by governments, and worst of all victimizes innocent people around the world. In this report, we’ll explain exactly how and where cryptocurrency-based crime increased, dive into the latest trends amongst different types of cybercriminals, and tell you how cryptocurrency businesses and law enforcement agencies around the world are responding. But first, let’s look at a few of the key trends in cryptocurrency-based crime […]
"Mid-year Crypto Crime Update: Illicit Activity Falls With Rest of Market, With Some Notable Exceptions" (August 2022) https://blog.chainalysis.com/reports/crypto-crime-midyear-up...
More concerned about the climate impact; about environmental sustainability: https://cryptoclimate.org/supporters/
And still, 99%+ of the market is incapable of assessing the software quality of the platforms underpinning the assets themselves, so we reach for market fundamentals or technical data other than software quality and information security assurances.
> In your opinion, high quality Open Source Software is the result of VC money?
Some of it? Oh most definitely. It just depends on your business model and whether you need to keep your code proprietary. If you don't, good to go.
REPL Driven Minecraft
It would be interesting to render the images from the POV of a player in the world onto a large movie screen in game. Then other players could join in, go to the cinema and watch the live stream. Might be fun for game lobbies.
It's been done. See https://m.youtube.com/watch?v=_VBO2WRpBrQ
"Public Minecraft: Pi Edition API Python Library" https://github.com/martinohanlon/mcpi
rendering-minecraft w/ mcpi might do offline / batch rendering? https://pypi.org/project/rendering-minecraft/
Jupyter-cadquery might be worth a look: https://github.com/bernhard-42/jupyter-cadquery :
> View CadQuery objects in JupyterLab or in a standalone viewer for any IDE [w/ pythreejs]
If there's a JS port or WASM build of Minecraft, Minecraft could be rendered in a headless browser?
Nice, I wonder if the open source Minetest can be used instead of Minecraft.
Not really a REPL, but can inject code into a running minetest instance: https://github.com/Quiark/mtrepl/
A Jupyter kernel for Minecraft would be neat: https://jupyter-client.readthedocs.io/en/latest/wrapperkerne... :
> You can re-use IPython’s kernel machinery to easily make new kernels. This is useful for languages that have Python bindings, such as Hy (see Calysto Hy), or languages where the REPL can be controlled in a tty using pexpect, such as bash.
Looks like there's a pytest-minecraft, too: https://pypi.org/project/pytest-minecraft/
/? Minecraft site:pypi.org https://pypi.org/search/?q=minecraft
SensorCraft is just the voxel designer part of Minecraft (like original Minecraft) with Pyglet for OpenGL: https://github.com/AFRL-RY/SensorCraft
A while back I looked into creating a Jupyter kernel for LDraw / LeoCAD / Bricklayer (SML) to build LEGO creations with code in the input cell of a notebook and render to the output cell, and put together some notes on Jupyter kernels that might be of use for creating a Minecraft / mcpi / SensorCraft Jupyter kernel: https://github.com/westurner/wiki/blob/master/bricklayer.md#...
- [ ] DOC: Jupyter-cadquery & pythreejs inspiration
- [ ] ENH: Minecraft Jupyter kernel (pytest-minecraft?,)
- [ ] ENH: mcpi Jupyter kernel
- [ ] ENH: SensorCraft Jupyter kernel
- [ ] ENH,BLD,DOC: SensorCraft: conda-forge environment.yml (target Mambaforge for ARM64 support, too)
- [ ] ENH,BLD,DOC: SensorCraft: build platform-specific binary installer(s) containing {CPython, all packages from environment.yml, Pyglet & OpenGL,} with e.g. Conda Constructor
- [ ] ENH,BLD,DOC: SensorCraft: reformat tutorials as Jupyter notebooks; for repo2docker, jupyter-book (template w/ repo2docker requirements.txt and/or conda/mamba environment.yml), & jupyter-lite, & VScode (because VSCode works on Win/Mac/Lin/Web and has Jupyter notebook support)
- [ ] ENH,BLD,DOC: SensorCraft: build a jupyter-lite build to run SensorCraft & Python & JupyterLab in WASM in a browser tab with no installation/deployment time cost
- [ ] ENH: SensorCraft: replace Pyglet (OpenGL) with an alternate WebGL/WebGPU implementation
Fast.ai's Practical Deep Learning for Coders Has Been Updated
"Practical Deep Learning for Coders" 2022: https://github.com/fastai/course22
"Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD" (2020) https://github.com/fastai/fastbook
There's also a new nbdev; nbdev v2: https://github.com/fastai/nbdev and nbdev-template to start new projects with: https://github.com/fastai/nbdev-template
Reducing methane is the fastest strategy available to reduce warming
Can we 3d print e.g. geodesic domes to capture the waste methane (natural gas) from abandoned well sites?
"NASA Instrument Tracks Power Plant Methane Emissions" (2020) https://www.jpl.nasa.gov/images/pia24019-nasa-instrument-tra... :: https://methane.jpl.nasa.gov/ (California,)
> NASA conducts periodic methane studies using the next-generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG) instrument. These studies are determining the locations and magnitudes of the largest methane emission sources across California, including those associated with landfills, refineries, dairies, wastewater treatment plants, oil and gas fields, power plants, and natural gas infrastructure.
"NASA 3D Printed Habitat Challenge" (2019) @CentennialChallenge: 3d print [habitats] with little to no water and local ~soil. https://www.google.com/search?q=NASA+3D+Printed+Habitat+Chal...
What about, I don't know, ~geodesic domes for capturing waste methane from abandoned wells?
VS Code – What's the deal with the telemetry?
A warning for people who, like me, long ago disabled telemetry in Visual Studio Code: Microsoft introduced a new setting that deprecates the two older settings... and the new setting defaults to "all"[1]. I just noticed this now after double-checking my settings. Sneaky, sneaky, Microsoft (but still totally expected).
[1] https://imgur.com/a/RbrsuyA
You will want these three settings in your default settings JSON (the first two just to be safe):
{
"telemetry.enableTelemetry": false,
"telemetry.enableCrashReporter": false,
"telemetry.telemetryLevel": "off"
}
Or just use VSCodium, the actually open source build, with no added closed source blobs and of course no telemetry.
It uses https://open-vsx.org, an open source host for VSX plugins.
Here's open-vsx's page for the Microsoft/vscode-jupyter extension: https://open-vsx.org/extension/ms-toolsai/jupyter
I think it's great that open license terms enable these sorts of forks and experiments so that the community can work together by sending pull requests.
Is there a good way to install code-server (hosted VSCode/vscodium in a browser tab) plugins from openvsx?
(E.g. the ml-workspace and ml-hub containers include code-server and SSH, which should be remotely-usable from vscodium?)
Learn to sew your own outdoor gear
I guess this is a good place to share my open source large format laser cutter design for sewing projects. It's cheap to make, works pretty well, and the whole gantry assembly slides right off, leaving just a sheet of plywood with low-profile 3D printed rails on the sides. So I throw my rug over it and it becomes my floor when not in use. Important, because the laser cutter can cut a full 60" wide piece of fabric two yards long. It's basically 5 foot by 6 foot, and I don't have space in my apartment for a dedicated machine that takes up all that space. But since this doubles as my floor it works great!

Also includes a Raspberry Pi camera on the laser head which serves as a pattern scanner. I really want to finish my video on this thing, I've just been busy. But please take a look and consider building it! If you have any questions, open a GitHub issue and I will do everything I can to help.

I think it's a great starting point (designed in three weeks) and I'd LOVE for other people to reproduce it and extend the design! The machine has a few hiccups, but I use it all the time for my sewing projects and it is SO nice to get all the cutting done repeatably and automatically. You can even scan existing clothes, often without disassembly, and turn those into digital patterns!
Could it tape seams for waterproofing?
This Solar LED umbrella could be a different fabric than cotton.
Just picked up a few tarps with a bunch of loops all around the sides and through the center for guidelines.
TIL hemp is antimicrobial and it can be mixed with {rayon,}.
Hmm I don’t know! That would be a great thing to experiment with!
FedNow FAQ
In India, we have UPI, a tokenized peer to peer payments network powered by the National Payments Corporation of India (NPCI).
Banks participate on the NPCI payment network.
I can assign a Virtual Payment Address (VPA) such as payxyz@mybank and give that VPA to anyone that needs to transfer funds to me. At the moment, there are no transaction fees, with a transaction limit of INR 200,000 (approx. $2500). This makes it unnecessary to share any sensitive bank routing numbers / account numbers, etc. One can create different VPAs for different purposes/payers.
UPI has had impressive growth in transaction volume (both # and monetary value) since its inception in 2016. https://www.npci.org.in/what-we-do/upi/product-statistics
The transfers are instantaneous (i.e., completed within seconds) and accounts are debited/credited immediately. The system works transparently across banks due to the network being managed by NPCI.
W3C ILP Interledger Protocol [1] specifies addresses [2]:
> Neighborhoods are leading segments with no specific meaning, whose purpose is to help route to the right area. At this time, there is no official list of neighborhoods, but the following list of examples should illustrate what might constitute a neighborhood:
> `crypto.` for ledgers related to decentralized crypto-currencies such as Bitcoin, Ethereum, or XRP.
> `sepa.` for ledgers in the Single Euro Payments Area.
> `dev.` for Interledger Protocol development and early adopters
From "ILP Addresses - v2.0.0" [2]:
> Example Global Allocation Scheme Addresses
> `g.acme.bob` - a destination address to the account "bob" held with the connector "acme".
> `g.us-fed.ach.0.acmebank.swx0a0.acmecorp.sales.199.~ipr.cdfa5e16-e759-4ba3-88f6-8b9dc83c1868.2` - destination address for a particular invoice, which can break down as follows:
> - Neighborhoods: us-fed., ach., 0.
> - Account identifiers: acmebank., swx0a0., acmecorp., sales, 199 (An ACME Corp sales account at ACME Bank)
> - Interactions: ~ipr, cdfa5e16-e759-4ba3-88f6-8b9dc83c1868, 2
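The breakdown above can be checked mechanically; the segment roles come from the spec text quoted here, but the splitting code itself is just an illustrative sketch:

```python
# Example Global Allocation Scheme address from the ILP Addresses spec.
ADDR = ("g.us-fed.ach.0.acmebank.swx0a0.acmecorp.sales.199."
        "~ipr.cdfa5e16-e759-4ba3-88f6-8b9dc83c1868.2")

segments = ADDR.split(".")
scheme = segments[0]           # "g": global allocation scheme
neighborhoods = segments[1:4]  # us-fed, ach, 0
accounts = segments[4:9]       # acmebank, swx0a0, acmecorp, sales, 199
interactions = segments[9:]    # ~ipr, <invoice id>, 2

assert scheme == "g"
assert neighborhoods == ["us-fed", "ach", "0"]
assert interactions[0] == "~ipr"
```

Note the slice boundaries between neighborhoods and account identifiers are given by the example's breakdown, not derivable from the address syntax alone.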
And from [3] "Payment Pointers and Payment Setup Protocols":
> The following payment pointers resolve to the specified endpoint URLs:
$example.com -> https://example.com/.well-known/pay
$example.com/invoices/12345 -> https://example.com/invoices/12345
$bob.example.com -> https://bob.example.com/.well-known/pay
$example.com/bob -> https://example.com/bob
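The resolution rules illustrated by those examples can be sketched as follows; this is a reading of the examples above, not normative code from the spec:

```python
def resolve_payment_pointer(pointer: str) -> str:
    """Map "$host[/path]" to its endpoint URL, per the examples above."""
    if not pointer.startswith("$"):
        raise ValueError("payment pointers start with '$'")
    rest = pointer[1:]
    # A bare host resolves to the well-known payment endpoint;
    # a pointer that already has a path keeps that path.
    if "/" not in rest:
        rest += "/.well-known/pay"
    return "https://" + rest


assert resolve_payment_pointer("$example.com") == \
    "https://example.com/.well-known/pay"
assert resolve_payment_pointer("$example.com/bob") == "https://example.com/bob"
```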
The WebMonetization spec [4] and docs [5] specify the `monetization` <meta> tag for indicating where supporting browsers can send payments and micropayments:

    <meta name="monetization" content="$wallet.example.com/alice">
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow 
noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" target="_blank" rel="nofollow noopener">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" 
target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" 
target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" 
target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">http://wallet.example.com/alice" rel="nofollow noopener" target="_blank">wallet.example.com/alice">
[1] https://github.com/interledger/rfcs/blob/master/0001-interle...
[2] https://github.com/interledger/rfcs/blob/master/0015-ilp-add...
[3] "Payment Pointers and Payment Setup Protocols" https://interledger.org/rfcs/0026-payment-pointers/
This is one of the best things about living in Southeast Asia.
Instant bank transfers.
Because of this you can just scan the QR code of your taxi or the fruit seller on the street, or transfer to your friends.
Everyone gets it instantly.
Now this is not third party services. It’s bank account to bank account. As a result things like PayPal or Cash app make no sense in this part of the world and have no uptake here.
In fact it’s a downgrade to use those kinds of services.
Remembering the bank issues in North America is quite painful. Hope it really happens next year.
FedNow is for US, interbank transactions only.
Do your country's favorite banks implement ILP Interledger Protocol; to solve for more than just banks' domestic transactions between themselves? https://github.com/interledger/rfcs/blob/master/0001-interle...
Ask HN: Why are bookmarks second class citizens in browsers?
so much of my time is spent in chrome tabs / windows / and searches. someone's average chrome tab count is a badge of honor / horror. Chrome now hides the bookmark bar by default. you can create tab groups for a session, or pin them so that they consume all your bandwidth and memory next time you open your session.
But that's not what i want.
i want rich bookmark behavior.
i want to be able to quickly load common favorite news sites & blogs.
or load a window with all my productivity SaaS sites.
or pick up where i left off on a research rabbit hole.
and i want it to be intuitive, efficient, and a prominent UX feature set.
i'm not alone right?
I thought it was a commonly held thought that Google didn't want Chrome users using bookmarks because it directly competed with their search (and advertising).
I think that's venturing into tinfoil hat territory. More likely telemetry told them that some percentage > 50 always toggle the bookmark bar off and they decided to make it the default. It's still there on new tabs after all. Companies like Google are long term greedy, they don't make unpopular UX decisions for scraps.
> Companies like Google are long term greedy, they don't make unpopular UX decisions for scraps.
Like not letting you add tags? not tagging definitely makes you use google search more.
"Do you like the browser bookmark manager?" https://news.ycombinator.com/item?id=24511454 :
> Things I'd add to browser bookmark managers someday:
> - Support for (persisting) bookmarks tags. From the post re: the re-launch of del.icio.us: https://news.ycombinator.com/item?id=23985623
>> "Allow reading and writing bookmark tags" https://bugzilla.mozilla.org/show_bug.cgi?id=1225916
>> Notes re: how [browser bookmarks with URI|str* tags] could be standardized with JSON-LD: https://bugzilla.mozilla.org/show_bug.cgi?id=1225916#c116 *
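For illustration, a hedged sketch of what a tagged bookmark could look like as JSON-LD. The field names here are illustrative only, loosely following schema.org conventions; this is not the shape proposed in the linked bug:

```python
import json

# Hypothetical JSON-LD record for a bookmark with tags; "@type" and
# "keywords" loosely follow schema.org, but this exact shape is a sketch.
bookmark = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "url": "https://example.com/article",
    "name": "Example article",
    "keywords": ["bookmarks", "tagging", "json-ld"],
}
print(json.dumps(bookmark, indent=2))
```

The point of a structured shape like this is that tags round-trip losslessly through export/import, unlike the folder-only Netscape bookmark HTML format.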
***
WICG/scroll-to-text-fragment "Integration with W3C Web Annotations" https://github.com/WICG/scroll-to-text-fragment/issues/4
***
W3C Web Share API & W3C Web Target API https://news.ycombinator.com/item?id=30449716
Millet, a Language Server for SML
Cool project!
I always thought SML was a great language for teaching computer science and functional programming.
A lot more fun than learning Java. With ML, you got glimpses of aesthetic beauty in code.
The Bricklayer IDE and Bricklayer-Lite are SML IDEs FWIU [1]. Could Millet and/or the Millet VSCode extension and/or the SML/NJ Jupyter kernel [2] be useful for creating executable books [3][4] for learning?
[1] https://bricklayer.org/level-1/ :
> Bricklayer libraries provide support for creating 2D and 3D block-based artifacts. Problem-solving and math are used to exercise creative and artistic skills in a fun and innovative environment. Bricklayer integrates with third-party software including: LEGO Digital Designer, LDraw, Minecraft, and 3D Builder (LeoCAD; `dnf install -y leocad`)
[2] https://github.com/matsubara0507/simple-ismlnj
[3] https://github.com/executablebooks
[4] https://executablebooks.org/en/latest/ (Jupyter-Book: Sphinx (Docutils (.rst ReStructuredText), .md)), MyST-Parser (.md MyST Markdown), Jupyter Kernels (.ipynb Jupyter Notebooks),)
The Cha Cha Slide Is Turing Complete
Turing completeness: https://en.wikipedia.org/wiki/Turing_completeness
Church-Turing thesis: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
Is Church-Turing expected to apply to concurrent, quantum computers? To actual quantum physical simulation, or qubits, or qudits? Presumably there's already a quantum analog.
What is quantum field theory and why is it incomplete?
Still the basis of the most precise prediction by a Physics theory:
https://physicstoday.scitation.org/doi/10.1063/PT.3.2223
I'm always amazed by quantum field theories, with a particular taste for the two dimensional ones (as string theorist :))
Also the basis of the least precise prediction made by a physics theory: https://en.wikipedia.org/wiki/Cosmological_constant_problem
> Depending on the Planck energy cutoff and other factors, the discrepancy is as high as 120 orders of magnitude
yes and no. physical theories always have a domain of validity where their prediction make sense and QFT and QM do not give an absolute value of energy (density). sure you can do QFT on curved spacetime (Hawking had some success w/ it) but comparing the QFT vacuum energy with the cosmological constant is sloppy at best.
but yeah, sure, it doesn't work for that :)
The Dymaxion car: Buckminster Fuller’s failed automobile
Car Talk drove a replica.[1] "You’ve pushed shopping carts with broken casters that handle better. ... No one in his or her right mind would ever venture above 45 miles per hour because of the lousy handling."
[1] https://www.cartalk.com/blogs/jamie-lincoln-kitman/test-driv...
Good old Car Talk.
What about with biopolymer superstructure at least, batteries in the surfboard floor, an elegant teardrop airfoil, regenerative braking with natural branching carbon anodes, and an awning?
Those are all solutions to problems the Dymaxion car didn't have?
The main complaints of the Car Talk article are the awful handling resulting from rear wheel steering (giving a bad caster angle resulting in no self-centering-- all modern tricycle motor vehicles put the steer tires at the front, even if it complicates the linkages) and the aerodynamic shape that resulted in a large car with surprisingly little usable interior room. (a classic Buckminster problem, as this is still the big downside of geodesic domes)
All the big compromises of the Dymaxion car came from the shape, which let it hit a drag coefficient of 0.25. But modern car engineering has simply passed it by: a Prius is 0.24 and a Model S is 0.208. There's no reason to accept the downsides of a Dymaxion car today, which is why nobody ever copied the design.
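For scale, aerodynamic drag force is F = ½·ρ·v²·C_d·A, so those drag coefficients translate almost directly into highway drag force. A quick sketch, using an assumed common frontal area purely for illustration (real frontal areas differ per car):

```python
# Drag force F = 0.5 * rho * v^2 * Cd * A.
rho = 1.225   # air density at sea level, kg/m^3
v = 29.0      # ~65 mph, in m/s
A = 2.3       # assumed frontal area in m^2 (illustrative, same for all cars here)

def drag_newtons(cd):
    """Aerodynamic drag force in newtons for a given drag coefficient."""
    return 0.5 * rho * v ** 2 * cd * A

for name, cd in [("Dymaxion", 0.25), ("Prius", 0.24), ("Model S", 0.208)]:
    print(f"{name} (Cd={cd}): {drag_newtons(cd):.0f} N")
```

With the same frontal area, the Model S's 0.208 gives roughly 17% less drag force than the Dymaxion's 0.25, without any of the handling compromises.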
Top Secret Rosies: The Female “Computers” of WWII
Similar ones:
"Code girls : the untold story of the American women code breakers of World War II" (2017) https://g.co/kgs/CBSxQv https://en.wikipedia.org/wiki/Code_Girls
"Hidden Figures" (2016) https://en.wikipedia.org/wiki/Hidden_Figures https://g.co/kgs/m2fFvN
Women in science : https://en.wikipedia.org/wiki/Women_in_science
Also, the members of the British Women's Royal Naval Service serving in the Western Approaches Tactical Unit during WWII, who developed a great number of extremely effective tactics for fighting U-boats, as well as effectively reverse engineering bits and pieces of German technology that had never been physically seen by analyzing the tactics of their employment.
https://en.wikipedia.org/wiki/Western_Approaches_Tactical_Un...
The associated documentary is "free" to stream on Kanopy. We enjoyed it.
"Top Secret Rosies: The Female “Computers” of WWII" (2010) https://www.kanopy.com/en/product/122786 https://g.co/kgs/49A9bD
There are at least hundreds of mostly-masculine dude war movies. Since said films are about just regular dudes in war, and are historical, is there any appropriate gender outrage?
A person can simply superimpose a whole separate identity and preference agenda onto the storyline, which - if non-historical - may be all characters from one writer, whose paintings at least aren't at all obligated to be representative samples.
I recall that most war movies I've seen tend to have women in the resistance movements, especially the French one. The only place where there are only men is among those who were drafted, since the military draft only forced men to go to war.
For shots focusing on the factories, again women tend to be featured. With most young men drafted to the military front, and much of the civil industry being diverted to produce war materiel, women were critical to fill the need for workers. Top Secret Rosies seems to be illustrating that fact.
One major demographic that old war movies tend to not show is children. When half the work group is drafted, and the other half is diverted to do both the civil and the war production (and everything else in private life), children were not just playing in the fairground or studying in school. It is however not very nice to illustrate child labor, so old and new war films tend to avoid that.
Rehabilitation and reintegration of child soldiers > See also: https://en.wikipedia.org/wiki/Rehabilitation_and_reintegrati...
> Children in the military, History of children in the military, Impact of war on children, Paris Principles (Free Children from War Conference), Children in emergencies and conflicts, Children's rights, Stress in early childhood
Disaster Relief procedures for us all: https://www.ready.gov/plan :
> How will I receive emergency alerts and warnings?
> What is my shelter plan?
> What is my evacuation route?
> What is my family/household communication plan?
> Do I need to update my emergency preparedness kit?
The World Excel Championship is being broadcast on ESPN
TIL about Excel e-Sports. From https://www.fmworldcup.com/excel-esports/ :
> No finance, just Excel and logical thinking skills. Use IFS, XLOOKUP, SUM, VBA, Power Query: anything is allowed, and the strategy is up to you.
It says VBA, so presumably named variables are allowed in the Excel spreadsheet competition.
What about Python and e.g. Pandas or https://www.pola.rs/ ?
> Some events are invitational, some are open for everyone to participate in.
> All battles are live-streamed and can be spectated on our Youtube channel.
> What about Python and e.g. Pandas or https://www.pola.rs/ ?
I think it is safe to assume that only Microsoft Excel is allowed. The features they listed are all built in features.
Kaggle speed runs as a sport would be interesting.
They really would. I feel like kaggle is mostly teams slowly grinding for very small incremental improvements over each other. Adding a time component would really sort the wheat from the chaff.
It would certainly change the dynamic. Not sure about wheat from the chaff - faster != better.
If you can write it in 20 minutes, I'd be impressed. Excel has everything integrated with macros already so it is easier to use VBA.
There are many ways to do ~no-code GUI analysis with [Jupyter] notebooks (e.g. with the VSCode jupyter notebook support) pretty quickly; with some pointing and clicking of the mouse if you please.
Computational Notebook speedrun competition ideas:
- Arithmetic w/ `assert`: addition/subtraction, multiplication/division, exponents/logarithms; and then again with {NumPy, SymPy, Lean Mathlib}
- Growth curve fitting; data modeling with linear and better models
- "Principia" work-along notebooks
- "Princeton Companion to [Applied] Mathematics" work-along notebooks
- Khan Academy work-along notebooks
> - Arithmetic w/ `assert`: addition/subtraction, multiplication/division, exponents/logarithms; and then again with {NumPy, SymPy, Lean Mathlib}
"How I'm able to take notes in mathematics lectures using LaTeX and Vim" https://news.ycombinator.com/item?id=19452752 ...
"How should logarithms be taught?" https://news.ycombinator.com/item?id=28519356
Any good recommendations for starting points to learn how to work with these? I'm not a programmer, but do have programming ability, and use excel all the time. Having a more flexible system with better inputs (tying in web scraping, data pipelines, etc) sounds like a great idea.
Kaggle learn: https://www.kaggle.com/learn
"Python Data Science Handbook" (O'Reilly) http://jakevdp.github.io/PythonDataScienceHandbook
awesome-python#web-content-extracting: https://github.com/vinta/awesome-python#web-content-extracti...
https://learnxinyminutes.com/docs/python/
"What is the best way to learn to code from absolute scratch?" https://news.ycombinator.com/item?id=16299880
I would assume you have to use stock. Excel/Office ships with a VBA IDE that you get to with alt-F11 (and I think by viewing macro source through the menus.)
Alt-F11, thanks. Is there a macro recorder built into Excel that could make learning VBA syntax and UI-API equivalencies easy?
There's a macro recorder, but the code it generates isn't meant to be efficient or readable.
VBA is easy, you can just google it. It's very bad, but once you know it well, and if you stick to a functional style, you can do what you want. For Excel, get to know Range objects well, and to wring some polymorphism out of VBA, learn interfaces.
I've seen basically Mario in VBA in Excel, so I know it's real. "Show formulas" && [VBA, JS] vs an app with test coverage.
Are unit and/or functional automated tests possible with VBA; for software quality and data quality?
GSheets has Apps Script; which is JS, so standard JS patterns work. It looks like Excel online can run but cannot edit VBA macros; and Excel online has a JS API, too.
Quantum in the Chips and Science Act of 2022
> […] Grow a diverse, domestic quantum workforce: The expansion of the Federal Cyber Scholarship-For-Service Program to include artificial intelligence and quantum computing will bolster the Nation’s cyber defense against threats from emerging technologies, while quantum’s addition to the DOE Computational Science Graduate Fellowship program will expand the current workforce. The NSF Next Generation Quantum Leaders Pilot Program authorized by this legislation, and which builds upon NSF’s role in the Q-12 Education Partnership, will help the Nation develop a strong, diverse, and future-leaning domestic base of talent steeped in fundamental principles of quantum mechanics, the science that underlines a host of technologies. […]
From #Q12 > QIS K-12 Framework: https://q12education.org/learning-materials/framework :
> When relevant to the STEM subject, employ a learning cycle approach to develop models of quantum systems and phenomena, plan and carry out investigations to test their models, analyze and interpret data, obtain, evaluate and communicate their findings
Where does energy go during destructive interference?
From Wave_interference#Quantum_interference https://en.wikipedia.org/wiki/Wave_interference#Quantum_inte... :
> Here is a list of some of the differences between classical wave interference and quantum interference:
> - In classical interference, two different waves interfere; In quantum interference, the wavefunction interferes with itself.
> Classical interference is obtained simply by adding the displacements from equilibrium (or amplitudes) of the two waves; In quantum interference, the effect occurs for the probability function associated with the wavefunction and therefore the absolute value of the wavefunction squared.
> The interference involves different types of mathematical functions: A classical wave is a real function representing the displacement from an equilibrium position; a quantum wavefunction is a complex function. A classical wave at any point can be positive or negative; the quantum probability function is non-negative.
> In classical optical interference the energy conservation principle is violated as it requires quanta to cancel. In quantum interference energy conservation is not violated, the quanta merely assume paths per the path integral. All quanta for example terminate in bright areas of the pattern.
From Conservation_of_energy https://en.wikipedia.org/wiki/Conservation_of_energy :
> Classically, conservation of energy was distinct from conservation of mass. However, special relativity showed that mass is related to energy and vice versa by E = mc2, and science now takes the view that mass-energy as a whole is conserved. Theoretically, this implies that any object with mass [e.g. photons, and other massful particles] can itself be converted to pure energy, and vice versa. However this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation.
From Conservation_of_energy#Quantum_theory https://en.wikipedia.org/wiki/Conservation_of_energy#Quantum... :
> In quantum mechanics, energy of a quantum system is described by a self-adjoint (or Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, emergence probability of the measurement result does not change in time over the evolution of the system. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for energy-momentum tensor operator. Due to the lack of the (universal) time operator in quantum theory, the uncertainty relations for time and energy are not fundamental in contrast to the position-momentum uncertainty principle, and merely holds in specific cases (see Uncertainty principle). Energy at each fixed time can in principle be exactly measured without any trade-off in precision forced by the time-energy uncertainty relations. Thus the conservation of energy in time is a well defined concept even in quantum mechanics.
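The classical half of the "where does the energy go" question can be checked numerically: the intensity at a destructive point is zero, but averaged over the whole interference pattern (all relative phases) the energy equals the sum of the two individual intensities. It is redistributed, not destroyed. A small stdlib sketch for two unit-amplitude waves:

```python
import cmath

# Intensity of two superposed unit-amplitude waves at relative phase phi:
# I(phi) = |1 + e^(i*phi)|^2 = 2 + 2*cos(phi).
def intensity(phi):
    return abs(1 + cmath.exp(1j * phi)) ** 2

# At the destructive point (phi = pi) the intensity vanishes...
assert intensity(cmath.pi) < 1e-12

# ...but averaged over all phases, the energy is 1^2 + 1^2 = 2: the bright
# fringes exactly make up for the dark ones.
n = 10_000
average = sum(intensity(2 * cmath.pi * k / n) for k in range(n)) / n
print(round(average, 6))
```

The same bookkeeping holds in real interferometers: energy missing from one output port shows up in the other.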
[deleted]
Adding Auditing to Pip
Looks like I'm a little bit late to this article about the mailing-list discussion about adding ~CVE search to pip.
The broader trend here is to identify the names, versions, and hashes of all installed software packages in all languages and present an SBOM [1][2]. Does/would `pip audit` also look up CVE vulns for extension modules written in other programming languages like C, Go, and Rust; or are there existing tools that also already look up vulns for Python packages?
[1] https://westurner.github.io/hnlog/ ; Ctrl-F "SBOM"
[2] "Existing artifact vuln scanners, databases, and specs?" https://github.com/google/osv.dev/issues/55#issue-802542447
FWIW, from the OSV README https://github.com/google/osv.dev:
> This is an ongoing project. We encourage open source ecosystems to adopt the OpenSSF Vulnerability format to enable open source users to easily aggregate and consume vulnerabilities across all ecosystems. See our blog post for more details.
> The following ecosystems have vulnerabilities encoded in this format:
> GitHub Advisory Database (CC-BY 4.0), PyPI Advisory Database (CC-BY 4.0), Go Vulnerability Database (CC-BY 4.0), Rust Advisory Database (CC0 1.0), Global Security Database (CC0 1.0) OSS-Fuzz (CC-BY 4.0)
> Together, these include vulnerabilities from:
> npm, Maven, Go, NuGet, PyPI, RubyGems, crates.io, Packagist, Linux, OSS-Fuzz
> Does/would `pip audit` also lookup CVE vulns for extension modules written in other programming languages like C, Go, and Rust
For the time being, the plan is to focus `pip-audit` on vulnerabilities in Python packages themselves, not native dependencies of packages that happen to contain binary extensions. The line is, however, blurry: if a Python package is written in C or Rust and has a vulnerability therein, that would be considered a Python ecosystem vulnerability for the purposes of the PYSEC DB. A vulnerability in libssl that happens to be exposed via a Python extension, however, would not be.
So something more comprehensive for the complete SBOM for all languages and extension modules is also advisable for "[Software] Component Inventory and Risk Assessment".
From the article:
> The PyPA maintains an advisory database that stores vulnerabilities affecting PyPI packages in YAML format. For example, pip-audit reported that the version of Babel on my system is vulnerable to PYSEC-2021-421, which is a local code-execution flaw. That PYSEC advisory refers to CVE-2021-42771, which is how the flaw is known to the wider world.
> As it turns out, my system is actually not vulnerable to CVE-2021-42771, as the Ubuntu security entry shows. The pip-audit tool looks at the version numbers of the installed PyPI package to decide which are vulnerable, but Linux distributions regularly backport fixes into earlier versions so the PyPI package version number does not tell the whole story—at least for those who get their PyPI packages from distributions rather than via pip.
pypa/advisory-database: https://github.com/pypa/advisory-database
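The version-matching pitfall described above is easy to see in a sketch: an auditor compares the installed version against the advisory's fixed-version boundary, which is exactly what breaks when a distro backports the fix without bumping the version. The advisory dict below is a simplified, hypothetical stand-in for an OSV/PYSEC-format record, and the version parsing is deliberately naive (real tools use packaging.version):

```python
# Simplified, hypothetical advisory in the spirit of the OSV/PYSEC schema.
advisory = {
    "id": "PYSEC-XXXX-NNN",   # placeholder id, not a real advisory
    "package": "babel",
    "introduced": "0",
    "fixed": "2.9.1",          # versions below this boundary are flagged
}

def parse(version):
    # Naive dotted-version parse; real auditors use packaging.version.Version.
    return tuple(int(part) for part in version.split("."))

def is_flagged(installed, adv):
    return parse(adv["introduced"]) <= parse(installed) < parse(adv["fixed"])

# A distro may ship "2.8.0" with the fix backported; a pure version check
# still flags it, which is the false positive the article describes.
print(is_flagged("2.8.0", advisory))   # True, even if backport-patched
print(is_flagged("2.9.1", advisory))   # False
```

This is why distro-packaged installs need either distro security feeds or patch-level metadata in the SBOM, not just upstream version numbers.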
Tesla’s self-driving technology fails to detect children in the road, tests find
Here is a comparison of this test with an older video:
https://twitter.com/TaylorOgan/status/1478802681141645322 (This is January 5th 2022)
Same author, same test, months later:
https://twitter.com/TaylorOgan/status/1556991029814886404 (This is August 9th 2022)
The difference? There is absolutely no improvement.
Expecting that the FSD (Fools Self Driving) beta software would get better over time as promised by Elon, this video shows that it doesn't look good at all: with this latest failure, drivers on the road and even pedestrians are no better off and no safer with these updates.
It is not early days anymore. Even with these updates, it is still an unsafe contraption which has not only been falsely and deceptively advertised by Tesla; it is nowhere near the claimed robo-taxi level of FSD (Level 4-5). With the number of price increases over the years for a product which doesn't work, it looks more like a scam.
At this point, FSD is essentially Fools Self Driving.
Accident rate per million miles of driving.
Which competing EVs do you like and why?
FSD needs to be substantially better than the average human driver - not on par or worse (which this is).
Nobody is going to tolerate FSD vehicles bugging out and randomly driving into parked cars or pedestrians, killing people in the process.
You can tout millions of miles all day ever day, but it doesn't matter. When these vehicles are put into even semi-tricky situations, they fail far too often. Tesla's software doesn't even require tricky situations... it'll just randomly drive you into a wall or fire truck without warning.
Why? Human drivers do all of those things. Why should I care if I'm killed by a computer or a human? At least if I'm killed by a computer, the data will go into the training set and make future computers safer. But human drivers will learn nothing.
Despite the narrative being spun by FSD companies like Tesla and the rest - human drivers are actually really good at navigating complex, dynamic situations.
These systems however, are not.
I think most folks would rather not be killed because Elon's ego decided a decade ago that Tesla would never consider Lidar sensors, for example, despite the mounting evidence simple cameras are not sufficient.
Recall the Toyota stuck accelerator pedal issue years back? There are people who still refuse to own a Toyota.
What about the Boeing 737-MAX fiasco? An entirely software-related bug caused a statistically trivial and insignificant number of people to lose their lives - yet today there are still people who will never fly on a Boeing aircraft again, despite the issue being resolved.
People will not tolerate loss of life from these systems. They must be substantially better than the average human for the public to be interested.
> human drivers are actually really good at navigating complex situations. These systems however, are not.
This determination needs to be made with data. As far as I can tell there is no conclusive evidence that Teslas on autopilot are involved in more crashes per mile than human drivers, and some evidence that it is safer, though that comes from Tesla, so not giving that much weight. So either it's somewhere close to as safe as human drivers, or data collection in this area is just really bad. I don't know which.
The evidence provided by the likes of Tesla does not compare to real-world driving. They cannot produce that evidence yet, despite however many millions of miles on test circuits and well-known roads they want to tout - it's a false statistic designed to deliberately mislead the public into thinking the software is better than it really is.
The burden of proof is on the FSD companies, not average drivers. The likes of Tesla have to prove their software is far superior to human drivers - and so far they have failed to do so.
There are far too many FSD beta videos out recorded by real users that demonstrate obvious safety issues. The software is just not there yet, and may not be there for a long time, if ever.
Articles like this come out far too often to just be discarded as media "hit" pieces.
>Accident rate per million miles of driving.
Are all driving miles equivalent? Do you think accidents that occur with FSD enabled are the same as where they most often occur under normal human control? What proportion of miles are driven in sunny US states?
> Are all driving miles equivalent?
Which metric is proposed?
Is this an argument supported by data? What about a sound experimental design?
Perhaps we could look to medicine (instead of automotive engineering) for guidance regarding which reductions in stratified accident incidence rate are significant and warrant continued investment in sensor fusion and assistive AI?
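The "are all driving miles equivalent" point is sharper with numbers: if one fleet's miles skew toward easy highway driving, its pooled accidents-per-million-miles can look better even when it is worse in every road type (Simpson's paradox). All figures below are made up purely to illustrate the arithmetic:

```python
# Made-up figures: (accidents, miles) per road type for two fleets.
fsd =   {"highway": (6, 9_000_000),  "city": (4, 1_000_000)}
human = {"highway": (5, 10_000_000), "city": (30, 10_000_000)}

def per_million(accidents, miles):
    return accidents / miles * 1_000_000

# Per-stratum, the FSD fleet here is worse on BOTH road types...
for road in ("highway", "city"):
    assert per_million(*fsd[road]) > per_million(*human[road])

# ...yet pooled, it looks better, because its miles are mostly easy
# highway miles. This is why stratified rates matter.
def pooled(fleet):
    accidents = sum(a for a, _ in fleet.values())
    miles = sum(m for _, m in fleet.values())
    return per_million(accidents, miles)

print(f"FSD pooled: {pooled(fsd):.2f}, human pooled: {pooled(human):.2f}")
```

Medicine handles exactly this with stratified incidence rates and controlled trial design, which is the point of the comparison above.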
Why do tree-based models still outperform deep learning on tabular data?
AlphaFold's database grows over 200x to cover nearly all known proteins
Much discussion a few days ago: https://news.ycombinator.com/item?id=32262856
The top comment there (from a structural biologist) is worth reading. Here's my opinion, as a computer scientist that worked in this area.
A protein sequence is analogous to a computer program, but the "machine" is a mostly-water solution, and the instructions are interpreted by summing up all the intermolecular forces at play as the sequence is squirted out of a little extruder as a string (as in the stuff your clothes are made of, not text). Bits of the string repel and attract each other and it globs up in some biologically useful structure. The folding problem is the problem of predicting that structure from the string.
Unlike the halting problem, there is no way to generate an execution trace saying a particular glob would be formed in reality. In fact, there is no way to perform a polynomial time check of the result, so we've already escaped the land of P and NP.
Also, things like temperature and proximity to other proteins mean that there might not be a unique fold for a given sequence. Therefore, like the halting problem, we have unknown inputs, and we need to figure out which states an arbitrary program can reach.
When someone claims to have "solved" folding, you should be as skeptical as you would be if someone claimed to have solved the halting problem for arbitrary machine code, and that they don't need any extra information about the machines that run that code. Although their program runs on conventional computers, it also works on programs written for quantum computers.
(Edit: That's not to say this work isn't useful, or that this press release overclaims. I've been hearing some pretty wild claims about this work elsewhere...)
For an unrelated reason, shortly after making that comment, I put 31 genes from a viral genome (the whole genome, assuming we have the reading frames correct and nothing else funky is going on) through AlphaFold. We're getting ready to do some proteomics to see what's in the capsid, and I wanted to inform the proteomics by doing some sequence analysis. Only three genes of the 31 came back with any sort of confidence. Two of the three were crystallized and solved by my group a few years back.
Is the AlphaFold team winning Folding@home? (which started at Washington University in St. Louis, home of the Human Genome Project)
FWIU, Folding@home poses additional problems for AlphaFold, if not for the AlphaFold team. From the Folding@home site:
> Install our software to become a citizen scientist and contribute your compute power to help fight global health threats like COVID19, Alzheimer’s Disease, and cancer. Our software is completely free, easy to install, and safe to use. Available for: Linux, Windows, Mac
> which started at Washington University in St. Louis, home of the Human Genome Project
While Wash U was a contributor, I am confused about why you call it the home of the Human Genome Project. The Project seems a lot more strongly linked to the Whitehead/MIT in terms of press and the site of key figures.
https://en.wikipedia.org/wiki/Human_Genome_Project #History
"The Cost of Sequencing a Human Genome" https://www.genome.gov/about-genomics/fact-sheets/Sequencing...
If we're just sharing links, I have one too:
Both projects tackle related problems, but each is trying to answer a different question: https://news.ycombinator.com/item?id=32264059
> Folding@home answers a related but different question. While AlphaFold returns the picture of a folded protein in its most energetically stable conformation, Folding@home returns a video of the protein undergoing folding, traversing its energy landscape.
Is there any NN architectural reason that AlphaFold could not learn and predict the Folding@home protein folding interactions as well? Is there yet an open implementation?
I think it would be much harder to do that, since it probably requires modelling physics at some level, while AlphaFold is really just mining statistical correlations of structures and sequences.
Yes, there are open implementations of nearly-AlphaFold at this point.
FWIU there's no algorithmic reason that AlphaZero-style self play w/ rules could not learn the quantum chemistry / physics. Given the infinite monkey theorem, can an e.g. bayesian NN learn quantum gravity enough to predictively model multibody planetary orbits given an additional solar mass in transit through the solar system? (What about with "try Bernoulli's on GR and call it superfluid quantum gravity" or "the bond yield-curve inversion is a known-good predictor, with lag" as Goal-programming nudges to distributedly-partitioned symbolic EA/GA with a cost/error/survival/fitness function?)
E.g. re-derivations of Lean Mathlib would be the strings to evolve.
Django 4.1
The most anticipated Django release for me personally. Async ORM access is a massive quality of life improvement, same for async CBV. The latter is also a blocker for Django Rest Framework going async.
Sure most CRUD applications work perfectly fine using WSGI, but for anyone using other protocols like WS or MQTT in their app this is kind of a big deal.
## Asynchronous ORM interface
https://docs.djangoproject.com/en/4.1/releases/4.1/#asynchro... :
> `QuerySet` now provides an asynchronous interface for all data access operations. These are named as-per the existing synchronous operations but with an `a` prefix, for example `acreate()`, `aget()`, and so on.
> The new interface allows you to write asynchronous code without needing to wrap ORM operations in `sync_to_async()`:
    async for author in Author.objects.filter(name__startswith="A"):
        book = await author.books.afirst()
> Note that, at this stage, the underlying database operations remain synchronous, with contributions ongoing to push asynchronous support down into the SQL compiler, and integrate asynchronous database drivers. The new asynchronous queryset interface currently encapsulates the necessary sync_to_async() operations for you, and will allow your code to take advantage of developments in the ORM’s asynchronous support as it evolves. […] See Asynchronous queries for details and limitations.

## Asynchronous handlers for class-based views
> View subclasses may now define async HTTP method handlers:
    import asyncio

    from django.http import HttpResponse
    from django.views import View

    class AsyncView(View):
        async def get(self, request, *args, **kwargs):
            # Perform view logic using await.
            await asyncio.sleep(1)
            return HttpResponse("Hello async world!")
I don't fully understand the sync_to_async() operation here. So the underlying database is not async, then is Django just tossing the sync DB call onto a thread or something?
Yes, it is just being put on a thread. A future Django release will use async database drivers. Django is very incrementally adding support for async. I don't believe they have started work on async templates yet, which will be one of the last major areas before the framework is fully async.
Yes, they're working from the top of the stack down, so first it was views+middleware, now it's the top layer of the database stack, and in future releases they'll work on pushing native async lower and lower in the stack.
Basically every function call that does db-queries needs an "a"-prefixed version. aget(), acreate(), aget_or_create(), acount(), aexists(), aiterator(), __aiter__, etc. And in future versions they'll work on the lower level, undocumented api, like _afetch_all(), aprefetch_related_objects(), compiler.aexecute_sql(), etc.
Eventually you'll probably need to switch database drivers to something that actually supports async.
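A stdlib-only sketch of "tossing the sync DB call onto a thread" (an assumption about the mechanism for illustration; Django actually uses asgiref's `sync_to_async`, and `blocking_query`/`aget` here are invented names):

```python
# Minimal model of an `a`-prefixed ORM method: run the blocking call
# in a worker thread and await the result from the event loop.
import asyncio
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)

def blocking_query():
    # stands in for a synchronous ORM call, e.g. Author.objects.first()
    return "row"

async def aget():
    loop = asyncio.get_running_loop()
    # the sync call runs on a thread; the coroutine awaits its result
    return await loop.run_in_executor(_executor, blocking_query)

result = asyncio.run(aget())
print(result)
```

The event loop stays free to serve other requests while the thread blocks on the database, which is why this helps WS/MQTT-style workloads even before the drivers themselves go async.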
Why do they need Async templates? When a template renders it should be pure data and fully on the CPU. It seems Async templates would encourage a bad behavior of doing n+1 queries inside the HTML.
Django makes a lot of use of lazy evaluation. You'll often construct a QuerySet object in a Django view like this:
    entries = Entry.objects.filter(
        category="python"
    ).order_by("-created")[:10]
Then pass that to a template which does this: {% for entry in entries %}...
Django doesn't actually execute the SQL query until the template starts looping through it. Async template rendering becomes necessary if you want the templates to be able to execute async SQL queries in this way.
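A tiny stdlib-only sketch of that lazy-evaluation behavior (the class and flag names here are invented, not Django's internals): building the "queryset" does no work; iteration triggers it.

```python
# Hypothetical LazyQuerySet: filter() only composes; __iter__ "executes".
class LazyQuerySet:
    def __init__(self, rows, predicate=None):
        self._rows = rows
        self._predicate = predicate or (lambda r: True)
        self.executed = False  # visible flag standing in for "SQL ran"

    def filter(self, predicate):
        # returns a new lazy object; nothing is executed yet
        return LazyQuerySet(self._rows, predicate)

    def __iter__(self):
        # the "query" only runs when something (e.g. a template loop)
        # actually iterates over the queryset
        self.executed = True
        return iter(r for r in self._rows if self._predicate(r))

qs = LazyQuerySet(["python", "rust"]).filter(lambda r: r.startswith("p"))
assert qs.executed is False   # constructing the query did nothing
entries = list(qs)            # "{% for entry in entries %}" happens here
assert entries == ["python"]
```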
Jinja has this feature already with the enable_async=True setting - I wrote a tiny bit about that in https://til.simonwillison.net/sqlite/related-content
My immediate thought would be custom template tags. Like `{% popular_articles %}` would have to connect to the DB and load the articles. I am sure there are other reasons too.
Exactly: it fires the database call in a separate thread, with a coroutine awaiting that thread's result.
Ask HN: Is there a tool / product that enables commenting on HTML elements?
I'm searching for a commercial product that enables commenting on HTML elements on a product so it can be shared with an org. Think google comments connected to Inspect Element in chrome that can be shared with an organization.
I want to get away from creating tickets or sending emails highlighting UX concerns. This is cumbersome and not very transparent or collaborative.
Does a tool like this already exist?
Hypothesis/h is a (Pyramid) web app with a JS widget that can be added to the HTML of a page manually or by the Hypothesis browser extension, which enables you to also comment on PDFs, WAVs, GIFs: https://github.com/hypothesis/h
From the W3C Web Annotation spec that Hypothesis implements https://www.w3.org/TR/annotation-model/ :
> Selectors: Fragment Selector, CSS Selector, XPath Selector, Text Quote Selector, Text Position Selector, Data Position Selector, SVG Selector, Range Selector, Refinement of Selection
> States: Time State, Request Header State, Refinement of State
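For a sense of what a Hypothesis-style commenting tool stores per comment, here is a minimal Web Annotation (JSON-LD) using a Text Quote Selector, per the W3C model above (the target URL and text values are invented examples):

```python
# A minimal W3C Web Annotation record anchoring a UX comment to a
# quoted span of text on a page.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "This button label is confusing.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.com/app/settings",  # page being annotated
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "Save changes",   # the quoted text being commented on
            "prefix": "settings: ",    # disambiguating context before
            "suffix": " now",          # disambiguating context after
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the anchor is the quoted text plus context rather than a brittle DOM path, the comment can survive moderate changes to the page's HTML.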
Coinbase does not list securities. End of story
Of course Coinbase will say they're not securities.
All crypto should be treated as securities. The rampant insider trading, lack of transparency in project finances and decision making is insane.
I've consulted for a few crypto projects. In most of them, the founders completely misused ICO funds for personal gains and traded their own tokens with inside info.
If your project token is available to be purchased and sold in the US, they should be subject to the same rules other securities follow.
Scamming people shouldn't be this easy.
Just because you can insider trade doesn't make something a security in my view. You can do insider trading around gold or palladium or any other commodity for instance. You can do insider trading with property and land. You can even do insider trading with currencies.
They're not securities because you can insider trade them -- they're securities because they fairly clearly meet the Federal definition;
> The term “security” means any note, stock, treasury stock, security future, security-based swap, bond, debenture, evidence of indebtedness, certificate of interest or participation in any profit-sharing agreement, collateral-trust certificate, preorganization certificate or subscription, transferable share, investment contract, voting-trust certificate, certificate of deposit for a security, fractional undivided interest in oil, gas, or other mineral rights, any put, call, straddle, option, or privilege on any security, certificate of deposit, or group or index of securities (including any interest therein or based on the value thereof), or any put, call, straddle, option, or privilege entered into on a national securities exchange relating to foreign currency, or, in general, any interest or instrument commonly known as a “security”, or any certificate of interest or participation in, temporary or interim certificate for, receipt for, guarantee of, or warrant or right to subscribe to or purchase, any of the foregoing.
Your comment has not contributed anything. Which of those many categories do you think crypto falls within?
For example, foreign currency is not a security, and an intuitive stance would be that crypto is far closer to foreign currency than it is to stocks. Other sources have a more useful definition in my opinion [1]:
> Firstly, a security is simply a financial instrument that represents a specific ownership position in either a corporation, a creditor related to a governmental body, a publicly traded corporation, or rights to ownership as represented by an option. The security must represent some type of financial value. Securities are generally categorised as either debts or equities, with a debt security indicating money that has been borrowed and must be repaid.
Under this definition, it is hard to see how the average "boring" cryptocurrency is a security.
Is "must be an agreement for future performance" a fair first test of whether something is a securities contract?
1) If there is no contractual agreement for future performance, it cannot be a securities contract (a "security").
That we can test before applying 2) the Howey Test, and then 3) assessing whether it's a Payment, Utility, Asset, or Hybrid coin/token
From https://en.wikipedia.org/wiki/SEC_v._W._J._Howey_Co. :
> "In other words, an investment contract for purposes of the Securities Act means a contract, transaction or scheme whereby a person invests his money in a common enterprise and is led to expect profits solely from the efforts of the promoter or a third party, it being immaterial whether the shares in the enterprise are evidenced by formal certificates or by nominal interests in the physical assets employed in the enterprise."[1]
> "The test is whether the scheme involves an investment of money in a common enterprise with profits to come solely from the efforts of others. If that test be satisfied, it is immaterial whether the enterprise is speculative or non-speculative or whether there is a sale of property with or without intrinsic value."[1]
Collectible coins, trading cards, beanie babies, real estate, and rare earth commodities, for example, are not securities per US securities law. Do these things fail a hypothetical and hopefully helpful "must be an agreement for future performance" litmus test for whether a thing is a security per US case law precedent?
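The proposed screen can be sketched as code (a toy, not legal advice; every field name here is invented for illustration): step 1 filters on "agreement for future performance", step 2 applies the four Howey prongs.

```python
# Toy decision procedure for the litmus test proposed above.
def is_security(instrument: dict) -> bool:
    # Step 1: no contractual agreement for future performance
    # -> cannot be a securities contract
    if not instrument.get("agreement_for_future_performance", False):
        return False
    # Step 2: the Howey test -- all four prongs must hold
    return (
        instrument.get("investment_of_money", False)
        and instrument.get("common_enterprise", False)
        and instrument.get("expectation_of_profits", False)
        and instrument.get("profits_from_efforts_of_others", False)
    )

beanie_baby = {"agreement_for_future_performance": False}
ico_token = {
    "agreement_for_future_performance": True,
    "investment_of_money": True,
    "common_enterprise": True,
    "expectation_of_profits": True,
    "profits_from_efforts_of_others": True,
}
assert is_security(beanie_baby) is False
assert is_security(ico_token) is True
```

Under this toy, collectibles and commodities fail at step 1, while a typical ICO with a promoter's written promises passes both steps.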
What must be fed into the FTC CAT is a different set of policies.
From "Guidelines for enquiries regarding the regulatory framework for ICOs [pdf]" (2018) https://news.ycombinator.com/item?id=16406967 https://westurner.github.io/hnlog/#comment-16407277 :
> This is a helpful table indicating whether a Payment, Utility, Asset, or Hybrid coin/token: is a security, qualifies under Swiss AML payment law.
> The "Minimum information requirements for ICO enquiries" appendix seems like a good set of questions for evaluating ICOs. Are there other good questions to ask when considering whether to invest in a Payment, Utility, Asset, or Hybrid ICO?
> Are US regulations different from these clear and helpful regulatory guidelines for ICOs in Switzerland?
As well, do investors have any responsibility for evaluating written agreements for future performance before investing?
Does a person have the responsibility of evaluating contracts before entering into them; for example, with the belief that it's a securities agreement for future performance? What burden of responsibility has a securities investor?
Does that business sell any registered securities at all? Why did you think they were selling you a security? Were you presented with a shareholders agreement? A securities agreement? Any sort written contract? Why did you think you were entering into a securities contract if there was no contract?
If someone makes hopeful, pumping, forward-looking statements, does that obligate them to perform?
When does the Statute of Frauds apply to phantom contractual agreements for future performance?
Statute of Frauds: https://www.investopedia.com/terms/s/statute-of-frauds.asp :
> The statute of frauds (SOF) is a legal concept that requires certain types of contracts to be executed in writing. The statute covers contracts for the sale of land, agreements involving goods worth over $500, and contracts lasting one year or more.
> The statute of frauds was adopted in the U.S. primarily as a common law concept—that is, as unwritten law. However, it has since been formalized by statutes in certain jurisdictions, such as in most states. In a breach of contract case where the statute of frauds applies, the defendant may raise it as a defense. Indeed, they often must do so affirmatively for the defense to be valid. In such a case, the burden of proof is on the plaintiff. The plaintiff must establish that a valid contract was indeed in existence.
Statute of Frauds: https://en.wikipedia.org/wiki/Statute_of_frauds
Q: When should you check for a contractual agreement for future performance, for future returns? A: At least before you give more than $500.
In the US, cryptoasset exchanges have gained specific legal approval from each and every state where those assets are listed for sale.
What process does your state have for approving assets for listing by cryptoasset exchanges? When or why does a state reject a cryptoasset exchange's application to list a cryptoasset?
Is it the case that we have 50 state-level procedures for getting a cryptoasset approved for listing by an exchange?
Is it the case that SEC does not provide a "formally reject an unregistered security which is thus not SEC jurisdiction" service?
If some 50 states failed to red flag a cryptoasset, I find it unreasonable to fault cryptoasset exchanges for choosing to list, or retail investors for failing to review a contract for future performance and investing.
I Looked into 34 Top Real-World Blockchain Projects So You Don’t Have To
I keep saying this, but:
Bitcoin is well over 10 years old by now. In 10 years, the internet was already clearly adding a ton of value all over the place. Bitcoin is not really a 'young technology' anymore, and the only thing it has enabled so far is risky, unregulated investment strategies, and a staggering amount of crime.
When the Internet was 10 years old the year was 1987.
Having heard this argument a nauseating number of times, it's fairly clear that people on both sides who say this use "the internet" when they mean "the web." So 1999 is a better "after 10 years'" benchmark here.
The web was the killer app built on the infrastructure of the internet. The blockchain ecosystem is still building its infrastructure.
# 1970s: The Internet (US Research Institutions, ARPA, IETF, IEEE,)
TCP/IP, DNS (/etc/hosts), Routing Protocols for multiple DUN (Dial-Up Networking) links between schools, Gopher
# The Web (TimBL (@CERN), IETF, W3C)
HTTP, HTML: "Hyperlinks" between "Hypertext" documents; edges with "anchor text" types.
[No-crypto P2P]
# Web 2.0 (CoC Web Frameworks, AJAX, open source SQL, LAMP,)
Interactive HTML with user-contributed-content to moderate
# ~~Web 3.0~~ (W3C,)
Linked Data; RDF, RDFa
5-Star Linked Data: https://5stardata.info/
- Use URIs in your datasets; e.g. describe the columns of the CSVs with URIs: CSVW
# Web3 ("Zero-Trust")
SK: Secret Key
PK: Public Key
Crypto P2P: resilient distributed system application architectures with no points of failure
ENS (Ethereum Name Service) instead of DNS. ENS binds unique strings to accounts; just like NFTs.
(A non-fungible token is like currency where each bill has a unique serial number. A fungible token is a thing that there are transactable fractions of that needn't have unique identities: US coinage, barrels of oil, ounces of gold/silver. Non-fungible tokens are indivisible: you can't tear a bill in half because there's only the one serial number (and defacing money is actually a federal crime in the USA). Similarly, ENS entries - which map a string to an account id (the hash of a public key) - can't be split into fractions and sold, so they're non-FTs; NFTs)
(DNS is somewhat unfixably broken due to the permissiveness necessary to interact with non-DNSSEC-compliant domains resulting in downgrade attack risks: even with e.g. DNS-over-TLS, DNS-over-HTTPS, or DNS-over-QUIC securing the channel; if the DNS client does not reject DNS responses that do not have DNSSEC signatures, a DNS MITM will succeed. If you deny access to DNSSEC-unsigned domains at your DNS client config or the (maybe forwarding) DNS resolver on the router, what is the error message in the browser?)
NIST announces preliminary winners of post-quantum competition
After EC DRBG who cares about what NIST does?
Those who have to comply with their standards (e.g. FIPS 140-3), at the very least.
OP does have a point though, especially with DJB's thoughts (and others) on the matter.
https://nitter.net/hashbreaker?lang=en
For FIPS in particular, they've first gotta sunset the traditional algorithms for anyone to strictly need to care (and even then, parallel constructions of PQC+traditional could let other PQC algorithms in -- like the Chrome experiments -- from a FIPS perspective, you can treat the PQC like plaintext). And for them to be useful, adoption needs to occur in the IETF communities (PKIX, TLS, SSH, IKE, ...).
You're probably looking at at least 5 years on the adoption window before customers are running the latest updates. NIST's blessing might help some of the IETF conversations that now need to happen. But not listening to DJB, given his track record, likely will anger a subset of IETF contributors and might hinder adoption.
It'll be interesting to see if IETF takes the more conservative approach advocated by DJB or if they continue on with NIST's blessing alone. But I'm just a watcher... :-)
Edit: and for the record, FIPS never mandated Dual EC DRBG but it was still a mistake for NIST to rubber stamp.
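The "parallel construction of PQC+traditional" mentioned above can be sketched in a few lines (a hedged illustration of one common hybrid-KEM combiner idea, as in Chrome's CECPQ experiments; the placeholder byte strings stand in for real X25519 and Kyber shared secrets):

```python
# Derive the session key from BOTH shared secrets, so the result
# stays secure if either primitive survives cryptanalysis.
import hashlib

classical_ss = b"\x01" * 32   # placeholder for an X25519 shared secret
pq_ss = b"\x02" * 32          # placeholder for a Kyber shared secret

# Combiner: hash the concatenation; real protocols use a KDF with
# context binding, but the structure is the same.
session_key = hashlib.sha256(classical_ss + pq_ss).digest()
print(session_key.hex())
```

From a FIPS perspective, only the classical half needs to be an approved algorithm; the PQC input can be treated as additional keying material.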
From https://news.ycombinator.com/item?id=31995535 :
> (IDK what the TLS (and FIPS) PQ Algo versioning plans are: 1.4, 2.0?)
Kyber, NTRU, {FIPS-140-3}?
Sorry, reading NIST is worse than reading tea leaves and I don't actively participate in the IETF TLS mailing lists enough to say any more than the next person :-) Just stating that the process (NIST to enterprise customer) takes time for complete PQC adoption across the entire stack.
It shouldn't take long to change software: how long does it take to add a dependency on an open implementation and add an optional config file const pending a standardized list of algos like TLS 1.3+ and FIPS, and fallback to non-PQ; like DNSSEC (edit: and WPA2+WPA3)? Do mining rig companies that specialize in ASICs and FPGAs yet offer competitively-priced TLS load balancers -- now with PQ -- or are they still more expensive than mainboards?
RStudio Is Becoming Posit
At first I thought they were renaming the IDE, which seemed like a clearly bad idea. Makes sense to rename the company I guess.
It tickles my brain every time I see it when I am reminded that the same guy who made ColdFusion and a diet app just decided to build a small company based upon a weird stats language 13 years ago. Good for him.
Those all sound like good names. From the article:
> To avoid this problem and codify our mission into our company charter, we re-incorporated as a Public Benefit Corporation in 2019.
> Our charter defines our mission as the creation of free and open source software for data science, scientific research, and technical communication. This mission intentionally goes beyond “R for Data Science” — we hope to take the approach that’s succeeded with R and apply it more broadly. We want to build a company that is around in 100 years time that continues to have a positive impact on science and technical communication. We’ve only just started along this road: we’re experimenting with tools for Python and our new Quarto project aims to impact scientific communication far beyond data science.
> [...] What does the new name mean for our commercial software? In many ways, nothing: our commercial products have supported Python for over 2 years. But we will rename them to Posit Connect, Posit Workbench, and Posit Package Manager so it’s easier for folks to understand that we support more than just R. What about our open source software? Similarly, not much is changing: our open source software is and will continue to be predominantly for R. That said, over the past few years we’ve already been investing in other languages like reticulate (calling Python from R), Python features for the IDE, and support for Python and Julia within Quarto. You can expect to see more multilanguage experiments in the future.
Apache Arrow may be the best solution for data interchange in "polyglot notebooks" with multiple programming languages where SIMDJSON-LD isn't fast enough to share references to structs (with data type URIs and quantity and unit URIs) with IPC and ref counting. https://github.com/jupyterlab/jupyterlab/issues/2815
This feels weird at first glance, but the IDE is staying as RStudio. I love the company and all they’ve done to make data analysis more accessible, so good on them and may they succeed even more in the future.
Still not sure what Quarto is above and beyond a normal RStudio notebook, though.
Quarto is a tool for converting notebooks into shareable formats (like PDF, HTML, DOCX). Notebooks are very hard to share. You can't send one to your stakeholders, because Python code will scare them. You can use Quarto to convert notebooks into other formats and easily hide the code. It is like nbconvert on steroids.
I'm working on a solution for sharing notebooks with non-technical users. Similar to Quarto it has a YAML header, but it can also add widgets to the notebook and serve the notebook as an interactive web app. My framework is called https://github.com/mljar/mercury
You can use RISE (https://rise.readthedocs.io/en/stable/) for Jupyter notebooks to use it for presentations and also export to PDF.
FWIW repo2docker installs everything listed in /install.R or /.binder/install.R. I'll just cc this here because of the integration potential:
> repo2docker fetches a repository (from GitHub, GitLab, Zenodo, Figshare, Dataverse installations, a Git repository or a local directory) and builds a container image in which the code can be executed. The image build process is based on the [#REES] configuration files found in the repository.
Docs: https://repo2docker.readthedocs.io/en/latest/
python3 -m pip freeze | tee requirements.txt
conda env export -f environment.yml
conda env export --from-history -f environment.yml
python3 -m pip install jupyter-repo2docker
repo2docker .
# git branch -b mynewbranch; git commit; git push
repo2docker https://github.com/binder-examples/conda
repo2docker https://github.com/binder-examples/requirements
repo2docker https://github.com/binder-examples/voila #dashboards #ContainDS
jupyter-book/.binder/requirements.txt:
https://github.com/executablebooks/jupyter-book/blob/master/...

python -m webbrowser https://mybinder.org/
"#REES #ReproducibleExecutionEnvironmentSpecification" config files that repo2docker will build a container from at least one of:
requirements.txt # Pip
environment.yml # Conda (mamba)
Pipfile
install.R
postBuild # run during build
start # run in ENTRYPOINT
runtime.txt
Dockerfile

https://repo2docker.readthedocs.io/en/latest/config_files.ht...
repo2jupyterlite (WASM) sure would be useful for presentations, too.
Ask HN: Why are there so few artificial sunlight or artificial window products?
The demand for "natural light" in homes and offices is very high, and higher than the availability of actual daylight. And there seems to be a pretty feasible way to create a fake window (a light panel that mimics sunlight through a window), using LEDs and a fresnel lens. There's no shortage of videos around showing how to DIY such a thing, such as https://www.youtube.com/watch?v=8JrqH2oOTK4 and https://www.youtube.com/watch?v=GDeEuzKXCH4
So, why can't I find many such products for sale? There are a couple of high-end companies like https://www.coelux.com/, but where's the mass market stuff? Is there an opportunity being overlooked here, or am I just missing something?
The nuclear reaction at the center of our solar system, our sun, emits EM radiation of various wavelengths. FWIU, there are various devices for phototherapy (light therapy) which are more common in more non-equatorial latitudes.
Light therapy: https://en.wikipedia.org/wiki/Light_therapy
There are various "corn bulb" LED products, but FWIU few of the products verifiably produce UVC light; and even fewer still are built from relatively new (full spectrum) UVC LEDs.
The Broan bath fans with SurfaceShield by Vyv produce ~"ultrablue" but not UVC or a detachable Chromecast Audio.
There are bulbs that switch from normal UVA to ultrablue and/or UVC on the second flip of the circuit.
UVC causes cancer and cataracts and is blocked by the Earth's atmosphere so almost none reaches the surface. Do NOT shine UVC light on your eyes or skin!
From "Accidental Exposure to Ultraviolet-C (UVC) Germicidal Lights: A Case Report and Recommendations in Orthodontic Clinical Settings" (2021) https://journals.sagepub.com/doi/full/10.1177/03015742211013...
> While UVA radiation is the least hazardous and most associated with skin ageing, UVB is known to cause DNA damage and is a risk factor in developing sunburn, skin cancer, and cataracts. UVC is a “germicidal” radiation with the ability to penetrate deeper, killing bacteria and inactivating viruses.6 There are various types of UVC germicidal lamps—cold cathode germicidal UV lamps, hot cathode germicidal UV lamps, slimline germicidal UV lamps, high output germicidal UV lamps, UV LEDs, and UV lamp shapes with lamp connectors. The latter two are the safest to be used with the utmost precautions.
> This case report describes a vital observation of adverse effects produced by exposure to UV germicidal lamps. There are very few studies reporting accidental exposures to UVC at work in hospital settings. 7 [...]
> Importantly, overexposure to UVC radiation causes ocular and epidermal signs and symptoms. Typical skin reactions in a welder and acute sunburn produced by an electric insect-killing device with an irradiance of as much as 46 mW m-2 have been reported previously in the literature.13, 14 As for germicidal lamps, adverse effects include facial erythema, burning sensation, irritation, pain, keratoconjunctivitis, sunburn, conjunctival redness, blurry vision, photophobia, and irritation to the airway due to the generation of ozone from UVC lamps.
LiteFS a FUSE-based file system for replicating SQLite
LiteFS author here (also Litestream author). I'm happy to answer any questions folks have about how it works or what's on the roadmap.
Given a situation where replication is desirable, is using SQLite + LiteFS a better choice than just replicated Postgres?
Postgres is great and replicating it can work as well. One benefit to SQLite is that it's in-process, so you avoid most per-query latency and don't have to worry as much about N+1 query performance issues. From my benchmarks, I see latency from an application node to a Postgres node as high as 1 millisecond -- even if both are in the same region. SQLite, on the other hand, has per-query latency overhead of about 10-20µs.
Another goal is to simplify deployment. It's a lot easier and cost-effective to just deploy out 10 application nodes across a bunch of regions rather than having to also deploy Postgres replicas in each of those regions too.
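The quoted latencies make the N+1 point concrete; a back-of-envelope comparison for 100 sequential queries (assumed counts and per-query figures taken from the numbers above):

```python
# N+1 pattern: one query per item, issued sequentially.
n = 100
postgres_ms = n * 1.0     # ~1 ms network round trip per query
sqlite_ms = n * 0.015     # ~15 µs per in-process SQLite query

print(f"Postgres: {postgres_ms:.1f} ms, SQLite: {sqlite_ms:.1f} ms")
```

A page that accidentally does 100 queries costs ~100 ms against a networked database but ~1.5 ms in-process, which is why SQLite makes the N+1 antipattern far more forgivable.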
SQLite v Postgres here is apples v oranges. Postgres is multi-reader multi-writer whereas SQLite is single-writer multi-reader among other things. They are both very fine databases but solve different use cases.
From https://www.sqlite.org/isolation.html :
> Isolation And Concurrency: SQLite implements isolation and concurrency control (and atomicity) using transient journal files that appear in the same directory as the database file. There are two major "journal modes". The older "rollback mode" corresponds to using the "DELETE", "PERSIST", or "TRUNCATE" options to the journal_mode pragma. In rollback mode, changes are written directly into the database file, while simultaneously a separate rollback journal file is constructed that is able to restore the database to its original state if the transaction rolls back. Rollback mode (specifically DELETE mode, meaning that the rollback journal is deleted from disk at the conclusion of each transaction) is the current default behavior.
> Since version 3.7.0 (2010-07-21), SQLite also supports "WAL mode". In WAL mode, changes are not written to the original database file. Instead, changes go into a separate "write-ahead log" or "WAL" file. Later, after the transaction commits, those changes will be moved from the WAL file back into the original database in an operation called "checkpoint". WAL mode is enabled by running "PRAGMA journal_mode=WAL".
> In rollback mode, SQLite implements isolation by locking the database file and preventing any reads by other database connections while each write transaction is underway. Readers can be active at the beginning of a write, before any content is flushed to disk and while all changes are still held in the writer's private memory space. But before any changes are made to the database file on disk, all readers must be (temporarily) expelled in order to give the writer exclusive access to the database file. Hence, readers are prohibited from seeing incomplete transactions by virtue of being locked out of the database while the transaction is being written to disk. Only after the transaction is completely written and synced to disk and committed are the readers allowed back into the database. Hence readers never get a chance to see partially written changes.
> WAL mode permits simultaneous readers and writers. It can do this because changes do not overwrite the original database file, but rather go into the separate write-ahead log file. That means that readers can continue to read the old, original, unaltered content from the original database file at the same time that the writer is appending to the write-ahead log. In WAL mode, SQLite exhibits "snapshot isolation". When a read transaction starts, that reader continues to see an unchanging "snapshot" of the database file as it existed at the moment in time when the read transaction started. Any write transactions that commit while the read transaction is active are still invisible to the read transaction, because the reader is seeing a snapshot of database file from a prior moment in time.
> An example: Suppose there are two database connections X and Y. X starts a read transaction using BEGIN followed by one or more SELECT statements. Then Y comes along and runs an UPDATE statement to modify the database. X can subsequently do a SELECT against the records that Y modified but X will see the older unmodified entries because Y's changes are all invisible to X while X is holding a read transaction. If X wants to see the changes that Y made, then X must end its read transaction and start a new one (by running COMMIT followed by another BEGIN.)
Or:
    ROLLBACK; -- cancel the tx, e.g. because a different dbconn thread detected updated data before the tx was to be COMMITted
    -- replay the tx
    BEGIN;
    -- replay the same SQL statements
    COMMIT;
This is saying that SQLite allows reads and writes to happen simultaneously, but it's still single-writer. There's a WIP branch to add concurrent writes.
> Usually, SQLite allows at most one writer to proceed concurrently. The BEGIN CONCURRENT enhancement allows multiple writers to process write transactions simultaneously if the database is in "wal" or "wal2" mode, although the system still serializes COMMIT commands.
https://www.sqlite.org/cgi/src/doc/begin-concurrent/doc/begi...
Shit. I was wrong. Thank you for sharing.
Heaviest neutron star on record is 2.35 times the Solar mass
What percentage of black holes are formed by neutron star events? What percentage of black holes are formed by gamma ray fluid field interactions?
"What If (Tiny) Black Holes Are Everywhere?" (@PBSSpaceTime) https://youtu.be/srVKjWn26AQ :
> ~ every 30 km
How does the gravitational spacetime fluid field disturbance of our comparatively baby non-neutron-star Sun compare in diameter and non-viscosity?
https://en.wikipedia.org/wiki/Sun :
> The Sun's diameter is about 1.39 million kilometers (864,000 miles), or 109 times that of Earth. Its mass is about 330,000 times that of Earth, comprising about 99.86% of the total mass of the Solar System. [20] Roughly three-quarters of the Sun's mass consists of hydrogen (~73%); the rest is mostly helium (~25%), with much smaller quantities of heavier elements, including oxygen, carbon, neon, and iron. [21]
> The Sun is a G-type main-sequence star (G2V). As such, it is informally, and not completely accurately, referred to as a yellow dwarf (its light is actually white). It formed approximately 4.6 billion [a] [14][22] years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the center, whereas the rest flattened into an orbiting disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core. It is thought that almost all stars form by this process.
> Every second, the Sun's core fuses about 600 million tons of hydrogen into helium, and in the process converts 4 million tons of matter into energy. This energy, which can take between 10,000 and 170,000 years to escape the core, is the source of the Sun's light and heat. When hydrogen fusion in its core has diminished to the point at which the Sun is no longer in hydrostatic equilibrium, its core will undergo a marked increase in density and temperature while its outer layers expand, eventually transforming the Sun into a red giant. It is calculated that the Sun will become sufficiently large to engulf the current orbits of Mercury and Venus, and render Earth uninhabitable – but not for about five billion years.
G2V "G star": hydrogen fusion => helium. https://en.wikipedia.org/wiki/G-type_main-sequence_star :
> A G-type main-sequence star (Spectral type: G-V), also often called a yellow dwarf, or G star, is a main-sequence star (luminosity class V) of spectral type G. Such a star has about 0.9 to 1.1 solar masses and an effective temperature between about 5,300 and 6,000 K. Like other main-sequence stars, a G-type main-sequence star is converting the element hydrogen to helium in its core by means of nuclear fusion, but can also fuse helium when hydrogen runs out. The Sun, the star to which the Earth is gravitationally bound in the center of the Solar System, is an example of a G-type main-sequence star (G2V type). Each second, the Sun fuses approximately 600 million tons of hydrogen into helium in a process known as the proton–proton chain (4 hydrogens form 1 helium), converting about 4 million tons of matter to energy. [1][2] Besides the Sun, other well-known examples of G-type main-sequence stars include Alpha Centauri, Tau Ceti, Capella and 51 Pegasi. [3][4][5][6]
https://en.wikipedia.org/wiki/Neutron_star :
> A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich. [1] Except for black holes and some hypothetical objects (e.g. white holes, quark stars, and strange stars), neutron stars are the smallest and densest currently known class of stellar objects. [2] Neutron stars have a radius on the order of 10 kilometres (6 mi) and a mass of about 1.4 solar masses. [3] They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei.
Show HN: Pg_jsonschema – A Postgres extension for JSON validation
pg_jsonschema is a solution we're exploring to allow enforcing more structure on json and jsonb typed postgres columns.
We initially wrote the extension as an excuse to play with pgx, the rust framework for writing postgres extensions. That let us lean on existing rust libs for validation (jsonschema), so the extension's implementation is only 10 lines of code :)
https://github.com/supabase/pg_jsonschema/blob/fb7ab09bf6050...
happy to answer any questions!
Nice work, however, I am structurally dubious of putting too much functionality onto a classical centralized RDBMS since it can't be scaled out if performance becomes a problem. It's CPU load and it's tying up a connection which is a large memory load (as implemented in postgres, connections are "expensive") and since this occurs inside a transaction it's holding locks/etc as well. I know it's all compiled native code so it's about as fast as it can be, but, it's just a question of whether it's the right place to do that as a general concern.
I'd strongly prefer to have the application layer do generic json-schema validation since you can spawn arbitrary containers to spread the load. Obviously some things are unavoidable if you want to maintain foreign-key constraints or db-level check constraints/etc but people frown on check constraints sometimes as well. Semantic validity should be checked before it gets to the DB.
I was exploring a project with JSON generation views inside the database for coupling the DB directly to SOLR for direct data import, and while it worked fine (and performed fine with toy problems) that was just always my concern... even there where it's not holding write locks/etc, how much harder are you hitting the DB for stuff that, ultimately, can be done slower but more scalably in an application container?
YAGNI, I know, cross the bridge when it comes, but just as a blanket architectural concern that's not really where it belongs imo.
In my case at least, probably it's something that could be pushed off to followers in a leader-follower cluster as a kind of read replica, but I dunno if that's how it's implemented or not. "Read replicas" are something that are a lot more fleshed out in Citus, Enterprise, and the other commercial offerings built on raw Postgres iirc.
All this does is make writing a CHECK constraint for a JSON blob relatively straightforward. And yes you should use CHECKs in the database. In practice, it is very easy for application code to either not check or screw it up. It is much harder to find and clean up bad data in the database than prevent it from being inserted. By making it a schema config, it is much easier to audit by other engineers and more likely to be correct.
If CHECKs are causing perf issues, then okay maybe put the problematic ones somewhere else.
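The same pattern in miniature, using SQLite's built-in `json_valid()` in place of a full JSON Schema check (pg_jsonschema gives you richer schema-matching functions on the Postgres side; this is just the "validate at insert time via CHECK" idea):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE events(
        payload TEXT CHECK (json_valid(payload))  -- reject malformed JSON on insert
    )
""")
db.execute("""INSERT INTO events VALUES ('{"kind": "click"}')""")  # passes

try:
    db.execute("INSERT INTO events VALUES ('not json')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # CHECK constraint failed
```

Bad data never lands in the table, so there's nothing to clean up later — which is the whole argument for doing it in the schema.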
You could price the stored procedures' deployment costs in cryptographically-signed, redundant-data-storage bytes, penalize unnecessary complexity by costing opcodes (in the postgres ~"virtual machine runtime", which supports many non-halting programming languages, such as rust, which is great to see), and pay for redundant (smart contract output) consensus where it's worth it.
TIL about https://github.com/mulesoft-labs/json-ld-schema: "JSON Schema/SHACL based validation of JSON-LD documents". Perhaps we could see if that's something that should be done in pgx or in the application versions deployed alongside the database versions?
Computer science proof unveils unexpected form of entanglement
I always said that computer science was going to be the method to complete physics, whether it be finishing the Standard Model or finally proving that Dark Matter is fiction, which is why we can't detect it. Maybe it's still not entirely clear, but just trust me.
Good lord no, just no. Computer Science is not going to be the method to "complete physics." "Complete physics" isn't even a sensible phrase. Source: me, (former) physicist.
Evolutionary algorithm https://en.wikipedia.org/wiki/Evolutionary_algorithm
Related techniques > Particle swarm optimization:
> Particle swarm optimization is based on the ideas of animal flocking behaviour.[7] Also primarily suited for numerical optimization problems
Symbolic EA will surpass all of human physics in the next 10 years.
To complete physics, which spacetime predictions can or must have zero error?
The notion that physics can ever be complete is...not even wrong.
Prove it.
Bazinga.
Perhaps the oracle could indicate when the Career().solve_physics() routine will halt. https://en.wikipedia.org/wiki/Halting_problem
Securing name resolution in the IoT: DNS over CoAP
The "Matter" (fka "Project Chip") IoT protocol (which succeeds OpenThread FWIU) Wikipedia SeeAlso links to CoAP. https://en.wikipedia.org/wiki/Matter_(standard)
What are the functional differences and opportunities in regards to name resolution?
Matter: https://github.com/project-chip/connectedhomeip
Are any of the "Certificate Transparency on a blockchain / dlt / immutable permissioned distributed ledger" approaches applicable to this problem domain? Solve DNS again for IoT?
Technically Matter can run purely local, no external web requests (well there are some attestation and certs that are confirmed by the Controller). Then Matter devices use Endpoints and Clusters for control, device to device or Controller to device. Matter devices' Commissioning process is done in 2 ways - for Thread devices, they require the operational dataset of the Thread network, which is held by the Border Router, then the Border Router needs information about the Thread Device (Device ID, Discriminator, and PIN Code). This can be exchanged over BLE or NFC with the Controller (smartphone app like Homekit, Google Home, etc). If you have WiFi only devices then Commissioning is done via the Controller only. In both of these cases any DNS lookup would be done by the Controller.
> Technically Matter can run purely local, no external web requests (well there are some attestation and certs that are confirmed by the Controller).
So Matter should work when the [W]LAN is up but there is no external DNS or IP connectivity?
> Then Matter devices use Endpoints and Clusters for control, device to device or Controller to device. Matter devices' Commissioning process is done in 2 ways - for Thread devices, they require the operational dataset of the Thread network, which is held by the Border Router, then the Border Router needs information about the Thread Device (Device ID, Discriminator, and PIN Code). This can be exchanged over BLE or NFC with the Controller (smartphone app like Homekit, Google Home, etc). If you have WiFi only devices then Commissioning is done via the Controller only. In both of these cases any DNS lookup would be done by the Controller.
Does the Controller optionally run DoH (DNS-over-HTTPS), DoT (DNS-over-TLS), DoQ (DNS-over-QUIC; which is easy to load-balance because it's UDP), DNS-over-CoAP, or plain-old unsecured DNS with optional DNSSEC validation? What about ENS (Ethereum Name Service; "web3 dns")? Why was DNS reinvented for the smart-contract world? And what about Matter and IoT?
Found this explainer of ENS, which is perhaps more complex, though less obtuse, than your DNS-over-CoAP thing: https://www.cryptohopper.com/blog/6536-what-is-the-web3-doma... ( https://web3py.readthedocs.io/en/stable/ens_overview.html )
> ENS domains work similar to traditional domain names, but with the new web 3.0 infrastructure, they can create decentralized applications and websites, and store data or files on the blockchain.
> The ENS is the new domain naming system built on top of the Ethereum network that enables users to create memorable and distinctive addresses or usernames. It utilizes Ethereum's smart contracts to provide supplementary services to the conventional DNS and manage domain name registration and resolution. ENS allows users to create a single username for all their wallet addresses, decentralized apps, and websites in a distributed ecosystem.
> ENS utilizes three types of smart contracts: the registry, the registrars, and the resolvers.
And of course there are better than PIN codes and CRL-less x.509 certs for entropy there. DLTs are specifically designed to be resilient to [mDNS] DDoS.
Maybe powers of π don't have unexpectedly good approximations?
I wonder what happens for powers of e. This should be different than pi, since e has continued fraction coefficients that make a nice pattern [2, 1, 2, 1, 1, 4, 1, 1, ...]. e^2 does as well (https://oeis.org/A001204). (I know this fact but have never understood any of the proofs.)
But e^3 (https://oeis.org/A058282) does not have a "nice pattern", and neither does e^4 (https://oeis.org/A058283)
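The coefficients are easy to reproduce numerically (plain float arithmetic, so only the first handful of terms are trustworthy):

```python
import math

def cf(x: float, n: int):
    """First n continued-fraction coefficients of x.
    Uses plain floats, so precision degrades after the first several terms."""
    out = []
    for _ in range(n):
        a = math.floor(x)
        out.append(a)
        frac = x - a
        if frac < 1e-9:
            break
        x = 1 / frac
    return out

print(cf(math.e, 8))       # [2, 1, 2, 1, 1, 4, 1, 1] -- the regular pattern
print(cf(math.e ** 3, 8))  # no comparable pattern
```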
/? 3blue1brown e^ipi https://m.youtube.com/results?sp=mAEA&search_query=3blue1bro...
Given:
e = limit((1 + 1/n)^n, +∞) # Euler's number
i = √-1 # orthogonal imaginary unit; i^2 = -1
pi ≈ 666/212 ≈ 22/7 # circle circumference / diameter
Euler's identity: e^iπ + 1 = 0
Euler's formula: e^ix = cos(x) + i*sin(x)
Euler's formula: https://en.wikipedia.org/wiki/Euler's_formula
Euler's number (e): https://en.wikipedia.org/wiki/E_(mathematical_constant)
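Both relations check out numerically with the stdlib (up to floating-point error):

```python
import cmath, math

# Euler's formula: e^{ix} = cos(x) + i*sin(x), spot-checked at a few angles.
for x in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

# Euler's identity: e^{i*pi} + 1 = 0.
print(cmath.exp(1j * math.pi) + 1)  # ~0 (tiny imaginary residue)
```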
Is there something fundamental here - with e.g. radix base e - about countability and a continuum of reals, and maybe constructive interference?
When EM waves that are in phase combine, the resultant amplitude of the combined waveform is the sum of the amplitudes; constructive interference is addition. And from addition follow subtraction, multiplication, exponents, and logarithms.
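The amplitude-addition claim, quantified: two equal-amplitude sinusoids with phase difference φ superpose to amplitude 2A·cos(φ/2) — 2A when in phase, 0 when π out of phase. A quick numeric check:

```python
import math

A = 1.0
for phi in (0.0, math.pi / 2, math.pi):
    # Sample A*sin(t) + A*sin(t + phi) over one period and find its peak.
    peak = max(
        A * math.sin(t) + A * math.sin(t + phi)
        for t in (i * 1e-3 for i in range(6284))
    )
    predicted = 2 * A * math.cos(phi / 2)
    print(round(peak, 3), round(predicted, 3))
```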
And then this concept of phase and curl ( convergence and divergence ) in non-orthogonal, probably not conditionally-independent fluid fields that combine complexly and nonlinearly. Define distance between (fluid) field moments. A coherent multibody problem hopefully with unitarity and probably nonlocality.
Can emergence of complex adaptive behavior in complex nonlinear systems of fields emerge from such observable phenomena as countability (perhaps just of application-domain-convenient field-combinatorial multiples in space Z)?
Freezing Requirements with Pip-Tools
Did you ever try poetry[1]?
It is a great tool that I use in all my projects. It effectively solves dependency management for me in a very easy to use way.
Dependency management using requirements.txt used to give me such a headache, now I just have a pyproject.toml that I know works.
[tool.poetry.dependencies]
python = ">=3.8,<3.9"
pandas = "^1.4.3"
This basically means: use a Python version between 3.8 and 3.9, and any pandas version at or above 1.4.3 but below 2.0.0 (the caret pins the major version). What I like about poetry is that it makes sure that the whole dependency graph of the packages that you add is correct. If it can not solve the graph, then it fails fast and it fails hard, which is a good thing.
This is probably a very bad explainer of what poetry does, but be sure to check it out! :)
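For the record, the caret constraint is a bounded range, not an open-ended floor. A toy translation of the rule (simplified; real resolvers special-case 0.x versions):

```python
def caret_to_range(spec: str) -> str:
    """Expand a caret constraint '^X.Y.Z' to '>=X.Y.Z,<(X+1).0.0'.
    Simplified sketch: ignores the special handling of 0.x versions."""
    assert spec.startswith("^")
    version = spec[1:]
    major = int(version.split(".")[0])
    return f">={version},<{major + 1}.0.0"

print(caret_to_range("^1.4.3"))  # >=1.4.3,<2.0.0
```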
Isn't that the same as adding version constraints to setuptools' setup.cfg? [1] pip will use these in its dependency resolver.
You can also constrain the Python version in there if you want.
[1] https://setuptools.pypa.io/en/latest/userguide/quickstart.ht...
Maybe it is the same, but the idea is to take the selected version of each dependency and store a hash checksum for it, so that one can later reproduce the exact same dependencies. Poetry just doesn't read setup.cfg; it uses its own tool-specific config file. This way it stays out of the way of any other tool.
pip-tools can hash as well: https://pip-tools.readthedocs.io/en/latest/#using-hashes
Using a tool specific config file seems like a design choice with upsides and downsides, which I respect
A Pipfile.lock can store hashes for multiple builds of a package across architectures; a requirements.txt pins one version of each package, though pip's hash-checking mode does accept multiple --hash values per requirement.
Can a requirements.txt or a Pipfile store cryptographically-signed hashes for each dependency? Which tool would check that not PyPI-upload but package-builder-signing keys validate?
FWIU, nobody ever added GPG .asc signature support to Pip? What keys would it trust for which package? Should twine download after upload and check the publisher and PyPI-upload signatures?
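For reference, `pip-compile --generate-hashes` (pip-tools) emits pinned entries with one `--hash` per available artifact, which pip then enforces in `--require-hashes` mode. The shape is roughly (digests below are placeholders, not real values):

```
pandas==1.4.3 \
    --hash=sha256:<digest-of-the-manylinux-wheel> \
    --hash=sha256:<digest-of-the-macos-wheel> \
    --hash=sha256:<digest-of-the-sdist>
```

These are integrity checksums, not signatures — which is what the signing question above is getting at.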
Are cryptographically-signed hashes really necessary in this case (why?)? What would be the secret, which is used for signing?
If the hashes are retrieved over the same channel as the package (i.e. HTTPS), and that channel is unfortunately compromised, why wouldn't a MITM tool change those software package artifact hash checksums too?
Only if the key used to sign the package / package manifest with per-file hashes was retrieved over a different channel (e.g. WKD, or HKP over HTTPS with or without certificate pinning), and the key is trusted to sign for that package, should the installer proceed to install the software package artifact and assign file permissions and extended filesystem attributes.
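The channel-separation argument in miniature. HMAC stands in here for a real asymmetric signature (a simplification — the key names and strings are all hypothetical); the point is only that the verifier's key arrived earlier, over a *different* channel than the artifact:

```python
import hashlib, hmac

# Key obtained out of band (e.g. WKD/HKP), before the download happens.
trusted_key = b"key fetched earlier via a separate channel (hypothetical)"

def sign(key: bytes, artifact: bytes) -> str:
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

artifact = b"package bytes"
tag = sign(trusted_key, artifact)

# A MITM on the download channel can swap the artifact and recompute a
# plain checksum, but cannot forge a valid tag without the key:
evil = b"backdoored bytes"
assert hmac.compare_digest(sign(trusted_key, artifact), tag)
assert not hmac.compare_digest(sign(trusted_key, evil), tag)
print("artifact verified")
```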
From the sigstore docs: https://docs.sigstore.dev/ :
> sigstore empowers software developers to securely sign software artifacts such as release files, container images, binaries, bill of material manifests [SBOM] and more. Signing materials are then stored in a tamper-resistant public log.
/? sigstore sbom: https://www.google.com/search?q=sigstore+sbom
> It’s free to use for all developers and software providers, with sigstore’s code and operational tooling being 100% open source, and everything maintained and developed by the sigstore community.
> How sigstore works: Using Fulcio, sigstore requests a certificate from our root Certificate Authority (CA). This checks you are who you say you are using OpenID Connect, which looks at your email address to prove you’re the author. Fulcio grants a time-stamped certificate, a way to say you’re signed in and that it’s you.
https://github.com/sigstore/fulcio
> You don’t have to do anything with keys yourself, and sigstore never obtains your private key. The public key that Cosign creates gets bound to your certificate, and the signing details get stored in sigstore’s trust root, the deeper layer of keys and trustees and what we use to check authenticity.
https://github.com/sigstore/cosign
> Your certificate then comes back to sigstore, where sigstore exchanges keys, asserts your identity and signs everything off. The signature contains the hash itself, public key, signature content and the time stamp. This all gets uploaded to a Rekor transparency log, so anyone can check that what you’ve put out there went through all the checks needed to be authentic.
Qubit: Quantum register: Qudits and qutrits
To just quote from Wikipedia:
"Qubit > Quantum register > Qudits and qutrits" https://en.wikipedia.org/wiki/Qubit#Qudits_and_qubits
> ## Qudits and qutrits
> The term qudit denotes the unit of quantum information that can be realized in suitable d-level quantum systems.[8] A qubit register that can be measured to N states is identical[c] to an N-level qudit. A rarely used[9] synonym for qudit is quNit, [10] since both `d` and N are frequently used to denote the dimension of a quantum system.
> Qudits are similar to the integer types in classical computing, and may be mapped to (or realized by) arrays of qubits. Qudits where the d-level system is not a power of 2 cannot be mapped to arrays of qubits. It is for example possible to have 5-level qudits.
> In 2017, scientists at the National Institute of Scientific Research constructed a pair of qudits with 10 different states each, giving more computational power than 6 qubits.[11]
> Similar to the qubit, the qutrit is the unit of quantum information that can be realized in suitable 3-level quantum systems. This is analogous to the unit of classical information trit of ternary computers.
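The size relation from the quote, in numbers: a d-level qudit embeds into ⌈log₂ d⌉ qubits, with wasted basis states when d is not a power of two; and the pair of 10-level qudits from the 2017 experiment spans 10×10 = 100 states, versus 2⁶ = 64 for 6 qubits:

```python
import math

def qubits_to_embed(d: int) -> int:
    """Qubits needed to embed a d-level qudit: ceil(log2(d)).
    When d is not a power of two, 2**n - d basis states go unused."""
    return math.ceil(math.log2(d))

for d in (2, 3, 5, 10):
    n = qubits_to_embed(d)
    print(f"d={d}: {n} qubits, {2**n - d} unused basis states")

# Two 10-level qudits versus six qubits, as in the quoted experiment:
assert 10 ** 2 > 2 ** 6
```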
> ## Physical implementations [of qubits,]
> Any two-level quantum-mechanical system can be used as a qubit. Multilevel systems can be used as well, if they possess two states that can be effectively decoupled from the rest (e.g., ground state and first excited state of a nonlinear oscillator). There are various proposals. Several physical implementations that approximate two-level systems to various degrees were successfully realized. Similarly to a classical bit where the state of a transistor in a processor, the magnetization of a surface in a hard disk and the presence of current in a cable can all be used to represent bits in the same computer, an eventual quantum computer is likely to use various combinations of qubits in its design.
> The following is an incomplete list of physical implementations of qubits, and the choices of basis are by convention only: [...]
See also: "Quantum logic gate" https://en.wikipedia.org/wiki/Quantum_logic_gate
>> Quantum Monte Carlo: https://en.wikipedia.org/wiki/Quantum_Monte_Carlo :
>> Quantum Monte Carlo encompasses a large family of computational methods whose common aim is the study of complex quantum systems. One of the major goals of these approaches is to provide a reliable solution (or an accurate approximation) of the quantum many-body problem. [...] The difficulty is however that solving the Schrödinger equation requires the knowledge of the many-body wave function in the many-body Hilbert space, which typically has an exponentially large size in the number of particles. Its solution for a reasonably large number of particles is therefore typically impossible.
> What sorts of independent states can or should we map onto error-corrected qubits in an approximating system?
> Propagation of Uncertainty ... Numerical stability ... Chaotic convergence, ultimately, apparently: https://en.wikipedia.org/wiki/Propagation_of_uncertainty
> Programming the Universe: https://en.wikipedia.org/wiki/Programming_the_Universe :
> Lloyd also postulates that the Universe can be fully simulated using a quantum computer; however, in the absence of a theory of quantum gravity, such a simulation is not yet possible. "Particles not only collide, they compute."
> Quantum on Silicon looks cheaper in today's dollars.
Morello's,
> Devide [sic] the universe in QFT field-equal halves A and B, take energy from A to make B look like A, then add qubit error correction, and tell me if there's enough energy to simulate the actual universe on a universe QC with no instruction pipeline.
Due to error correction; propagation of uncertainty.
"Quantum computing in silicon hits 99% accuracy" (2022) https://www.eurekalert.org/news-releases/940321 :
> Quantum computing in silicon hits the 99% threshold
> Morello et al achieved 1-qubit operation fidelities up to 99.95 per cent, and 2-qubit fidelity of 99.37 per cent with a three-qubit system comprising an electron and two phosphorous atoms, introduced in silicon via ion implantation.
> A Delft team in the Netherlands led by Lieven Vandersypen achieved 99.87 per cent 1-qubit and 99.65 per cent 2-qubit fidelities using electron spins in quantum dots formed in a stack of silicon and silicon-germanium alloy (Si/SiGe).
> A RIKEN team in Japan led by Seigo Tarucha similarly achieved 99.84 per cent 1-qubit and 99.51 per cent 2-qubit fidelities in a two-electron system using Si/SiGe quantum dots.
>> The following is an incomplete list of physical implementations of qubits, and the choices of basis are by convention only: [...]
Qubit#Physical_implementations: https://en.wikipedia.org/wiki/Qubit#Physical_implementations
- note the "electrons" row of the table
> See also: "Quantum logic gate" https://en.wikipedia.org/wiki/Quantum_logic_gate
To improve search results on YouTube, use the search prefix “intitle:”
YouTube's search is horrible. No, I don't want "related results" that are completely unrelated to my query, nor results from my country when I specifically type in English, my whole UI is in English, and I never watch non-English content while logged in.
- [ ] ENH: yt mobile app: Share search query urls
- [ ] ENH,SCH: https://schema.org/MediaObject and search cards
- [ ] UBY: search: transcript search snippets
OpenSSL Security Advisory
Here's a good summary of the flaw:
https://guidovranken.com/2022/06/27/notes-on-openssl-remote-...
Note that the bug is only in 3.0.4, which was released June 21, 2022. So if you didn't update to this version, it's unlikely you're vulnerable.
You're talking about the first CVE. The second one affects 1.1 too.
Thankfully I can't imagine anyone using AES-OCB.
As a non-crypto-nerd: How viable is it to make a “safe” OpenSSL, which just doesn’t support all the cipher modes (?) that the HN crowd would mock me for accidentally using?
OpenSSL > Forks: https://en.wikipedia.org/wiki/OpenSSL#Forks
TLS 1.3 specifies which curves and ciphers: https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_1...
(IDK what the TLS (and FIPS) PQ Algo versioning plans are: 1.4, 2.0?)
Mozilla [Open]SSL Config generator: https://ssl-config.mozilla.org/
Axial Higgs mode spotted in materials at room temperature (2022)
From the article "Axial Higgs mode spotted in materials at room temperature" https://physicsworld.com/a/axial-higgs-mode-spotted-in-mater... :
> The two pathways of interest to the team were the excitation of a Higgs mode with no intrinsic angular momentum and the excitation of an axial Higgs mode. By varying the polarization of the incoming laser light and the polarization of the detected light, the team was able to observe this interference and confirm the existence of the axial Higgs mode in the material. What is more, the observations were made at room temperature, whereas most other quantum phenomena can only be seen at very low temperatures.
> The team now hopes that their relatively simple experimental approach could be used to identify axial Higgs modes in other materials including superconductors, magnets, and ferroelectrics. This could prove useful for future technologies because materials containing axial Higgs modes could be used as quantum sensors. And because the mathematics of the axial Higgs mode is analogous to that used in particle physics, studying the quasiparticles could provide clues for what lies beyond the Standard Model of particle physics.
- [x] Room temperature quantum experiment
- [x] Superfluidity (see also Superfluid Quantum Gravity)
- [x] Angular momentum
From https://cosmosmagazine.com/science/new-particle-higgs-axial-... :
> “The detection of the axial Higgs was predicted in high-energy particle physics to explain dark matter. However, it has never been observed. Its appearance in a condensed matter system was completely surprising and heralds the discovery of a new broken symmetry state that had not been predicted.
> “Unlike the extreme conditions typically required to observe new particles, this was done at room temperature in a tabletop experiment where we achieve quantum control of the mode by just changing the polarisation of light.”
> Burch believes the simple setup provides opportunities for using similar experiments in other areas. “Many of these experiments were performed by an undergraduate in my lab,” he says. “The approach can be straightforwardly applied to the quantum properties of numerous collective phenomena, including modes in superconductors, magnets, ferroelectrics and charge density waves. Furthermore, we bring the study of quantum interference in materials with correlated and/or topological phases to room temperature, overcoming the difficulty of extreme experimental conditions.”
"Axial Higgs mode detected by quantum pathway interference in RTe3" (2022) https://www.nature.com/articles/s41586-022-04746-6
> [...] Here we discover an axial Higgs mode in the CDW system RTe3 using the interference of quantum pathways. In RTe3 (R = La, Gd), the electronic ordering couples bands of equal or different angular momenta4,5,6. As such, the Raman scattering tensor associated with the Higgs mode contains both symmetric and antisymmetric components, which are excited via two distinct but degenerate pathways. This leads to constructive or destructive interference of these pathways, depending on the choice of the incident and Raman-scattered light polarization.
New study shows highly creative people’s brains work differently from others'
Creativity #Neuroscience https://en.wikipedia.org/wiki/Creativity#Neuroscience
awesome-ideation-tools in re: neuroimaging https://github.com/zazaalaza/awesome-ideation-tools
/? creativity forgetting rate and instability in the brain: https://www.google.com/search?q=creativity+forgetting+rate+a...
Catastrophic interference is proposed as one cause of maybe pathological forgetting in the brain. This forgetting in the brain may be strongly linked with creativity. Are there links to hippocampal neurogenesis?
Catastrophic interference > Proposed solutions > Orthogonality: https://en.wikipedia.org/wiki/Catastrophic_interference :
> Input vectors are said to be orthogonal to each other if the pairwise product of their elements across the two vectors sum to zero. For example, the patterns [0,0,1,0] and [0,1,0,0] are said to be orthogonal because (0×0 + 0×1 + 1×0 + 0×0) = 0. One of the techniques which can create orthogonal representations at the hidden layers involves bipolar feature coding (i.e., coding using -1 and 1 rather than 0 and 1). [10] Orthogonal patterns tend to produce less interference with each other. However, not all learning problems can be represented using these types of vectors and some studies report that the degree of interference is still problematic with orthogonal vectors. [2]
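The orthogonality test from the quote is just a zero dot product, and bipolar (-1/1) coding works the same way:

```python
def orthogonal(a, b) -> bool:
    """Patterns are orthogonal when the pairwise products of their
    elements sum to zero (i.e., their dot product is zero)."""
    return sum(x * y for x, y in zip(a, b)) == 0

assert orthogonal([0, 0, 1, 0], [0, 1, 0, 0])      # the quoted example
assert orthogonal([-1, 1, -1, 1], [1, 1, -1, -1])  # bipolar (-1/1) coding
assert not orthogonal([1, 0, 1, 0], [1, 1, 0, 0])
print("ok")
```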
Visualizing quantum mechanics in an interactive simulation
See a full description of the simulation software at:
https://www.spiedigitallibrary.org/journals/optical-engineer...
Practical and neat. It says TypeScript and "optical table"? Where are the sources? https://lab.quantumflytrap.com/lab
"Visualizing quantum mechanics in an interactive simulation – Virtual Lab by Quantum Flytrap" Optical Engineering (2022) https://www.spiedigitallibrary.org/journals/optical-engineer... :
> Abstract: Virtual Lab by Quantum Flytrap is a no-code online laboratory of an optical table, presenting quantum phenomena interactively and intuitively. It supports a real-time simulation of up to three entangled photons. Users can place typical optical elements (such as beam splitters, polarizers, Faraday rotators, and detectors) with a drag-and-drop graphical interface. Virtual Lab operates in two modes. The sandbox mode allows users to compose arbitrary setups. Quantum Game serves as an introduction to Virtual Lab features, approachable for users with no prior exposure to quantum mechanics. We introduce visual representation of entangled states and entanglement measures. It includes interactive visualizations of the ket notation and a heatmap-like visualization of quantum operators. These quantum visualizations can be applied to any discrete quantum system, including quantum circuits with qubits and spin chains. These tools are available as open-source TypeScript packages – Quantum Tensors and BraKetVue. Virtual Lab makes it possible to explore the nature of quantum physics (state evolution, entanglement, and measurement), to simulate quantum computing (e.g., the Deutsch-Jozsa algorithm), to use quantum cryptography (e.g., the Ekert protocol), to explore counterintuitive quantum phenomena (e.g., quantum teleportation and the Bell inequality violation), and to recreate historical experiments (e.g., the Michelson–Morley interferometer).
Quantum-Flytrap/quantum-tensors https://github.com/Quantum-Flytrap/quantum-tensors :
> A TypeScript package for sparse tensor operations on complex numbers in your browser - for quantum computing, quantum information, and interactive visualizations of quantum physics.
Quantum-Flytrap/bra-ket-vue https://github.com/Quantum-Flytrap/bra-ket-vue :
> An interactive visualizer for quantum states and matrices.
#Q12 /? q12 quantum: https://www.google.com/search?q=q12+quantum
This is pretty cool - quantum behavior is really not intuitive and being able to play with it helps clarify it
I do wish each level had more explanation about the concept it's teaching. You can figure things out if you know a small amount about how quantum mechanics work (like knowing once you measure something the superposition collapses) but the game would be a better teaching tool if there were explanations before/after you complete a level
For people interested in QM but without a physics/math background Carlo Rovelli has a book called Helgoland that explains a lot of the basics in a non-technical way
> if you know a small amount about how quantum mechanics work (like knowing once you measure something the superposition collapses)
FWIU it's possible to infer from nearby particles without incurring Heisenberg (and/or Bell's?) with Quantum Tagging?
"Helgoland" (2020) https://g.co/kgs/wZXRKQ
The other day I found a quantum circuit puzzle game called "QuantumQ" that has you add a Hadamard gate - statistical superposition - without requiring really any QM at all; which may imply that applied EA could solve the defined problems.
ray-pH/quantumQ: https://github.com/ray-pH/quantumQ
Quantum logic gate https://en.wikipedia.org/wiki/Quantum_logic_gate #Hadamard_gate
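The "Hadamard gate = equal superposition" step needs nothing beyond a 2x2 matrix (a sketch; amplitudes only, no simulator library):

```python
import math

# Hadamard gate as a plain 2x2 matrix; H|0> is the equal superposition
# (|0> + |1>)/sqrt(2), so measurement outcomes are 50/50.
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]

def apply(gate, ket):
    return [sum(g * amp for g, amp in zip(row, ket)) for row in gate]

ket0 = [1.0, 0.0]
superposed = apply(H, ket0)
probs = [amp * amp for amp in superposed]  # squared amplitudes
print(probs)
print(apply(H, superposed))  # H is its own inverse: back to ~|0>
```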
Show HN: CSVFiddle – Query CSV files with DuckDB in the browser
By no means am I crapping on what you've created — it looks great, and I've always wanted to try DuckDB and now you've made a frictionless entrypoint — just wanted to point out in general that querying CSV with SQL is more accessible than some people might have assumed. e.g. here's a recent TIL blogpost from Simon Willison about him discovering how to do sqlite queries against CSV from the command line: https://til.simonwillison.net/sqlite/one-line-csv-operations
One suggestion I would make: the Uber trips data is interesting, but it might be too big for this demo? I was getting a few loading errors when trying it (didn't investigate where in the process the bottleneck was, though).
On HN: "One-liner for running queries against CSV files with SQLite" https://news.ycombinator.com/item?id=31824030
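The same idea works with no extra tooling at all: Python's stdlib can load a CSV into an in-memory SQLite database and query it with SQL. A minimal sketch (the table name, column names, and data are made up for illustration):

```python
import csv
import io
import sqlite3

# Hypothetical CSV data; in practice you'd use open("trips.csv")
data = io.StringIO("city,fare\nNYC,12.5\nNYC,8.0\nSF,20.0\n")

rows = list(csv.reader(data))
header, body = rows[0], rows[1:]

con = sqlite3.connect(":memory:")
cols = ", ".join(f'"{c}"' for c in header)
placeholders = ", ".join("?" for _ in header)
con.execute(f"CREATE TABLE trips ({cols})")
con.executemany(f"INSERT INTO trips VALUES ({placeholders})", body)

# Everything imports as TEXT, so CAST where numeric semantics matter
for city, total in con.execute(
    "SELECT city, SUM(CAST(fare AS REAL)) FROM trips GROUP BY city ORDER BY city"
):
    print(city, total)  # NYC 20.5 / SF 20.0
```

This trades the convenience of the `sqlite3` CLI's `.import` for explicit control over table creation.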
A fast in-place interpreter for WebAssembly
Wizard: An advanced WebAssembly Engine for Research - https://github.com/titzer/wizard-engine
- [ ] SIMD? https://github.com/titzer/wizard-engine/search?q=simd
Wizard supports parsing and typechecking of SIMD instructions (passing all the spec tests for them) but no interpreter implementation yet, so it can't execute them. Help wanted :)
One-liner for running queries against CSV files with SQLite
SQLite's virtual table API (https://www.sqlite.org/vtab.html) makes it possible to access other data structures through the query engine. You don't need to know much if anything about how the database engine executes queries, you only need to implement the callbacks it needs to do its job. A few years ago I wrote an extension to let me search through serialized Protobufs which were stored as blobs in a regular database.
And in fact there is a CSV virtual table available from SQLite but it's not built in the normal client: https://www.sqlite.org/csv.html
It really should be, as the code is tiny and this functionality is not overly exotic.
In my experience [1], the CSV virtual table was really slow. Doing some analysis on a 1,291 MB file, a query took 24.7 seconds using the virtual table vs 3.4 seconds if you imported the file first.
The CSV virtual table source code is a good pedagogical tool for teaching how to build a virtual table, though.
[1]: https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html
https://github.com/liquidaty/zsv/blob/main/app/external/sqli... modifies the sqlite3 virtual table engine to use the faster zsv parser. have not quantified the difference, but in all tests I have run, `zsv sql` runs faster (sometimes much faster) than other sqlite3-on-CSV solutions mentioned in this entire discussion (unless you include those that cache their indexes and then measure against a post-cached query). Disclaimer: I'm the main zsv author
What are the differences between the zsv and csv parsers?
Is csvw with linked data URIs also doable?
Not sure what you mean by csvw. But, zsvlib is a CSV parser, and zsv is a CLI that uses zsvlib. zsv also uses the sqlite3 vtable based on the example in the original sqlite3 code, but it modifies it to use the zsvlib parser instead of the original CSV parser. The zsv parser is different from most CSV parsers in how it uses SIMD operations and minimizes memory copying. zsvlib parses CSV based on the same spec that Excel implements, so having data URIs in the CSV is fine, but if you wanted a compound value (e.g. text + link), you would need to overlay your own structure into the CSV text data (for example, embed JSON inside a column of CSV data). Not sure that answers your question but if not, feel free to add further detail and I'll try again...
## /? sqlite arrow
- "Comparing SQLite, DuckDB and Arrow with UN trade data" (2021) https://news.ycombinator.com/item?id=29010103 ; partial benchmarks of query time and RAM requirements [relative to data size] would be useful
- "Introducing Apache Arrow Flight SQL: Accelerating Database Access" (2022) https://arrow.apache.org/blog/2022/02/16/introducing-arrow-f... :
> Motivation: While standards like JDBC and ODBC have served users well for decades, they fall short for databases and clients which wish to use Apache Arrow or columnar data in general. Row-based APIs like JDBC or PEP 249 require transposing data in this case, and for a database which is itself columnar, this means that data has to be transposed twice—once to present it in rows for the API, and once to get it back into columns for the consumer. Meanwhile, while APIs like ODBC do provide bulk access to result buffers, this data must still be copied into Arrow arrays for use with the broader Arrow ecosystem, as implemented by projects like Turbodbc. Flight SQL aims to get rid of these intermediate steps.
## "The Virtual Table Mechanism Of SQLite" https://sqlite.org/vtab.html :
> - One cannot create a trigger on a virtual table.
Just posted about eBPF a few days ago; opcodes have costs that are or are not costed: https://news.ycombinator.com/item?id=31688180
> - One cannot create additional indices on a virtual table. (Virtual tables can have indices but that must be built into the virtual table implementation. Indices cannot be added separately using CREATE INDEX statements.)
It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?
> - One cannot run ALTER TABLE ... ADD COLUMN commands against a virtual table.
Are there URIs in the schema? Mustn't there thus be a meta-schema that does e.g. nested structs with portable types [with URIs], (and jsonschema, [and W3C SHACL])? #nbmeta #linkedresearch
## /? sqlite arrow virtual table
- sqlite-parquet-vtable reads parquet with arrow for SQLite virtual tables https://github.com/cldellow/sqlite-parquet-vtable :
$ sqlite/sqlite3
sqlite> .eqp on
sqlite> .load build/linux/libparquet
sqlite> CREATE VIRTUAL TABLE demo USING parquet('parquet-generator/99-rows-1.parquet');
sqlite> SELECT * FROM demo;
    sqlite> SELECT * FROM demo WHERE foo = 123;
    sqlite> SELECT * FROM demo WHERE foo = '123'; -- incurs a severe query plan performance regression without immediate feedback
## SQLite query optimization: `EXPLAIN QUERY PLAN` https://www.sqlite.org/eqp.html :
> The EXPLAIN QUERY PLAN SQL command is used to obtain a high-level description of the strategy or plan that SQLite uses to implement a specific SQL query. Most significantly, EXPLAIN QUERY PLAN reports on the way in which the query uses database indices. This document is a guide to understanding and interpreting the EXPLAIN QUERY PLAN output. [...] Table and Index Scans [...] Temporary Sorting B-Trees (when there's not an `INDEX` for those columns) ... `.eqp on`
The SQLite "Query Planner" docs https://www.sqlite.org/queryplanner.html list Big-O computational complexity bound estimates for queries with and without preexisting indices.
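A quick way to see the difference EXPLAIN QUERY PLAN reports, sketched with Python's stdlib sqlite3 (the table and index names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo (foo INTEGER, bar TEXT)")
con.executemany("INSERT INTO demo VALUES (?, ?)",
                [(i, str(i)) for i in range(1000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is a readable plan step
    return [row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

print(plan("SELECT * FROM demo WHERE foo = 123"))
# e.g. ['SCAN demo'] (full table scan)

con.execute("CREATE INDEX idx_demo_foo ON demo (foo)")
print(plan("SELECT * FROM demo WHERE foo = 123"))
# e.g. ['SEARCH demo USING INDEX idx_demo_foo (foo=?)']
```

The exact detail strings vary slightly across SQLite versions, but SCAN vs. SEARCH ... USING INDEX is the distinction the docs walk through.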
## database / csv benchmarks
Show HN: Easily Convert WARC (Web Archive) into Parquet, Then Query with DuckDB
How does this compare with SQLite approaches shared recently?
Well there's a virtual table extension to read parquet files in SQLite. I've not tried it myself. https://github.com/cldellow/sqlite-parquet-vtable
Could this work with datasette (which is a flexible interface to sqlite with a web-based query editor)?
It appears so: https://github.com/simonw/datasette/issues/657
Bundling binary tools in Python wheels
The Python wheel "ecosystem" is rather nice these days. You can create platform wheels and bundle in shared objects and binaries compiled for those platforms, automated by tools like auditwheel (Linux) and delocate-wheel (OSX). These tools even rewrite the wheel's manifest and update the shared object RPATH values and mangle the names to ensure they don't get clobbered by stuff in the system library path. The auditwheel tool converts a "linux" wheel into a "manylinux" wheel which is guaranteed by the spec to run on certain minimum Linux kernels.
Wheels are merely zips with a different extension, so worst case, for projects where these automation tools fail you, you can do it yourself. You simply need to be careful to update the manifest and make sure shared objects are loaded in the correct order.
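Since wheels are just zips, you can inspect or patch one with the stdlib. A toy sketch that builds a minimal wheel and lists its contents (the package name and metadata are made up; a real wheel's RECORD lists a hash and size for every file):

```python
import os
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
whl = os.path.join(tmp, "toypkg-0.1.0-py3-none-any.whl")

# A wheel is: your package files plus a *.dist-info/ metadata directory
with zipfile.ZipFile(whl, "w") as zf:
    zf.writestr("toypkg/__init__.py", "__version__ = '0.1.0'\n")
    zf.writestr("toypkg-0.1.0.dist-info/METADATA",
                "Metadata-Version: 2.1\nName: toypkg\nVersion: 0.1.0\n")
    zf.writestr("toypkg-0.1.0.dist-info/RECORD", "")  # normally: path,hash,size per file

with zipfile.ZipFile(whl) as zf:
    print(zf.namelist())
```

Tools like auditwheel do essentially this, plus copying shared objects in and rewriting RPATHs, before re-zipping.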
On the user side, a `pip install` should automatically grab the relevant platform wheel from pypi.org, or, if there is not one available, fall back to trying to compile from source. With PEP 517 this all happens in a reproducible, isolated build environment so if you manage to get it building on your CI pipeline it is likely to work on the end user's machine.
I'm not sure what the state of wheels on Windows is these days. Is it possible to bundle DLLs in Windows wheels and have it "just work" the way it does for (most) Linux distros and OSX?
FWICS, wheel has no cryptographic signatures at present:
The minimal cryptographic signature support in the `wheel` reference implementation was removed by dholth;
The GPG ASC signature upload support present in legacy PyPI and then the warehouse was removed by dstufft;
"e2e" TUF is not yet implemented for PyPI, which signs everything uploaded with a key necessarily held in RAM; but there's no "e2e" because packages aren't signed before being uploaded to PyPI. Does twine download and check PyPI's TUF signature for whatever was uploaded?
I honestly haven't looked at conda's fairly new package signing support yet.
FWIR, in comparison to legacy python eggs with setup.py files, wheels aren't supposed to execute code as the user installing the package.
From https://news.ycombinator.com/item?id=30549331 :
https://github.com/pypa/cibuildwheel :
>>> Build Python wheels for all the platforms on CI with minimal configuration.
>>> Python wheels are great. Building them across Mac, Linux, Windows, on multiple versions of Python, is not.
>>> cibuildwheel is here to help. cibuildwheel runs on your CI server - currently it supports GitHub Actions, Azure Pipelines, Travis CI, AppVeyor, CircleCI, and GitLab CI - and it builds and tests your wheels across all of your platforms
You're right, both the infrastructure and metadata for cryptographic signatures on Python packages (both wheels and sdists) aren't quite there yet.
At the moment, we're working towards the "e2e" scheme you've described by adding support for Sigstore[1] certificates and signatures, which will allow any number of identities (including email addresses and individual GitHub release workflows) to sign for packages. The integrity/availability of those signing artifacts will in turn be enforced through TUF, like you mentioned.
You can follow some of the related Sigstore-in-Python work here[2], and the ongoing Warehouse (PyPI) TUF work here[3]. We're also working on adding OpenID Connect token consumption[4] to Warehouse itself, meaning that you'll be able to bootstrap from a trusted GitHub workflow to a PyPI release token without needing to share any secrets.
[1]: https://www.sigstore.dev/
[2]: https://github.com/sigstore/sigstore-python
Quantum Algorithm Implementations for Beginners
See the “Quantum Algorithm Zoo” for a comprehensive list of quantum algorithms.
https://quantumalgorithmzoo.org/
Massive amounts of work are needed to make these comprehensible.
> Tequila is an Extensible Quantum Information and Learning Architecture where the main goal is to simplify and accelerate implementation of new ideas for quantum algorithms. It operates on abstract data structures allowing the formulation, combination, automatic differentiation and optimization of generalized objectives. Tequila can execute the underlying quantum expectation values on state of the art simulators as well as on real quantum devices.
https://github.com/tequilahub/tequila#quantum-backends
Quantum Backends currently supported by tequilahub/tequila: Qulacs, Qibo, Qiskit, Cirq (SymPy), PyQuil, QLM / myQLM
tequila-tutorials/Quantum_Calculator.ipynb https://github.com/tequilahub/tequila-tutorials/blob/main/Qu... :
> Welcome to the Tequila Calculator Tutorial. In this tutorial, you will learn how to create a quantum circuit that simulates addition using Tequila. We also compare the performance of various backends that Tequila uses.
A problem that could possibly be quantum-optimized and an existing set of solutions: Scheduling
Optimize the scheduling problem better than e.g. SLURM and then generate a sufficient classical solution that executes in P-space on classical computers.
SLURM https://en.wikipedia.org/wiki/Slurm_Workload_Manager :
> Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[2]
Additional applications and use cases: "Employee Scheduling" > "Ask HN: What algorithms should I research to code a conference scheduling app" https://news.ycombinator.com/item?id=22589911
> [Hilbert Curve Scheduling] https://en.wikipedia.org/wiki/Hilbert_curve_scheduling :
> [...] the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space filling problem using Hilbert curves, assigning related tasks to locations with higher levels of proximity.[1] Other space filling curves may also be used in various computing applications for similar purposes.[2]
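The curve mapping itself is short. Here is a sketch of the standard distance-to-coordinates conversion (a Python port of the well-known C routine on the Wikipedia "Hilbert curve" page), which is what lets a scheduler turn a 1-D task ordering into proximity-preserving 2-D placements:

```python
def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid (n a power of 2)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant so sub-curves connect
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Consecutive 1-D positions map to adjacent 2-D cells; that is the
# locality property the scheduler exploits:
print([d2xy(4, d) for d in range(8)])
# [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (0, 3), (1, 3), (1, 2)]
```

Tasks that are neighbors in the 1-D ordering thus land on physically neighboring nodes, which is the "higher levels of proximity" the quoted article describes.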
How to create a dashboard in Python with Jupyter Notebook
Surprised I hadn't heard of mplfinance. Similar code for econometric charts with bands and an annotation table with dated events would be useful and probably already exists somewhere. Maybe Predictive Forecasting libraries for time series already have something that could be factored out into a maintained package for such common charts? E.g. seaborn has real nice confidence intervals with matplotlib, too; though the matplotlib native color schemes are colorblind-friendly: "Perceptually Uniform Sequential colormaps" https://matplotlib.org/3.5.0/tutorials/colors/colormaps.html
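A minimal sketch of the "line with a confidence band plus a dated-event annotation" chart that comment describes, using plain matplotlib (`fill_between` for the band; all data here is synthetic):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(100)
y = np.cumsum(rng.normal(size=100))        # synthetic time series
err = 1.96 * np.linspace(0.5, 2.0, 100)    # widening ~95% band

fig, ax = plt.subplots()
ax.plot(x, y, color="C0", label="series")
ax.fill_between(x, y - err, y + err, color="C0", alpha=0.2, label="95% CI")
# dated-event style annotation, as in econometric charts
ax.annotate("event", xy=(60, float(y[60])), xytext=(60, float(y[60]) + 5),
            arrowprops=dict(arrowstyle="->"))
ax.legend()
fig.savefig("bands.png")
```

seaborn's `lineplot` with `errorbar=` produces this kind of band automatically; the above is the underlying matplotlib pattern a factored-out package would wrap.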
awesome-jupyter > Rendering/Publishing/Conversion https://github.com/markusschanta/awesome-jupyter#renderingpu... :
> ContainDS Dashboards - JupyterHub extension to host authenticated scripts or notebooks in any framework (Voilà, Streamlit, Plotly Dash etc)
IIRC, ContainDS and Voila spawn per-dashboard and/or per-user Jupyter kernels with JupyterHub Spawners and Authenticators; like Binderhub ( https://mybinder.org/ ) but with required login and without repo2docker?
Streamlit lists Bokeh, Jupyter Voila, Panel, and Plotly Dash as Alternative dashboard approaches: https://github.com/MarcSkovMadsen/awesome-streamlit#alternat...
Says here that mljar/mercury is dual-licensed AGPL: https://github.com/mljar/mercury
From the readme I can't tell what powers the cited REST API functionality. From setup.py > mercury/requirements.txt it's DRF: Django REST Framework. https://github.com/mljar/mercury/blob/main/mercury/requireme...
E.g. BentoML is built on FastAPI, which is async and built on Starlette (from the creator of DRF), but FastAPI doesn't yet have the plethora of packages with tests/ supported by the Django community and the DSF (Django Software Foundation).
The REST API functionality allows you to execute the notebook via a POST request. It is powered by DRF. You can read more in the docs https://mercury-docs.readthedocs.io/en/latest/notebook-as-re...
The Y Combinator in Go with generics
I admire how someone passionate about a subject can provide in-depth explanation of _how_ that thing works. But to me, I still have absolutely no idea _why_ or _when_ I would need to apply this (even after reading the original post[0]). A motivating practical example would go a long way as an introduction before getting into the nitty-gritty details.
[0] https://eli.thegreenplace.net/2016/some-notes-on-the-y-combi...
The problem is marrying two ideas: lambda calculus (where all computation is done implicitly via anonymous functions), and recursion (where some set of functions call each other by name in a cycle).
This is a bit more interesting than it looks because ostensibly you're using the name of a function in its definition before that name ever has a meaning -- you're not really saying that you should literally call that symbol, but that the compiler should produce a function according to some set of rules that fills in the self references.
That doesn't work in lambda calculus though, where definitions are concrete and you don't have compiler magic to resolve the undefined symbol in your definition. A solution is to pass some function to itself as an argument, and you have a code pattern that goes along with it (the Y combinator).
This retains some "usefulness" in the sense that lambda calculus is still being explored for various purposes as a foundation for other things, and also in that languages based on that line of thought might benefit from using that structure explicitly. If you have some sort of syntactic sugar for recursion in your language of choice though and don't care about foundations then it's probably not very applicable.
Thanks, that helps a bit. So I get the what: a way to make an anonymous function recursive. As for the why, it seems to me it's because "lambda calculus doesn't have named functions." So this tells me this is a programming language design concern, as opposed to a pattern that can be used in everyday programming. Based on this, providing examples of how to implement a Y combinator in Clojure, Python, and Go is of questionable value IMO. And that's OK, since a lot of things we do don't need to have practical value (e.g. learning, understanding, etc).
Even if it doesn't have value for everyday programming, translating it into Clojure/Python/Go/whatever can make it easier to understand. Most explanations I've seen of the Y combinator just work in lambda calculus, which I at least find somewhat difficult to read; translating it into a more familiar language can make it easier to understand.
Fixed-point combinator > Y Combinator, Implementations in other languages: https://en.wikipedia.org/wiki/Fixed-point_combinator
Y-combinator in like 100 languages: https://rosettacode.org/wiki/Y_combinator #Python
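For a concrete feel in Python: because Python is strictly evaluated you need the Z combinator (the eta-expanded Y), but the idea is the same: anonymous recursion via self-application, with no named self-reference anywhere in the function body.

```python
# Z combinator: the strict-evaluation variant of the Y combinator
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Define factorial without ever naming it inside its own body:
# `rec` is supplied by Z and stands for "the function being defined".
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))   # 120
print(fact(10))  # 3628800
```

The plain Y combinator would loop forever here, since Python evaluates `x(x)` eagerly; wrapping it as `lambda v: x(x)(v)` delays that self-application until the recursive call actually happens.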
Show HN: Pixie, open source observability for Kubernetes using eBPF
"Sysdig and Falco now powered by eBPF" too; https://sysdig.com/blog/sysdig-and-falco-now-powered-by-ebpf...
- "Linux Extended BPF (eBPF) Tracing Tools" https://www.brendangregg.com/ebpf.html lists eBPF commands, [CLI] frontends, relevant kernel components
- awesome-ebpf#manual-pages https://github.com/zoidbergwill/awesome-ebpf links to the kernel eBPF docs
- eBPF -> BPF Berkeley Packet Filter: https://en.wikipedia.org/wiki/Berkeley_Packet_Filter :
> Extensions & optimizations: Since version 3.18, the Linux kernel includes an extended BPF virtual machine with ten 64-bit registers, termed extended BPF (eBPF). It can be used for non-networking purposes, such as for attaching eBPF programs to various tracepoints.[5][6][7] Since kernel version 3.19, eBPF filters can be attached to sockets,[8][9] and, since kernel version 4.1, to traffic control classifiers for the ingress and egress networking data path.[10][11] The original and obsolete version has been retroactively renamed to classic BPF (cBPF). Nowadays, the Linux kernel runs eBPF only and loaded cBPF bytecode is transparently translated into an eBPF representation in the kernel before program execution.[12] All bytecode is verified before running to prevent denial-of-service attacks. Until Linux 5.3, the verifier prohibited the use of loops.
- ... EVM/eWASM opcodes have a cost in gas/particles; https://github.com/crytic/evm-opcodes ... (dlt) Dataset indexing curation signals & incentives
"Tracing in Kubernetes: kubectl capture plugin." https://sysdig.com/blog/tracing-in-kubernetes-kubectl-captur...
kubectl-capture: https://github.com/sysdiglabs/kubectl-capture.git
Is it possible to do this sort of low level distributed tracing with Pixie e.g. conditionally when a pattern expression matches?
Pixie contributor here. Lots of great eBPF resources there!
For the question of whether it's possible to trace conditionally when a pattern expression matches, Pixie's philosophy is to trace everything and then allow you to use the querying language (called PxL) to query for the conditions you were looking for. The idea is that you may not know what you should have been looking for until you need to troubleshoot.
To be able to drop into an (e.g. rr) debugger on condition (e.g. by replicating that node's state ~State Share ) might result in a less appropriately comprehensive automated test suite, or faster tests?
Looks like Jaeger (Uber contributed to CNCF) supports OpenTracing, OpenTelemetry, and exporting stats for Prometheus. https://github.com/cncf/landscape
Implementing strace in Rust
awesome-ebpf lists a few ebpf libs for rust; it looks like there are a few ~strace+ebpf tools: https://github.com/zoidbergwill/awesome-ebpf
Gitsign
Src: https://github.com/sigstore/gitsign
> Keyless Git signing with Sigstore!
> This is heavily inspired by [github/smimesign], but uses keyless Sigstore to sign Git commits with your own GitHub / OIDC identity
Physicists discover never-before seen particle sitting on a tabletop
Imagine the claim is justified, and pools of quantum fluids can transition into a phase that is decoupled from electromagnetic fields and become dark matter. It would still be baryonic and would become ordinary matter just by heat. We don't have a lot of experience with what happens when quantum mechanical effects expand to a cosmological scale. Who would be surprised to find new physics there?
TIL about (models of) Superfluid Quantum Gravity: https://news.ycombinator.com/item?id=31377395 https://westurner.github.io/hnlog/#comment-31383784
> How is "GR on Bernoulli", GM cannot describe n*x*oo more precisely than oo, and Conway's surreal infinities aren't good axioms either (for GR or for QM with (chaotic) fluids which perhaps need either infinities plural or superfluid QG (instead of QFT fwics); not making sense?
Is this coherent after so much constructive interference in fluids?
The case for expanding rather than eliminating gifted education programs (2021)
> I agree that it is unacceptable to have eighth grade Algebra I classes and gifted education programs that disproportionately exclude Black and brown children. But if equity is the goal, we should be mandating that every eighth grader takes Algebra I—and then structure the entire pre-K to seventh grade student experience to ensure this is a legitimate possibility for every child. Since when does equity mean everyone gets nothing?
I don't know whether Algebra I is above or below what eighth graders can be expected to do, but raising the difficulty of every class to what the most advanced student is capable of is not the answer either. Everybody knows someone else that is better at math than they are, and someone who is not as good. It is universal common knowledge that the ease with which one takes to math is different for different people. Why can't we just admit it and quit trying to make people who are unique the same?
Subject categories such as "math" are also very broad. I struggled with math in elementary school, because it was mostly about memorization. 3 + 5 = 8. 6 x 7 = 42. Learning and being able to spit out those facts as quickly and flawlessly as possible was most of what I remember about math as a young kid. I was never good at that. I still am not good at that. I got Cs and sometimes lower grades in math.
Somehow, though, I passed a test in 7th grade to qualify to take algebra in 8th grade. Suddenly stuff started making sense, and I could see a point to it. You could actually solve an interesting problem. It wasn't just regurgitation of facts. It became more about thinking and connecting concepts and even being creative. I started getting As in math, and continued to do well in high school with geometry, trig, and calculus, but up until algebra was introduced I would certainly not have demonstrated a gifted ability in "math".
You struggled with math in elementary school because of deeply flawed teaching. Even at that early age, it's not hard to understand that "3 + 5 = 8" is not something that you'd need to literally memorize. Of course, the fact that teaching quality can vary to such an extent is itself noteworthy.
I don't think I agree. 3 + 5 = 8 isn't something you need to literally memorize, but if you don't memorize such facts of basic arithmetic you're at a disadvantage when it comes to doing homework and tests. It's easier and less error-prone to memorize 8*7 than it is to work it out manually every time you see it until it sticks.
For what it's worth, I struggled with math in elementary school and especially high school -- I failed algebra I, then passed it with a D; then I failed algebra II before passing it with a D. It wasn't until I was forced to take a business calculus class in my 20s that it started clicking, and I assure you the quality of instruction wasn't any better at the college level than it was in high school. (I did eventually graduate with honors with a math degree.)
Agreed. It does need memorization. Anyone who doesn't think so can try to add and multiply in hexadecimal: what is 0xA + 0xC, and what is 0x8 * 0x6? It becomes quite obvious that memorization is required when you try it in any radix other than decimal (which is memorized already).
It might be quite obvious that memorization is convenient, but 'required' means "you can't do it any other way" and that's obviously incorrect.
IDK why we'd assume that there's a different cognitive process for learning mathematics with radix 10 than with radix 16?
Mathematics_education#Methods https://en.wikipedia.org/wiki/Mathematics_education#Methods :
> [...] Rote learning: the teaching of mathematical results, definitions and concepts by repetition and memorisation typically without meaning or supported by mathematical reasoning. A derisory term is drill and kill. In traditional education, rote learning is used to teach multiplication tables, definitions, formulas, and other aspects of mathematics.
I think they mean that it is very useful to “cache” these primitive results, but it is absolutely possible to regenerate them if one is missing or the like.
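The hexadecimal examples above are easy to check procedurally rather than from memory, which is exactly the "regenerate rather than memorize" point. A sketch using Python's radix conversions:

```python
# 0xA + 0xC and 0x8 * 0x6, worked out procedurally rather than recalled
a = int("A", 16)   # 10
c = int("C", 16)   # 12
print(format(a + c, "X"))                        # 16  (0xA + 0xC = 0x16, i.e. 22 decimal)
print(format(int("8", 16) * int("6", 16), "X"))  # 30  (0x30, i.e. 48 decimal)

# The same digit-by-digit procedure works in any radix:
print(int("101", 2) + int("11", 2))              # 8   (5 + 3, entered in binary)
```

Memorized tables just cache the results of this procedure for the radix you grew up with.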
Beautiful Soup
I found lxml.html a lot easier to work with than bs4, in case that helps anyone else.
On the off chance you were not aware, bs4 also supports[0] getting parse events from html5lib[1] which (as its name implies) is far more likely to parse the text the same way the browser would.
0: https://www.crummy.com/software/BeautifulSoup/bs4/doc/index....
BeautifulSoup is an API for multiple parsers https://beautiful-soup-4.readthedocs.io/en/latest/#installin... :
BeautifulSoup(markup, "html.parser")
BeautifulSoup(markup, "lxml")
BeautifulSoup(markup, "lxml-xml")
BeautifulSoup(markup, "xml")
BeautifulSoup(markup, "html5lib")
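Whichever backend you pick, the bs4 API on top is the same. A minimal usage sketch, assuming bs4 is installed, using only the stdlib-backed "html.parser" so no lxml or html5lib is required:

```python
from bs4 import BeautifulSoup

markup = "<html><body><p class='a'>one</p><p>two</p></body></html>"
soup = BeautifulSoup(markup, "html.parser")

# Same selectors regardless of which parser produced the tree
print([p.get_text() for p in soup.find_all("p")])  # ['one', 'two']
print(soup.find("p", class_="a").get_text())       # one
```

The backends differ mainly in speed and in how they repair malformed markup, not in how you query the resulting tree.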
Looks like lxml w/ xpath is still the fastest with Python 3.10.4, from "Pyquery, lxml, BeautifulSoup comparison" https://gist.github.com/MercuryRising/4061368 ; which is fine for parsing (X)HTML(5) that validates. (EDIT: Is xml/html5 a good format for data serialization? defusedxml ... Simdjson, Apache arrow.js)
I was curious, so I tried that performance test you linked to on my machine with the various parsers:
==== Total trials: 100000 =====
bs4 lxml total time: 110.9
bs4 html.parser total time: 87.6
bs4 lxml-xml total time: 0.5
bs4 xml total time: 0.5
bs4 html5lib total time: 103.6
pq total time: 8.7
lxml (cssselect) total time: 8.8
lxml (xpath) total time: 5.6
regex total time: 13.8 (doesn't find all p)
bs4 is damn fast with the lxml-xml or xml parsers.
You want a proper HTML 5 parser that can handle non-valid documents. And the fastest one is https://github.com/kovidgoyal/html5-parser (over 30x faster than html5lib).
Formal methods only solve half my problems
From "Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification" https://news.ycombinator.com/item?id=30194993 :
> Can there still be side channel attacks in formally verified systems? Can e.g. TLA+ help with that at all?
Side-channel attack https://en.wikipedia.org/wiki/Side-channel_attack
Yes, there can still be side-channel and out-of-band attacks in FV systems. For example, pouring water into the server rack. That’s a cheeky example, but things like timing side-channels via speculative execution and cache hits wouldn’t be addressed by FV at the software level. I think there is some ongoing research for verifying software/hardware interactions but I’m not aware of a product in widespread use that checks this.
The use of constant-time algorithms for crypto is a way to mitigate things like timing side-channels, and there are other techniques like disabling hyper-threading that can help.
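One everyday instance of the constant-time technique mentioned above is comparing secrets: Python's stdlib `hmac.compare_digest` takes time independent of where the inputs first differ, unlike `==`, which returns early at the first mismatching byte. A sketch (the token value is made up):

```python
import hmac

expected_token = "s3cr3t-token-value"

def check(candidate: str) -> bool:
    # A naive `candidate == expected_token` short-circuits at the first
    # mismatching byte, leaking the mismatch position through timing.
    # compare_digest examines every byte regardless of where they differ.
    return hmac.compare_digest(candidate, expected_token)

print(check("s3cr3t-token-value"))  # True
print(check("x3cr3t-token-value"))  # False
```

This mitigates the timing channel at the algorithm level; the shared-resource channels (cache, speculation) the parent describes need hardware-level mitigations.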
The depressing fact is that side channels exist any time there’s a shared computing resource. The question is how noisy the channel is, the data rate, etc.
It might be argued that side channels exist whenever there is a shared channel; which is always, because plenoptic functions, wave function, air gap, ram bus mhz, nonlocal entanglement
Category:Side-channel attacks https://en.wikipedia.org/wiki/Category:Side-channel_attacks
Show HN: An open source alternative to Evernote (Self Hosted)
For me, Evernote’s killer feature is automatic OCR. (So that I can just upload a doc through my phone’s camera.)
It looks like this doesn’t have that, unfortunately.
Noted! I'll see if this feature can be added.
https://www.elastic.co/guide/en/elasticsearch/plugins/curren... :
> [The ElasticSearch Core Ingest Attachment Processor Plugin]: The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by using the Apache text extraction library Tika.
> The source field must be a base64 encoded binary. If you do not want to incur the overhead of converting back and forth between base64, you can use the CBOR format instead of JSON and specify the field as a bytes array instead of a string representation. The processor will skip the base64 decoding then.
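Preparing a document for that pipeline is just base64 on the client side; a sketch with the stdlib (the bytes, index, and pipeline names are illustrative, and the actual HTTP upload is elided):

```python
import base64
import json

pdf_bytes = b"%PDF-1.4 ... fake bytes for illustration"

# The source field must be base64 text when the request body is JSON
doc = {"data": base64.b64encode(pdf_bytes).decode("ascii")}
body = json.dumps(doc)

# e.g. PUT /my-index/_doc/1?pipeline=attachment with `body` as the request body

# The encoding round-trips losslessly:
print(base64.b64decode(doc["data"]) == pdf_bytes)  # True
```

The CBOR alternative in the quote avoids this encode/decode pair by sending the raw bytes directly.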
Apache Tika supported formats > Images > TesseractOCR: https://tika.apache.org/2.4.0/formats.html https://tika.apache.org/2.4.0/formats.html#Image_formats :
> When extracting from images, it is also possible to chain in Tesseract, via the TesseractOCRParser, to have OCR performed on the contents of the image.
/? Meilisearch "ocr" GitHub;
Looks like e.g. paperbase (agpl) also implements ocr with tesseractocr: https://docs.paperbase.app/
tesseract-ocr/tesseract https://github.com/tesseract-ocr/tesseract
/? https://github.com/awesome-selfhosted/awesome-selfhosted#sea... ctrl-f "ocr"
Would be good to have:
- Search results on a timeline indicating search match occurrence frequency; ability to "zoom in" or "drill down"
- "Find more like these" that prepopulates a search query form
- "Find more like these" that mutates the query pattern and displays the count for each original and mutated query along with the results; with optional optimization
The name needs to change.
This tool commits the cardinal sin of choosing a name GitSomething, where Git is only incidental to the domain that the product is concerned with (notetaking) and which is made worse by being yet another project that says "Git" where what it means is "we're using the GitHub API".
> You may not use the Marks in the following ways:
...
> In any way that indicates that Git favors one distribution, platform, product, etc. over another except where explicitly indicated in writing by Conservancy.
...
> In addition, you may not use any of the Marks as a syllable in a new word or as part of a portmanteau (e.g., "Gitalicious", "Gitpedia") used as a mark for a third-party product or service without Conservancy's written permission. For the avoidance of doubt, this provision applies even to third-party marks that use the Marks as a syllable or as part of a portmanteau to refer to a product or service's use of Git code.
http://git-scm.com/about/trademark
> Let's say as an example that it does backups. We'd prefer it not call itself GitBackups. We don't endorse it, and it's just using the name to imply association that isn't there. You can come up with similar hypotheticals: GitMail that stores mailing list archives in Git, or GitWiki that uses Git as a backing store.
https://public-inbox.org/git/20170202022655.2jwvudhvo4hmueaw...
What's also notable is that it conflicts with what is permitted: 'Commands like "git-foo" (so you run it as "git foo") are generally OK. This is Git's well-known extension mechanism[...]' When "git-foo" exists, we've approved "Git Foo" as a matching project name'. This combined with the fact that Git itself already ships with the "git-notes" command baked right in...
How do GitHub and Gitlab get away with this? Have they been given permission by Conservancy? Or are they in violation?
IIRC GitHub, (BitBucket), and GitLab existed before this new (license-addendum?) Trademark policy; but may be wrong. Which is to say that I don't recall there having been such a trademark policy at the time. Isn't it actually the "Old BSD License" that retains the "may not spaketh the name" clause?
(Bitcoin is also originally a LF project; in Git, like BitTorrent, and similar to BitGold only in name, for a reason. Bitcoin initially lacked a Foundation to hold trademarks, in particular.)
The choosealicense.com table of licenses in the appendix is a service of GitHub, and GitLab also donates free CI build runner minutes for Open Source projects: https://choosealicense.com/appendix/#trademark-use :
> Trademark use
> This license explicitly states that it does NOT grant trademark rights, even though licenses without such a statement probably do not grant any implicit trademark rights.
> IIRC GitHub, (BitBucket), and GitLab existed before this new (license-addendum?) Trademark policy
Trademarks are like copyright licenses where if you haven't been given explicit permission, then you don't have any. (The difference is that you can lose your trademark if you don't actively police misuse, but that's beside the point.)
Trademark > Registration: https://en.wikipedia.org/wiki/Trademark#Registration
Speak in complete sentences.
How are you using your whiteboard at home?
"How to teach technical concepts with cartoons" https://news.ycombinator.com/item?id=15575411 https://westurner.github.io/hnlog/#story-15574890 :
> I learned about visual thinking and visual metaphor in application to business communications from "The Back of the Napkin: Solving Problems and Selling Ideas with Pictures" [ https://www.danroam.com/my-books ]
California parents could soon sue for social media addiction
Creating regulations like this may end up acting as a regulatory moat for existing social network companies. This probably makes the overall social media pie smaller, but it guarantees incumbents a slice of the pie.
We've seen this happen in other highly regulated areas such as healthcare, telecommunications, etc.
I suspect the way things are trending, in 10 years, very few entrepreneurs would be willing to start a new social media company.
Clearly the regulators see this as a potential downside, which is why the $100 million in gross revenue provision was added.
On the other hand, we want to prevent new companies forming on the premise they can make money from gamifying their product so much it becomes addictive. Now, addictiveness has to have some definition to be useful. It can't be some odd random user who suffers from some kind of condition that prevents them from self-regulating their particular addiction whereas for example other people don't get addicted. There have to be some thresholds and milestones which indicate some service is "addictive" and then regulate against that.
I think when companies consult with behavioral, and other, psychologists in order to craft better ways to wring more engagement out of their users, those companies are intentionally making their products addictive. I don't see it as being much different than exploiting the addictive potential of new nicotine salts, or exploiting additives to cigarettes to do the same.
It can be spun however anyone wants to spin it, but at the end of the day, that's exploiting biology to subvert the will of others. And yes, I know this can describe advertising, as well.
Agreed. This is why the "metaverse" needs oversight as well. It will be a new frontier ripe for gamification, exploitation and general behavioral control.
No, every business optimizes yield.
Smart businesses optimize UX for conversion. FREE with no revenue is loss.
What's next? Suing Vegas for the lights? You don't want to gamble? Don't go to Vegas.
They already have "How much time have I spent in here?" features.
You may not optimize yield because the children are compulsively addicted and it's the internet's fault.
What's next? Suing the bar for allowing you to spend time there? You'll have to make it more unbearable.
Just do a little more censorship for me too, mmkay?
How about the movies? Are they allowed to optimize for engagement by screen testing, or no?
It's not Art it's Ari: You want to sell an art film? Take it to an art film festival.
How are they supposed to know that you don't want to be in the store anymore?
So, just {Facebook,} has to have a timer at the top? Above the fold? Indicating cumulative time spent? What about Amazon?
For context here, there are many existing methods for limiting one's access to certain DNS domains and applications at the client. A reasonable person can:
- Add an /etc/hosts file entry: `etchosts='/etc/hosts'; echo '127.0.0.1 domain.com www.domain.com' >> "${etchosts}"`
- Install a browser extension for limiting time spent by domain
- Buy a router that can limit access by DNS domain and time of day (with regard for the essential communications of others who could bypass default DNS blocking with a VPN that can be expected to also regularly fail)
- Install an app (with access to the OS process space of other programs in order to restrict them) to limit time spent by application and/or DNS domain
-- Enable "Focus Mode" in Android; with the "Digital Wellbeing" application
-- Enable "Screen Time" in iOS; and also report on and limit usage of apps [and also to limit access to DNS domains, you'd also need required integration with a required browser]
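The first item above can be sketched in Python as a minimal, idempotent hosts-file blocker (a sketch: `domain.com` is a placeholder, and writing to the real /etc/hosts requires root):

```python
from pathlib import Path

def hosts_block_lines(domains):
    """Return /etc/hosts entries pointing each domain at localhost."""
    return [f"127.0.0.1 {d}" for d in domains]

def append_blocks(hosts_path, domains):
    """Append blocking entries, skipping any already present.
    Writing to the real /etc/hosts requires root privileges."""
    path = Path(hosts_path)
    existing = path.read_text()
    new = [line for line in hosts_block_lines(domains) if line not in existing]
    if new:
        with path.open("a") as f:
            f.write("\n".join(new) + "\n")
```

Running it twice is a no-op, which matters for a file that other tools also edit.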
You can install just an IM (Instant Messaging) app, and only then learn strategies for focusing amidst information overload and alert fatigue.
Some users do manage brands and provide customer service and support, for whom blocking would cut off essential communications, while managing their health at a desk all day. Some places literally require you to check your mobile device at the door. What should the default timer above the fold on just facebook be set to?
>What's next? Suing the bar for allowing you to spend time there?
Well, actually: https://en.m.wikipedia.org/wiki/Dram_shop
> How are they supposed to know that you don't want to be in the store anymore?
Is the store - who you are not paying - a 1) business of public accommodation; or 2) obligated to provide such detection and ejection services; or 3) required to provide service?
You can't cut them off, they're Donny (who TOS don't apply to). You must cut them off, they can't they're not even. You are responsible for my behavior!
Best of luck suing the game publisher for optimizing the game, the author of a book you got for FREE for your time lost, suing the bar that you chose to frequent; do you think they're required to provide service to you? Did they prevent you from leaving? Did they prevent you from choosing to do something else, like entering a different URL in your address bar at will?
You should instead pay someone to provide the hypothetical service you're demanding fascist control over. CA is not a shareholder; and this new policy will be challenged and the state must apply said policy to all other businesses equally: you may not dog just {business you're trying to illegally dom} with an obligation to put a countdown or countup timer above the fold because they keep taking so much of your time.
EDIT: I just can't f believe they thought that only {Facebook} would have to check real name IDs at the door, run a stopwatch for each user with a clock above the fold, profile your mental health status, allow Donny to keep harassing just whoever, and tell you when it's time to go because you can't help yourself when it's time to leave the store they continued to optimize.
Microsoft Flight Simulator – Top Gun: Maverick Expansion
https://www.xbox.com/en-US/games/microsoft-flight-simulator
Wikipedia: https://en.wikipedia.org/wiki/Microsoft_Flight_Simulator
/? "Microsoft Flight Simulator – Top Gun: Maverick Expansion" https://www.google.com/search?q=%22Microsoft+Flight+Simulato...
Generating websites with SPARQL and Snowman, part 1
Cool to see a project (Snowman[0]) I'm the creator of mentioned here on HN.
I created Snowman to allow one to use RDF/SPARQL even in the user-facing parts of one's stack (and to learn Go).
E.g. Hugo-academic has Microdata RDF (Go) Templates; and supports Markdown, Jupyter, LaTeX: https://github.com/wowchemy/starter-hugo-academic
"Build pages from data source" https://github.com/gohugoio/hugo/issues/5074
Index funds officially overtake active managers
Index fund: https://en.wikipedia.org/wiki/Index_fund
> Comparison of index funds with index ETFs: In the United States, mutual funds price their assets by their current value every business day, usually at 4:00 p.m. Eastern time, when the New York Stock Exchange closes for the day. [40] Index ETFs, in contrast, are priced during normal trading hours, usually 9:30 a.m. to 4:00 p.m. Eastern time. Index ETFs are also sometimes weighted by revenue rather than market capitalization. [41]
Survivorship bias > Examples > In business, finance, and economics: https://en.wikipedia.org/wiki/Survivorship_bias
Why building profitable trading bot is hard?
I want to know why 90% of trading bots fail. What's the reason behind it?
The stock market is very efficient - aka, the price reflects most, if not all available information at the time.
Therefore, a trading bot would need to trade on information that is not available to other bots, for it to be profitable. This is hard to achieve, and each bot that _does_ achieve it is making the market even more efficient.
Two investors can look at the same balance sheet and come to opposite conclusions. 90% of actively managed investment funds underperform the market. It's a hard problem, full stop. If you're good at solving hard problems, you could solve something less boring and probably make more.
Managed funds must pay managers; probably out of the returns.
ETFs (Exchange-Traded Funds) have no fund management fees; like Class B stock, ETFs typically are not voting shares (which you don't have when you buy a mutual fund anyways).
Algotraders reference e.g. an S&P 500 ETF as the default benchmark for comparing a portfolio's performance.
Quarterly earnings reports are filed as XBRL XML. A value investor might argue that you don't need to rebalance a portfolio until there is new data about technical fundamentals for technical analysis.
The average bear trades on sentiment and comparatively doesn't at all appropriately hedge; this is part of Behavioral economics, the technical reason why some people actually can outperform the market, imho.
If the financial analyst does not have a (possibly piecewise) software function to at least test with backtesting and paper trading, do they even have an objective relative performance statistic? Your notebook or better should also model fees and have a parametrizable initial balance.
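Such a function can be sketched as follows (a minimal sketch with hypothetical toy prices and a made-up momentum rule; it models a per-trade fee and a parametrizable initial balance, with buy-and-hold as the benchmark):

```python
def backtest(prices, signal, initial_balance=10_000.0, fee=0.001):
    """signal(i, prices) -> True to be long on day i, else flat."""
    cash, shares = initial_balance, 0.0
    for i, price in enumerate(prices):
        if signal(i, prices) and shares == 0:
            shares = cash * (1 - fee) / price   # buy, paying the fee
            cash = 0.0
        elif not signal(i, prices) and shares > 0:
            cash = shares * price * (1 - fee)   # sell, paying the fee
            shares = 0.0
    return cash + shares * prices[-1]           # mark to market

prices = [100, 102, 101, 105, 107, 104, 110]    # toy price series
buy_hold = backtest(prices, lambda i, p: True)  # the benchmark
momentum = backtest(prices, lambda i, p: i == 0 or p[i] >= p[i - 1])
```

Here buy-and-hold nets about 10,989 from 10,000 after fees, and the naive momentum rule underperforms it after paying fees on every flip; the point is just that without such a function (plus paper trading) there is no objective relative performance statistic.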
Here's the awesome-quant link directory: https://github.com/wilsonfreitas/awesome-quant
The Good Ol' Days of QBasic Nibbles
FWIU, QB64 will run nibbles.bas.
Nibbles (video game) https://en.wikipedia.org/wiki/Nibbles_(video_game)
Here's a PyGame version of snake/nibbles: https://github.com/caffeinemonster/python-pygame-nibbles/blo...
For in-browser games in Python with JupyterLite and the pyodide WASM kernel, there are now the pyb2d bindings for box2d: https://twitter.com/ThorstenBeier/status/1523573928702087169
Can we make a black hole? And if we could, what could we do with it?
I admit I know next to nothing about this stuff, but something doesn't add up. If everything has a Schwarzschild radius determined by its mass, then should we conclude that particles like electrons and protons also have a (very small) Schwarzschild radius? If the smaller it is, the sooner it explodes, then shouldn't atomic particles have all finished exploding a long time ago? When they explode, what do they eject, if not more subatomic particles like themselves? Alternatively, is the explanation that atomic particles are extended bodies whose sizes exceed their Schwarzschild radii instead of being point masses? If so, then what kind of stuff fills the interior of an electron? I don't have any answers but I have a feeling we're on shaky ground when we start trying to extrapolate general relativity concepts to atomic scales.
edit: typo
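For scale, a quick computation of the Schwarzschild radius r_s = 2GM/c^2 for an electron (rounded CODATA constants):

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8             # speed of light, m/s
m_electron = 9.109e-31  # electron mass, kg

# Schwarzschild radius r_s = 2 G M / c^2
r_s = 2 * G * m_electron / c**2
# ~1.35e-57 m, roughly 22 orders of magnitude below the Planck
# length (~1.6e-35 m), which is one way to see why extrapolating
# GR down to particle scales is on shaky ground.
```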
> If everything has a Schwarzschild radius determined by its mass, then should we conclude that particles like electrons and protons also have a (very small) Schwarzschild radius?
https://en.m.wikipedia.org/wiki/Black_hole_electron
> If the smaller it is, the sooner it explodes, then shouldn't atomic particles have all finished exploding a long time ago?
See my other comment here: https://news.ycombinator.com/item?id=31378092
> I have a feeling we're on shaky ground when we start trying to extrapolate general relativity concepts to atomic scales.
Correct. We know nothing about how to marry General Relativity with atomic-scale physics (quantum mechanics). That's why everyone and their dog are looking for a theory of quantum gravity.
> https://en.m.wikipedia.org/wiki/Black_hole_electron
Very interesting link - I suppose this could potentially make the problem slightly moot for electrons. Still, I don't think this works for other elementary particles, as black holes can't have color charge or weak hypercharge as far as I know (so they can't behave like quarks, gluons, W or Z bosons etc.)
> We know nothing about how to marry General Relativity with atomic-scale physics (quantum mechanics). That's why everyone and their dog are looking for a theory of quantum gravity.
True, though I think this is not even a problem in matching GR and QM, it is a problem in GR itself. The math of GR has infinities when looking at the center of a black hole, so we know there must be some other math that prevents the curvature from reaching infinity. We can of course easily invent infinitely many solutions to this problem, but there is no way to choose between them on an empirical basis, even in principle (since we can't ever experiment with the inside of a black hole).
A theory of quantum gravity would solve a different problem: GR is nonlinear, while QM is linear (if we ignore the Born rule), so they can't describe the same system. Relatedly, if applying GR to a system described by a wave function, we are not able to compute how spacetime will curve, given that a single particle (with its mass) is usually present at many points in spacetime.
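One standard way to make this obstruction concrete is the semiclassical Einstein equation, which sources classical curvature from the expectation value of the stress-energy operator in the quantum state:

```latex
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\,\langle \hat{T}_{\mu\nu} \rangle_{\psi}
```

Even this hybrid behaves badly for a particle in a superposition of widely separated positions: it predicts curvature sourced by the average mass distribution rather than a superposition of geometries, which is part of why a full quantum theory of gravity is sought.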
It is hoped that solving the second problem will also solve the first, but I'm not sure this is guaranteed.
> Still, I don't think this works for other elementary particles, as black holes can't have color charge or weak hypercharge as far as I know (so they can't behave like quarks, gluons, W or Z bosons etc.)
I think it is expected they can. The simple reason there are no explicit BH solutions with color charge is that, in contrast to electrodynamics, there's no classical field theory for the strong interaction that we could put into our Einstein-Hilbert action.
> I think this is not even a problem in matching GR and QM, it is a problem in GR itself.
Yes and no.
All kinds of theories have singularities and infinities. Classical electrodynamics is full of them and quantum field theory is, too. Nevertheless we still say the theories are fine and treat the singularities as pretty much nonphysical. ("Point particles don't really exist / a better theory will get rid of them", "We don't see the bare particles anyway, so let's remove the infinities using renormalization", et cetera.) Yes, spacetime singularities seem somewhat more severe, but I think we have good reasons to believe (e.g. the uncertainty relations) that a theory of quantum gravity would solve this conundrum. I mean, every single singularity we worry about in GR comes with infinite curvature and/or infinite energy densities, hence necessarily requires quantum mechanics to study.
On an unrelated note: Why is no one complaining that quantum field theory, from a mathematical point of view, is completely ill-defined? It surprises me time and again that people ascribe severe issues to GR ("It has singularities", "It's not quantum") and yet completely forget that the issues in quantum mechanics (both philosophical and mathematical) are much more severe. GR, at the very least, is a mathematically absolutely rigorous theory, with well-defined objects and axioms and such. QFT, in turn, to this day is a toolbox of weird "shut-up-and-calculate" heuristics.
> We can of course easily invent infinitely many solutions to this problem, but there is no way to choose between them on an empirical basis, even in principle (since we can't ever experiment with the inside of a black hole).
There is one way: Come up with candidate theories of quantum gravity and with experiments to test quantum-gravitational effects outside a black hole (there are a few ideas) and select the right theory based on the experimental results and then have the theory predict what happens inside a black hole. Boom. If you say this approach is not valid as it'll remain a theoretical prediction and we still won't be able to peek inside a black hole, you're somewhat right. But right now we're having a discussion about spacetime singularities, which are a purely theoretical problem, too. No one has ever seen them.
> GR is nonlinear, while QM is linear (if we ignore the Born rule) - so they can't describe the same system.
We already know they are incompatible but linearity has nothing to do with it. The equations of motion of interacting quantum fields are non-linear, too. In fact, electrodynamics is, too, in some sense (backreaction & self-force), and we still managed to quantize it.
> Relatedly, if applying GR to a system described by a wave function, we are not able to compute how space time will curve given that a single particle(with its mass) is usually present at many points in space-time.
I wouldn't say this is just a related problem. This is the problem of quantum gravity.
> It is hoped that solving the second problem will also solve the first, but I'm not sure this is guaranteed.
Again, I think the reason people are hopeful are the uncertainty relations. A theory of quantum gravity necessarily has to incorporate them somehow.
/? Quantum gravity fluid
https://scholar.google.com/scholar?q=related:FV3voSY5-kYJ:sc...
"Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017)
> The hypothesis starts from considering the physical vacuum as a superfluid quantum medium, that we call superfluid quantum space (SQS), close to the previous concepts of quantum vacuum, quantum foam, superfluid vacuum etc. We usually believe that quantum vacuum is populated by an enormous amount of particle-antiparticle pairs whose life is extremely short, in a continuous foaming of formation and annihilation. Here we move further and we hypothesize that these particles are superfluid symmetric vortices of those quanta constituting the cosmic superfluid (probably dark energy). Because of superfluidity, these vortices can have an indeterminately long life. Vorticity is interpreted as spin (a particle's internal motion). Due to non-zero, positive viscosity of the SQS, and to Bernoulli pressure, these vortices attract the surrounding quanta, pressure decreases and the consequent incoming flow of quanta lets arise a gravitational potential. This is called superfluid quantum gravity. In this model we don't resort to gravitons. Once comparing superfluid quantum gravity with general relativity, it is evident how a hydrodynamic gravity could fully account for the relativistic effects attributed to spacetime distortion, where the space curvature is substituted by flows of quanta. Also special relativity can be merged in the hydrodynamics of a SQS and we obtain a general simplification of Einstein's relativity under the single effect of superfluid quantum gravity.
IIRC, when I searched gscholar for "wave-particle-[fluid] duality" a few weeks ago there were even more recent papers.
Does Quantum Chaos describe fluids or superfluids? https://en.wikipedia.org/wiki/Quantum_chaos
Must CAS tools stop reducing symbolic expressions that describe infinity, such that?:
assert n*x*oo == oo
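For reference, a quick check of how one CAS (SymPy) actually handles this: with positive-symbol assumptions the product does reduce to oo, while without assumptions it stays unevaluated (a sketch; behavior depends on the symbols' assumptions):

```python
from sympy import oo, symbols

n, x = symbols("n x", positive=True)
reduced = n * x * oo      # with positivity assumptions, reduces to oo

y = symbols("y")          # no sign assumption
unevaluated = y * oo      # stays as oo*y, since y could be <= 0
```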
Conway's surreal numbers of infinity aren't quite it, I'm afraid. Countability or continuum? Did Hilbert spaces (described here in SymPy with degree n) quite exist back then? Degrees of curl; divergence and convergence
https://docs.sympy.org/latest/modules/physics/quantum/hilber...
Sorry, but I'm afraid you're not making a lot of sense.
How is "GR on Bernoulli", GM cannot describe nxoo more precisely than oo, and Conway's surreal infinities aren't good axioms either (for GR or for QM with (chaotic) fluids which perhaps need either infinities plural or superfluid QG (instead of QFT fwics); not making sense?
No, I'm afraid not. You might want to provide some background on what you're trying to say and, also, on your notation.
DNS over Dedicated QUIC Connections
> Abstract: This document describes the use of QUIC to provide transport confidentiality for DNS. The encryption provided by QUIC has similar properties to those provided by TLS, while QUIC transport eliminates the head-of-line blocking issues inherent with TCP and provides more efficient packet-loss recovery than UDP. DNS over QUIC (DoQ) has privacy properties similar to DNS over TLS (DoT) specified in RFC 7858, and latency characteristics similar to classic DNS over UDP. This specification describes the use of DoQ as a general-purpose transport for DNS and includes the use of DoQ for stub to recursive, recursive to authoritative, and zone transfer scenarios.
> [...] DNS over HTTPS (DoH) [RFC8484] can be used with HTTP/3 to get some of the benefits of QUIC. However, a lightweight direct mapping for DoQ can be regarded as a more natural fit for both the recursive to authoritative and zone transfer scenarios, which rarely involve intermediaries. In these scenarios, the additional overhead of HTTP is not offset by, for example, benefits of HTTP proxying and caching behavior.
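A sketch of the wire format this implies, using only the stdlib: DoQ (RFC 9250) carries an ordinary DNS message on a QUIC stream, prefixed with a 2-octet length field (as in DNS over TCP) and with the Message ID set to zero, since QUIC streams already demultiplex queries:

```python
import struct

def build_doq_query(name: str, qtype: int = 1) -> bytes:
    """Encode a DNS query as it would appear on a DoQ stream
    (RFC 9250): 2-octet length prefix, Message ID of 0."""
    # Header: ID=0 (required by DoQ), flags=RD, QDCOUNT=1
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels, then QTYPE, QCLASS=IN
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)
    msg = header + question
    return struct.pack("!H", len(msg)) + msg

wire = build_doq_query("example.com")
```

Actually sending `wire` would of course require a QUIC stack (e.g. aioquic) negotiating the "doq" ALPN; the sketch only shows the message framing.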
[deleted]
Twitter Deal Temporarily on Hold
I strongly feel that the spammers thing is smoke and mirrors.
The deal at $54.20 was maybe OK for him a month ago (it was great for Twitter because nobody else was interested at that price). Since then, Twitter reported disappointing numbers for the past quarter, and the entire market took a nose dive, with tech hit especially hard.
TWTR stock price was in the mid-30s before Musk's offer. Without the offer, and with the disappointing numbers, and with the recent market development, one could reasonably assume that the stock would be below 30 today.
Musk should just pay the break fee of $1bn, and renegotiate for $42.69 or whatever meme number he fancies. Why pay $42bn total for something when you can get it for $30bn.
When Musk started selling his TSLA shares to buy Twitter, TSLA fell by like $200 or like 20%.
It was becoming obvious that to buy Twitter, TSLA value would have to decline severely. IIRC, Musk only got around to selling $8 Billion pretax before quitting.
Musk thought he could afford Twitter. He sells some TSLA and the market collapses, and he suddenly decides against it.
Everything else is smoke and mirrors. Musk doesn't have the cash for the deal anymore (maybe he never had the cash for the deal). I guess Musk is looking at the $1 Billion ejection clause now and trying to weasel around it.
But at this rate, it looks like Musk owes Twitter $1 billion cash for this broken deal.
This is a pretty good counterpoint to all the "richest person" "net worth XX Billion" lists and calculations.
Most if not all of these people can't liquidate even close to their "net worth".
If you can get several tens of millions of dollars, liquid, on fairly short notice, there's almost nothing you can't buy on a whim. You may as well have infinite money. It doesn't matter that you can't liquidate another 90-99% of your net worth easily. You borrow $60m for that second yacht, then pay it off as soon as the tax situation looks favorable for realizing some gains. No big deal. Live like (er, better than) a king, pay taxes like a pauper. It's the American way.
Larger sums are only really useful for making big-splash investments. Like this. As far as personal spending goes, it's no obstacle, so it doesn't matter that they can't easily turn their entire net worth into a literal billions-of-dollars balance in a checking account.
Show HN: Pythondocs.xyz – Live search for Python documentation
Hi everyone!
I've been working on a web search interface for Python's documentation as a personal project, and I think it's ready for other people to use...
Please give it a go (and join me in praying to the server gods):
Here's the tech stack for those interested:
- Parser: Beautiful Soup + Mozilla Bleach
- Database: in-memory SQLite (aiosqlite) + SQLAlchemy
- Web server: FastAPI + Uvicorn + Jinja2
- Front end: Tailwind CSS + htmx + Alpine.js
I have ideas for future improvements but hopefully the current version is useful to someone.
Let me know what you think!
Does python not have built in documentation search? In R I use `??searchterm`.
It has help(), but I don't think there's a nice search feature included.
I mean you could use string methods to search for something of course, help just returns a string.
Then again, it's Python. There's probably some kind of
import search from helpTools
Or something along those lines, because... it always has this kind of stuff.
The `pydoc` / `python -m pydoc` module doesn't do search; it would be O(n) for every search, like grep without e.g. a Sphinx searchindex: https://docs.python.org/3/library/pydoc.html
Sphinx searchindex.js does Porter stemming for English and other languages: https://github.com/sphinx-doc/sphinx/blob/5.x/sphinx/search/...
sphinxcontrib.websupport.search supports Xapian, Whoosh (Python), and null: https://github.com/sphinx-doc/sphinxcontrib-websupport/blob/...
sphinx-elasticsearch https://github.com/zeitonline/sphinx_elasticsearch :
> This is a stand-alone extraction of the functionality used by readthedocs.org, compatible with elasticsearch-6.
MeiliSearch (rust) compared with ElasticSearch (java), Algolia, TypeSense (C++): https://docs.meilisearch.com/learn/what_is_meilisearch/compa...
Is there a good way to index each sphinx doc set's searchindex.js?
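One stdlib-only approach is to peel the JSON payload out of each doc set's `searchindex.js` (a sketch; it assumes the argument to `Search.setIndex(...)` is JSON-compatible, which holds for recent Sphinx releases but not for older jsdump-format indexes):

```python
import json
import re

def load_searchindex(text):
    """Extract the object literal from Search.setIndex(...) in a
    Sphinx searchindex.js, assuming a JSON-compatible payload."""
    m = re.search(r"Search\.setIndex\((.*)\)\s*$", text, re.S)
    if not m:
        raise ValueError("no Search.setIndex(...) call found")
    return json.loads(m.group(1))

def lookup(index, stemmed_term):
    """Map an (already-stemmed) term to document names via the
    index's 'terms' and 'docnames' tables."""
    hits = index["terms"].get(stemmed_term, [])
    if isinstance(hits, int):   # Sphinx stores single hits unwrapped
        hits = [hits]
    return [index["docnames"][i] for i in hits]
```

Queries would still need to be run through the same Porter stemmer Sphinx used at build time before calling `lookup`.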
Colleges where everyone works and there's no tuition
Sorry to depart from the feel-good narrative, but this kind of student labor is token or symbolic at best.
The kinds of jobs that students can perform are not nearly the kind of jobs that a college with a physical plant to maintain, professors to pay, finances to administer, etc. can make a meaningful dent in. (By and large), students are not going to be doing electrical work, maintenance, financial accounting, administration, etc. -- the day to day necessary jobs that keep an institution running -- at all, or enough to be offsetting the costs of those professionals and paying for salaries of the faculty. And you wouldn't want students to be handling many of the jobs that rely on people to conscientiously do them for years, and have institutional memory in mind. They're there to be students, not workers. That's why most student jobs are of the type similar to dorm cleaners, kitchen staff, library, computer room, etc. -- short term temp work requiring no great specific skills.
The article itself acknowledges that much of the income comes from the endowment, grants, donations. If you wanted to start a school on the premise of student labor paying for it, without these existing sources, there would be no way. The cost of running a school is simply way too high in developed countries to do this.
Having students work is not a bad thing. But it isn't some utopian possibility to solve the problems of the cost of education at nearly any typical school. To make this possible you'd have to start something like an educational coop in a developing country. And I doubt that would be a significant contributor to higher education, until it got larger and more professional, at which point you would start to run into the exact same issues above.
I actually entirely disagree with the premise that students should work. I think it's wrong, aside from a perhaps holiday job. They should study, that's it. If they are not busy enough to study, then they don't study hard enough.
That's why, I think, in some European countries, instead of giving debt to students, we give them a stipend. The society should be able to support students enough to study, so they wouldn't have to work.
True, being a student should be considered a full-time job. I'm from Europe and we get stipends that depend on grades (and financial situation), and studying is at least somewhat considered a duty and responsibility towards society then, instead of the American attitude where students are service consumers, customers who must be pandered to. You can see where that attitude has led on American campuses.
[deleted]
What are your most used self-hosted applications?
awesome-selfhosted/awesome-selfhosted lists quite a few: https://github.com/awesome-selfhosted/awesome-selfhosted
Sqldiff: SQLite Database Difference Utility
Which {SQL,} databases do this as a native, online database feature; so you don't have to pull backups to sqldiff?
E.g. django-reversion and VDM do versioned domain model on top of SQL, but not with native temporal database features.
Many apps probably could or should be written as mostly-append-only - not necessarily blocking until the previous record is available to hash – but very few apps are written that way, so run sqldiff offline.
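A rough offline, sqldiff-style row comparison can be done from Python's stdlib with ATTACH and EXCEPT (a sketch; it assumes the table exists in both files with the same schema, and unlike sqldiff it compares neither schemas nor produces UPDATE statements):

```python
import sqlite3

def table_diff(db_a, db_b, table, cols="*"):
    """Return (rows only in db_a, rows only in db_b) for `table`.
    Table and column names are interpolated: trusted input only."""
    con = sqlite3.connect(db_a)
    con.execute("ATTACH DATABASE ? AS other", (db_b,))
    only_a = con.execute(
        f"SELECT {cols} FROM main.{table} EXCEPT SELECT {cols} FROM other.{table}"
    ).fetchall()
    only_b = con.execute(
        f"SELECT {cols} FROM other.{table} EXCEPT SELECT {cols} FROM main.{table}"
    ).fetchall()
    con.close()
    return only_a, only_b
```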
There is Dolt, which started out as git for tables but is turning into a MySQL compatible database with good versioning. (Including branches and merges.)
dolthub/dolt https://github.com/dolthub/dolt:
dolt clone
dolt pull
dolt push
dolt checkout
dolt branch
dolt commit
dolt merge
dolt blame
dolt diff
mgramin/awesome-db-tools > schema > changes:
https://github.com/mgramin/awesome-db-tools#changes
EthicalML/awesome-production-machine-learning#model-and-data-versioning: https://github.com/EthicalML/awesome-production-machine-lear...
GitBOM: Enabling universal artifact traceability in software supply chains
From https://gitbom.dev/resources/whitepaper/ :
> For this reason we propose two areas of work:
> 1. enhancing artifact-generating tools (e.g., compilers, linkers, and container image generators) to also output metadata regarding their inputs and outputs
> 2. defining a storage format which represents the minimum information to describe the artifact relationship tree, and which uses git’s on-disk storage format
> Following from (1), this approach will require minimal to no effort on the part of open source project maintainers, thus significantly increasing its chances of widespread adoption as compared to any approach which requires maintainers to perform additional actions (e.g., implementing substantive changes in their CI/CD or package build pipeline to generate an SBOM).
Requirements traceability: https://en.wikipedia.org/wiki/Requirements_traceability
codemeta/codemeta - Minimal metadata schemas for science software and code, in JSON-LD: https://github.com/codemeta/codemeta
Compostable fungi-based replacement for styrofoam
I'm heartened by the interest in better packaging and it seems like it's having a moment. Last week Cruz Foam[0] (probably a direct competitor) had an announcement that it had some celebrity investors[1]. It's probably not a popular opinion but I think adding more regulations or incentives (tax breaks) to discourage the use of non-biodegradable packaging, especially in foods, is long overdue.
[0] - https://www.cruzfoam.com [1] - https://www.cruzfoam.com/post/meet-our-new-investors-advisor...
Yeah, we have incentives to insulate homes but they all use terrible tech. The best solution still off-gases and can be dangerous if done improperly.
Home insulation has a longer useful life than most packaging, but you're right that it still usually ends up in a landfill. However the current incentives for home insulation are very different - usually to reduce home heating/cooling costs and reduced energy consumption is usually an environmentally sound policy. Should there be further incentives to encourage insulation alternatives which have a more eco-friendly end of life? Absolutely. And that is likely a harder problem given the productive lifetime of an average home, but definitely a worthy place to also put further incentives.
I think one of the solutions may be based on hemp, currently known as hempcrete or hemp concrete - essentially a mix of hemp hurd (the woody essence of the hemp plant), hydraulic lime, and water.
You mix it up and then all you need is just the wooden frame (if you're building load bearing walls), then you wrap it in hempcrete (floors, walls & roof), apply some mud plaster and you're done. No need for 6-10 layers full of plastics & glues.
Thanks to use of lime instead of cement the building even captures CO2, as the walls literally turn into stone over time.
Some hemp buildings are 200+ years old; in a cave in India they've even found 1500-year-old hempcrete, and you can compost the whole building at EOL.
Some other benefits: non-toxic, no off-gassing, no solvents, mold resistance, high vapor permeability, humidity control, durable, sustainable, carbon sequestration, fire and pest resistance, passive self regulation of temperature and humidity, great insulator
Can hempcrete be made from Mars or Moon aggregate instead of lime, and 4d printed?
FWIU, lime requires coral for production? Is lime sustainable?
Alternative solutions for the structural wood frame in a hempcrete structure: stacking hempcrete blocks on structural forms that are stronger and more insulating than structural concrete; green concrete, a carbon and thermal-gradient sink; and HempWood, which is apparently stronger than spec lumber of the same dimensions as well.
most lime comes from limestone, extracted by quarries/mines- although sometimes it is sourced from coral
the process of producing lime can involve a number of not-very-sustainable steps, hopefully which can be substituted for more environmentally responsible alternatives
How does this product compare to aerogel, which is inorganic without minor radiation? https://en.wikipedia.org/wiki/Aerogel
> Aerogels are produced by extracting the liquid component of a gel through supercritical drying or freeze-drying. This allows the liquid to be slowly dried off without causing the solid matrix in the gel to collapse from capillary action, as would happen with conventional evaporation. The first aerogels were produced from silica gels. Kistler's later work involved aerogels based on alumina, chromia and tin dioxide. Carbon aerogels were first developed in the late 1980s.[12]
Can aerogels be made with {formed,?} fungi-based production processes?
Aerogels tend to be fairly brittle, but more recent innovations could probably make them workable.
Fundamentally they are defined when you remove a solvent from a gel supercritically. High temperatures or pressures. Not very friendly for organics, but probably could work with a polymer.
Of course if you just want a low density polymer with air in it that's just styrofoam. The organic part is the interesting bit, and organic stuff usually has better strength due to multi-dimensional patterning whereas an aerogel would tend to have a single structure throughout.
In this domain it's important to remember that organic means carbon-containing, the word you're looking for is biological or biochemical. Polystyrene is an organic compound.
Sorry to have confused you. I used simplified language because this is not a site filled with chemists and I wanted to avoid confusing people.
Is it a carbon sink organic compound? Does it offgas VOCs?
HS chem was years ago. Does jsmol/pymol work in Jupyter notebooks? That probably doesn't model heat or other QFT or QG fields at all.
Though this one probably doesn't require an understanding of how quantum chemistry is actually occurring (Q12 STEM), I found this for protyping, which "operates on abstract data structures allowing the formulation, combination, automatic differentiation and optimization of generalized objectives. Tequila can execute the underlying quantum expectation values on state of the art simulators as well as on real quantum devices." https://github.com/tequilahub/tequila#quantum-backends
I'm not sure this is a useful framing.
If the chemicals used to make an aerogel are carbon sinks, the aerogel will be as well unless the energy use is higher than the sunk carbon.
If it's not biological, it's usually not a carbon sink. There are exceptions.
And as for outgassing, in my experience any polymer that you do gas sampling over in a closed container will show some offgassing of something. It's just the nature of solids to release vapor: if there are zero molecules in the gas phase, there's a huge entropic driving force for at least one to end up there. Everything outgasses.
Whether there's a lot of outgassing depends on the chemicals that are broken down and the volatility and the temperature, I think this will almost always be low but measurable for long chain polymers, and modest for short chain and low density polymers (think how foam insulation smells when it gets heated in the sun if you've ever seen it). What is outgassed is definitely more important, so it really just is super case by case to do an actual safety analysis to compare them, and it would depend on the use conditions (temperature, sunlight, etc).
I don't know anything about jsmol or pymol, I don't think a software suite is necessary here. These processes are pretty basic and you can just look up relative volatilities and breakdown products for a given case. You definitely don't want to try to simulate the thermal breakdown of a polymer (which you'd define statistically with potentially hundreds of thousands of atoms each) -- especially not quantum mechanically -- into the gas phase. There are a lot of opportunities for much more basic issues than software libraries; like how you model the gas around the solid surface and what the structure of the solid surface is. Does gas move due to outside flow? Does it also carry away heat?
It's a hugely complex thing to actually calculate (and even for a few atoms QM simulations are horrific and easy to mess up) so instead I personally would just rely on the rules of thumb I talked about first and then look up actual data from bulk materials property measurements. I definitely wouldn't touch a simulation when real bulk property measurements are easily available.
Evolution is not a tree of life but a fuzzy network
I've always said it's not the Tree of Life, but rather the DAG of Life.
It may not even be acyclic. For example it appears fetal DNA is permanently detectable in mothers and may even have health benefits that conceivably increase the mother’s evolutionary fitness.
How can it be acyclic? A phylogenetic tree is a DAG. Organisms sharing DNA? That's definitely a cyclic graph.
If there were Schema.org/Animal and/or schema:AnimalInstance classes, what do you list under a :breed property to indicate that e.g. one parent is breed X and another is breed Y?! That's definitely not a DAG; that looks like a feature clustering dendrogram.
DNA barcoding > Mismatches between conventional (morphological) and barcode based identification https://en.wikipedia.org/wiki/DNA_barcoding#Mismatches_betwe...
Taxonomy (biology) https://en.wikipedia.org/wiki/Taxonomy_(biology)
FWIU, there's at least one DNA-based organism naming system; IDK how much that helps resolve :Animal and :AnimalInstance if at all?
My assumption as someone not formally trained in advanced biology is that the phylogenetic tree has an implied arrow of time, and each node of the tree is a point where a mutation occurred that was sufficient to create a new and separate family/order/genus/etc afterwards, with leaf nodes representing species.
In this sense a cycle can’t occur because a branch would need to reach back in time and rejoin with an ancestral branch. It would be a different kind of grandfather paradox: who would be the ancestor and who the descendant in a cycle?
Maybe someone could help point out the flaws in this model.
Your assumption is true, but only for rooted phylogenetic trees. In unrooted trees the node representing the most ancestral state is absent and thus no directionality can be determined.
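To make the tree-vs-DAG distinction in this subthread concrete, here's a minimal sketch (with made-up taxa names) showing that a node with two parent lineages breaks the tree property while the graph can remain acyclic, checked via Kahn's topological-sort algorithm:

```python
# Toy illustration (hypothetical taxa): a tree allows at most one parent
# per node; a hybridization event gives a node two parents, yielding a
# DAG -- still acyclic as long as no edge points "back in time".

def parent_counts(edges):
    """Count incoming edges (parents) per node."""
    counts = {}
    for parent, child in edges:
        counts[child] = counts.get(child, 0) + 1
        counts.setdefault(parent, 0)
    return counts

def is_tree(edges):
    return all(n <= 1 for n in parent_counts(edges).values())

def is_acyclic(edges):
    """Kahn's algorithm: a graph is acyclic iff it has a topological order."""
    indeg = parent_counts(edges)
    children = {}
    for p, c in edges:
        children.setdefault(p, []).append(c)
    queue = [n for n, d in indeg.items() if d == 0]
    seen = 0
    while queue:
        node = queue.pop()
        seen += 1
        for c in children.get(node, []):
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return seen == len(indeg)

# Strict speciation: every taxon has one ancestor -> a rooted tree.
tree_edges = [("root", "A"), ("root", "B"), ("B", "C")]
# Add a hybrid with two parent lineages -> no longer a tree, still a DAG.
dag_edges = tree_edges + [("A", "H"), ("C", "H")]
```

The grandfather-paradox point above corresponds to `is_acyclic` holding even when `is_tree` fails.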
U.S. interest rates have soared everywhere but savings accounts
The issue is that there is no incentive for banks to increase interest rates on accounts as they are already sitting on too much cash. Banks make money by lending money out, in times where banks are strapped for cash on hand, you will see interest rates increase. I don’t see this changing in the near future.
> "...there is no incentive for banks to increase interest rates on accounts as they are already sitting on too much cash."
and folks wonder why banks are so strictly regulated... no, banks are never sitting on too much cash unless they've made a marketing and/or an operational error. most banks are highly levered, meaning they're lending out, say, 10× the cash they hold, so they never "have too much cash on hand". quite the opposite. banks continually lobby regulatory agencies to raise their leverage thresholds so they can lever up even more and rake in more of that sweet, nearly risk-free[0] cash flow.
that they don't raise savings rates is purely out of greed, not necessity. they're also under no competitive pressure to do so, which indicates a malfunctioning market (functioning markets are by definition competitive). and more galling, they charge you hidden fees out the wazoo for the "privilege" of banking in a mine field.
banks are core infrastructure, much like roads and housing. i'd rather go back to a simpler form where banks were only allowed to make money on the spread between lending and savings rates, making them boring and having to compete (with higher savings rates, for example) for your business. all the risk-taking extensions to banking can still exist, just in a separate, firewalled entity with no access to that (nearly) risk-free cash flow.
[0]: risk-free in the finance sense of being free of idiosyncratic risk, not systemic risk.
You’ve just described the Glass-Steagall act. It has a long and storied history.
It was created in response to the Great Depression, then repealed by the Gramm-Leach-Bliley act because money. Then the Dodd-Frank act tried to have it reinstated but failed, also because money.
You may already be familiar with all of this but mentioning in case you’re not.
(https://en.m.wikipedia.org/wiki/Glass–Steagall_legislation)
There’s a quite brilliant documentary that came out in 2010, Inside Job, that covers this, but which was completely overshadowed by the much less informative The Big Short which came out 5 years later.
There was a separation between savings and investment banking between ~1933-1999.
Glass–Steagall in post-financial crisis reform debate: https://en.wikipedia.org/wiki/Glass%E2%80%93Steagall_in_post...
(Edit: Monetary policy / Monetarism > Current State , Liquidity trap > Global financial crises of 2008 and 2020: https://en.wikipedia.org/wiki/Liquidity_trap )
How do microlending and DeFi rates democratize subsidized capital availability?
From IL-RFC-1 Interledger Architecture https://interledger.org/rfcs/0001-interledger-architecture/ :
> Settlement for one account MUST NOT depend on the status of any other accounts.
> If settlement of one account in the Interledger is contingent on the status of another account or relationship, this could create the threat of cascading risks and failures, similar to problems that occurred during the 2008 global financial crisis. Nodes can protect themselves from such risks by choosing to use settlement technologies such as collateralized payment channels where available. These types of arrangements can provide high-speed settlement without a risk that the other side may not pay. For more information on different ledger types and settlement strategies, see IL-RFC-22: Hashed Timelock Agreements.
> Nodes can also choose never to settle their obligations. This configuration may be useful when several nodes representing different pieces of software or devices are all owned by the same person or business, and all their traffic with the outside world goes through a single “home router” connector. This is the model of `moneyd`, one of the current implementations of Interledger.
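The hashed-timelock idea referenced above (IL-RFC-22) can be illustrated with a toy condition check; this is only a sketch of the hash-lock-plus-expiry concept, not the actual Interledger escrow mechanism:

```python
import hashlib
import time

# Toy hashed-timelock condition (illustrative only; real Hashed Timelock
# Agreements involve ledger-side escrow, not this in-process check).

def make_condition(preimage: bytes) -> bytes:
    """The payee shares only the hash; funds unlock on the preimage."""
    return hashlib.sha256(preimage).digest()

def fulfill(condition: bytes, preimage: bytes, now: float, expiry: float) -> bool:
    """Unlock iff the preimage matches and the timelock has not expired."""
    return now < expiry and hashlib.sha256(preimage).digest() == condition

secret = b"shared-secret"
cond = make_condition(secret)
t0 = time.time()
assert fulfill(cond, secret, now=t0, expiry=t0 + 60)            # valid claim
assert not fulfill(cond, b"wrong", now=t0, expiry=t0 + 60)      # wrong preimage
assert not fulfill(cond, secret, now=t0 + 120, expiry=t0 + 60)  # expired
```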
Changing std:sort at Google’s scale and beyond
> LLVM history: Back then we recognized some really interesting benchmarks and we didn’t recall anybody trying to really benchmark sorts on different data patterns for standard libraries.
Timsort https://en.wikipedia.org/wiki/Timsort :
> In the worst case, Timsort takes O(n log n) comparisons to sort an array of n elements. In the best case, which occurs when the input is already sorted, it runs in linear time, meaning that it is an adaptive sorting algorithm. [3]
> It is advantageous over Quicksort for sorting object references or pointers because these require expensive memory indirection to access data and perform comparisons and Quicksort's cache coherence benefits are greatly reduced. [...]
> Timsort has been Python's standard sorting algorithm since version 2.3 [~2002]. It is also used to sort arrays of non-primitive type in Java SE 7,[4] on the Android platform,[5] in GNU Octave,[6] on V8,[7] Swift,[8] and Rust.[9]
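Timsort's adaptivity on pre-sorted input can be observed directly by counting comparisons with a wrapper class; a rough sketch (exact counts are CPython implementation details, so only the ordering of the two counts is asserted):

```python
import random

# Count comparisons made by Python's built-in sort (Timsort) by sorting
# wrapper objects whose __lt__ increments a counter. Already-sorted input
# takes ~n-1 comparisons (one run-detection pass); shuffled input takes
# on the order of n log n.

class Counted:
    comparisons = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Counted.comparisons += 1
        return self.v < other.v

def count_comparisons(values):
    Counted.comparisons = 0
    sorted(Counted(v) for v in values)
    return Counted.comparisons

n = 1000
best = count_comparisons(range(n))      # already sorted: roughly linear
shuffled = list(range(n))
random.seed(0)
random.shuffle(shuffled)
worst = count_comparisons(shuffled)     # ~ n * log2(n) comparisons
```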
Sorting algorithm > Comparison of algorithms https://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_o...
Schwartzian transform https://en.wikipedia.org/wiki/Schwartzian_transform :
> In Python 2.4 and above, both the sorted() function and the in-place list.sort() method take a key= parameter that allows the user to provide a "key function" (like foo in the examples above). In Python 3 and above, use of the key function is the only way to specify a custom sort order (the previously supported cmp= parameter that allowed the user to provide a "comparison function" was removed). Before Python 2.4, developers would use the lisp-originated decorate–sort–undecorate (DSU) idiom,[2] usually by wrapping the objects in a (sortkey, object) tuple
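The decorate-sort-undecorate idiom described in that quote and its modern `key=` equivalent, side by side (the index in the tuple is the usual DSU trick to keep ties from comparing the wrapped objects):

```python
# Decorate-sort-undecorate (pre-Python-2.4 style) vs the key= parameter.
words = ["Banana", "apple", "Cherry"]

# DSU: wrap each element in a (sortkey, index, object) tuple, sort, unwrap.
decorated = [(w.lower(), i, w) for i, w in enumerate(words)]
decorated.sort()
dsu_result = [w for _, _, w in decorated]

# Modern equivalent: the key function is called once per element.
key_result = sorted(words, key=str.lower)

assert dsu_result == key_result == ["apple", "Banana", "Cherry"]
```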
Big-O Cheatsheet https://www.bigocheatsheet.com/
Quicksort in Python 7 ways (and many other languages) on RosettaCode: https://rosettacode.org/wiki/Sorting_algorithms/Quicksort#Py...
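Of the variants RosettaCode lists, the functional-style one is the shortest; a minimal sketch (not in-place, so it trades memory for clarity):

```python
def quicksort(xs):
    """Functional-style quicksort: O(n log n) average, O(n^2) worst case."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

assert quicksort([3, 1, 4, 1, 5, 9, 2, 6]) == [1, 1, 2, 3, 4, 5, 6, 9]
```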
Thanks, I'll remove the note about "recalling"
xeus-cling didn't exist back then: https://github.com/jupyter-xeus/xeus-cling#a-c-notebook
https://github.com/fffaraz/awesome-cpp#debug
A standard way to benchmark and chart {sorting algorithms, web framework benchmarks,} would be great.
The TechEmpower "framework overhead" benchmarks might have at least average case sorting in there somewhere: https://www.techempower.com/benchmarks/#section=data-r20&hw=...
awesome-algorithms https://github.com/tayllan/awesome-algorithms#github-librari...
awesome-theoretical-computer-science > Machine Learning Theory, Physics; Grover's; and surely something is faster than Timsort: https://github.com/mostafatouny/awesome-theoretical-computer...
Christ, the description on that second repo is so comically terribly spelled that I can’t figure out whether it’s a joke:
> The interdicplinary of Mathematics and Computer Science, Distinguisehed by its emphasis on mathemtical technique and rigour.
(Perhaps a witty satire on the reputation of mathmos for being godawful at anything literary...)
Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations
Euler equations (fluid dynamics) https://en.wikipedia.org/wiki/Euler_equations_(fluid_dynamic...
> In fluid dynamics, the Euler equations are a set of quasilinear partial differential equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In particular, they correspond to the Navier–Stokes equations with zero viscosity and zero thermal conductivity. [1]
Navier–Stokes equations https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equation...
> The Navier–Stokes equations mathematically express conservation of momentum and conservation of mass for Newtonian fluids.
Computational fluid dynamics https://en.wikipedia.org/wiki/Computational_fluid_dynamics
Numerical methods in fluid mechanics https://en.wikipedia.org/wiki/Numerical_methods_in_fluid_mec...
- [ ] Quantum fluid https://en.wikipedia.org/wiki/Quantum_fluid
> Quantum mechanical effects become significant for physics in the range of the de Broglie wavelength. For condensed matter, this is when the de Broglie wavelength of a particle is greater than the spacing between the particles in the lattice that comprises the matter. [...]
> The above temperature limit T has different meaning depending on the quantum statistics followed by each system, but generally refers to the point at which the system manifests quantum fluid properties. For a system of fermions, T is an estimation of the Fermi energy of the system, where processes important to phenomena such as superconductivity take place. For bosons, T gives an estimation of the Bose-Einstein condensation temperature.
Classical fluid https://en.wikipedia.org/wiki/Classical_fluid
> Classical fluids [1] are systems of particles which retain a definite volume, and are at sufficiently high temperatures (compared to their Fermi energy) that quantum effects can be neglected [...] Common liquids, e.g., liquid air, gasoline etc., are essentially mixtures of classical fluids. Electrolytes, molten salts, salts dissolved in water, are classical charged fluids. A classical fluid when cooled undergoes a freezing transition. On heating it undergoes an evaporation transition and becomes a classical gas that obeys Boltzmann statistics.
Chaos theory https://en.wikipedia.org/wiki/Chaos_theory
> Chaos theory is [...] focused on underlying patterns and deterministic laws highly sensitive to initial conditions in dynamical systems that were thought to have completely random states of disorder and irregularities. [1] Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnectedness, constant feedback loops, repetition, self-similarity, fractals, and self-organization.
Quantum chaos https://en.wikipedia.org/wiki/Quantum_chaos
> If quantum mechanics does not demonstrate an exponential sensitivity to initial conditions, how can exponential sensitivity to initial conditions arise in classical chaos, which must be the correspondence principle limit of quantum mechanics?
... in dynamic, nonlinear - possibly adaptive - complex systems.
google/jax-cfd https://github.com/google/jax-cfd#other-awesome-projects lists "Other differentiable CFD codes compatible with deep learning"
FWIU the AlphaZero for Fusion optimization is for the non-fluid plasma Deep Learning convex optimization part of the problem?
Is it ever enough to model just fluids, is modeling quantum chemistry necessary as well?
pip install tequila-basic
> Quantum Backends Currently supported: [Qulacs, Qibo, Qiskit, Cirq, PyQuil, QLM, myQLM]
https://github.com/tequilahub/tequila#quantumchemistry
https://github.com/tequilahub/tequila-tutorials/blob/main/Ch...
awesome-fluid-dynamics https://github.com/lento234/awesome-fluid-dynamics :
- "12 Steps to Navier-Stokes"
- Differential programming https://en.wikipedia.org/wiki/Differentiable_programming : gradients; sometimes gradient descent
- Neural Networks for PDE
"Differentiable function" https://en.wikipedia.org/wiki/Differentiable_function
> In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain
If there is a ZeroDivisionError because a denominator is zero, is that actually a differentiable function?
-x**-1
-1/x
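For the expressions above, the float and symbolic behaviors differ in exactly the way the question asks about; a sketch with sympy (assuming sympy is installed):

```python
import sympy
from sympy.abc import x

f = -x**-1                 # i.e. -1/x
df = sympy.diff(f, x)
assert df == x**-2         # derivative is 1/x**2 wherever x != 0

# Numerically, evaluating at 0 raises ZeroDivisionError...
try:
    -1 / 0
except ZeroDivisionError:
    pass

# ...symbolically, substitution at the pole yields complex infinity (zoo):
# the derivative "does not exist there" rather than raising an exception.
assert df.subs(x, 0) == sympy.zoo
```

So the function is differentiable everywhere on its domain; the point x = 0 simply isn't in the domain.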
There's a symbolic result at or approaching that limit. IDK how exactly that applies to quantum-scale fluid effects at room temperature? Shouldn't there be a triality in terms of relativity, post-relativity QM with hair, and fluids converging or diverging to infinity?
Divergence theorem ... fluid divergences/convergences: https://en.wikipedia.org/wiki/Divergence_theorem :
> The divergence theorem is an important result for the mathematics of physics and engineering, particularly in electrostatics and fluid dynamics. In these fields, it is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to integration by parts. In two dimensions, it is equivalent to Green's theorem. [...] Explanation using liquid flow [...]
"Logarithms yearning to be free" re: symbolic limits and currently non-axiomatic Infinity https://news.ycombinator.com/item?id=31015139
... Quantum thermodynamics, fluids, and chaotic divergence
The "derivative" at a certain point is defined as the limit of the ratio between function differences and argument differences.
In the context of derivatives, "exists" means that this limit is a finite number.
If the derivative function has an expression which is a fraction whose denominator becomes null at a point, the derivative may exist or not exist at that point.
If the corresponding numerator at that point is non-null (i.e. the derivative is infinite), then the derivative does not exist and the original function is not differentiable at that point.
If the numerator is also null, the derivative may exist or not at that point, depending on whether the limit of the derivative exists or not.
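Both cases described above can be checked symbolically; a sketch with sympy, using the cube root (derivative blows up at 0) and x^(4/3) (derivative expression also vanishes there and the limit is finite):

```python
import sympy
from sympy.abc import x

# Case 1: the derivative expression is infinite at the point
# -> the function is not differentiable there.
f = sympy.cbrt(x)                          # x**(1/3)
df = sympy.diff(f, x)                      # x**(-2/3) / 3
assert sympy.limit(df, x, 0) == sympy.oo   # infinite: no derivative at 0

# Case 2: the numerator is also null there and the limit exists
# -> the derivative exists (and equals the limit).
g = x**sympy.Rational(4, 3)
dg = sympy.diff(g, x)                      # 4*x**(1/3)/3
assert sympy.limit(dg, x, 0) == 0          # finite: derivative at 0 is 0
```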
Are there no symbolic results from derivatives of any order?
Is it more correct to say that, if the denominator is zero at that point, the derivative is just non-Real because it's e.g. `n*x*oo`?
Practically, do we just say that such actually discontinuous functions are still mostly differentiable but the derivative does not exist in non-symbolic space?
> ... Quantum thermodynamics, fluids, and chaotic divergence
FWIW, does [quantum] thermodynamics predict emergent behaviors amongst self-organizing systems apparently at least temporarily contradicting a tendency to entropic decay?
My understanding is that no: fluid dynamics, quantum fluid dynamics, and quantum chemistry are not sufficient to describe and thus cannot predict emergent behaviors in complex nonlinear - possibly emergently adaptive - complex systems.
Emergence occurs in/of/by/within/betwixt/between systems; in application emergent programs require human-level intelligence ethical filters: https://en.wikipedia.org/wiki/Emergence
Perhaps before describing physical systems with current best known descriptions as multi-field (QFT, QG) wave-particle[-fluid] interactions with convergent and divergent e.g. convection, it's appropriate to compare the difference between Classical and Quantum Wave Interference: https://en.wikipedia.org/wiki/Wave_interference#Quantum_inte...
> some of the differences between classical wave interference and quantum interference: (a) In classical interference, two different waves interfere; In quantum interference, the wavefunction interferes with itself. (b) Classical interference is obtained simply by adding the displacements from equilibrium (or amplitudes) of the two waves; In quantum interference, the effect occurs for the probability function associated with the wavefunction and therefore the absolute value of the wavefunction squared. (c) The interference involves different types of mathematical functions: A classical wave is a real function representing the displacement from an equilibrium position; a quantum wavefunction is a complex function. A classical wave at any point can be positive or negative; the quantum probability function is non-negative.
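Point (b) of that quote is easy to see numerically: classical displacements add directly, while quantum amplitudes add as complex numbers before squaring, which introduces a cross (interference) term. A small sketch:

```python
import cmath

# Classical: real displacements simply add.
a_cl, b_cl = 1.0, -1.0
assert a_cl + b_cl == 0.0                 # full destructive cancellation

# Quantum: complex amplitudes add, then probability = |psi|^2,
# so an interference cross-term appears.
psi1 = 1 / cmath.sqrt(2)                          # amplitude via path 1
psi2 = cmath.exp(1j * cmath.pi) / cmath.sqrt(2)   # path 2, phase shifted by pi
p_interfering = abs(psi1 + psi2) ** 2             # ~0: destructive interference
p_no_interference = abs(psi1) ** 2 + abs(psi2) ** 2  # 1: cross-term dropped
assert p_interfering < 1e-12
assert abs(p_no_interference - 1.0) < 1e-12
```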
Thus our best descriptions of emergent behavior in fluids (and chemicals and fields) must presumably be composed at least in part from quantum wave functions that e.g. Navier-Stokes also fit for; with a fitness function.
Gigahertz topological valley Hall effect in NEMS phononic crystals
"Gigahertz topological valley Hall effect in nanoelectromechanical phononic crystals" (2022) https://www.nature.com/articles/s41928-022-00732-y
> Topological phononic crystals can manipulate elastic waves that propagate in solids without being backscattered, and could be used to develop integrated acousto-electronic systems for classical and quantum information processing. However, acoustic topological metamaterials have been mainly limited to macroscale systems that operate at low (kilohertz to megahertz) frequencies. Here we report a topological valley Hall effect in nanoelectromechanical aluminium nitride membranes at gigahertz (up to 1.06 GHz) frequencies. We visualize the propagation of elastic waves through phononic crystals with high sensitivity (10–100 fm) and spatial resolution (10–100 nm) using transmission-mode microwave impedance microscopy. The valley Hall edge states, which are protected by band topology, are observed in both real and momentum space. Robust valley-polarized transport is evident from wave transmission across local disorder and around sharp corners. We also show that the system can be used to create an acoustic beamsplitter.
Quantum Hall effect > Photonic quantum Hall effect: https://en.wikipedia.org/wiki/Quantum_Hall_effect
> The quantum Hall effect, in addition to being observed in two-dimensional electron systems, can be observed in photons. Photons do not possess inherent electric charge, but through the manipulation of discrete optical resonators and coupling phases or on-site phases, an artificial magnetic field can be created. [17][18][19][20][21] This process can be expressed through a metaphor of photons bouncing between multiple mirrors. By shooting the light across multiple mirrors, the photons are routed and gain additional phase proportional to their angular momentum. This creates an effect like they are in a magnetic field.
The Hall effect article describes how Superposition states can be prepared with the Hall effect.
So, in crystal lattices, is there less noise due to scattering?
[deleted]
Doing small network scientific machine learning in Julia faster than PyTorch
A library I designed a few years ago (https://github.com/Netflix/vectorflow) is also much faster than pytorch/tensorflow in these cases.
In "small" or "very sparse" setups, you're memory bound, not compute bound. TF and Pytorch are bad at that because they assume memory movements are worth it and do very little in-place operations.
Different tools for different jobs.
In "small" setups, you're actually more likely to be overhead-bound if anything (especially on CPU).
Would you mind defining "overhead"?
IMO, one of the big sources of overhead is the PCIe bus.
Transferring data to-and-from the PCIe / GPU takes time. If the CPU is faster and can perform the calculation before the PCIe is done transferring, then there's no point even touching the GPU.
PCIe is on the scale of ~5000 nanoseconds (20,000 CPU cycles assuming 4GHz). A PCIe write, then read could be ~40,000 CPU cycles or so, and it turns out that a lot of things can be done in that time before the GPU was even _NOTIFIED_ that there was work to do.
CPUs can contact other CPU-cores within 50 to 500 nanoseconds or so, depending on the distance. A 64-core CPU could then have 64-cores * 18000 clocks == ~1-million CPU-clock cycles before the GPU gets any message at all.
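The arithmetic in those two comments checks out as a back-of-envelope calculation (the 4 GHz clock and ~5000 ns PCIe latency are the assumptions stated above):

```python
# Back-of-envelope check of the latency figures above (assumed numbers).
CLOCK_HZ = 4e9                 # 4 GHz -> 4 cycles per nanosecond
pcie_transfer_ns = 5_000       # ~5000 ns per PCIe transfer (assumption)

cycles_per_transfer = pcie_transfer_ns * CLOCK_HZ / 1e9
assert cycles_per_transfer == 20_000       # matches "20,000 CPU cycles"

write_then_read = 2 * cycles_per_transfer
assert write_then_read == 40_000           # "~40,000 CPU cycles or so"

# 64 cores each with ~18,000 clocks before the GPU is even notified:
total_cycles = 64 * 18_000
assert total_cycles == 1_152_000           # the "~1-million" figure
```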
There's no mention of GPUs, TPUs, or indeed QPUs in the memory hierarchy described by Wikipedia!
Locality of reference ("Data locality") > Spatial and temporal locality usage : https://en.wikipedia.org/wiki/Locality_of_reference
Memory hierarchy https://en.wikipedia.org/wiki/Memory_hierarchy :
> Most modern CPUs are so fast that for most program workloads, the bottleneck is the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy [citation needed]. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small/fast level and require use of a larger/slower level. The resulting load on memory use is known as pressure (respectively register pressure, cache pressure, and (main) memory pressure). Terms for data being missing from a higher level and needing to be fetched from a lower level are, respectively: register spilling (due to register pressure: register to cache), cache miss (cache to main memory), and (hard) page fault (main memory to disk).
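The spatial-locality point in that quote can be sketched with a row-major vs column-major traversal; in CPython the cache effect is muted by interpreter overhead, so only the equal results are asserted, not the timings:

```python
import timeit

# Locality-of-reference sketch: summing a 2D array row-by-row touches
# elements in allocation order; column-by-column strides across rows.
N = 300
grid = [[1] * N for _ in range(N)]

def row_major():
    return sum(grid[i][j] for i in range(N) for j in range(N))

def col_major():
    return sum(grid[i][j] for j in range(N) for i in range(N))

assert row_major() == col_major() == N * N

# Timings are hardware-dependent and not asserted; compare e.g.:
# timeit.timeit(row_major, number=10) vs timeit.timeit(col_major, number=10)
```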
Is it that PCIe is necessarily implied by the debuggable pipeline specified by the von Neumann architecture? https://en.wikipedia.org/wiki/Von_Neumann_architecture
Otherwise, computation within RAM avoids interconnect saturation.
"Neuromorphic" computing, stateful RAM with operators mapped to particle interactions:
Memristor > Derivative devices > memtransistor https://en.wikipedia.org/wiki/Memristor#Derivative_devices
Quantum reservoir computing: https://en.wikipedia.org/wiki/Reservoir_computing#Quantum_re...
But these still need a faster and wider (and qubit) bus than PCIe, too: https://en.wikipedia.org/wiki/PCI_Express
https://en.wikipedia.org/wiki/Programming_the_Universe :
> Lloyd also postulates that the Universe can be fully simulated using a quantum computer; however, in the absence of a theory of quantum gravity, such a simulation is not yet possible. "Particles not only collide, they compute."
Quantum on Silicon looks cheaper in today dollars.
Divide the universe into QFT field-equal halves A and B, take energy from A to make B look like A, then add qubit error correction, and tell me if there's enough energy to simulate the actual universe on a universe QC with no instruction pipeline.
Logarithms yearning to be free
> The author opens with the example of finding the antiderivative of xn. When n ≠ -1 the antiderivative is another power function, but when n = -1 it’s a logarithm.
What a neat limit. Probably best to leave the powerfn/logfn() as a dumb symbolic symbol until the end (until after later parameter substitution)?
I don't follow. The antiderivative fails because of division by zero. What limit? And what does symbolic manip do?
The idea is that you want to calculate
integral{from t=1, to t=x} t^(-1+0) dt
but you calculate instead
f(x) = lim_{ε->0} [ integral{from t=1, to t=x} t^(-1+ε) dt ]
that is just
= lim_{ε->0} [ (x^ε - 1) / ε ]
replacing x^ε
= lim_{ε->0} [ (e^(ln(x)*ε) - 1) / ε ]
by https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule
= lim_{ε->0} [ ln(x)*e^(ln(x)*ε) / 1 ]
that is easy to calculate
= ln(x)*e^(ln(x)*0) / 1
= ln(x)*1 / 1
= ln(x)
So f(x) = ln(x)
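That ε-limit can be verified symbolically with sympy (assuming sympy is installed):

```python
import sympy

x, eps = sympy.symbols('x epsilon', positive=True)

# The regularized antiderivative: integral of t**(-1+eps) from 1 to x.
f_eps = (x**eps - 1) / eps

# As eps -> 0 it recovers the logarithm, matching the L'Hopital step.
assert sympy.limit(f_eps, eps, 0) == sympy.log(x)

# Cross-check: sympy integrates t**(eps - 1) over [1, x] to the same thing.
t = sympy.symbols('t', positive=True)
integral = sympy.integrate(t**(eps - 1), (t, 1, x))
assert sympy.simplify(integral - f_eps) == 0
```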
So the type of the return value changes at asymptotes/limits (already) and thus that's not a pure function in terms of math. If a [math] function returns a more complex type signature instead of throwing a ZeroDivisionError (as Python core does) what is that then called? Is it differentiable or no, etc?
We throw ZeroDivisionError instead of axiomatically defining a ranking for
scalar*parameter*inf
if x > 0:
    assert 2*x*inf > x*inf  # because 2 > 1
But basically every CAS just prematurely throws away all terms next to infinity (by replacing the information in that expression with just infinity)? And nothing yet implements e.g. Conway's surreal number infinities?
Is negative infinity to the infinity greater or lesser than infinity?
assert (-1*math.inf)**math.inf == math.inf
assert (-1*sympy.oo)**sympy.oo == sympy.oo
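For CPython floats at least, the first assertion is checkable: IEEE 754 pow returns +inf for an infinite exponent whenever |base| > 1, and the "terms next to infinity" really are discarded. A quick check (the sympy case is a separate question, since sympy's `Pow` has its own evaluation rules):

```python
import math

# IEEE 754: pow(x, +inf) = +inf whenever |x| > 1, so this holds for floats.
assert (-1 * math.inf) ** math.inf == math.inf

# The scalar coefficient is unrecoverable once multiplied through:
# 2*x*inf and x*inf collapse to the same value.
assert 2 * math.inf == 1 * math.inf

# And once that information is gone, arithmetic degenerates: inf - inf is nan.
assert math.isnan(math.inf - math.inf)
```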
Here's a dumb Real symbol instead of prematurely discarding information that could be useful:
from sympy import Symbol, Wild, limit
from sympy.abc import x
Infinity = Symbol('Infinity', real=True)
All the axioms just change there: limit(-x**-1, x, 0)
An uphill battle for certain.
"[Python-ideas] Re: 'Infinity' constant in Python" https://mail.python.org/archives/list/python-ideas@python.or...
> So the type of the return value changes at asymptotes/limits (already) and thus that's not a pure function in terms of math.
Plenty of functions are discontinuous. Almost all of them, in fact. (This isn't true constructively, but GP is clearly doing classical analysis.)
> If a [math] function returns a more complex type signature instead of throwing a ZeroDivisionError (as Python core does) what is that then called? Is it differentiable or no, etc?
Depends on the smooth structures you've imposed on the domain and codomain, which requires much more powerful types to represent than are available in any mainstream language I'm aware of. (You might be able to contort Haskell into something sort of close, but it wouldn't be simple.)
Show HN: Monocle – bidirectional code generation library
I just published a bidirectional code generation library. Afaik it's the first of its kind, and it opens up a lot of possibilities for cool new types of dev tools. The PoC is for ruby, but the concept is very portable. https://blog.luitjes.it/posts/monocle-bidirectional-code-gen...
Yeah back in the 90's it was called "round-tripping"
https://www.ibm.com/docs/en/rhapsody/8.2?topic=developing-ro...
I did a lot of code generation work in those years, working on the two dominant Mac-based generators (AppMaker and Prototyper) but was never ambitious enough to try round-tripping because of the horrors of parsing C++.
https://en.wikipedia.org/wiki/Round-trip_engineering :
> Round-trip engineering (RTE) is a functionality of software development tools that synchronizes two or more related software artifacts, such as, source code, models, configuration files, and even documentation.[1] The need for round-trip engineering arises when the same information is present in multiple artifacts and therefore an inconsistency may occur if not all artifacts are consistently updated to reflect a given change. For example, some piece of information was added to/changed in only one artifact and, as a result, it became missing in/inconsistent with the other artifacts.
Source-to-source compiler > See also > ROSE: https://en.wikipedia.org/wiki/Source-to-source_compiler
Organization Discussions – GitHub Changelog
"What is GitHub Discussions? A complete guide" https://resources.github.com/devops/process/planning/discuss...
> GitHub Discussions vs. GitHub Issues: When to use each: When and how to use Discussions and Issues on a project—and how to turn a Discussion into an Issue (and vice versa).
Looks like they also support Polls now. There should be an obligatory note about e-voting, private keys (SK/PK), ZK, and "master keys" in non-Zero-Trust systems. https://github.com/topics/e-voting
Zero-trust security model: https://en.wikipedia.org/wiki/Zero_trust_security_model
"What’s new in GitHub Discussions: Organization Discussions, polls, and more" (2022-04) https://github.blog/2022-04-12-whats-new-in-github-discussio...
Unfortunately code can indeed generate code.
Fossil of dinosaur killed in asteroid strike found, scientists claim
The telling quote is:
"Dinosaurs: The Final Day with Sir David Attenborough will be broadcast on BBC One on 15 April at 18:30 BST. A version has been made for the US science series Nova on the PBS network to be broadcast later in the year."
say no more.
Over here that's PBS NOVA "Dinosaur Apocalypse: The Last Day" (2022) with Sir David Attenborough. https://www.pbs.org/wgbh/nova/video/dinosaur-apocalypse-the-...
It says "MAY 11, 2022 AT 10PM". Presumably that's EST or EDT.
This decent 25-minute version will do until then: [https://www.youtube.com/watch?v=UChQHrjefVU] (Less a distinguished British accent. ;-> )
List of impact craters on Earth https://en.wikipedia.org/wiki/List_of_impact_craters_on_Eart...
Cretaceous–Paleogene extinction event (the K–Pg event, formerly the K–T event) https://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_e... :
> A wide range of species perished in the K–Pg extinction, the best-known being the non-avian dinosaurs. It also destroyed myriad other terrestrial organisms, including some mammals, birds,[21] lizards,[22] insects,[23][24] plants, and all the pterosaurs.[25] In the oceans, the K–Pg extinction killed off plesiosaurs and mosasaurs and devastated teleost fish,[26] sharks, mollusks (especially ammonites, which became extinct), and many species of plankton. It is estimated that 75% or more of all species on Earth vanished.[27] Yet the extinction also provided evolutionary opportunities: in its wake, many groups underwent remarkable adaptive radiation—sudden and prolific divergence into new forms and species within the disrupted and emptied ecological niches. Mammals in particular diversified in the Paleogene,[28] evolving new forms such as horses, whales, bats, and primates. The surviving group of dinosaurs were avians, ground and water fowl who radiated into all modern species of bird.[29] Teleost fish,[30] and perhaps lizards[22] also radiated.
https://en.wikipedia.org/wiki/Near-Earth_object :
> A near-Earth object (NEO) is any small Solar System body whose orbit brings it into proximity with Earth. By convention, a Solar System body is a NEO if its closest approach to the Sun (perihelion) is less than 1.3 astronomical units (AU).[2] If a NEO's orbit crosses the Earth's, and the object is larger than 140 meters (460 ft) across, it is considered a potentially hazardous object (PHO).[3] Most known PHOs and NEOs are asteroids, but a small fraction are comets. [1]
Asteroid impact avoidance https://en.wikipedia.org/wiki/Asteroid_impact_avoidance :
> In 2016, a NASA scientist warned that the Earth is unprepared for such an event.[3] In April 2018, the B612 Foundation reported "It's 100 percent certain we'll be hit by a devastating asteroid, but we're not 100 percent sure when."[4] Also in 2018, physicist Stephen Hawking, in his final book, Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. [5][6][7] Several ways of avoiding an asteroid impact have been described.[8] Nonetheless, in March 2019, scientists reported that asteroids may be much more difficult to destroy than thought earlier.[9][10] In addition, an asteroid may reassemble itself due to gravity after being disrupted.[11] In May 2021, NASA astronomers reported that 5 to 10 years of preparation may be needed to avoid a virtual impactor based on a simulated exercise conducted by the 2021 Planetary Defense Conference. [12][13][14]
- Nkalakatha the "#megamaser" (2022) https://www.sciencealert.com/extreme-megamaser-discovered-sh... ... (Can we harvest or collect and point - or pull against - such energy sources which are already present, using a minimum perturbative force in n-body idk QFT+QG fields in order to effectively read and write to points in stochastic spacetime, which is presumably all embedded in the horizon disc of one or more microscopic and larger black holes?)
Can a train of multiple solar collectors + sails + lasers exceed c (plus some limit approaching c)? Or does that warp spacetime for NEOs, over their net effective path through spacetime?
- A QG Quantum Gravity duality for idk Gravitational lensing > Explanation in terms of spacetime curvature: https://en.wikipedia.org/wiki/Gravitational_lens
Claimed moons of Earth https://en.wikipedia.org/wiki/Claimed_moons_of_Earth
> Although the Moon is Earth's only natural satellite, there are a number of near-Earth objects (NEOs) with orbits that are in resonance with Earth. These have been called "second" moons of Earth. [3]*
> 469219 Kamoʻoalewa, an asteroid discovered on 27 April 2016, is possibly the most stable quasi-satellite of Earth.[4] As it orbits the Sun, 469219 Kamoʻoalewa appears to circle around Earth as well. It is too distant to be a true satellite of Earth, but is the best and most stable example of a quasi-satellite, a type of near-Earth object. They appear to orbit a point other than Earth itself, such as the orbital path of the NEO asteroid 3753 Cruithne. Earth trojans, such as 2010 TK7, are NEOs that orbit the Sun (not Earth) on the same orbital path as Earth, and appear to lead or follow Earth along the same orbital path.
> Other small natural objects in orbit around the Sun may enter orbit around Earth for a short amount of time, becoming temporary natural satellites. As of 2020, the only confirmed examples have been 2006 RH120 in Earth orbit during 2006 and 2007,[1] and 2020 CD3 in Earth orbit between 2018 and 2020. [5][6]
ʻOumuamua from ~Vega (2017)?: https://en.wikipedia.org/wiki/%CA%BBOumuamua
We’ve got a science opportunity overload: Launching the Wolfram Institute
Wind Turbine Blades Can’t Be Recycled, So They’re Piling Up in Landfills (2020)
"GE produces world's largest recyclable wind turbine blade" (2022) https://newatlas.com/energy/ge-worlds-largest-recyclable-win...
> The 62-meter (203-ft) prototype blade is made with Elium resin from materials company Arkema, which is a glass-fiber reinforced thermoplastic. Not only is the material 100 percent recyclable, it is said to deliver a similar level of performance to thermoset resins that are favored for their lightweight and durability.
> Through a chemical recycling method, the material can be depolymerized and turned into a new resin for re-use, acting as a proof-of-concept for a circular economy loop for the wind energy sector. Before that happens, in the coming weeks LM Wind Power will start full-scale structural testing to verify the blade's performance
And for blades that aren’t recycled, they can be shredded into insulation pellets or feedstock for cement.
Why doesn't that work with existing thermosets?
Could these be feedstock for something like these formed LEGO-style hempcrete blocks with structure that are stronger and more insulating than structural concrete? https://youtu.be/eqLXXjvQXgI
Traditional composites are difficult to recycle because of several reasons: - the thermoset polymer matrix cannot be melted or separated from the fiber in any reasonable way - the thermoset resin cannot be reformed or reshaped into a useful shape or material - graphite fiber composites in particular are very, very hard on cutting tools and are difficult to chop up (not sure about glass fiber composites, but on bulk they have similar strength characteristics to some aluminums)
These properties are difficult to separate from composites because we want composite parts to survive the elements, often high temperatures, and be robust and reliable with little fatigue
What about waterjet?
Or e.g. SYLOS laser for transmutation?
Or replicating nanomachines?
Or a vat of something like the stuff that eats plastic and everything else in the ocean if it gets out?
Probably just orders of magnitude cheaper to just bury it somewhere
https://www.quora.com/How-many-wind-turbines-could-fit-in-1-... :
> The density of wind turbines per unit of land area depends on many factors. Wind farms containing very large (1.5–2 megaWatt) turbines may have only one for every 25–50 acres. Smaller turbines can be more closely spaced.
> At either rate, the overall harvestable wind energy per unit of land at fully rated conditions (approx. 12 meters/sec wind speed), is generally in the range of 50 to 100 kW per acre.
harvestable_wind_energy = 50-100 kW/acre
https://www.energy.gov/eere/articles/wind-turbines-bigger-be... : average_rotor_diameter: 120m (393ft)
blade_length = 1/2 * average_rotor_diameter = 60m (197 ft)
How many rotor blades can be stored on an acre? Acre = 43560 ft**2
blade_depth__on_end =
spacing_vertical =
spacing_horizontal =
How much less $ does an acre storing the maximum number of wind turbines yield compared to a wind turbine on, say, 50 acres?
Sphere packing: https://en.wikipedia.org/wiki/Sphere_packing
Wind_turbine_design#Blade_recycling https://en.wikipedia.org/wiki/Wind_turbine_design#Blade_recy...
In Python, [JupyterLite or Mamba] Pint and Uncertainties might be helpful for units and uncertainty/error propagation.
The https://numpy.org/ pyolite (WASM) IPython demo has SymPy installed in the env but not Pint?
GhostSCAD: Marrying OpenSCAD and Golang
I wonder how many OpenSCAD wrappers now exist? I know of scad-clj [0], openpyscad [1], and solidpython [2].
I particularly like scad-clj, because of `lein auto generate`. It watches source files, and regenerates the OpenSCAD files automatically, which OpenSCAD then also picks up. Although I'm not well versed in Clojure, and find debugging Clojure tricky, the workflow is just so good.
[0] https://github.com/farrellm/scad-clj
You know the language is limited/poorly thought out when dozens of people are putting time into making tools that do nothing but wrap it in order to make it usable.
https://github.com/CadQuery/cadquery :
> CadQuery is often compared to OpenSCAD. Like OpenSCAD, CadQuery is an open-source, script based, parametric model generator. However, CadQuery stands out in many ways and has several key advantages:
> The scripts use a standard programming language, Python, and thus can benefit from the associated infrastructure. This includes many standard libraries and IDEs.
> CadQuery's CAD kernel Open CASCADE Technology (OCCT) is much more powerful than the CGAL used by OpenSCAD. Features supported natively by OCCT include NURBS, splines, surface sewing, STL repair, STEP import/export, and other complex operations, in addition to the standard CSG operations supported by CGAL
> Ability to import/export STEP and the ability to begin with a STEP model, created in a CAD package, and then add parametric features. This is possible in OpenSCAD using STL, but STL is a lossy format.
> CadQuery scripts require less code to create most objects, because it is possible to locate features based on the position of other features, workplanes, vertices, etc.
> CadQuery scripts can build STL, STEP, and AMF faster than OpenSCAD.
What are some of the advantages of OpenSCAD tooling?
The existence of true one-way functions depends on Kolmogorov complexity
Isn’t Kolmogorov Complexity hard to measure, because it depends on the language chosen?
The value of K for any language is within a constant of K for any other language. To convince yourself this is true, imagine constructing an interpreter for your preferred language in whatever language you are given. You can always prepend this (fixed-sized) interpreter to a program in your preferred language, adding a constant value to the complexity.
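The interpreter-prepending argument above is the invariance theorem; in symbols:

```latex
% For any two languages L_1, L_2 there is a constant c_{12}
% (the length of an L_1 interpreter written in L_2) such that, for all strings x:
K_{L_2}(x) \le K_{L_1}(x) + c_{12}
```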
K can be arbitrarily large though.
I think the key is that K is fixed per language, not per input. So if you have a sequence of inputs of complexity 1, 10, 100, 1000, 10000...in language L1, then in language L2, the complexities will be something like 1+K, 10+K, 100+K, 1000+K, 10000+K,.... In the limit of very-high-complexity-input (pick a random TB of data as your string) the language-specific constant is overwhelmed.
1+K, 10+K, ... are merely upper bounds on the Kolmogorov complexities in L2, but the actual complexities can be lower, by an arbitrary amount for some strings. Furthermore, what's considered complex in L1 and L2 can be very different, and although "most" strings would be complex in either language, they don't have to be the same subsets in L1 and L2.
What is the correlation between _ complexity and e.g. EVM/eWASM opcode [CPU,] costs?
https://news.ycombinator.com/item?id=28499001
> Smart contracts cost CPU usage with costed opcodes. eWASM (Ethereum WebAssembly) has costed opcodes for redundantly-executed smart contracts (that execute on n nodes of a shard) https://ewasm.readthedocs.io/en/mkdocs/determining_wasm_gas_...
> AFAIU, while there are DLTs that cost CPU, RAM, and Data storage between points in spacetime, none yet incentivize energy efficiency by varying costs depending upon whether the instructions execute on a FPGA, ASIC, CPU, GPU, TPU, or QPU?
Language of fungi derived from their electrical spiking activity
Lifetime Annotations for C++
Looks like at least some of the authors are working for Google (if not everyone), with compiler backgrounds. I wonder if this is a strategical investment from Google to make cxx like approach more robust? Chrome team showed some interests on using Rust but not sure if there's any significant code written in Rust from then. Benefit from using Rust beside well isolated library might be quite limited without this kind of lifetime annotation while interop itself is additional cognitive overheads for programmers.
I think it makes the most sense for Google, considering the ridiculous amount of C++ code. Even Spanner alone is larger than most company's codebases, and rewriting it in Rust would take decades, especially because of the paradigm mismatch (C++ embraces shared mutability, Rust rejects it).
It also makes sense because in Spanner, doing any minor change required many months of review, because it was such critical infrastructure. Refactoring was a non-starter in a lot of cases.
So, a more gradual approach, just adding annotations that don't themselves affect the behavior of the program (just assist in static analysis) makes much more sense.
Rust does not reject shared mutability. Rather, it is explicitly reified via the Cell<>, RefCell<>, etc. patterns. Even unsafe shared mutability ala idiomatic C++ is just an UnsafeCell<> away.
In theory yes, but when you look at the average Rust program, those mechanisms are avoided in favor of more idiomatic approaches.
Interior mutability is used all over the place. It's absolutely necessary. The thing is you are forced to be clear about the runtime safety mechanism you're using. In general, it gets abstracted away behind a safe API.
Indeed it's present under the hood, we are operating on a CPU after all, which treats all of RAM as one giant shared-mutable blob of data. It will always be there, under some abstraction.
The point I'm trying to communicate is that in practice, for various reasons, Rust programs do not use shared mutable access to objects to the same extent that C++ programs do. For example, C++ programs use observers, and dependency injection (the pattern, not the framework) to have member pointers to mutable subsystems, and we just don't often see that in Rust programs. This is the paradigm mismatch I'm highlighting: to rewrite a C++ program in Rust often requires a large paradigm shift. The pain is particularly felt when making them interoperate in a gradual refactoring.
This is IMO one the bigger reasons that big rewrites to new languages fail, and why new languages benefit from being multi-paradigm, so that there's no paradigm mismatch to impede migrations.
IDK why the private, internal paradigms of e.g. Spanner would even need to be reimplemented in a clone instead of a port?
Looks like CockroachDB (Go), TiDB (Go, Rust), and YugabyteDB (C++) are Distributed SQL alternatives to Spanner.
... Which may or may not be able to rely upon inexpensive Lamport clocks: https://en.wikipedia.org/wiki/Lamport_timestamp#Lamport's_lo...
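As a rough illustration of why Lamport clocks are cheap, here is a hypothetical two-process sketch (note Spanner itself uses TrueTime, not this):

```python
class LamportClock:
    """Minimal Lamport logical clock."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: increment the counter
        self.time += 1
        return self.time

    def send(self):
        # Attach a timestamp to an outgoing message
        return self.tick()

    def receive(self, msg_time):
        # Merge rule: max(local, received) + 1
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()      # a.time == 1
b.receive(t)      # b.time == 2
assert b.time > a.time
```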
From https://westurner.github.io/hnlog/#comment-30603322 re https://en.wikipedia-on-ipfs.org/wiki/Database_index :
https://github.com/wilsonzlin/edgesearch :
> * Serverless full-text search with Cloudflare Workers, WebAssembly, and Roaring Bitmaps *
> "Edgesearch builds a reverse index by mapping terms to a compressed bit set (using Roaring Bitmaps) of IDs of documents containing the term, and creates a custom worker script and data to upload to Cloudflare Workers"
WASM or [C++] to WASM?
TIL about Roaring Bitmaps: /?q=roaring+bitmap https://medium.com/@amit.desai03/roaring-bitmaps-fast-data-s...
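The reverse-index idea in the quote can be sketched with plain Python sets standing in for Roaring bitmaps (the real thing stores the same document-ID sets in compressed form; the documents here are made up):

```python
# Toy corpus: doc_id -> text
docs = {0: "rust lifetimes", 1: "c++ lifetimes", 2: "rust macros"}

# Build the inverted index: term -> set of doc IDs containing it
index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

# A conjunctive query is just a bitmap (set) intersection
result = index["rust"] & index["lifetimes"]
assert result == {0}
```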
All those alternatives you mentioned are nowhere close to being on the level of Spanner in reliability or performance, particularly in a high contention scenario.
From https://discourse.llvm.org/t/rfc-lifetime-annotations-for-c/... :
> We are designing, implementing, and evaluating an attribute-based annotation scheme for C++ that describes object lifetime contracts. It allows relatively cheap, scalable, local static analysis to find many common cases of heap-use-after-free and stack-use-after-return bugs. It allows other static analysis algorithms to be less conservative in their modeling of the C++ object graph and potential mutations done to it. Lifetime annotations also enable better C++/Rust and C++/Swift interoperability.
> This annotation scheme is inspired by Rust lifetimes, but it is adapted to C++ so that it can be incrementally rolled out to existing C++ codebases. Furthermore, the annotations can be automatically added to an existing codebase by a tool that infers the annotations based on the current behavior of each function’s implementation.
> Clang has existing features for detecting lifetime bugs [...]
edit: Chrome evaluated previous [[clang::lifetimebound]], and that wasn't enough:
https://docs.google.com/document/d/e/2PACX-1vRZr-HJcYmf2Y76D...
Is this doc talking about the same feature that this post is about? It sounds like the doc is talking about a different, less-precise analysis that the post cites as prior art.
Learn about Concept Maps
I grew up hearing a lot of buzz for concept maps (or mind maps as I heard). I never found them to be as powerful as advertised.
A concept map is slightly easier than writing in bulletpoints because it can ignore structure. But this makes it incredibly difficult for anyone, including myself, to read after it is written.
At best, I use a concept map as an intermediary stage for figuring out how to write in bulletpoints, write in paragraphs, or draw in diagrams. This intermediary stage never lasts overnight for me, and I discard the concept map immediately.
("Mind maps are similar in structure to concept maps, developed by learning experts in the 1970s, but differ in that mind maps are simplified by focusing around a single central key concept."):
Perfect one sentence summary. I wasted a minute or two searching online for the difference between the two, but I should have read your comment first! Thanks for communicating precisely and concisely.
While scoping is necessary and useful, I hadn't ever learned that there were categorical restrictions upon mind map graphs, definitionally.
Centrality, Components, Cliques might be helpful for general network analysis: https://networkx.org/documentation/stable/reference/algorith...
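A minimal NetworkX sketch of those measures on a toy concept graph (hypothetical node names, assuming networkx is installed):

```python
import networkx as nx

# Toy concept map: Shapes -> Rectangle -> Square, Shapes -> Circle
G = nx.Graph([("Shapes", "Rectangle"), ("Rectangle", "Square"), ("Shapes", "Circle")])

centrality = nx.degree_centrality(G)
components = list(nx.connected_components(G))
cliques = list(nx.find_cliques(G))

assert len(components) == 1                              # one connected component
assert centrality["Shapes"] == centrality["Rectangle"]   # both have degree 2
```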
The difference between a URI and a URL; a URL is supposed to be dereferenceable and retrievable.
Named Graphs are composed of triples:
{named_graph_uri, subject_uri, predicate_uri, value_uri, [datatype_uri, language_uri]}
{g, s, p, o, [d, l]}
E.g. Rdflib supports various stores for and representations of RDF: https://en.wikipedia.org/wiki/RDFLib
Though general systems are more complex than the average [concept] network: there are nonlinearities in so many of the observed relations between components that trees and DAGs are laughably insufficient; systems descriptions require more than naively-acyclical graphs without nonlinearity in independent relations, even.
Graph (disambiguation) https://en.wikipedia.org/wiki/Graph
Glossary of systems theory https://en.wikipedia.org/wiki/Glossary_of_systems_theory
How best to describe nonlinear relations without a MultiDiGraph that bundles edges together by [QFT] field, because they're objectively not statistically independent?
s/value_uri/object/, which may be a URI:
{named_graph_uri, subject_uri, predicate_uri, object, [datatype_uri, language_uri]}
{g, s, p, o, [d], [l]}
A mindmap, where e.g. g=example.org/mind-maps/2022#our-mind-map and "a" is rdf:type:
@prefix : <http://example.org/mind-maps/2022#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix schema: <https://schema.org/> .
:Node a rdfs:Class .
:linksTo a rdf:Property .
:Shapes a :Node .
:Shapes :linksTo :Rectangle .
:Rectangle a :Node .
:Rectangle :linksTo :Square .
:Square a :Node .
:Square rdfs:label "Square" .
:Square schema:name "Square" .
:Rectangle rdfs:subClassOf :Shape .
:Square rdfs:subClassOf :Rectangle .
https://en.wikipedia.org/wiki/Notation3#Comparison_of_Notati...
The Personal Security Checklist
Also good: "The SaaS CTO Security Checklist [Redux]" https://github.com/vikrum/SecurityChecklists
"The Personal Infosec & Security Checklist" https://www.goldfiglabs.com/guide/personal-infosec-security-...
"The DevOps Security Checklist Redux" https://www.goldfiglabs.com/guide/devops-security-checklist/
... Years ago, I helped develop a checklist app for a hospital (in Python and JS at the time).
TIL checklists usually are justified, and may be the only process for collaboratively improving process controls that a healthy organization handling feedback has established; who gets to send PRs to the checklist, and what criteria should be applied such that evidence-based variations of process are objectively tested?
"Post-surgical deaths in Scotland drop by a third, attributed to a checklist" (2019) https://news.ycombinator.com/item?id=19684376
Study Tips from Richard Feynman
There's the famous Feynman method i.e. Study hard, Teach it to others, Identify the gaps in your own knowledge, and Simplify/Synthesize.
But people often also forget that there is another method associated with Feynman, aka The Feynman Algorithm (which was outlined by his colleague Murray Gell-Mann, a Nobel Prize-winning physicist himself). This method goes as follows:
1. Write down the problem. 2. Think real hard. 3. Write the solution.
Not to discount or discredit anything (or anyone) here but we must understand that Feynman was no average person and that his advice on anything related to learning or problem-solving must be viewed through an optics that adequately adjusts for his intellect as well.
> Teach it to others
If there's no one around, try explaining it to yourself. An easy method to determine whether you superficially understood a topic or not. After reading some material, it's remarkable how little of it you've truly understood sometimes.
Minimodem – general-purpose software audio FSK modem
If you are interested in transmitting data over sound between airgapped devices, make sure to also checkout ggwave [0]. I've been working on this library on and off during the past year in my free time. I focused on making a FSK protocol that is robust in noisy environments at the cost of low bandwidth. Found some fun applications in various hobby projects.
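The FSK idea is simply mapping each bit to one of two tones; a toy modulator sketch (illustrative frequencies, not ggwave's or minimodem's actual protocol):

```python
import math

SAMPLE_RATE = 48000
BAUD = 100                    # symbols per second (illustrative)
F0, F1 = 1200.0, 2200.0       # space/mark tone frequencies (illustrative)

def fsk_modulate(bits):
    """Map each bit to a burst of sine samples at F0 (for 0) or F1 (for 1)."""
    samples_per_bit = SAMPLE_RATE // BAUD
    out = []
    for bit in bits:
        f = F1 if bit else F0
        for n in range(samples_per_bit):
            out.append(math.sin(2 * math.pi * f * n / SAMPLE_RATE))
    return out

samples = fsk_modulate([1, 0, 1])
assert len(samples) == 3 * (SAMPLE_RATE // BAUD)
```

A demodulator would run the inverse: window the samples and pick whichever tone carries more energy per symbol period.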
Could this be used to embed e.g. the sports "game clock" clock time(s) in broadcast TV/audio/video streams; in order to synchronize an air-gapped device next to the media signal reproduction unit?
For example at a grille during the game.
FWIU, e.g. Chromecast have ultrasonic pairing.
Digital broadcasting adds significant latency so it would be in sync with the TV but still out of sync with the actual game (and other broadcast receivers).
The same issue affects radio time signal (e.g. https://en.wikipedia.org/wiki/Greenwich_Time_Signal, last beep is the exact top of the hour), so some broadcasters will no longer play them on internet streams/digital radio, rather than be inaccurate.
So live TV could broadcast [ultrasonic] timecodes during ad breaks such that the gameclock would be synchronized even when the viewer initially tunes in during ad breaks, but "Video On Demand" would be fine because it's not the current time timecode, it would be a game time timecode?
Selecting between multiple audible [sports bar TVs, radios] might imply need for a somewhat-unique signal code, or only over e.g. USB would work with more than one game on; because there's naturally noise in that airgap channel.
Yes absolutely! (Edit: or maybe not :-) see sibling comment for more info)
Actually, the "Waver" youtube video linked at the top of the README has an embedded ultrasound transmission at around 0:36. I can for example decode the message on my iPhone by simply running the Waver app and playing the video on my PC.
Some people have already done some work for AV sync in the issues of the project [0].
https://www.adafruit.com/product/3421 https://learn.adafruit.com/adafruit-i2s-mems-microphone-brea... :
> The I2S is a small, low-cost MEMS mic with a range of about 50Hz - 15KHz, good for just about all general audio recording/detection.
From https://en.wikipedia.org/wiki/Ultrasound :
> Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.
Raspberry Pi Pico (RP2040), Pi Zero [2 W]: $5+
Power supply, case, wall-mount:
Mic: ~$6
Huge [red] 7-segment display with I2C and voltage regulator components taped to the wall next to the sports bar TVs:
Ask HN: Why don't PCs have better entropy sources?
After reading the thread "Problems emerge for a unified /dev/*random" (1) I was wondering why PCs don't have a bunch of sensors available to draw entropy from.
Is this assumption correct, that adding a magnetometer, accelerometer, simple GPS, etc to a motherboard would improve its entropy gathering? Or is there a mathematical/cryptographical rule that makes the addition of such sensors useless?
Do smartphones have better entropy gathering abilities? It seems like phones would be able to seed a RNG based on input from a variety of sensors that would all be very different between even phones in the same room. Looking at a GPS Android app like Satstat (2) it feels like there's a huge amount of variability to draw from.
If such sensors would add better entropy, would it really cost that much to add them to PC motherboards?
----
(1) https://news.ycombinator.com/item?id=30848973
(2) https://mvglasow.gitlab.io/satstat/ & https://f-droid.org/en/packages/com.vonglasow.michael.satsta...
I think it's worth repeating this:
While there are platforms with better and worse hardware sources of unpredictable bits, the problem with Linux /dev/random isn't so much a hardware issue, but rather a software one. The fundamental problem Linux tries to solve isn't that hard, as you can see from the fact that so many other popular platforms running on similar hardware have neatly solved it.
The problem with the LRNG is that it's been architecturally incoherent for a very long time (entropy estimation, urandom vs random, lack of clarity about seeding and initialization status, behavior doing bootup). As a result, an ecosystem of software has grown roots around the design (and bugs) of the current LRNG. Major changes to the behavior of the LRNG breaks "bug compatibility", and, because the LRNG is one of the core cryptographic facilities in the kernel, this is an instance where you really really don't want to break userland.
The basic fact of kernel random number generation is this: once you've properly seeded an RNG, your acute "entropy gathering" problem is over. Continuous access to high volumes of high-entropy bits are nice to have, but the kernel gains its ability to satisfy gigabytes of requests for random numbers from the same source that modern cryptography gains its ability to satisfy gigabytes of requests for ciphertext with a 128 bit key.
People looking to platform hardware (or who fixate on the intricacies of threading the LRNG into guest VMs) are mostly looking in the wrong place for problems to solve. The big issue today is that the LRNG is still pretty incoherent, but nobody really knows what would break if it was designed more carefully.
The piece I've been missing in this whole debate: why isn't the existing RNG simply frozen in its current bug-exact-behavior state and a new /dev/sane_random created?
Stuff that depends on the existing bugs in order to function can keep functioning. Everything else can move to something sane.
Obviously I'm missing something here.
> Obviously I'm missing something here
For a start, there's a long tail of migrating all useful software to /dev/sane_random. Moreover, there's a risk new software accidentally uses the old broken /dev/random.
Besides, /dev/sane_random essentially exists; it's just a syscall called getrandom().
It's not that simple; Donenfeld wants to replace the whole LRNG with a new engine that uses simpler, more modern, and more secure/easier-to-analyze cryptography, and one of the roadblocks there is that swapping out the engine risks breaking bugs that userland relies on.
Postgres wire compatible SQLite proxy
Awesome to see work in the DB wire compatible space. On the MySQL side, there was MySQL Proxy (https://github.com/mysql/mysql-proxy), which was scriptable with Lua, with which you could create your own MySQL wire compatible connections. Unfortunately it appears to have been abandoned by Oracle and IIRC doesn't work with 5.7 and beyond. I used it in the past to hack together a MySQL wire adapter for Interana (https://scuba.io/).
I guess these days the best approach for connecting arbitrary data sources to existing drivers, at least for OLAP, is Apache Calcite (https://calcite.apache.org/). Unfortunately that feels a little more involved.
NYTimes/DBSlayer (2007) wraps MySQL in JSON: https://open.nytimes.com/introducing-dbslayer-64d7168a143f
ODBC > Bridging configurations: https://en.wikipedia.org/wiki/Open_Database_Connectivity#Bri...
awesome-graphql > tools: security: https://github.com/chentsulin/awesome-graphql#tools---securi...
... W3C SOLID > Authorization and Access Control: https://github.com/solid/solid-spec#authorization-and-access...
"Hosting SQLite Databases on GitHub Pages" (2021) re: sql.js-httpvfs, DuckDB https://news.ycombinator.com/item?id=28021766
[edit] TIL the MS ODBC 4.0 spec is MIT Licensed, on GitHub, and supports ~challenge/response token auth: "3.2.2 Web-based Authentication Flow with SQLBrowseConnect" https://github.com/microsoft/ODBC-Specification/blob/master/...
> Applications that are unable to allow drivers to pop up dialogs can call SQLBrowseConnect to connect to the service.
> SQLBrowseConnect provides an iterative dialog between the driver and the application where the application passes in an initial input connection string. If the connection string contains sufficient information to connect, the driver responds with SQL_SUCCESS and an output connection string containing the complete set of connection attributes used.
> If the initial input connection string does not contain sufficient information to connect to the source, the driver responds with SQL_NEED_DATA and an output connection string specifying informational attributes for the application (such as the authorization url) as well as required attributes to be specified in a subsequent call to SQLBrowseConnect. Attribute names returned by SQLBrowseConnect may include a colon followed by a localized identifier, and the value of the requested attribute is either a single question mark or a comma-separated list of valid values (optionally including localized identifiers) enclosed in curly braces. Optional attributes are returned preceded with an asterisk (*).
> In a Web-based authentication scenario, if SQLBrowseConnect is called with an input connection string containing an access token that has not expired, along with any other required properties, no additional information should be required. If the access token has expired and the connection string contains a refresh token, the driver attempts to refresh the connection using the supplied refresh token.
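A toy parser for the output-attribute grammar quoted above (my reading of the spec text; the function name and example attribute names are made up):

```python
def parse_browse_attrs(s: str) -> dict:
    """Toy parser for SQLBrowseConnect-style output attributes:
    semicolon-separated 'name[:Localized Name]=?' or '...={a,b,c}',
    with optional attributes prefixed by '*'."""
    attrs = {}
    for part in filter(None, s.split(";")):
        optional = part.startswith("*")
        if optional:
            part = part[1:]
        name, _, value = part.partition("=")
        name = name.split(":", 1)[0]       # drop the localized identifier
        if value.startswith("{") and value.endswith("}"):
            choices = value[1:-1].split(",")   # enumerated valid values
        else:
            choices = None                     # '?' means free-form value
        attrs[name] = {"optional": optional, "choices": choices}
    return attrs

parse_browse_attrs("SERVER:Server=?;*DATABASE:Database={master,model}")
```

So the iterative dialog is: parse what the driver needs, fill in the required keys, call SQLBrowseConnect again.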
Also TIL ODBC 4.0 supports: https://github.com/microsoft/ODBC-Specification :
> Semi-structured data – Tables whose schema may not be defined or may change on a row-by-row basis
> Hierarchical Data – Data with nested structure (structured fields, lists)
> Web Authentication model
Show HN: Redo – Command line utility for quickly creating shell functions
Currently, I'll append a comment to a frequently used command for easy searching from my history. For a simple example, I can access `git diff -w ; git diff -w --cached # gitdiff` by pressing Ctrl+R and typing `# gitd`.
For commands I use frequently or that are clunky to maintain as one-liners, I'll convert them into functions in my bashrc.
This seems like the best of both worlds in many ways, or at least is a great third option to have.
From https://westurner.github.io/hnlog/#comment-20671184 ::
> I log shell commands with a script called usrlog.sh that creates [per-]$USER and per-virtualenv tab-delimited [$_USRLOG] logfiles with unique per-terminal-session identifiers [$_TERM_ID] and ISO8601 timestamps; so it's really easy to just grep for the apt/yum/dnf commands that I ran ad-hoc when I should've just taken a second to create an Ansible role with `ansible-galaxy init ansible-role-name ` and referenced that in a consolidated system playbook with a `when` clause. https://westurner.github.io/dotfiles/usrlog.html#usrlog
stid \#tutorial; echo "$_TERM_ID"
tail -n11 $_USRLOG
ut -n11
grep "$_TERM_ID" "$_USRLOG"
usrlog_grep "$_TERM_ID"
ug "$_TERM_ID"
usrlog_grep_parse "$_TERM_ID"
ugp "$_TERM_ID" # `type ugp`
usrlog.sh: https://github.com/westurner/dotfiles/blob/master/scripts/us...
Rustc_codegen_GCC can now bootstrap rustc
It's really great to be able to skip compiling LLVM when bootstrapping a whole OS/toolchain from source and Rust happens to be the only thing that pulls LLVM in.
Besides the gcc work I'm actually also hoping for the rustc_codegen_cranelift work to land one day!
Compiling llvm is a nightmare
Compiling LLVM for what purpose? Hacking on it? Producing an optimized build? Installing a build on a user's machine? You should be more specific.
The default clean build works just fine. On my machine I can clone the Git repo, do `mkdir build && cd build && cmake ../llvm && make -j64`, and end up with working binaries after 5 minutes.
The build takes 37GB by default here, and the binaries under bin/ are immediately usable. If you want a smaller build with shared libraries, just add a parameter. Want to throw in sub-projects such as LLD or Clang? Just another parameter. It's all very well-documented.
64 cores, and it takes 5 minutes? That’s… not awesome. Most people do not have 64 (presumably high performance) cores lying around for the purpose of building dependencies.
10 minutes of a c6i.32xlarge costs less than a dollar at on-demand rates in the us-east-2 AWS region. That'll get you 128 vCPUs and 256 GB of memory.
I think people put up with a lot of stuff that there's no reason to any more with on-demand computing.
Q: "Does my data fit in RAM?" https://news.ycombinator.com/item?id=22309883
A: https://yourdatafitsinram.net
Is there a one-liner to build llvm e.g. with k8s and containers, or just a `git push` to a PR? #GitOps
And another to publish a locally-signed tagged build?
`conda install rust` does install `cargo`.
`conda install -c conda-forge -y mamba; mamba install -y rust libllvm14 lit llvm llvm-tools llvmdev`
conda-forge/llvmdev-feedstock; ninja, cmake, python >= 3 https://github.com/conda-forge/llvmdev-feedstock/blob/main/r...
.scripts/run_docker_build.sh is called by build-locally.py: https://github.com/conda-forge/llvmdev-feedstock/blob/main/....
Black Holes Shown to Act Like Quantum Particles
Wouldn’t a singularity be the smallest point possible? By definition it is pretty much a cyclone of the smallest units of spacetime fabric. Any particle would need to exist above this scale in order to exist within the construct of the universe. If there were any object smaller than this, it would mean that whatever makes up a unit/point/pixel of spacetime fabric is actually an emergent construct of much smaller units. If even spacetime itself were composed of smaller units, perhaps these would be, or be composed of, a final, irreducibly small unit, the properties of which give rise to all variations of matter/energy, the laws which govern them, and their equivalence and convertibility.
Most scientists do not think that black holes have a "real singularity." The idea of a mass that occupies a single point in space comes from taking Einstein's equations beyond the event horizon, i.e. where the curvature of space becomes so extreme that nothing can resist being compressed to zero size. At some point between the event horizon and the center, quantum gravity (which is not understood) has to come into the picture. One idea is the "black hole firewall," which is that mass gets "stuck" at the boundary of the black hole.
White hole > Big Bang/Supermassive White Hole https://en.wikipedia.org/wiki/White_hole :
> Some researchers have proposed that when a black hole forms, a Big Bang may occur at the core/singularity, which would create a new universe that expands outside of the parent universe
Where's the exit on this thing?
Do the CMB or GWB indicate that the Shapley Attractor / Great Attractor interaction is the omnidirectional mouth / center of it all, or no? https://en.wikipedia.org/wiki/Shapley_Supercluster
Gravitational wave background > Cosmological sources https://en.wikipedia.org/wiki/Gravitational_wave_background
... FWIU, all we've done is observe minute confirmatory fluctuations from the ground; e.g. LIGO (Nobel Prize in Physics 2017, Thorne, NumFocus-supported tools)? Could that tell us where the center is? Should there even be one if we're actually inside a white hole / black hole combo?
What "dark matter" do we subtract from or add to the CMB or GWB?
... Severe questions of cosmological topology, in regards to a manifold for holographic projection
@PBSSpaceTime videos:
"Where Is The Center of The Universe?" (2022) https://youtu.be/BOLHtIWLkHg
"Could The Universe Be Inside A Black Hole?" (2022) https://youtu.be/jeRgFqbBM5E
"The Holographic Universe Explained" (2019) https://youtu.be/klpDHn8viX8
"Are Black Holes Actually Fuzzballs?" (2022) https://youtu.be/351JCOvKcYw
From https://www.quantamagazine.org/massive-black-holes-shown-to-... :
> Physicists are using quantum math to understand what happens when black holes collide. In a surprise, they’ve shown that a single particle can describe a collision’s entire gravitational wave.
"Scale invariance in quantum field theory" https://en.wikipedia.org/wiki/Scale_invariance#Scale_invaria...
Dagger: a new way to build CI/CD pipelines
Hi everyone, I'm one of the co-founders of Dagger (and before that founder of Docker). This is a big day for us, it's our first time sharing something new since, you know... Docker.
If you have any questions, I'll be happy to answer them here!
Hi! I've browsed the docs quickly, and I have a few questions.
Seems to assume that all CI/CD workflows work in a single-container-at-a-time pattern. How about testing when I need to spin up an associated database container for my e2e tests? Is it possible, and just omitted from the documentation?
Not familiar with cue, but can I import/define a common action that is used across multiple jobs? For example on GitHub I get to duplicate the dependency installation/caching/build across various jobs. (yes, I'm aware that now you can makeshift on GitHub a composite action to reuse)
Can you do conditional execution of actions based on passed in input value/env variable?
Any public roadmap of upcoming features?
> Seems to assume that all CI/CD workflows work in a single container at a time pattern.
Dagger runs your workflows as a DAG, where each node is an action running in its own container. The dependency graph is detected automatically, and all containers that can be parallelized (based on their dependencies) will be parallelized. If you specify 10 actions to run, and they don't depend on each other, they will all run in parallel.
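That "automatically parallelize whatever the dependency graph allows" behavior can be sketched in a few lines (my own illustration using Python's stdlib graphlib, not Dagger's actual scheduler; the action names are made up):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def parallel_waves(deps: dict) -> list:
    """Group actions into waves; every action within a wave has all its
    dependencies satisfied and can run in parallel with its wave-mates."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = list(ts.get_ready())   # all actions currently unblocked
        waves.append(sorted(ready))
        ts.done(*ready)                # unblocks their dependents
    return waves

# 'test' and 'lint' only depend on 'build', so they run side by side
parallel_waves({"build": [], "test": ["build"],
                "lint": ["build"], "deploy": ["test", "lint"]})
```

With no declared dependencies at all, every action lands in the first wave, which matches the "10 independent actions all run in parallel" behavior described.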
> How about testing when I need to spin up an associated database container for my e2e tests. Is it possible, and just omitted from the documentation?
It is possible, but not yet convenient (you need to connect to an external docker engine, via a docker CLI wrapped in a container). We are working on a more pleasant API that will support long-running containers (like your test DB) and more advanced synchronization primitives (wait for an action; terminate; etc.)
This is discussed in the following issues:
- https://github.com/dagger/dagger/issues/1337
- https://github.com/dagger/dagger/issues/1249
- https://github.com/dagger/dagger/issues/1248
> Not familiar with cue, but can I import/define a common action that is used across multiple jobs?
Yes! That is one of the most important features. CUE has a complete packaging system, and we support it natively.
For example here is our "standard library" of CUE packages: https://github.com/dagger/dagger/tree/main/pkg
> For example on GitHub I get to duplicate the dependency installation/caching/build across various jobs. (yes, I'm aware that now you can makeshift on GitHub a composite action to reuse)
Yes code reuse across projects is where Dagger really shines, thanks to CUE + the portable nature of the buildkit API.
Note: you won't need to configure caching though, because Dagger automatically caches all actions out of the box :)
> Can you do conditional execution of actions based on passed in input value/env variable?
Yes, that is supported.
> Any public roadmap of upcoming features?
For now we rely on raw Github issues, with some labels for crude prioritization. But we started using the new Github projects beta (which is a layer over issues), and plan to open that to the community as well.
Generally, we develop Dagger in the open. Even as a team, we use public Discord channels (text and voice) by default, unless there is a specific reason not to (confidential information, etc.)
Thank you for the detailed response. I appreciate you taking the time. One last question/note.
> Note: you won't need to configure caching though, because Dagger automatically caches all actions out of the box :)
Is this strictly because it's using Docker underneath and layers can be reused? If so, unless those intermediary layers are somehow pushed/pulled by the dagger github action (or any associated CI/CD tool equivalent), experience on hosting server is going to be slow.
Sidenote: around 2013 I worked on a hacky custom container automation workflow within Jenkins for ~100 projects, and spent considerable effort in setting up policies to prune intermediary images.
Thus, on certain types of workflows without any pruning, a local development machine can be polluted with hundreds of images unless the user is specifically made aware of stale ones. Does/will Dagger keep track of the images it builds? I think a command like `git gc` could make sense.
> > Note: you won't need to configure caching though, because Dagger automatically caches all actions out of the box :)
> Is this strictly because it's using Docker underneath and layers can be reused?
Not exactly: we use Buildkit under the hood, not Docker. When you run a Dagger action, it is compiled to a DAG, and run by buildkit. Each node in the DAG has content-addressed inputs. If the same node has been executed with the same inputs, buildkit will cache it. This is the same mechanism that powers caching in "docker build", but generalized to any operation.
The buildkit cache does need to be persisted between runs for this to work. It supports a variety of storage backends, including posix filesystem, a docker registry, or even proprietary key-value services like the Github storage API. If buildkit supports it, Dagger supports it.
Don't let the "docker registry" option confuse you: buildkit cache data isn't the same as docker images, so it doesn't carry the same garbage collection and tag pruning problems.
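The content-addressed caching described above can be sketched like this (my own toy, not Buildkit's implementation; the operation names are made up):

```python
import hashlib
import json

_cache = {}
calls = []  # records actual executions, so cache hits are visible

def cache_key(op: str, inputs: dict) -> str:
    """Content-addressed key: identical op + inputs always hash identically,
    regardless of when or where the action runs."""
    blob = json.dumps({"op": op, "inputs": inputs}, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def run_cached(op: str, inputs: dict, fn):
    key = cache_key(op, inputs)
    if key not in _cache:        # cache miss: execute once and store
        calls.append(key)
        _cache[key] = fn(inputs)
    return _cache[key]

run_cached("compile", {"src": "main.go"}, lambda i: "binary-v1")
run_cached("compile", {"src": "main.go"}, lambda i: "binary-v1")  # hit
```

Note there are no named references anywhere, only keys derived from content, which is why pruning is "just" a cache-eviction problem rather than a tag-management one.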
> Don't let the "docker registry" option confuse you: buildkit cache data isn't the same as docker images, so it doesn't carry the same garbage collection and tag pruning problems.
IIRC doesn't buildkit store its cache data as fake layer blobs + manifest?
I don't see how it can avoid the garbage collection and tag pruning problems since those are limitations of the registry implementation itself.
You still need to manage the size of your cache, since in theory it can grow infinitely. But it’s a different problem than managing regular Docker images, because there are no named references to worry about: just blobs that may or may not be reused in the future. The penalty for removing the “wrong” blob is a possible cache miss, not a broken image.
Dagger currently doesn’t help you remove blobs from your cache, but if/when it does, it will work the same way regardless of where the blobs are stored (except for the blob storage driver).
Is there a task runtime stat for a blob pruning task?
This sounds like memoization caching: https://en.wikipedia.org/wiki/Memoization
> In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.
Re: SBOM: Software Bill of Materials, OSV (CloudFuzz), CycloneDX, LinkedData, ld-proofs, sigstore, and software supply chain security: "Podman can transfer container images without a registry" https://news.ycombinator.com/item?id=30681387
Can Dagger cache the (layer/task-merged) SBOM for all of the {CodeMeta, SEON OWL} schema.org/Thing s?
Grafana Mimir – Horizontally scalable long-term storage for Prometheus
There isn't a link to the project on the page (that I could find) so it almost looked like it's not open source. But here it is: https://github.com/grafana/mimir.
You have to find the "Download" button and click it, it's very non-obvious :< The entire page seems to be designed to funnel you into signing up for their paid service, which makes sense, but still doesn't feel great...
Recently switched from their cloud service back to on-premise. The cloud version wasn't being updated and the entire setup experience left a lot to be desired with how you connect their on-premise grafana agent, especially if you aren't using their easy button deployment stuff. Also, billing for metrics is insane, as on any given day my metric load may vary between 5-7k or more. This caused some operational overhead as I was constantly tweaking scrapers to reduce useless metrics.
For $50/mo, you can self host everything easier, cheaper and with more control IMO.
> For $50/mo, you can self host everything easier, cheaper and with more control IMO.
Can you give an example as to how you could self host a grafana stack for $50/month? On AWS that buys you 4 cores, 8GB memory and 0 storage, and it's certainly not easier than clicking one button on the grafana website.
There are Helm charts available for all Grafana products so if you already run a Kubernetes cluster and have spare capacity you can just throw it up there. Loki supports shipping logs to GCS/S3 natively and Prometheus can use Cortex (also available as a Helm chart) to do the same. Once you throw Grafana behind SSO and implement a backup cronjob you're done until you reach scale and have to start deploying/scaling individual components separately.
I implemented most of the above using Terraform on a managed DigitalOcean cluster on a Saturday a few months back; it wasn't super-hard. Alternatively you could rent a few VPSes someplace and use k3s or similar to get an unmanaged cluster.
Physicists Build Circuit That Generates Clean, Limitless Power from Graphene (2020)
It's been two years; what is their progress? Has it failed to be exploitable, then?
ScholarlyArticles that cite "Fluctuation-induced current from freestanding graphene" (2020) https://scholar.google.com/scholar?cites=3748350803951231734...
Recommendations when publishing a WASM library
conda: "Adding a WebAssembly platform" https://github.com/conda/conda/issues/7619
pyodide.loadPackage("numpy");
pyodide.loadPackage("https://foo/bar/numpy.js");
# import micropip
micropip.install('https://example.com/files/snowballstemmer-2.0.0-py2.py3-none-any.whl')
"Creating a Pyodide package" > "2. Creating the meta.yaml file"
https://pyodide.org/en/stable/development/new-packages.html#...
conda-forge: "WASM as a supported architecture" https://github.com/conda-forge/conda-forge.github.io/issues/...
A statically typed scripting language that transpiles to Posix sh
If you’re not using SSH certificates you’re doing SSH wrong (2019)
Rant engaged. As a person who feels responsible for ensuring what I build is secure, the security space feels inscrutably defeating. Is there a dummies guide, MOOC, cert, or other instructional material to better get a handle on all these things?
SSH keys make sense. But certificates? Is this OIDC, SAML, what? Is it unreasonable to request better and deeper "how to do {new security thing}" when PKI is a new acronym to someone? Where can I point my data science managers so they can understand the need and how to implement measures to have security on PII-laden dashboards? As so on.
I can surely recommend reading into SSH certificates using the ssh-keygen manpage. No extra tools required.
I sign SSH certificates for all my keypairs on my client devices, principal is set to my unix username, expiry is some weeks or months.
The servers have my CA set via TrustedUserCAKeys in sshd_config (see manpages). SSH into root is forbidden by default; I SSH into an account with my principal name and then sudo or doas.
My gain in all of this: I have n clients and m servers. Instead of having to maintain all keys for all clients on all servers, I now only need to maintain the certificate on each client individually. If I lose or forget a client, its certificate expires and is invalidated.
`ssh-keygen` #Certificates: https://man7.org/linux/man-pages/man1/ssh-keygen.1.html#CERT...
"DevSec SSH Baseline" ssh_spec.rb, sshd_spec.rb https://github.com/dev-sec/ssh-baseline/blob/master/controls...
"SLIP-0039: Shamir's Secret-Sharing for Mnemonic Codes" https://github.com/satoshilabs/slips/blob/master/slip-0039.m...
> Shamir's secret-sharing provides a better mechanism for backing up secrets by distributing custodianship among a number of trusted parties in a manner that can prevent loss even if one or a few of those parties become compromised.
> However, the lack of SSS standardization to date presents a risk of being unable to perform secret recovery in the future should the tooling change. Therefore, we propose standardizing SSS so that SLIP-0039 compatible implementations will be interoperable.
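For intuition, here is a toy Shamir split/combine over a prime field (my own sketch of the math; SLIP-0039 itself works over GF(256) with checksummed mnemonic encodings, which this does not attempt):

```python
import random

P = 2**127 - 1  # a Mersenne prime; field choice is arbitrary for a toy

def split(secret: int, k: int, n: int):
    """Split secret into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        # Evaluate the random degree-(k-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers f(0) = secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Fewer than k shares reveal nothing about the secret, which is what makes distributing custodianship safe.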
Google's Certificate Transparency Search page to be discontinued May 15th, 2022
I know there are CT search services like crt.sh, but is it practical to download the raw data and search it locally? If the logs are append‐only, it feels like a perfect usecase for rsync.
Yes, you can use the certificate-transparency go code to pull down from the trillian API https://github.com/google/certificate-transparency-go/blob/m...
You would need to know the index, or you could just iterate over a range
Do you think that CT log data replication would be more secure and efficient if the CT logs were stored by a zero trust distributed application like a blockchain, instead of Merkle signatures in a database owned by one party (that's now discontinuing free indexing, at least)?
From "Oak, a Free and Open Certificate Transparency Log" (LetsEncrypt 2019) https://news.ycombinator.com/item?id=19920002 :
> Trillian is a centralized Merkle tree: it doesn't support native replication [...] According to the trillian README, trillian depends upon MySQL/MariaDB and thus internal/private replication is as good as the SQL replication model (which doesn't have a distributed consensus algorithm like e.g. paxos).
And what about indexing and search queries at volume, again without replication?
From "A future for SQL on the web" https://news.ycombinator.com/item?id=28158491 :
> https://thegraph.com/docs/indexing
>> Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobbs-Douglas Rebate Function.
>> GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network.
>> Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing.
FWIW, here's how OSV affords search queries: https://github.com/google/osv#data-dumps
> For convenience, these sources are aggregated and continuously exported to a GCS bucket maintained by OSV: gs://osv-vulnerabilities
> This bucket contains individual entries of the format gs://osv-vulnerabilities/<ECOSYSTEM>/<ID>.json as well as a zip containing all vulnerabilities for each ecosystem at gs://osv-vulnerabilities/<ECOSYSTEM>/all.zip
> E.g. for PyPI vulnerabilities:
# Or download over HTTP via https://osv-vulnerabilities.storage.googleapis.com/PyPI/all.zip
gsutil cp gs://osv-vulnerabilities/PyPI/all.zip .
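Once the ecosystem zip is local, "search" is just zipfile + json. A self-contained sketch (it builds an in-memory stand-in for the downloaded all.zip, assuming the one-<ID>.json-per-entry layout described above; the entry ID is made up):

```python
import io
import json
import zipfile

def load_vulns(zip_bytes: bytes) -> dict:
    """Index an OSV-style all.zip (one <ID>.json per vulnerability) by ID."""
    vulns = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith(".json"):
                entry = json.loads(zf.read(name))
                vulns[entry["id"]] = entry
    return vulns

# Stand-in for a downloaded gs://osv-vulnerabilities/PyPI/all.zip
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("PYSEC-0001.json",
                json.dumps({"id": "PYSEC-0001", "summary": "example"}))
vulns = load_vulns(buf.getvalue())
```

Local querying after a bulk `cp` is exactly the pattern suggested below for CT logs, too.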
Hopefully, with an incentivized Blockchain Indexing service and/or e.g. GCS buckets that you just always `cp` and then load locally and then query locally, we can find a solution for queries of the growing CT Certificate Transparency logs.
Implementing a toy version of TLS 1.3
This is wonderful.
I was super excited to dive in and find the RSA code so I could preen about Bleichenbacher's vulnerability, but she neatly sidestepped that by doing ECDH. Then I thought, well, maybe it's P-curve ECDH and I can preen about invalid curve attacks on static-ephemeral ECDH. But nope, X25519! My point here, apart from making fun of myself for being the kind of person who would write this stuff on a message board, is TLS 1.3 is pretty solid.
The "block thing" that's kind of weird is, I assume, the TLS Record Layer. TLS runs (ordinarily) over TCP, which provides a non-demarcated stream of bytes. TLS breaks that stream up into records, and runs its handshake messages over one type of record, (say) HTTPS over another, and "alerts" over a third. The Record Layer also interacts, I think, with TLS's misbegotten compression system?
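The record framing is simple enough to parse in a few lines. A toy parser for the 5-byte record header (my own sketch; TLS 1.3 keeps a legacy version field of 0x0303 in the header, and content types 21/22/23 are alert/handshake/application data):

```python
import struct

RECORD_TYPES = {21: "alert", 22: "handshake", 23: "application_data"}

def parse_record(data: bytes):
    """Parse one TLS record: 1-byte content type, 2-byte legacy version,
    2-byte payload length, then the payload itself."""
    ctype, version, length = struct.unpack("!BHH", data[:5])
    payload = data[5:5 + length]
    return RECORD_TYPES.get(ctype, "unknown"), hex(version), payload

# A fake handshake record: type 22, legacy version 0x0303, 3-byte body
record = struct.pack("!BHH", 22, 0x0303, 3) + b"abc"
parse_record(record)  # → ("handshake", "0x303", b"abc")
```

The length field is what demarcates messages within TCP's otherwise unbroken byte stream.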
In the same vein as this project (but with different goals) is Trevor Perrin's tlslite, which is implemented in pure Python: https://github.com/trevp/tlslite
> tlslite
"PEP 543 – A Unified TLS API for Python" #interfaces (-2016) https://peps.python.org/pep-0543/#interfaces
Learning a protocol by writing a toy client (or toy server) is a blast. It's so satisfying to see a real, production-quality server sending real responses to your little mess.
You'd probably enjoy work as a software pentester, where the docket --- at least for non-web-applications, which admittedly are the most common project if you don't specialize --- is almost entirely building tooling-grade implementations of random protocols so you can test for vulnerabilities.
Here's caddy's go/tls wrapper with e.g. ACME, OCSP stapling: https://github.com/caddyserver/caddy/blob/master/modules/cad...
Django-ca also does OCSP and certbot-compatible ACMEv2 w/ known limitations: https://django-ca.readthedocs.io/en/latest/acme.html#known-l...
E.g. https://google.github.io/clusterfuzzlite/ is likely not so great at protocols because that requires testing concurrent and distributed systems and TLAplus, which at least currently can't find side channels FWIU.
https://github.com/secfigo/Awesome-Fuzzing#network-protocol-...
OSS-Fuzz runs CloudFuzz[Lite?] for many open source repos and feeds OSV OpenSSF Vulnerability Format: https://github.com/google/osv#current-data-sources
The quantum technology ecosystem explained
Wikipedia (dbpedia, wikidata,) concept URIs:
Category:Quantum mechanics https://en.wikipedia.org/wiki/Category:Quantum_mechanics
Applications_of_quantum_mechanics https://en.wikipedia.org/wiki/Applications_of_quantum_mechan...
List of emerging technologies https://en.wikipedia.org/wiki/List_of_emerging_technologies may have inspiration for applications of currently-discovered quantum mechanical phenomena.
#Q12 is the Quantum K-12 talent pipeline.
QoS: Quantum-on-Silicon may very well scale; but how can we store un-collapsed output qubits?
Quantum tagging,
> QoS: Quantum-on-Silicon may very well scale; but how can we store un-collapsed output qubits?
Aren't all current programmable quantum processors essentially made of qubit memory cells capable of certain in-memory operations?
All current programmable quantum processors are made from unicorn hairs and fairy dust.
The current state of the art is trying to prove that the thing did something quantum. Programmability is at best swapping some wires around to change that something.
This is false. Quantum computers are not programmed like telephone switch boards. They're programmed typically by sending signal pulses (of RF, DC, or light) of the right shape at the right time to physical elements and they react. It really is programming.
Moreover, it's remarkably easy to see that the machines—universal gate-based machines—are doing something quantum. What's not easy to see is if they'll ever break away from their scaling challenges and become useful, large scale machines.
Well, NMR has been doing that for quite some time but nobody considers it "computing". Realistically, you're encoding a problem within a physical system by perturbing its energy levels, letting them evolve, and then doing readout. This works because computation is universal.
Again, no. Maybe you're thinking about adiabatic quantum computation, something popularized by D-Wave. But most people in the industry don't call their machines "(universal) computers," but rather "quantum annealers."
This is not the same as universal, gate-based computation, which does take an actual program containing instructions, and executes those instructions more-or-less sequentially, just as any computer programmer would expect. The state of the system begins with what is essentially equivalent to a large array of 0's, and you manipulate it accordingly, without resorting to evolution as your primary computational means of marching forward. In fact, left alone, these computers are designed to remain static, like an ordinary computer. (However, they still couple with the environment and decohere, which is one of the major challenges we, as humanity, face in building a useful quantum computer.) This is what Rigetti, IBM, Google, HRL, Amazon, IonQ, ColdQuanta, etc. are doing. They use superconducting transmon qubits, ion qubits, neutral atom qubits, or silicon quantum dot qubits. (There are other companies and qubit technologies still.)
Hey, um, I think I do know what I'm talking about :). You can see more of this historical side effort, which didn't pan out for a number of reasons: https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_qua... A 7 qubit QC NMR was implemented in 2001.
All these systems are just evolutions of spin systems or other similar systems. The big difference with an NMR quantum computer is that it manipulates ensembles of spins.
The comment about universality of computing is that many physical processes can be used to compute things. I didn't say that qcs were universal computers.
Please try to read what I'm writing more carefully.
I know what NMR is. The systems are not evolutions of NMR. They're fundamentally different in their design, construction, and physics. An exchange-only silicon dot quantum computer is very different than an NMR quantum computer, which is very different than a neutral atom quantum computer.
These computers, these days, are by and large considered computers which execute programs. Perhaps in the 90s-00s that wasn't the case since physicists cared more then about the construction of a laboratory apparatus and less about turning it into a programmable device with program and data I/O.
I can't comment on what you know or don't know, but what you're writing is misleading and not accurate. The two points you've made (about what's considered computing and how modern quantum computers work) are what I refute.
I don't understand your point. Everything I said above is completely and totally technically correct from a physical point of view. You are ascribing to my statements meaning which I did not intend.
Modern trapped ion computers are doing the same underlying operations as NMR: you are using RF or other energy to perturb the energy states of underlying particles or other components, and reading out the results.
There was also a whole field called 'DNA computing'. You would call its practitioners biologists in a lab, but they were absolutely doing computing.
I'm intently curious what sort of common-sense explanation you'd ascribe to how one programs an ordinary classical computer. Would you describe a computer programmer as one who orchestrates a delicate movement of charge through an intricate arrangement of n–p–n bipolar transistors?
Quantum computers really can be instructed to do an abstract operation on an abstract quantity, just like I can on an ordinary computer. You tell quantum computers to add and multiply, just as you do in your favorite programming language. In the programming language Quil, which a couple quantum computers of different base technologies use, one can literally write down a matrix of numbers in a program to define a new mathematical function, and later use that function to manipulate the contents of the computer's RAM[1]. On top of this, you can have loops, boolean conditions, and all that jazz a programmer expects. You don't even need a physicist to do this; a completely physics-ignorant programmer could do this.
All this business about energy levels, evolution, etc. are distractions, just as the electrodynamics of a transistor are distractions from what it means to program a computer.
[1] I'm abusing the word "RAM" here, where I truly mean the state of your quantum register(s).
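Not actual Quil, but a minimal numpy sketch of the idea being described: a "new mathematical function" is just a unitary matrix, and running the program is applying matrices to the state vector (the square-root-of-NOT gate here is a standard example, not taken from the thread).

```python
import numpy as np

# A "program" is a sequence of unitary matrices applied to a state vector.
# Defining a gate by writing down its matrix is what Quil's DEFGATE allows.
SQRT_X = 0.5 * np.array([[1 + 1j, 1 - 1j],
                         [1 - 1j, 1 + 1j]])

state = np.array([1, 0], dtype=complex)  # |0>, the "zeroed RAM"
state = SQRT_X @ state
state = SQRT_X @ state                   # two sqrt-X gates compose to one X (NOT)

probabilities = np.abs(state) ** 2       # Born rule: |amplitude|^2
print(probabilities.round(6))            # ~[0, 1]: the qubit now reads 1
```

No physics of transmons or ions appears anywhere in this description, which is the point being made above.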
> one can literally write down a matrix of numbers in a program to define a new mathematical function, and later use that function to manipulate the contents of the computer's RAM[1].
This "Quantum Computing for Computer Scientists" video https://youtu.be/F_Riqjdh2oM explains classical and quantum operators as just matrices. What are other good references?
Quantum state: https://en.wikipedia.org/wiki/Quantum_state
Quantum logic; quantum logical operators: https://en.wikipedia.org/wiki/Quantum_logic
> All this business about energy levels, evolution, etc. are distractions, just as the electrodynamics of a transistor are distractions from what it means to program a computer.
But a classical simulator - like e.g. qiskit - for a quantum circuit/experiment/function must run the experiment very^very^very many times to even probabilistically approximate a sufficient quantum system; because of the combinatorial probabilistic explosion that results from adding just one more basis state.
What are the fundamental limitations of quantum simulators? Maybe it's possible.
Quantum simulator: https://en.wikipedia.org/wiki/Quantum_simulator
- [ ] Maybe Twistor theory has insight into a classical geometrical formulation that could be run on a non-QC?
Amplituhedron: https://en.wikipedia.org/wiki/Amplituhedron
[Photon] wave-particle constructive superpositions approximate which operators, which may form a neat topology like this:
- [ ] > A research question for a new school year:
> The classical logical operators form a neat topology. Should we expect there to be such symmetry and structure amongst the quantum operators as well? https://commons.m.wikimedia.org/wiki/File:Logical_connective...
Your assessment of how a quantum simulator works is not quite right. These simulators represent the entire probability distribution of basis states succinctly as an array. This array grows very large (exponentially) in the number of qubits.
A simulator only needs to run a computation once (which is multiplication of matrices in a tensor product space) and look at the resulting state. You don't need to run anything multiple times to approximate a quantum state.
The questions you're asking are the whole point of the field of quantum information science. On an ordinary quantum computer where quantum state will collapse to a basis state upon readout, indeed you might need to gather statistics to determine the answer to whatever you've asked your computer. However, "very many times" is mathematically bounded in some way for an ideal quantum computer. It's like saying "we need to do many^many^many^many comparisons to do quicksort". Well yes, but we have a relationship between the size of the input (N) and the average number of comparisons needed (N log N), which makes the algorithm feasible in practice. This is the same with quantum algorithms.
There are also different kinds of quantum algorithms. Some are more probabilistic in nature. Others, again in purely ideal circumstances, give you the right answer in one go.
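A sketch of what a state-vector simulator actually does, per the correction above: one matrix multiplication per gate, applied once, after which the whole distribution can be read directly off the array (no repeated runs needed). The Bell-state circuit here is a standard illustration, not one from the thread.

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1                       # |00>
state = np.kron(H, I) @ state      # Hadamard on qubit 0 (tensor product space)
state = CNOT @ state               # entangle: Bell state

# The simulator holds the full probability distribution explicitly;
# it grows as 2^n in the number of qubits, but one pass computes it.
print(np.abs(state) ** 2)          # [0.5 0.  0.  0.5]
```

The exponential cost is in the size of this array, not in any need to rerun the experiment many times.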
As a side note: It is very hard for me to read, understand, and respond to your comments. They seem like random buzzword soups and aren't very coherently put together, mixed with random links and references.
Re: error in simulators and actual QC hardware, which we do need for a reason: Quantum Error Correction # General_codes https://en.wikipedia.org/wiki/Quantum_error_correction#Gener...
How to best quantize reals into matrices (~= tensors)? https://en.wikipedia.org/wiki/Quantization_(signal_processin...
> Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A quantum error correcting code protects quantum information against errors of a limited form.
Here's "Quantum Algorithm Zoo" by Microsoft Quantum: https://quantumalgorithmzoo.org/
And "Timeline of quantum computing and communication" https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_...
I have a hard time with the idea that the outcome of the ultimate quantum simulation is a collapsed float.
Quantum Monte Carlo: https://en.wikipedia.org/wiki/Quantum_Monte_Carlo :
> Quantum Monte Carlo encompasses a large family of computational methods whose common aim is the study of complex quantum systems. One of the major goals of these approaches is to provide a reliable solution (or an accurate approximation) of the quantum many-body problem. [...] The difficulty is however that solving the Schrödinger equation requires the knowledge of the many-body wave function in the many-body Hilbert space, which typically has an exponentially large size in the number of particles. Its solution for a reasonably large number of particles is therefore typically impossible,
What sorts of independent states can or should we map onto error-corrected qubits in an approximating system?
Propagation of Uncertainty ... Numerical stability ... Chaotic convergence, ultimately, apparently: https://en.wikipedia.org/wiki/Propagation_of_uncertainty
"Quantum computing: A taxonomy, systematic review and future" (2022) https://doi.org/10.1002/spe.3039
"Multi-qubit quantum logic operations with ion-implanted donor spins in silicon" (2022) https://scholar.google.com/citations?view_op=view_citation&h... https://meetings.aps.org/Meeting/MAR22/Session/G39.1 (Veritasium video)
> Among semiconductor qubits, the electron and nuclear spins of donors in silicon play a special role for their conceptual simplicity (a 31P donor in silicon is similar to hydrogen in vacuum) and their exceptional coherence times [1] and 1-qubit gate fidelities [2]. Here I will present experimental progress on multi-qubit logic operations with donor spins, which point to several credible pathways for scalability using ion-implanted donors in MOS-compatible devices. The current state of the art is a hybrid electron-nuclear 3-qubit processor [3], where two 31P nuclear spin qubits are coupled to the same electron. The shared electron enables a geometric nuclear two-qubit CZ gate, which we perform with 99.37% average fidelity. NMR single-qubit gates reach fidelities up to 99.95%, and state preparation and measurement are performed with 98.95% fidelity. These three metrics show how close this system is to operating at fault-tolerance thresholds. Further, we entangle the two nuclei with the electron to prepare a 3-qubit GHZ state with 92.5% fidelity. Electron-nuclear entanglement unlocks the ability to connect nuclear qubits via the electrons, for instance using exchange interactions [4]. We have operated a weakly (~10 MHz) exchange-coupled 31P donor pair as a 2-qubit electron system, with native CROT gates performed by resonant microwaves. Gate fidelity benchmarks are underway and will be reported at the Meeting. On the engineering side, we have demonstrated the ability to implant single donors in silicon with confidence up to 99.85% [5]. This striking result identifies ion implantation as a scalable and accurate manufacturing strategy for spin-based quantum computers in silicon.
QoS: Quantum-on-Silicon
The survey article above just says "[Quantum] Output"? Is that different from registers? How long are those states coherent?
"Researchers store a quantum bit for a record-breaking 20 milliseconds" (2022) https://phys.org/news/2022-03-quantum-bit-fora-record-breaki...
> By managing to store a qubit in a crystal (a "memory") for 20 milliseconds, a team from the University of Geneva (UNIGE) has set a world record and taken a major step towards the development of long-distance quantum telecommunications networks.
What are repeaters, and what are [quantum] prepared states in re: registers and longer-term storage for non-collapsed (or just probabilistic?) qubit outputs?
Programming current quantum computers does not require swapping wires around. You can go to https://quantum-computing.ibm.com right now, drag some operations around in an editor, and have their quantum computer run those operations. No one is madly dashing around changing wires when you do that.
That was how they did it back in the old days though.
Q: "Ask HN: What's the Equivalent of 'Hello, World' for a Quantum Computer?" https://news.ycombinator.com/item?id=22707580 [ IBM Qiskit, Microsoft Q#, Google TFQ TensorFlow Quantum, Google Cirq ([NumFOCUS,] SymPy) ]
- https://www.tensorflow.org/quantum/tutorials/hello_many_worl...
A: Set a register to zero and see how many times it reads as zero: A) in the local software simulator; and B) with just one modern day qubit register.
Debugging with GDB
One absolutely essential companion to gdb is 'rr' [0]. It's a 'post mortem' debugger, but unlike a core file, you can replay the action, in reverse if necessary. You can reverse-continue, reverse-step, create watchpoints, run forward again, etc. It's absolutely amazing!
The best thing about time travel debugging is that you effectively get an instruction-perfect reproducer for the problem you're looking at.
Even if the problem only occurs one time in a thousand, if you have recorded it then you can review it in arbitrary detail as many times as you need. You can even send the recording to another developer for their feedback.
Probably the second best thing about time travel is watchpoints[0] + reverse-continue[1] - combining these two features, you can answer "why is that value there?" incredibly quickly.
Reverse execution + watchpoints is so powerful it can speed up the diagnosis of memory corruptions, race conditions, algorithmic bugs, etc from days/weeks to hours.
[0] https://sourceware.org/gdb/onlinedocs/gdb/Set-Watchpoints.ht...
[1] https://sourceware.org/gdb/onlinedocs/gdb/Reverse-Execution.... - GDB has an implementation of reverse debugging but larger projects like `rr` (https://rr-project.org/) or `udb` (https://undo.io/solutions/products/udb/ - which, disclaimer, I work on) exist to do this with higher performance on complex, real-world programs.
(edit: format links better)
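A minimal sketch of the record/replay workflow described above (the binary name is a stand-in, not from the thread):

```shell
# Record once; from then on the failure replays identically every time.
rr record ./myprogram

# Replay deterministically; rr drops you into a gdb session.
rr replay
# (gdb) continue                    # run forward to the crash
# (gdb) watch -l corrupted_value    # hardware watchpoint on its address
# (gdb) reverse-continue            # run backwards to whoever last wrote it
```

The `watch -l` + `reverse-continue` pair is the "why is that value there?" trick from the comment above.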
Time travel debugging https://en.wikipedia.org/wiki/Time_travel_debugging :
> Interactive debuggers include the ability to modify code and step forward based on updated information.[4] Reverse debugging tools allow users to step backwards in time through the steps that resulted in reaching a particular point in the program. Time traveling debuggers provide these features and also allow users to interact with the program, changing the history if desired, and watch how the program responds.[5]
https://github.com/rr-debugger/rr :
> System requirements: Linux kernel ≥ 3.11 is required (for PTRACE_SETSIGMASK).
> rr currently requires either:
> An Intel CPU with Nehalem (2010) or later microarchitecture. [OR] Certain AMD Zen or later processors
Is there an rr-like reverse time-travel debugging tool for ARM64/aarch64?
Are GUIs like Voltron and Ghidra helpful for gdb and/or rr-like traces?
rr works in principle on AArch64, though it requires an extremely recent CPU and the rest of the system userspace to be compiled correctly.
Any gdb front end can probably be made to work with rr. Our current project is (shameless plug) https://pernos.co/ which provides a super-advanced GUI for working with (x86) rr traces.
"Support ARM" #1373 https://github.com/rr-debugger/rr/issues/1373
rr (software) https://en.wikipedia.org/wiki/Rr_(debugging)
Record and replay debugging https://en.wikipedia.org/wiki/Record_and_replay_debugging
/? site:github.com inurl:awesome https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
Arpchat – Text your friends on the same network using just ARP
libp2p/rust-libp2p: https://github.com/libp2p/rust-libp2p
"chat with mdns does not work on the internet" https://github.com/libp2p/go-libp2p-examples/issues/64 ... mdns.go: https://github.com/libp2p/go-libp2p/blob/master/examples/cha...
'Quantum hair’ could resolve Hawking’s black hole paradox, say scientists
Does this mean we can finally see the (collapsed, 'classical photonic') quantum information of dinosaurs which is never destroyed; by recalling information embedded in or relayed through such spaces; which are induced by cosmic rays, neutron stars, and possibly accelerators?
... That we can finally see dinosaurs?
A Primer on Proxies
Is it ok if I don’t like the term “reverse proxy”?
I find it entirely confusing and non-intuitive. I put it up there with idiotic terms like “OTT” which AFAIK just means “connected to the internet”.
Has always bothered me also, but that's the industry term so we are kind of stuck with it.
Proxy components are officially called Intermediaries in the HTTP Semantics specification; see https://httpwg.org/http-core/draft-ietf-httpbis-semantics-la....
Intermediaries can have different purposes. The official alternative to reverse proxy is "gateway", which is unfortunately overloaded with other kinds of gateways in networking.
Naming things is hard. "Reverse proxy" isn't great, but all things considered it's unique enough to let folks discriminate the sort of HTTP proxying that is happening.
An HTTP reverse proxy forwards HTTP requests and adds e.g. X-Forwarded-For and X-Forwarded-Host headers.
https://www.nginx.com/resources/wiki/start/topics/examples/f... :
X-Forwarded-For: 12.34.56.78, 23.45.67.89
X-Real-IP: 12.34.56.78
X-Forwarded-Host: example.com
X-Forwarded-Proto: https
TIL from the nginx docs that there's a standardized way to forward HTTP without the X- prefix on the unregistered headers: Forwarded: for=12.34.56.78;host=example.com;proto=https, for=23.45.67.89
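The `Forwarded` header (RFC 7239) is easy to pick apart; a simplified Python sketch (it ignores quoted strings and obfuscated identifiers, which the RFC also allows):

```python
def parse_forwarded(value):
    """Parse a Forwarded header (RFC 7239) into a list of per-hop dicts.

    Simplified: no quoted-string or obfuscated-identifier handling.
    """
    elements = []
    for element in value.split(","):       # one element per proxy hop
        pairs = {}
        for pair in element.split(";"):    # key=value pairs within a hop
            key, _, val = pair.strip().partition("=")
            pairs[key.lower()] = val.strip('"')
        elements.append(pairs)
    return elements

hops = parse_forwarded(
    "for=12.34.56.78;host=example.com;proto=https, for=23.45.67.89")
print(hops[0]["for"])   # 12.34.56.78 (the original client)
print(hops[1]["for"])   # 23.45.67.89 (added by the next proxy)
```

Each downstream proxy appends its own element, so the chain of hops is preserved in order, just like the comma-separated X-Forwarded-For list.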
What is the difference between a reverse proxy and a load balancer? k8s calls this "Ingress", and there are multiple "Ingress API" implementations, which essentially must reload the upstream server list on SIGHUP. https://kubernetes.io/docs/concepts/services-networking/ingr...
List of k8s Ingress Controllers: https://kubernetes.io/docs/concepts/services-networking/ingr...
> What is the difference between a reverse proxy and a load balancer?
A reverse proxy may or may not load-balance requests. For example, in a sidecar configuration it can just terminate TLS, provide telemetry, etc., and forward everything to a local port.
A [load-balancing] reverse proxy can also keep WAF rules in RAM for processing requests and responses. WAF: Web Application Firewall (e.g. the OWASP CRS ruleset, the Cloudflare ruleset).
Methods for delegating HTTP requests to another application, each with per-message overhead and buffering that inevitably needs tuning:
- Layer 2: MAC, on a local segment
- Layer 3: IP
- Layer 4: TCP/UDP ports
- Layer 7: parse HTTP and forward over network sockets or file sockets
- Layer 7, in-process: defy separation of concerns and least privilege and run the (e.g. non-blocking Lua) app within the webserver
- Layer 7+: container service mesh Ingress API
e.g. FastCGI can use file sockets, which avoids additional TCP overhead but doesn't really scale across hosts, because Unix sockets don't work over network filesystems.
(ASGI is the asynchronous successor to WSGI; both specify $ENVIRONMENT_VARIABLE names as an interface contract in order to decouple web [[reverse] proxy] servers from web applications.)
Fundamentally, which variables passed in e.g. the os.environ dict, like $REMOTE_USER or $SSL_CLIENT_CERT_*, should downstream web applications simply trust as valid strings, and over what network path?
TLS re-termination.
Non-root [web] servers can't bind ports below 1024, so they listen on a high port, which e.g. iptables or nftables (or eBPF) can easily port-forward to; though only if rewriting URLs within potentially-signed assets within HTTP messages and HTTP/3 UDP streams isn't necessary.
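For example, forwarding the privileged port to an unprivileged one (assuming the server listens on 8080, and, for nftables, that an `ip nat` table with a prerouting chain already exists):

```shell
# iptables: redirect privileged port 80 to the unprivileged 8080
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

# nftables equivalent (requires an existing ip nat prerouting chain)
nft add rule ip nat prerouting tcp dport 80 redirect to :8080
```

This rewrites only packet headers, which is why it breaks down as soon as URLs inside message bodies would need rewriting too.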
Ask HN: Tools to generate coverage of user documentation for code
Does anyone know of tools/techniques for generating a coverage report of user documentation over source code? In other words, I'd like to be able to automatically determine when there is documentation missing for an implemented feature or if there is documentation for a feature that was removed.
I'm asking because I've been working on a tool for many years and it's grown large enough that I have a hard time keeping track of what all the parts do and what has and has not been documented. What I'm thinking I'd like to do is add a tag to a piece of source code (C++) and put the corresponding tag in the user docs (RST). Then, during the build, I'd like a report of untagged code, code/docs with matching tags, and code/docs with unmatched tags.
I also feel like there should be at least two levels of documentation to be tracked: 1) the overall feature and 2) fine-details about how the feature behaves under different circumstances. For example, a feature that interacted with the network would have an overall description and some extra notes about how timeouts/retries are handled.
Does anyone have any experience with such a thing or have a similar interest?
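A sketch of the tag-matching idea, under entirely hypothetical conventions (a `// @doc: <tag>` comment in the C++ sources and a `.. doc-tag:: <tag>` marker in the RST docs; neither is an existing tool or directive):

```python
import re
from pathlib import Path

# Hypothetical tag conventions, not an existing tool:
CODE_TAG = re.compile(r"//\s*@doc:\s*([\w.-]+)")       # in *.cpp sources
DOCS_TAG = re.compile(r"\.\.\s+doc-tag::\s*([\w.-]+)")  # in *.rst docs

def collect(root, pattern, glob):
    """Gather all tags matching `pattern` under `root`."""
    tags = set()
    for path in Path(root).rglob(glob):
        tags |= set(pattern.findall(path.read_text()))
    return tags

def coverage_report(src_root, docs_root):
    code = collect(src_root, CODE_TAG, "*.cpp")
    docs = collect(docs_root, DOCS_TAG, "*.rst")
    return {"documented": code & docs,
            "undocumented": code - docs,  # feature with no docs
            "stale": docs - code}         # docs for a removed feature
```

The build could then fail (or warn) when "undocumented" or "stale" is non-empty, which is essentially a documentation analogue of a code-coverage gate.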
"[Python-Dev] Should set objects maintain insertion order too?" https://mail.python.org/archives/list/python-dev@python.org/... :
> Are there URIs for Big-O notation that could be added as e.g. structured attributes in docstrings or annotations?
Looks like docutils supports bibliographic fields and a `metadata` directive: https://docutils.sourceforge.io/docs/ref/rst/directives.html...
RST field lists: https://docutils.sourceforge.io/docs/ref/rst/restructuredtex...
`sphinx-apidoc -M`: https://www.sphinx-doc.org/en/master/man/sphinx-apidoc.html
sphinx-contrib/apidoc re-runs sphinx-apidoc on every build to generate the doc stubs: https://github.com/sphinx-contrib/apidoc
awesome-sphinxdoc: https://github.com/yoloseem/awesome-sphinxdoc
Requirements traceability: https://en.wikipedia.org/wiki/Requirements_traceability
Test coverage: https://en.wikipedia.org/wiki/Code_coverage
...
Someday I'd like to be able to express RDF triples in RST; wherein the preceding heading and/or a specific attribute specify the subject URI and then attr/value pairs are predicate/objects. IDK if that's even necessary for this application where you'd have the dotted-path of the __doc__ string to imply the subject URI?
From https://twitter.com/westurner/status/1384081580218408962 :
How to express #LinkedData in
- [ ] LaTeX
- [x] HTML: RDFa, JSON-LD, Microdata
- [ ] YAML: YAML-LD
- [ ] Markdown: MyST-Markdown for executablebooks/jupyter-book (Sphinx roles and directives)
- [ ] reStructuredText (docutils)
- [ ] SCH: SEON OWL ontology could grow a "Documentation"/"Docstring" rdfs:Class/owl:Class and requisite domains on possibly already covering properties to describe the URI-traceable relations between the artifacts: https://github.com/sealuzh/onts-seon :
> SEON consists of multiple ontologies, for example to describe stakeholders, activities, artifacts, and the relations among all of them. The ultimate goal is to facilitate the implementation of tools that help software engineers to manage software systems over their entire life-cycle.
jupyter-book (Sphinx (docutils), myst-parser, jupyter notebooks) > #879 "Generate OpenGraph metadata for social previews and cards" https://github.com/executablebooks/jupyter-book/issues/879#i...
Show HN: `Git add -p` with multiple buckets
There are probably a couple of good ways to avoid having to split commits with `git add -p`, e.g. with jujutsu (`jj`): https://github.com/martinvonz/jj/blob/main/docs/git-comparis...
If CI isn't running tests for every PR, `git add -p` can create pretty patches that fail when attempting to `git bisect` later.
> Comprehensive support for rewriting history: Besides the usual rebase command, there's `jj describe` for editing the description (commit message) of an arbitrary commit. There's also `jj edit`, which lets you edit the changes in a commit without checking it out. To split a commit into two, use `jj split`. You can even move part of the changes in a commit to any other commit using `jj move`.
Beavers back in London after 400-year absence
Here in the US midwest beavers are shot and their dams dynamited wherever they are found.
The majority of the US used to be endless chains of mosquito-infested beaver ponds, the entire length of every stream and river. Nearly uninhabitable by people.
The near-extirpation of the beaver is what made the US agriculturally useful. There's not a single landowner I know who wants a random pond flooding their field, road, or house.
For better or worse, the age of the beaver is definitely over. At least in the US.
In the Pacific Northwest, beavers increase resilience to wildfire and drought. They do so by creating natural fire breaks and by increasing groundwater storage. They also do this in a way that also creates fish habitat and deeper pools that coldwater fish like trout and salmon can take advantage of to survive lower water years.
They're really ecologically important, and while some may complain that they fell trees, that's actually very beneficial in the West, where forest fires burn far more trees than any beavers could hope to use for their structures. This applies not just to the Northwest but to other drought- and fire-prone western regions (Oregon, Colorado).
TL;DR: Our elimination of beavers is exacerbating the problems of drought and fire in the American West.
Good video from PBS's Terra series: https://www.youtube.com/watch?v=6lT5W32xRN4
"Leave It to Beavers" (2018) PBS Nature [53m] https://g.co/kgs/5yA9R5 :
> A growing number of scientists, conservationists and grass-roots environmentalists have come to regard beavers as overlooked tools when it comes to reversing the disastrous effects of global warming and world-wide water shortages. Once valued for their fur or hunted as pests, these industrious rodents are seen in a new light through the eyes of this novel assembly of beaver enthusiasts and “employers” who reveal the ways in which the presence of beavers can transform and revive landscapes. Using their skills as natural builders and brilliant hydro-engineers, beavers are being recruited to accomplish everything from finding water in a bone-dry desert to recharging water tables and coaxing life back into damaged lands.
Apparently beavers are attracted to the sound of running water.
Show HN: A Graphviz Implementation in Rust
Author of a web-based GraphViz editor [1] here. We're currently using viz.js [2], which is the original GraphViz compiled to WASM. Since viz.js is unmaintained, we're looking for a replacement.
This library looks really promising! As it's written in rust, it should be possible to port it to WASM. Are there plans in that direction?
Is it planned to be a drop-in replacement for GraphViz (or the dot engine)?
[1]: https://edotor.net [2]: https://github.com/mdaines/viz.js
awesome-graphviz #language-bindings, https://github.com/CodeFreezr/awesome-graphviz#language-bind...
awesome-network-analysis #javascript lists a few libraries. https://github.com/briatte/awesome-network-analysis#javascri...
FWIW, JupyterLite builds WASM as well
How to record data for reinforcement learning agent from any Linux game (2020)
The title is misleading: it promises recording data from *any* game, yet it requires UDP telemetry to be implemented in the game itself, and uses F1 as the example game.
I was expecting something with a) strace to track IO or b) gdb to monitor specific memory regions which contain game state or c) some eBPF stuff (to track IO or memory aka a+b combined).
"Sysdig vs DTrace vs Strace: A technical discussion." https://sysdig.com/blog/sysdig-vs-dtrace-vs-strace-a-technic...
There are sysdig chisels that reference gdb.
https://github.com/openai/retro :
> Gym Retro lets you turn classic video games into Gym environments for reinforcement learning and comes with integrations for ~1000 games. It uses various emulators that support the Libretro API, making it fairly easy to add new emulators.
Podman can transfer container images without a registry
I wish containers hadn't created the abstractions of a registry and an image, instead of exposing it all as tar files (which is what it kind of is under the covers) served over a glorified file server. This leads people to assume there's some kind of magic happening and that the entire process is very arcane, when in reality it's just unpacking tar files.
If you want to DIY a container with unix tools, this should help: https://containers.gitbook.io/build-containers-the-hard-way/
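To make the "just tar files" point concrete, here's a sketch that reads the layer list out of a `docker save`-style image tar, whose `manifest.json` names the config and the ordered layer archives:

```python
import json
import tarfile

def list_layers(image_tar):
    """List the layer tarballs inside a `docker save`-style image tar.

    An image is nested tar files plus JSON metadata: manifest.json
    names the config blob and the ordered layer archives.
    """
    with tarfile.open(image_tar) as tar:
        manifest = json.load(tar.extractfile("manifest.json"))
    return [layer for entry in manifest for layer in entry["Layers"]]
```

Unpacking those layer tars in order (honoring whiteout files) reproduces the container filesystem; there's no further magic.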
"Signing Images with Docker Content Trust" explains how cryptographic container image signatures work w/ Docker Notary (TUF) https://docs.docker.com/engine/security/trust/#signing-image...
The TUF spec (and PyPI TUF PEPs) explains why a tar over https (with optional DNSSEC, a CA cert bundle, CRL, OCSP,) isn't sufficient for secure software distribution. "#ZeroTrust DevOps"; #DevSecOps
What's the favorite package format with content signatures, key distribution, a keyring of trusted (authorized) keys, and a cryptographically-signed manifest of per-file hashes, permissions, and extended file attributes? FWIW, ZIP at least does a CRC32.
We now have the Linux Foundation CNCF sigstore for any artifact, including OCI container images.
W3C ld-proofs is a newer web standard that unfortunately all package managers haven't yet migrated to. https://news.ycombinator.com/item?id=29355786
Because ld-proofs is RDF, it works in JSON-LD and you could merge the entire SBOM [1] and e.g. CodeMeta [2] Linked Data metadata for all of the standardized-metadata-documented components in a stack.
[1] https://github.com/google/osv/issues/55
[2] https://github.com/codemeta/codemeta
Isn't this reinventing the wheel somehow? Linux distros have been using tar.gz over HTTP (not S!) and relying on GPG for years.
But most Linux distros are not that great either. They don't implement TUF and are not safe against some vectors prevented by that. It's OK to start anew when container distribution poses a different problem anyway.
The linked page doesn't explain anything at all. It says some incredibly obvious things (remotes can be compromised and you should be prepared for that) and then goes on to say the subject fixes all that, but there's actually no explanation of how it does that or what it does to get there. It's a marketing piece, essentially.
From a comment about DNSSEC and software supply chain security a couple weeks ago, in a thread about Android packages: https://news.ycombinator.com/item?id=27700513 :
>> [...] Sigstore is a free and open Linux Foundation service for asset signatures: https://sigstore.dev/what_is_sigstore/
>> The TUF Overview explains some of the risks of asset signature systems; key compromise, there's one key for everything that we all share and can't log the revocation of in a CT (Certificate Transparency) log distributed like a DLT, https://theupdateframework.io/overview/
>> Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency
>> Yeah, there's a channel to secure there at that layer of the software supply chain as well.
>> "PEP 480 -- Surviving a Compromise of PyPI: End-to-end signing of packages" (2014-) https://www.python.org/dev/peps/pep-0480/
>>> Proposed is an extension to PEP 458 that adds support for end-to-end signing and the maximum security model. End-to-end signing allows both PyPI and developers to sign for the distributions that are downloaded by clients. The minimum security model proposed by PEP 458 supports continuous delivery of distributions (because they are signed by online keys), but that model does not protect distributions in the event that PyPI is compromised. In the minimum security model, attackers who have compromised the signing keys stored on PyPI Infrastructure may sign for malicious distributions. The maximum security model, described in this PEP, retains the benefits of PEP 458 (e.g., immediate availability of distributions that are uploaded to PyPI), but additionally ensures that end-users are not at risk of installing forged software if PyPI is compromised.
>> One W3C Linked Data way to handle https://schema.org/SoftwareApplication ( https://codemeta.github.io/user-guide/ ) cryptographic signatures of a JSON-LD manifest with per-file and whole package hashes would be with e.g. W3C ld-signatures/ld-proofs and W3C DID (Decentralized Identifiers) or x.509 certs in a CT log.
> FWIU, the Fuchsia team is building package signing on top of TUF.
W3C Web Bundles + Linked Data for the SBOM and metadata could be a good solution for software supply chain security in general: https://news.ycombinator.com/item?id=29296573 :
>> Web Bundles, more formally known as Bundled HTTP Exchanges, are part of the Web Packaging proposal.
>> HTTP resources in a Web Bundle are indexed by request URLs, and can optionally come with signatures that vouch for the resources. Signatures allow browsers to understand and verify where each resource came from, and treats each as coming from its true origin. This is similar to how Signed HTTP Exchanges, a feature for signing a single HTTP resource, are handled.
How do these newer potential solutions compare to distributing packages and GPG-signed hash manifests over HTTP, with GPG public keys retrieved over HTTPS (HKP)? (Maybe with keys pinned in the GPG source codes for common key servers? Or why not?)
Are DNS (DNSSEC, DoH, DoT downgrade attacks), CA compromise (SPOF), and x.509 cert forgery still the significant potential points of failure? What of that can e.g. Web3 solve for?
Wut? I just make websites man.
I don't want to be dismissive, but I hate deploying my applications for this reason. I'm an application developer. I'm not averse to infrastructure as code, and containerization, and I'm happy to do ops for my preferred stack. But I can't learn all this stuff too.
Conveniently, Docker has abstracted 99% of what she said away from you.
Does `podman trust` also work yet? Shell commands from the first Docker docs link above:
docker trust key generate jeff
docker trust key load key.pem --name jeff
docker trust signer add --key cert.pem jeff registry.example.com/admin/demo
docker trust sign registry.example.com/admin/demo:1
export DOCKER_CONTENT_TRUST=1
docker push registry.example.com/admin/demo:1
docker trust inspect --pretty registry.example.com/admin/demo:1
docker trust revoke registry.example.com/admin/demo:1
Light exposure during sleep impairs cardiometabolic function
Research on Blue light and sleep: https://justgetflux.com/research.html
https://en.wikipedia.org/wiki/Melanopsin :
> Melanopsin photoreceptors are sensitive to a range of wavelengths and reach peak light absorption at blue light wavelengths around 480 nanometers.[30] Other wavelengths of light activate the melanopsin signaling system with decreasing efficiency as they move away from the optimum 480 nm. For example, shorter wavelengths around 445 nm (closer to violet in the visible spectrum) are half as effective for melanopsin photoreceptor stimulation as light at 480 nm.[30]
Under Melanopsin > infobox > "Biological process", e.g. "entrainment of circadian clock by photoperiod" and "regulation of circadian rhythm" are listed.
Show HN: Instantly create a GitHub repository to take screenshots of a web page
I built a GitHub repository template which automates the process of configuring a new repository to take web page screenshots using GitHub Actions.
You can try this out at https://github.com/simonw/shot-scraper-template
Use the https://github.com/simonw/shot-scraper-template/generate interface to create a new repository using that template, and paste the URL that you want to take screenshots of in as the "description" field.
The new repository will then configure itself using GitHub Actions, take the screenshot and save it back to the repo!
This but with automated visual regression testing (e.g. testing that screenshots are the same) would be amazing. Anyone got any ideas how this might work with shot-scraper?
You could absolutely get this working with GitHub Actions with a bit of creativity.
I've been playing around with my own image-diff tool for this kind of thing, but it's not yet in a decent state: https://github.com/simonw/image-diff - there are other, better options out there such as https://github.com/mapbox/pixelmatch
Needle is an older system that did this using Selenium - updating that to work with Playwright (or Playwight via shot-scraper) would be an interesting project: https://github.com/python-needle/needle
Awesome Visual Regression Testing > lists quite a few tools and online services: https://github.com/mojoaxel/awesome-regression-testing#tools...
"visual-regression": https://github.com/topics/visual-regression
Cypress.io can run in a CI job, does Time Travel, works with DevTools debugger, can take screenshots and [headless] video, and it looks like there's a visual regression testing thing for it: https://github.com/mjhea0/cypress-visual-regression https://docs.cypress.io/guides/overview/why-cypress#Features
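The core pixel-diff idea behind tools like pixelmatch fits in a few lines. This is a minimal sketch in plain Python (images here are just nested lists of RGB tuples; real tools decode PNGs, use perceptual color distance, and tolerate anti-aliasing):

```python
def diff_pixels(img_a, img_b, threshold=0):
    """Count pixels whose max per-channel difference exceeds `threshold`.

    `img_a` and `img_b` are equal-sized 2D lists of (r, g, b) tuples.
    Returns the number of mismatching pixels; 0 means "visually identical"
    under this (very naive) metric.
    """
    if len(img_a) != len(img_b) or len(img_a[0]) != len(img_b[0]):
        raise ValueError("images must have the same dimensions")
    mismatches = 0
    for row_a, row_b in zip(img_a, img_b):
        for px_a, px_b in zip(row_a, row_b):
            if max(abs(a - b) for a, b in zip(px_a, px_b)) > threshold:
                mismatches += 1
    return mismatches

# Two 2x2 "screenshots" that differ in one slightly-off-white pixel:
base = [[(255, 255, 255), (0, 0, 0)],
        [(0, 0, 0), (255, 255, 255)]]
new = [[(255, 255, 255), (0, 0, 0)],
      [(0, 0, 0), (250, 250, 250)]]
print(diff_pixels(base, new, threshold=0))   # 1
print(diff_pixels(base, new, threshold=10))  # 0
```

A CI job could fail the build when the mismatch count exceeds some budget, which is roughly what the visual-regression tools listed above do.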
Lawn mowing frequency affects bee abundance and diversity (2018)
The previous owner of my house was maintaining a picture-perfect green. With the kids and the jobs, I didn't want to follow that practice. Too much time for a soil not made for this, lots of fertilizer, various chemicals...
I fix the dead patches every spring and autumn with clover seeds and compost made from grass clippings, egg shells, and vegetable/fruit peels. Zero fertilizer or weed killers; I remove the weeds I don't like by hand once in a while. I rake the garden twice a year to keep moss under control.
5 years later, our garden no longer looks like a green, but a prairie with the usual local plants. For the past 2 years, poppies have been colonizing the most sun-exposed parts, with spectacular blooms we eagerly await. Worms, larvae, slugs and snails are plentiful, so birds are always at work foraging. Whatever dies during the hottest weeks regenerates in autumn. Plenty of bees forage the clover and bindweed flowers.
It looks nice, it's alive, it's much less work and money to maintain, and it's self-regenerating.
That's awesome - natural and native ecosystems are what we should be aiming for, not sterile lawns. It's better for the environment and for biodiversity, lower maintenance, and you get the benefits of an ecosystem, like surprise blooms.
I don't know if you're aware of this (you probably are but I'm posting it for readers anyway) but there's a movement for doing this in many aspects of our nature management called permaculture. It's pretty cool!
Rewilding (conservation biology) https://en.wikipedia.org/wiki/Rewilding_(conservation_biolog...
Climate change mitigation effects of rewilding > Carbon sinks and removal: https://en.wikipedia.org/wiki/Climate_change_mitigation#Carb...
Permaculture : https://en.wikipedia.org/wiki/Permaculture
It would also cut the number of interrupted Zoom meetings due to noise pollution in half.
Another reason for textual chat meetings with URLs, #hashTags, @atTags, an agenda at the top, headings, and a prepared, accessible transcript!
I’m not sure why this comment is grey. I know this wouldn’t necessarily work for everyone, but hey voice/video meetings don’t work well for some of us either. Folks, do what works for you and your team. Not everyone has to do the same thing.
Lasers could cut lifespan of nuclear waste from a million years to 30 minutes
I'm a practicing nuclear physicist and do work on gamma ray interactions with actinides. It's possible I'm missing something important, but I think this article is an utter mess and was clearly written by someone with absolutely no understanding of what they are saying. Attempting to learn anything from this article will be counterproductive.
Here is an explanation of chirped pulse amplification: https://www.rp-photonics.com/chirped_pulse_amplification.htm... This technique is for producing optical photons, which will have less than 10 eV of energy. In the same way that you can't focus the sun's light to a point that gets hotter than the surface of the sun (it would violate the second law of thermodynamics), it isn't obvious how low-energy laser pulses can be useful for this. The article offers no explanation whatsoever. Maybe the electric field across the nucleus can be made strong enough to induce scission?
In general, if you want to interact with the nucleus you need photons on the order of 1 MeV or more, whose wavelengths are comparable to the size of the nucleus. These are gamma rays, which are not optical photons. There are ways to boost optical photons to those energies (like inverse Compton scattering), but the article says nothing about that either. I would think inverse Compton scattering of a chirped pulse off an electron packet in an accelerator would completely destroy the sharp timing and reflect the distribution of the electrons instead.
Mourou's Nobel lecture [1] has a brief explanation of the envisioned process. The idea is to use intense laser pulses to accelerate deuterium ions into a tritium target. The deuterium-tritium fusion then generates high-energy neutrons that can transmute nuclei.
This is probably not impossible, but I have doubts about whether it could be implemented efficiently at scale.
[1] https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91....
You can do that with a fusor. It does direct acceleration without laser-generation losses.
uranium 235 and plutonium 239, have a half life of 24,000 years
- U235 has a half life of 700,000,000 years.
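The decay arithmetic makes the mix-up easy to check: 24,000 years is roughly the half-life of Pu-239 (about 24,100 years), while U-235's is about 704 million years. The remaining fraction follows N(t) = N0 * 0.5^(t / T_half):

```python
def remaining_fraction(years, half_life_years):
    """Fraction of a radionuclide left after `years`, given its half-life."""
    return 0.5 ** (years / half_life_years)

PU239_HALF_LIFE = 24_100        # years (approximate)
U235_HALF_LIFE = 704_000_000    # years (approximate)

# After 24,100 years, half of the Pu-239 is gone...
print(remaining_fraction(24_100, PU239_HALF_LIFE))  # 0.5
# ...but essentially all of the U-235 remains:
print(remaining_fraction(24_100, U235_HALF_LIFE))   # ~0.99998
```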
If he gets pulses 10,000 times faster, he says he can modify waste on an atomic level.
The article says nothing about how he hopes to gain 4 orders of magnitude. Is there some kind of Moore's law in the "speed" of these pulses?
I wasted my time reading this article; it explains nothing.
It's not a novel idea:
"Laser induced nuclear waste transmutation" https://arxiv.org/abs/1603.08737
"Transmutation prospect of long-lived nuclear waste induced by high-charge electron beam from laser plasma accelerator" https://arxiv.org/abs/1705.05770
"Fusion Driven Transmutation of Transuranics in a Molten Salt" https://arxiv.org/abs/2109.08741
"Generic method to assess transmutation feasibility for nuclear waste treatment and application to irradiated graphite" https://arxiv.org/abs/2201.02623
It never really occurred to me that transmutation is a reputable field of scientific study nowadays.
A sub-skill of Alchemy :-)
Show HN: Hubfs – File System for GitHub
HUBFS is a file system for GitHub and Git. Git repositories and their contents are represented as regular directories and files and are accessible by any application, without the application having any knowledge that it is really accessing a remote Git repository. The repositories are writable and allow editing files and running build operations.
Or you could, you know, clone the repository into a local working tree.
That works well for small repos or a few repos, but if you want to find all cc files across all release branches in your entire company and check for some exploit, it is helpful to have a VFS. It also means you could support N SCMs through one API: you just write a new VFS.
Isn't there already a good way to push computation closer to the data?
GmailFS and pyfilesystem (userspace FUSE) and rclone are neat as well.
https://stackoverflow.com/questions/1960799/how-to-use-git-a... explains about the `git push` step that git-remote-dropbox enables: https://github.com/anishathalye/git-remote-dropbox
GitHub also has a code search now: https://cs.github.com
Needing to tie into a specific API (like codesearch) couples you to the specific storage backend (Github). If you build your software to operate on a POSIX-y file system, you can support anything that shows up as a file system. For example: A local working tree of files, an NFS share, or now a remote git repository.
Running the code where the data already is saves network transfer: with data locality, you don't need to download each file before grepping.
Locality_of_reference#Matrix_multiplication explains how the cache miss penalty applies to optimizing e.g. matrix multiplication: https://en.wikipedia.org/wiki/Locality_of_reference#Matrix_m...
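The loop-ordering trick from that article can be sketched: with row-major arrays in a compiled language, iterating over B's transpose turns the strided column walk into contiguous reads. In pure Python the cache effect is mostly hidden by interpreter overhead, so this only illustrates the access pattern, not the speedup:

```python
def matmul_naive(A, B):
    """C[i][j] = sum_k A[i][k] * B[k][j]; walks B column-wise (strided)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matmul_transposed(A, B):
    """Same product, but reads rows of B's transpose (contiguous in row-major)."""
    BT = [list(col) for col in zip(*B)]  # transpose once, O(m*p)
    return [[sum(a * b for a, b in zip(row, col)) for col in BT] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_naive(A, B))       # [[19, 22], [43, 50]]
print(matmul_transposed(A, B))  # same result, cache-friendlier access pattern
```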
Physicists steer chemical reactions by magnetic fields and quantum interference
Perhaps also practical for intersecting applications: "These Superabsorbent Batteries Charge Faster the Larger They Get: In the lab, the prototype quantum batteries are charged with light" https://spectrum.ieee.org/quantum-battery :
> Previous work found that matter could act collectively in surprising ways due to quantum physics. For example, in "superradiance," a group of atoms charged up with energy can release a far more intense pulse of light than they could individually.
> In the past decade, researchers have also discovered the reverse of superradiance was possible—superabsorption, with atoms cooperating to display enhanced absorption. However, until now superabsorption was seen for only small numbers of atoms.
[…]
> The new device consists of a reflective waferlike microcavity enclosing a semiconducting organic Lumogen F orange dye, which the researchers charged with energy using a laser. Ultrafast detectors helped the team monitor the way in which this dye charged and stored light energy at femtosecond resolution. As the microcavity size and the number of dye molecules increased, the charging time decreased.
Could a combo PV (photovoltaic) + storage + full-spectrum (e.g. LED) product for outdoor and/or indoor applications be created with superabsorption and superradiance?
Maybe also wrap the thing in thin film (and/or graphene sheets that throw off electrons) to harvest energy off the thermal gradient around the unit; and shape it like self-cleaning petals.
White noise improves learning by modulating activity in midbrain regions (2014)
White noise: https://en.wikipedia.org/wiki/White_noise
https://news.ycombinator.com/item?id=28402424 :
> https://en.wikipedia.org/wiki/Neural_oscillation#Overview says brainwaves are 1-150 Hz? IIRC compassion is achievable on a bass guitar.
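White noise in the signal sense is just uniformly random samples (a flat power spectrum). A sketch of generating a short clip as a WAV file with only the Python standard library (the sample rate and amplitude here are arbitrary choices):

```python
import random
import struct
import wave
from io import BytesIO

def white_noise_wav(seconds=1.0, rate=44100, amplitude=0.3, seed=0):
    """Return the bytes of a 16-bit mono WAV file of uniform white noise."""
    rng = random.Random(seed)
    n = int(seconds * rate)
    frames = b"".join(
        struct.pack("<h", int(rng.uniform(-1, 1) * amplitude * 32767))
        for _ in range(n)
    )
    buf = BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)
    return buf.getvalue()

data = white_noise_wav(seconds=0.5)
print(len(data))  # ~44 KB: one 16-bit sample per frame, plus the RIFF header
```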
Doodling improves memory retention / learning, too. IDK how much difference the content of a doodle makes? Hypothesis: Additional "cognitive landmarky" content in the doodle or received waveforms would increase retention up to a limit.
Regarding doodling, it seems that any active productive study method is at least somewhat beneficial. I've found this in my own personal experiments. For example, testing myself with cloze deletions, creating Anki cards, generating mind maps, Cornell notes, inline annotation/marginalia, doodling, generating questions, generating mnemonics, mind palaces, etc.
An interesting one is reading and reciting out loud: https://www.tandfonline.com/doi/full/10.1080/09658211.2017.1...
Another non-intuitive method that is helping me a lot is pacing around my house slowly while I read. It goes to show that cognition is an embodied phenomenon. It's unintuitive when intelligence is viewed from the traditional split mind/body paradigm but just take a look at an image of our nervous systems. Those wires to and from our brain and guts wrap around every part of us.
Memory and retention in learning > Methods of improving memory and retention https://en.wikipedia.org/wiki/Memory_and_retention_in_learni...
Mnemosyne/Anki; (FreeMind, yEd, Gephi, AtomSpace as-moses, RDF bnodes, ONNX) mind maps; reStructuredText, MyST-Markdown; todo.txt (TaskWarrior); StructuredProcrastination; ActivityWatch; Dogsheep, *-to-sqlite; awesome-quantified-self: https://github.com/woop/awesome-quantified-self
But metrics for actual memory retention? I've heard of Tin Can (xAPI) with an LRS; nbgrader, Khan Academy exercises, and OpenBadges to demonstrate proficiency.
Awesome. Thanks for the resources. Coincidentally, I stumbled on this research today on the effect of exercise + meditation on word recall: https://hpp.tbzmed.ac.ir/PDF/hpp-9-314.pdf
"Episodic memory enhancement versus impairment is determined by contextual similarity across events" (2021) https://www.pnas.org/doi/full/10.1073/pnas.2101509118
From "How Our Environment Affects What We Remember" https://neurosciencenews.com/environment-memory-20165/amp/
Show HN: Prepform – AI and spaced-repetition to optimize learning
Hi, I'm Eric and I'm the founder and lead developer of Prepform.
A high-quality education helped me pursue my interests and achieve my goals. I started Prepform so students of all backgrounds have access to the same kind of education.
I grew up in Southern California, surrounded by dozens of SAT prep programs, and I swear I must have gone to all of them. Different programs followed different styles and techniques, but the strategy they shared was to create a study plan and review mistakes.
A study plan is:
- taking a diagnostic test,
- setting a target score,
- creating a study schedule,
- identifying mistakes, and finally
- reviewing those mistakes.
I wanted to take this structure and optimize it with machine learning, while accounting for elements of human learning and memory. I'm a big fan of SuperMemo, a memorization technique developed by Piotr Wozniak, where you review material just as you're about to forget it. Cognitive psychology tells us human forgetting follows a pattern, but Piotr quantified this behavior to identify the precise moment forgetting happens.
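Wozniak's SM-2 scheduling rule (the one Anki descends from) is compact enough to sketch. This is a minimal version; implementations differ on details such as whether the easiness factor also updates on failed recalls (this version leaves it unchanged on failure):

```python
def sm2(quality, repetitions, interval, easiness):
    """One step of the SuperMemo-2 scheduling algorithm.

    quality: 0-5 self-assessed recall grade.
    Returns (next_interval_days, repetitions, easiness).
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence tomorrow.
        return 1, 0, easiness
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * easiness)
    # The easiness factor drifts with the grade, floored at 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval, repetitions + 1, easiness

# A card answered perfectly three times: intervals grow 1 -> 6 -> 16 days.
interval, reps, ef = 0, 0, 2.5
for _ in range(3):
    interval, reps, ef = sm2(5, reps, interval, ef)
    print(interval, reps, ef)
```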
The goal was to build on his research with AI and tailor it to not only test prep but to the individual student, and make it the engine of the study plan.
The result is Blended Prep, which guides students to internalize knowledge rather than memorize material, and gives them the best chance to ace their next exam.
I'm so excited to share this with the HN community, and would love to know what you think. You can try it out at https://prepform.com. Thanks for reading.
https://news.ycombinator.com/item?id=28309645 https://westurner.github.io/hnlog/#comment-28309645 :
Re: Phonemic awareness and Phonological awareness,
> What are some of the more evidence-based (?) (early literacy,) reading curricula? OTOH: LETRS, Heggerty, PAL:
> Which traversals of a curriculum graph are optimal or sufficient?
> You can add https://schema.org/about and https://schema.org/educationalAlignment Linked Data to your [#OER] curriculum resources to increase discoverability, reusability.
https://news.ycombinator.com/item?id=24527589 https://westurner.github.io/hnlog/#comment-24527589 :
>> A bottom-up (topologically sorted) computer science curriculum (a depth-first traversal of a Thing graph) ontology would be a great teaching resource.
>> One could start with e.g. "Outline of Computer Science", add concept dependency edges, and then topologically (and alphabetically or chronologically) sort.
>> https://en.wikipedia.org/wiki/Outline_of_computer_science
>> There are many potential starting points and traversals toward specialization for such a curriculum graph of schema:Things/skos:Concepts with URIs.
> https://westurner.github.io/hnlog/ ... Ctrl-F "interview", "curriculum"
OpenBadges as Blockcerts for Q12 competencies
Why tensors? A beginner's perspective
I was happy to see that this article is actually talking about tensors, not just multidimensional arrays (which for some reasons are often called tensors by machine learning folks).
Same word, different contexts and meaning. In ML tensors are multidimensional arrays (and nothing more). Neither physicists nor ML researchers/developers are confused about what it means.
I'm pretty sure that a multidimensional array was the original implementation of tensors, and the fancier linear-forms formalism came later, in the twentieth century. Then the question was how these multidimensional arrays operate on vectors and how they transport around a manifold. (Source: the first book I read on general relativity in the 1970s had a lot of pages about multidimensional arrays, and also this good summary of the history: https://math.stackexchange.com/questions/2030558/what-is-the...)
(Sort of like how vectors kind of got going via a list of numbers and then they found the right axioms for vector spaces and then linear algebra shifted from a lot of computation to a sort of spare and elegant set of theorems on linearity).
AFAIU, matrices are categorically a subset of tensors where the product operator, at least, is not the tensor product but the dot product.
Dot product: https://en.wikipedia.org/wiki/Dot_product
Matrix multiplication > Dot product, bilinear form and inner product: https://en.wikipedia.org/wiki/Matrix_multiplication#Dot_prod...
> The dot product of two column vectors is the matrix product
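That identity (u . v equals the 1x1 matrix product of u as a row vector with v as a column vector) is easy to sanity-check in plain Python:

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def matmul(A, B):
    """Plain matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

u, v = [1, 2, 3], [4, 5, 6]
row = [u]               # 1x3: u as a row vector (u transposed)
col = [[x] for x in v]  # 3x1: v as a column vector
print(dot(u, v))        # 32
print(matmul(row, col)) # [[32]]: the same number, as a 1x1 matrix
```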
Tensor > Geometric objects https://en.wikipedia.org/wiki/Tensor :
> The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms.) This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes.[24] Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.[25][26]
Mathematicians are comfortable with the idea that a matrix can be over any set, in other words matrix is a mapping from the cartesian product of two intervals [0,N_i) to a given set S. Of course to define matrix multiplication you need at least a ring, but a matrix doesn’t have to define such an operation, and there are many alternative matrix products, for example you can define the Kronecker product with just a matrix over a group. Or no product at all for example a matrix over the set {Red,Yellow,Blue}.
Tensors require some algebraic structure, usually a vector space.
Booting ARM Linux the standard way
Fwiu, the magical path to launch Grub on an ARM board is:
/boot/efi/debian/grubaa64.efi
And that doesn't quite require Tow-Boot to do hw init? "DOC: How to boot into Grub" (in order to get a passphrase-protected graphical boot menu where you can modify kernel parameters like `rescue`): https://github.com/Tow-Boot/Tow-Boot/issues/110
Putting a file in /boot/efi/debian/ won't do anything at all unless the specific board you have already has platform firmware onboard to actually make that work. Arm EBBR-compliant server boards do this, for example; this won't work for the vast majority of ARM single-board computers, though.
PipeWire: A year in review and a look ahead
Can anyone explain to me, like I'm five, why this major transition from PulseAudio to PipeWire happened over the last year?
Pipewire was started by Wim Taymans, one of the gstreamer people. Originally it was supposed to allow video-streams in sandboxed environments like flatpak (and was known as "pulsevideo"), but then he noticed that it could be extended to do audio as well.
Since pulseaudio's architecture isn't well-suited to incorporate sandboxing, that project continued and it turned out that it could handle compatibility with both pulseaudio (the sound server for "consumerist" needs) and jack (the one for "pro audio" people where low latency is paramount).
You can even use programs like QJackCtl, which was written for Jack, to perform audio routing and stuff with pipewire!
So now, we have a sound server that
1. is much nicer to configure than pulseaudio
2. is compatible with sandboxing
3. is compatible with pulseaudio, jack and the older alsa APIs and features
4. is developed much more quickly than either of the others
See also https://fedoraproject.org/wiki/Changes/DefaultPipeWire
> 2. is compatible with sandboxing
"DOC,ENH: How to connect container pipewire, alsa, jack to host pipewire (#1612)" https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/16...
How does database indexing work? (2008)
https://en.wikipedia.org/wiki/Database_index
https://en.wikipedia-on-ipfs.org/wiki/Database_index
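The core trade-off in that article can be sketched with the standard library: a full table scan costs O(n) comparisons per lookup, while a sorted (key, rowid) index gives O(log n) seeks via binary search, at the cost of maintaining the index on writes. A toy non-clustered index (table layout and data are made up for illustration):

```python
import bisect

rows = [  # a tiny "table": (rowid, name)
    (1, "carol"), (2, "alice"), (3, "bob"), (4, "alice"),
]

def scan(name):
    """Full table scan: O(n) comparisons."""
    return [rid for rid, n in rows if n == name]

# Build a non-clustered index on `name`: sorted (key, rowid) pairs.
index = sorted((n, rid) for rid, n in rows)

def lookup(name):
    """Index seek: binary search to the first matching key, then read along."""
    i = bisect.bisect_left(index, (name,))
    out = []
    while i < len(index) and index[i][0] == name:
        out.append(index[i][1])
        i += 1
    return out

print(scan("alice"))    # [2, 4]
print(lookup("alice"))  # [2, 4], found with O(log n) probes plus the matches
```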
From "Hosting SQLite Databases on GitHub Pages" https://news.ycombinator.com/item?id=28021766 re: edgesearch, HTTP/3 QUIC UDP, :
> Serverless full-text search with Cloudflare Workers, WebAssembly, and Roaring Bitmaps https://github.com/wilsonzlin/edgesearch
>> How it works: Edgesearch builds a reverse index by mapping terms to a compressed bit set (using Roaring Bitmaps) of IDs of documents containing the term, and creates a custom worker script and data to upload to Cloudflare Workers
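The reverse-index-plus-bitset idea from that quote fits in a few lines. Here Python's arbitrary-precision ints stand in for Roaring Bitmaps (bit i set means document i contains the term), so an AND query over terms is a chain of single `&` operations; the corpus is made up for illustration:

```python
docs = [
    "serverless full text search",
    "full text search with bitmaps",
    "compressed bitmaps for workers",
]

# Reverse index: term -> integer bitset of matching document ids.
index = {}
for doc_id, text in enumerate(docs):
    for term in set(text.split()):
        index[term] = index.get(term, 0) | (1 << doc_id)

def search(*terms):
    """Documents containing ALL terms: intersect bitsets with `&`."""
    bits = ~0  # all ones: identity element for AND
    for term in terms:
        bits &= index.get(term, 0)
    return [i for i in range(len(docs)) if bits >> i & 1]

print(search("full", "search"))   # [0, 1]
print(search("bitmaps"))          # [1, 2]
print(search("full", "bitmaps"))  # [1]
```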
WebGPU – All of the cores, none of the canvas
Question for someone with gpu programming experience: is it feasible to analyze time series data (eg.: run a trading algorithm through a series of stock prices) in parallel (with the algorithms receiving different parameters in each core)?
If not then what is the limiting factor?
This depends on the algorithm. If it has lots of branching and control flow, then it's unlikely to run well. If it's based on something like FFT or any sort of linear algebra really, then it should run really well.
WebGPU is accessible enough maybe you should try it! You'll learn a lot either way, and it can be fun.
Isn't there something like SciPy which you can run on WebGPU?
https://wgpu-py.readthedocs.io/en/stable/guide.html :
> As WebGPU spec is being developed, a reference implementation is also being build. It’s written in Rust, and is likely going to power the WebGPU implementation in Firefox. This reference implementation, called wgpu-native, also exposes a C-api, which means that it can be wrapped in Python. And this is what wgpu-py does.
> So in short, wgpu-py is a Python wrapper of wgpu-native, which is a wrapper for Vulkan, Metal and DX12, which are low-level API’s to talk to the GPU hardware.
So, it should be possible to WebGPU-accelerate SciPy, in the same way that NumPy is CUDA-accelerated natively or by third-party packages.
edit: Intel MKL, https://Rapids.ai,
> Seamlessly scale from GPU workstations to multi-GPU servers and multi-node clusters with Dask.
Where can WebGPU + IDK WebRTC/WebSockets + Workers provide value for multi-GPU applications that already have efficient distributed messaging protocols?
"Considerable slowdown in Firefox once notebook gets a bit larger" https://github.com/jupyterlab/jupyterlab/issues/1639#issueco... Regarding the differences between the W3C Service Workers API, the Web Locks API, and the W3C Web Workers API, "4 Ways to Communicate Across Browser Tabs in Realtime" may be helpful.
Pyodide compiles CPython and the SciPy stack to WASM. The WASM build would probably benefit from WebGPU acceleration?
Command-line Tools can be 235x Faster than your Hadoop Cluster (2014)
It often isn’t about the execution time, if it’s a daily job and takes 26 minutes who cares?
Using something like databricks means it is easy to schedule and manage jobs, easy to write jobs that work in good enough time, easy to troubleshoot when things go wrong.
It comes with a well documented security mode and a support contract when needed.
Developers can be onboarded quickly and work code reviewed and managed.
Logging and diagnostics are available and you can report on metrics easily.
That isn’t true with custom data pipelines written in shell scripts.
The value isn’t in the pure execution time, it is in everything around it.
Can't you just ask one of the machines in the databricks/spark cluster to run those shell commands?
Or is that more of a kubernetes thing?
That's pretty much how Dask works; except Dask doesn't throw away data types by passing newline-delimited strings between each partitioned IPC pipeline process, and it doesn't depend on the new hire remembering to trap errors in their shell script, whose log messages end up chronologically non-sequential and text-only.
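The contrast can be sketched with the standard library: a Dask-style job maps a function over typed partitions and reduces the typed results, instead of piping newline-delimited text between processes, and a failure in any partition surfaces as an exception rather than a lost exit code. A minimal sketch with made-up data (a word count, the Hadoop "hello world"):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

partitions = [  # stand-ins for the file chunks a scheduler assigns to workers
    ["to be or not to be"],
    ["that is the question"],
    ["or not"],
]

def count_words(lines):
    """Per-partition map step: returns a typed Counter, not text."""
    c = Counter()
    for line in lines:
        c.update(line.split())
    return c

with ThreadPoolExecutor() as pool:
    # Any exception raised in a partition re-raises here, per partition.
    partials = list(pool.map(count_words, partitions))

# Reduce step: Counters merge with +, keeping counts as ints end to end.
total = sum(partials, Counter())
print(total["to"], total["or"], total["not"])  # 2 2 2
```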
https://github.com/dask/dask-labextension :
> This package provides a JupyterLab extension to manage Dask clusters, as well as embed Dask's dashboard plots directly into JupyterLab panes.
Something like ml-hub allows MLops teams to create resource-quota'd containers with k8s and IAM, though even signed code can DoS an unauditable system with no logs of which processes ran which signed archive of which code at what time, with bash and ssh. https://github.com/ml-tooling/ml-hub
CPython, C standards, and IEEE 754
I don't know why GCC 12 was in the picture, but I hope all of you can understand one thing: Python's source code must stay compatible with GCC 6, and with none of the so-called "Threading [Optional]" features. It is all because CentOS is dead: CentOS has been sold to IBM, and there is no CentOS 8 anymore. The free lunch has ended, and there won't be another OSS project to replace it.
Linux kernel developers want to use C11. CPython developers want to use C11. But does your compiler support it? If you were a C++ developer, would you also be asking for C++17/C++20?
CPython must stay aligned with the manylinux project. manylinux builds every Python minor release from source. Historically manylinux only used CentOS. CentOS has devtoolset, which lets you use the latest GCC in a very old Linux release; Red Hat spent much time on it. Now you can't get it for free because CentOS is dead, so the manylinux community is switching to Ubuntu, which can only provide GCC 6 to them. If any code in CPython is not compatible with GCC 6, then manylinux will have to drop Ubuntu too, and they don't have any other alternative. As I said, for now Red Hat devtoolset is the only solution. When IBM discontinued the CentOS project, most people did not understand what it meant for the OSS community.
(I work for Microsoft. It doesn't stop me from loving Linux.)
> for now Red Hat devtoolset is the only solution. When IBM discontinued the CentOS project, most people did not understand what it meant for the OSS community.
"Compilers and Runtimes" > "CentOS sysroot for linux-* Platforms" https://conda-forge.org/docs/maintainer/infrastructure.html#...
From https://conda-forge.org/docs/user/announcements.html :
>> 2021-10-13: GCC 10 and clang 12 as default compilers for Linux and macOS
>> These compilers will become the default for building packages in conda-forge.
Conda-forge specifies enough to solve for CentOS sysroot compatibility and newer GCC is already specified.
CentOS (RHEL SRPMs with RH trademarks removed + EPEL) lives on as {Rocky Linux, Alma Linux, CentOS Stream,} and SUSE is still RHEL-compatible.
https://github.com/pypa/cibuildwheel :
>> Build Python wheels for all the platforms on CI with minimal configuration.
>> Python wheels are great. Building them across Mac, Linux, Windows, on multiple versions of Python, is not.
>> cibuildwheel is here to help. cibuildwheel runs on your CI server - currently it supports GitHub Actions, Azure Pipelines, Travis CI, AppVeyor, CircleCI, and GitLab CI - and it builds and tests your wheels across all of your platforms.
Ask HN: Books recommendations on developing critical thinking?
Hoping to get your recommendations on critical thinking and decision-making under uncertainty, other than Kahneman's book.
"Ask HN: How can I “work-out” critical thinking skills as I age?" https://news.ycombinator.com/item?id=24026223
Causal_inference#Methodology https://en.wikipedia.org/wiki/Causal_inference#Methodology
Web Share API
From https://developer.mozilla.org/en-US/docs/Web/API/Web_Share_A... :
> The Web Share API allows a site to share text, links, files, and other content to user-selected share targets, utilizing the sharing mechanisms of the underlying operating system. These share targets typically include the system clipboard, email, contacts or messaging applications, and Bluetooth or WiFi channels
> Note: This API should not be confused with the Web Share Target API, which allows a website to specify itself as a share target
W3C Web Share Target API: https://w3c.github.io/web-share-target/ :
> This specification defines an API that allows websites to declare themselves as web share targets, which can receive shared content from either the Web Share API, or system events (e.g., shares from native apps).
W3C Web Share API: https://w3c.github.io/web-share/ :
> This specification defines an API for sharing text, links and other content to an arbitrary destination of the user's choice.
Ask HN: Is Kubernetes the only alternative for being cloud agnostic?
My company's management desires for us to be "cloud agnostic" - without having a firm definition of what that actually means. Lots of people in my company think "cloud agnostic" means you "containerize" your solutions and run them on Kubernetes. Then you can pick and move your application from cloud to cloud as desired. That seems extremely limited to me. To me that thinking exemplifies the cloud as being "someone else's computer" rather than seeing the cloud as an application platform in and of itself.
What does HN think about this? Is there a way of looking at "cloud agnostic" that still allows you to look at the cloud as an application platform and not just as a platform for running Kubernetes? Is there another way to think about this?
We have ansible playbooks that we wrote leveraging some of the ansible-galaxy stuff, and they do the same thing either on-prems or in the cloud. I would say we're pretty cloud native; we follow best practices, and deploying something to a cloud provider or internally is the same thing.
Our level of abstraction is containers, we don't use services that cloud providers sell, for example, Lambda, as they are very cloud provider specific. We prefer to use technologies like RabbitMQ which can be deployed on any cloud or on prems easily.
Ansible & k8s:
https://docs.ansible.com/ansible/latest/scenario_guides/kube...
ansible-collections/kubernetes.core: https://github.com/ansible-collections/kubernetes.core
geerlingguy/ansible-for-kubernetes book: https://github.com/geerlingguy/ansible-for-kubernetes
openshift/openshift-ansible: https://github.com/openshift/openshift-ansible
OKD is the open-source (Red Hat (IBM)) OpenShift, which wraps Kubernetes; sort of like AWX: https://github.com/openshift/okd
> [OKD] Features: A fully automated distribution of Kubernetes on all major clouds and bare metal, OpenStack, and other virtualization providers;
Single-node OpenShift 4 requires at least 16GB of RAM and 8 vCPU : https://www.redhat.com/en/blog/single-node-openshift-manufac...
> Single node OpenShift offers both control and worker node capabilities in a single server and provides users with a consistent experience across the sites where OpenShift is deployed, regardless of the size of the deployment.
MicroShift requires 2 GB of RAM on e.g. an ARM64 Raspberry Pi 4 or similar SBC: https://microshift.io/docs/getting-started/
Terraform doesn't require k8s: https://github.com/hashicorp/terraform
OpenStack is somewhat EC2 API compatible, but the reverse is not true.
Bootloader Basics
Author of the post here, thanks for sharing! Happy to answer any questions, though even after the few nights spent getting these programs to work I had many questions of my own.
Excellent post! I had a lot of fun writing a bootloader game sometime ago and wrote a post about it, it's a 2048 clone: https://crocidb.com/post/bootsector-game/
That's a great one! Added it to my list of suggested reading in the repo.
https://github.com/eatonphil/bootloaders/blob/main/README.md
- uboot w/ Pinebook Pro : https://gitlab.com/pine64-org/u-boot https://wiki.pine64.org/wiki/Uboot
- pboot w/ Pinephone Pro: https://xnux.eu/p-boot/
- tow-boot: "ENH: "Chainload" to {grub,}" https://github.com/Tow-Boot/Tow-Boot/issues/110 https://github.com/rhboot/grub2/blob/master/grub-core/loader...
Automerge: A JSON-like data structure (a CRDT) that can be modified concurrently
An interesting challenge in the CRDT era today, is that there isn't an ideal CRDT for rich text yet. I mean, not just simple boundary based formattings like bold/italics but also complex block based elements like tables and lists.
While recent CRDTs like Peritext have done an incredible job of focusing on this specific problem of rich text, even that project hasn't extended its types to cover table operations and list operations. I've been thinking about this problem deeply, and my intuition is that anything that resembles a semantic tree is tricky to deal with using a generic CRDT.
For example, a generic tree/JSON CRDT cannot be used for an AST; it will most likely fail. I wrote about this a while back - https://news.ycombinator.com/item?id=29433896
If you have ideas on how to approach this problem, join us on this thread https://github.com/inkandswitch/peritext/issues/27#issuecomm...
I can't find a link right now but I believe Kevin is in the process of implementing "move" in Yjs, so that if you copy and paste from a section or split it, it is a move rather than a delete and insert.
Agreed on tables, they are so much more complex and may need further operation types to ensure no dropped edits.
Obviously with Peritext you are implementing a very rich text focused CRDT. The way I see it is there are general purpose and domain specific CRDTs, Yjs being more general purpose but with a concept of 'marks' on strings for formatting. Ultimately we need both and I like the concept of one that covers both bases so you can have a db document that contains rich text fields along with more traditional structured data.
Without exactly domain specific CRDTs for your application you will always have to do some level of schema correction after merging documents.
Re: inlined tabular data in CRDT distributed systems for collaboration on documents that may be required to validate:
Atomicity: https://en.wikipedia.org/wiki/Atomicity_(database_systems)
> An atomic transaction is an indivisible and irreducible series of database operations such that either all occurs, or nothing occurs.[1] A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in progress by another database client. At one moment in time, it has not yet happened, and at the next it has already occurred in whole (or nothing happened if the transaction was cancelled in progress).
> An example of an atomic transaction is a monetary transfer from bank account A to account B. It consists of two operations, withdrawing the money from account A and saving it to account B. Performing these operations in an atomic transaction ensures that the database remains in a consistent state, that is, money is neither lost nor created if either of those two operations fail. [2]
IIRC, Apache Wave (Google Wave (2009)) solves for tables but is not built atop a CRDT, like Docs and Sheets? https://en.wikipedia.org/wiki/Google_Wave
jupyterlab/rtc - for JupyterLab and JupyterLite (WASM) - is built upon a CRDT for .ipynb JSON, at least. https://github.com/jupyterlab/rtc
(URI-) Named Graphs as JSON-LD would work there, too. https://json-ld.org/playground/
Does Dokieli have atomic table operations for ad-hoc inlined tables as RDF Linked Data? https://github.com/linkeddata/dokieli
I think there is some confusion over "tables", the GP I believe was referring to typographic "rich text" tables, like you have in a word document, rather than a "data table" in a DB.
The issue with "rich text" tables and CRDTs is that you effectively have two overlapping "blocks": columns and rows. Current CRDTs are good at managing a list of blocks, splitting them, and reordering them. Most rich text tables are represented as a list of rows of cells.
If you have two clients, one adds a new row (represented as a new row in rows list), and the other adds a new column (represented as a new cell in each row in the rows list), and then merge, you have a conflict where the new row and column meet. There is a missing cell, and so you end up with a row that is one cell too short.
What is needed is a CRDT that has a concept of a table, or a table-like structure, so that when the documents are merged it knows to add that extra cell in the new row.
This has nothing to do with Atomicity in the traditional DB transaction sense.
Edit:
Just thinking about this a little more, if you model the table as a list of maps, with each column having a unique id it would overcome the misalignment issue, and it potentially helps with other table conflicts too. You would however have to store the order of the columns somewhere and deleting columns could conflict.
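The list-of-maps idea above can be sketched directly. This is a toy sketch, not any particular CRDT library; names like `colA` are made up. Positional rows lose a cell on a row+column merge, while column-id-keyed rows still render cleanly:

```python
# Naive representation: rows as lists of cells, indexed by position.
base = [["a1", "b1"]]
# Client 1 adds a row (copies the current width: 2 cells).
row_add = base + [["a2", "b2"]]
# Client 2 adds a column "c" (appends a cell to each existing row).
col_add = [r + ["c1"] for r in base]
# A positional merge keeps both edits, but the new row is one cell short:
merged = [col_add[0], row_add[1]]
assert len(merged[0]) == 3 and len(merged[1]) == 2  # misaligned!

# Column-id keyed rows: each row is a map {column_id: cell}.
base2 = [{"colA": "a1", "colB": "b1"}]
row_add2 = base2 + [{"colA": "a2", "colB": "b2"}]
col_order = ["colA", "colB", "colC"]  # column order stored separately
col_add2 = [{**r, "colC": "c1"} for r in base2]
merged2 = [col_add2[0], row_add2[1]]
# Rendering fills missing cells per column id instead of misaligning:
rendered = [[row.get(c, "") for c in col_order] for row in merged2]
print(rendered)  # every row now has 3 cells; the new row gets an empty cell
```

As noted above, this still leaves column ordering and column deletion as potential conflict points; it only fixes the misalignment.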
> This has nothing to do with Atomicity in the traditional DB transaction sense.
Partial application of conflicting additive schema modifications is an atomicity issue as much as it is a merge issue. If the [HTML] doesn't validate before other nodes are expected to synchronize with it, that changeset shouldn't apply at all; atomicity.
It looks like Dokieli supports embedded tables.
With RDF (and triplestores, and property graphs), you can just add "rows" and "columns" (rdfs:Class instances with rdfs:Property instances) without modifying the schema or the tabular data. Online schema migration is dangerous with SQL, too, because the singular db user account for the app shouldn't have [destructive] ALTER TABLE privileges.
"CSV on the Web: A Primer" > "Validating CSVs" (CSVW Tabular Data) https://www.w3.org/TR/tabular-data-primer/#validating-csvs
A 13-year-old used my artificial nose to diagnose pneumonia
Hmmm... I'm skeptical. Not necessarily because I don't think that pneumonia could be detected by testing VOCs in breath, but because I'm currently working on a project that uses sensors to do breath analysis and my amateur research has informed me that it's fairly hard to get right (which is why my primary goal is to identify deltas rather than achieve numerical accuracy).
For one, VOCs can be present in breath for other reasons besides some sort of infection in the lung, and VOCs are incredibly hard to differentiate with just a sensor. The fact that they tend to be faint in human breath even at their highest (in contrast to O2 and CO2) doesn't help. Even the most expensive PID sensors for VOCs (they get up into the several hundreds a pop) can't really tell you whether the predominant gas is acetone or alcohol or acetaldehyde or hydrogen sulfide. So you've got to figure out whether the presence of VOCs is truly an anomaly and not just a part of ketosis. In which case you will also need to measure at least VeO2 to see whether the VOCs correspond with the Respiratory Quotient.
The "e-nose" project, as described on the MakeZine article, doesn't appear to do that. It does have an alcohol sensor. But these sensors aren't particularly sophisticated. They use semiconductors with heating elements to detect the presence of gases, and there is almost certainly some overlap between the alcohol and VOCs sensors.
If VOCs are produced by pneumonia, then yes, it's conceivable that even just the VOCs sensor alone would detect this. But can this group of sensors used in the e-nose differentiate pneumonia from catabolism?
Maybe? ¯\_(ツ)_/¯
After all, this thing uses AI. And maybe AI can recognize something that a human can't by simply looking at a line graph. I dunno... Such things should be tested against known inputs before being suggested to diagnose anything.
In my experience, "AI can extract more information from sensors" is mostly a myth.
An example is the SCIO sensor ( https://nocamels.com/2019/03/scio-kickstarter-darling-promis... ) which was a cheap handheld spectrometer that claimed to accurately determine the nutritional information of any food you pointed it at.
One good way to debunk this is to measure raw sensor output and compute Mutual Information (which incorporates sensor noise/variability). If the sensor only produces X bits of information, no algorithm can distinguish more than 2^X classes from it. In the SCIO case it was just under 8 bits of information in total - so something like a poor color sensor. You could train on apples and oranges and maybe do an investor demo, but it's not actually going to do anything useful (as the Kickstarter crowd soon learned).
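The Mutual Information bound is straightforward to compute from a joint distribution of (true class, sensor reading). A minimal sketch; the 10% misread rate below is an illustrative assumption, not a figure from the SCIO case:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for x, row in enumerate(joint):
        for y, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[x] * py[y]))
    return mi

# A binary "sensor" that misreads the class 10% of the time: even with
# two perfectly balanced classes, it carries well under 1 bit per reading.
e = 0.10
joint = [[0.5 * (1 - e), 0.5 * e],
         [0.5 * e, 0.5 * (1 - e)]]
print(round(mutual_information(joint), 3))  # ~0.531 bits
```

No downstream model can recover more than that per reading, regardless of architecture or training set size.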
Is the limit: A) sensor resolution, B) NN architecture and/or algorithm, C) training sample size, D) training data (labeling, segmentation) quality, or E) it doesn't sufficiently predict the variance with low enough error?
New NN models are able to do more with the exact same sensor data.
You cannot conjure information out of thin air. Even with infinite data and a hypothetical wormhole CPU that runs everything in O(1) and solves the halting problem, you still couldn't do this. So to answer your question, the reason is effectively (A). Sensor resolution might be the wrong term but it's the general idea.
How much information content is there in DNA (and RNA)? How do creatures know or learn what not to eat given limited available sensor data?
How much information content is there in DNA? 2 bits per base, before compression. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3220916/
How do creatures know what to eat? Evolution solved that for most creatures, so their sensors don't have to work as hard at runtime. And in other cases, some number of members of a population of creatures will die before the population learns the food is poisonous. Our sensors, and the information processing systems that manage their outputs, are remarkably efficient data processing engines that do the equivalent of approximating and predicting, often well beyond what the most advanced deep learning systems are capable of doing now.
So, sensor resolution is higher, there are multiple fields being integrated, in a massively-parallel spreading-activation Biological Neural Network, and that's how blank-slate creatures just know?
Is there enough information content - per the Shannon entropy definition or otherwise - in DNA and/or RNA to code for the survival-selected traits in question?
I'm not sure that the (Shannon entropy, MIC, Kolmogorov) information content of the samples is the limit of any given network trained therefrom? Is there anything to be gained from upsampling and adding e.g. Gaussian blur (noise)? Maybe it's feature engineering, maybe it's expert methods bias, maybe it's just sensor fusion; that's the magic noise.
Perhaps this is moving the goalposts a bit, but e.g. depixelation does appear to defy such a presumed limit due to apparent information content? Perhaps it is that the network reading the sensor carries additional information associated with the lower-resolution or additional-fields' sensor data?
https://github.com/krantirk/Self-Supervised-photo :
> Given a low-resolution input image, PULSE searches the outputs of a generative model (here, StyleGAN) for high-resolution images that are perceptually realistic and downscale correctly.
Maybe no amount of feature engineering can actually add information?
Lit-up fishing nets reduce catch of unwanted sharks, rays and squid: study
Is there some way to make fishing nets out of e.g. bioluminescent, biodegradable, carbon negative organic material?
LEDs that biodegrade after ___ years would be a good idea, too. MEMS-powered?
Design of a rapid transit to Mars mission using laser-thermal propulsion
WebGL 2.0 Achieves Pervasive Support from All Major Web Browsers
Why though? WebGL is already getting left behind for WebGPU.
WebGPU: https://en.wikipedia.org/wiki/WebGPU
> WebGPU is the working name for a future web standard and JavaScript API for accelerated graphics and compute, aiming to provide "modern 3D graphics and computation capabilities". It is developed by the W3C GPU for the Web Community Group with engineers from Apple, Mozilla, Microsoft, Google, and others. [1]
Uniting the Linux random-number devices
https://en.wikipedia.org/wiki//dev/random#Linux has:
> In October 2016, with the release of Linux kernel version 4.8, the kernel's /dev/urandom was switched over to a ChaCha20-based cryptographic pseudorandom number generator (CPRNG) implementation [16] by Theodore Ts'o, based on Bernstein's well-regarded stream cipher ChaCha20.
> In 2020, the Linux kernel version 5.6 /dev/random only blocks when the CPRNG hasn't initialized. Once initialized, /dev/random and /dev/urandom behave the same. [17]
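Given the post-5.6 behavior quoted above, userspace can just read the kernel CSPRNG directly. A minimal sketch in Python; `os.urandom` uses getrandom()/`/dev/urandom` under the hood:

```python
import os

# os.urandom reads from the kernel CSPRNG; since Linux 5.6 this only
# blocks until the entropy pool is initialized once at boot, after which
# /dev/random and /dev/urandom behave the same.
key = os.urandom(32)  # 256 random bits, e.g. sized for a ChaCha20 key
print(len(key), type(key))
```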
Carbon Robotics new LaserWeeder with 30 lasers to autonomously eradicate weeds
From the article:
> In addition to an updated build, the 2022 LaserWeeder features 30 industrial CO2 lasers, more than 3X the lasers in Carbon Robotics’ self-driving Autonomous LaserWeeder, creating an average weeding capacity of two acres per hour. Growers who use Carbon Robotics’ implements are seeing up to 80% savings in weed management costs, with a break-even period of 2-3 years.
How does this compare to competing laser weeding robots?
Agricultural robot: https://en.wikipedia.org/wiki/Agricultural_robot
Launch HN: Pelm (YC W22) – Plaid for Utilities
Hi HN, Drew and Edmund from Pelm here (https://www.pelm.com). We are building an API that allows developers to access energy data, such as electricity usage or billing data, from utilities.
Currently, if you want to build an application on top of energy data, you have to build integrations with utilities across the country. Not only is this time-consuming, it's super frustrating given the lack of data standardization and the clunky, high-friction integration processes that energy companies use. With Pelm, you only have to integrate with one service to access utility data, and you get to use a seamless, well-documented API built by other developers.
We are software engineers (from Asana and Affirm) who are enthusiastic about sustainability and the creation of a more modern energy grid. We talked with lots of developers who were frustrated from trying to work with energy data and saw an opportunity to meet their needs while supporting the push for a more efficient energy grid.
Most companies trying to tackle this problem are energy companies, not technology companies. The products they build don't keep the user in mind and only focus on meeting bare functional requirements. Pelm, by contrast, is by-developers-for-developers. Our focus is ease of use. Our API is simple to get up and running (under ten minutes) and provides clear documentation and instructions.
It works like this: you register for our service and embed Connect (our front-end plugin) into your application. The end user uses Connect to authorize access to their data from their particular utility. We scrape electricity usage and billing data from their utility account and store it in a standardized format in a database. Your application can then query electricity interval data and billing data for the end user through our REST API. We also recently built out functionality to pay utility bills through our API.
Our API is designed for apps that do things like help consumers reduce electricity usage, charge EVs at optimal times, optimize HVAC installs, or educate on climate-friendly practices.
One example of how Pelm can currently be used is in demand response programs—that’s when utilities pay companies to get large amounts of people to reduce their electricity consumption during peak hours. Our API can help measure the reduction, which determines how much the company gets paid. Another example: solar panel installers can use us to show potential clients how much they could save on their electricity bill by installing solar. Another is community solar programs that allow people to buy into remote solar farms and get credit for the generated energy from utilities. Pelm can be used by community solar providers to calculate and bundle bills.
(By the way, an interesting little-known fact: this space is possible because of a 2012 initiative by Obama that required utilities to allow consumers more visibility into their energy consumption habits.)
Pelm is free up to 100 API calls and 10 active end users per month. After that we charge on a usage based plan: $0.10/call and $0.50/active end user up to 10,000 API calls and 1,000 active users. Past that limit, you'll move onto an enterprise plan with a flat monthly rate based on your service level and an adjusted rate for calls and active end users (we’re still figuring out the exact parameters).
We’d love to hear any of your ideas or experiences within this space! We’re always looking for creative approaches to the problem and ways we can better the developer experience we’re building. If you get a chance to test out the API, please share your feedback on how we could improve it. Thanks so much!
Two questions:
How are you connecting to the user's energy supplier? I assume most don't have their own APIs, so you are having to save the user's login credentials? If so, you should list the security measures you are taking to ensure they are safely stored. This is much like how developers had to connect to banks before they started supporting standard APIs.
Which utility companies can you currently connect to? Again, it may be worth listing them on the site.
Great callout. Passwords are encrypted using a 256-bit AES cipher and are never persisted or shown to developers in plaintext. We currently support PG&E and SCE with a ComEd integration landing within the week.
What does it cost a utility to add e.g. read-only OAuth token support to their customer-facing app?
FWIU, it's pretty easy to add OAuth support to any HTTP API endpoint with e.g. Nginx auth_request or by integrating an OAuth library with automated tests into the application at the url routes, if nothing else.
Do you have a "Guide for utilities who want users to have a safe third-party read-only API", a Developer portal, or like a decision tree for which script to read a decision-maker or a front-line lackey who doesn't know anything about APIs?
Does Pelm integrate with Zapier? https://zapier.com/platform
Show HN: SHA-256 explained step-by-step visually
SHA2: https://en.wikipedia.org/wiki/SHA-2
https://rosettacode.org/wiki/SHA-256
Hashcat's GPU-optimized OpenCL implementation: https://github.com/hashcat/hashcat/blob/master/OpenCL/inc_ha...
Bitcoin's CPU-optimized sha256.cpp, sha256_avx2.cpp, sha256_sse4.cpp, sha256_sse41.cpp: https://github.com/bitcoin/bitcoin/blob/master/src/crypto/sh...
https://github.com/topics/sha256 https://github.com/topics/sha-256
Cryptographic_hash_function#Cryptographic_hash_algorithms: https://en.wikipedia.org/wiki/Cryptographic_hash_function#Cr...
Merkle–Damgård construction: https://en.m.wikipedia.org/wiki/Merkle%E2%80%93Damg%C3%A5rd_...
(... https://rosettacode.org/wiki/SHA-256_Merkle_tree ... Merkleized IAVL+ tree that is balanced with rotations in order to optimize lookup: https://github.com/cosmos/iavl
Self-balancing binary search tree: https://en.wikipedia.org/wiki/Self-balancing_binary_search_t... )
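For reference alongside the step-by-step visualization, the whole SHA-256 pipeline is one call in most standard libraries. A Python sketch using the well-known FIPS 180-2 "abc" test vector, plus the incremental API that reflects the Merkle-Damgard chaining linked above:

```python
import hashlib

# Standard test vector: SHA-256("abc")
digest = hashlib.sha256(b"abc").hexdigest()
print(digest)
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

# Merkle-Damgard-style chaining: feeding the message in pieces produces
# the same digest, since each update extends the same compression chain.
h = hashlib.sha256()
for chunk in (b"a", b"b", b"c"):
    h.update(chunk)
assert h.hexdigest() == digest
```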
Frank Rosenblatt's perceptron paved the way for AI 60 years too soon (2019)
Perceptron: https://en.wikipedia.org/wiki/Perceptron #History, #Variants
An MLP: Multilayer Perceptron can learn XOR and is considered a feed-forward ANN: Artificial Neural Network: https://en.wikipedia.org/wiki/Multilayer_perceptron
Backpropagation: 1960-, 1986, https://en.wikipedia.org/wiki/Backpropagation
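The XOR point is easy to demonstrate with hand-picked weights. These weights are chosen by hand for illustration, not learned; backpropagation would find an equivalent set, while no single-layer perceptron can represent XOR at all:

```python
def step(x):
    # Heaviside step activation, as in the classic perceptron
    return 1 if x > 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: x1 OR x2
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2: x1 AND x2
    return step(h1 - h2 - 0.5)  # output: (x1 OR x2) AND NOT (x1 AND x2) == XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))
```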
1999 Repeal of Glass-Steagall was the worst deregulation enacted in US history
Yeah, what was the deal with that dotcom correction in the early 2000s? Did banks invest differently after GLBA said that they could gamble with people's savings deposits (because they created a 'socialist' $100b credit line, called it FDIC, and things like that don't happen anymore)?
Decline of the Glass-Steagall Act: https://en.wikipedia.org/wiki/Decline_of_the_Glass%E2%80%93S...
Dot-com bubble: https://en.wikipedia.org/wiki/Dot-com_bubble
America’s Covid job-saving programme gave most of its cash to the rich
Would a [premined, escrowed] stablecoin token make it easier to answer these kinds of Transparency and Accountability questions in regards to government spending and government stimulus funding beyond intra-governmental transfers of real value?
Said stablecoin token would be hosted on one or more resilient Distributed Ledger Technologies (DLTs; 'blockchains') with an - indeed more auditable - transaction record specification that supports multiple transaction inputs and outputs, and an optional identifier such as e.g. a "coinbase field" to key auxiliary (unfortunately comparably mutable, offsite-backed-up by which responsible parties) database records to.
The issue here is not transparency, it's who the money was directed to. This is a policy question, and what type of currency you use is totally irrelevant. It's already very easy to see who received money (https://projects.propublica.org/coronavirus/bailouts/), what we're discussing is whether it should have been given to different people.
So, in your opinion, the "And then where did the money go?" question is entirely distinct from the "Did those investments have the desired impact for the desired audience, per the predefined criteria for success?" question? Who were we doing capital investment for, in hopes of them raising additional separately-accounted-for monies? https://westurner.github.io/hnlog/#comment-25893860 re: https://www.usaspending.gov/
Oh, are you talking about tracking the money after it's been loaned out? Like for a given dollar we can see that it was loaned to Company X, then paid to an employee, then spent on rent, then spent at a restaurant, etc. If that's what you mean then I agree it would be useful for studying the impacts of stimulus policies, although I think there are a lot of other privacy impacts we'd need to consider.
You can always launder money to defeat such a trail. I'm not going to go into detail about how, just in case my ignorant ramblings have some novel techniques in there, but it's definitely possible.
Like with banning encryption, the privacy impacts would be felt by the little people, and the big bad actors would be largely unaffected.
So we rely upon reports apparently generated from reconciled alternate accounting systems to determine whether the funds were appropriated impactfully?
TheGivingBlock specializes in handling cryptoasset donations with tax receipts. (We don't yet (?) prefill returns in the US)
When you donate to a TheGivingBlock Cause Fund, you get a tax receipt by email and the donation is privately sent to each of the multiple registered nonprofit charities included in that particular Fund.
TheGivingBlock Cause Funds map to the UN SDG Sustainable Development Goals; are categorically aligned to the #GlobalGoals Goals, but not yet (?) to specific Targets or Indicators. https://thegivingblock.com/donate/
Re: CharityNavigator nonprofit rating methodology, SDG-aligned GRI Corporate Sustainability Reports, setting up a CharityVest charitable matching fund for your employees, SDGs,: https://news.ycombinator.com/item?id=24412594 https://westurner.github.io/hnlog/#comment-24412594 CharityVest feedback: https://westurner.github.io/hnlog/#story-23907902
How should nonprofits be evaluated in order to invest impactfully?
If somebody says "Strategic Alignment" again, will less-impactful cashflows be eliminated?
When the government nonprofit invests in government services, nonprofits, NGOs, international aid, and for-profit entities, how do we evaluate per-investment/per-allocation ROI according to the predefined criteria at regular intervals, potentially with incremental commitment?
So, the specific fund allocation control adjustments being requested here are presumably due to feedback (sensor data and nonlinearity) which may or may not be traceable to an actual funding suggestion allocation task item in the near future.
Most of the highest-impact charities wouldn't score highly on a “what happens to the money” metric. Think of the good stuff the Gates Foundation has been doing, or the GiveWell top charities. It's a good way of measuring the benefits of cash injection, but that's all.
no
What would be better feedback for developing a resilient economy together? https://en.wikipedia.org/wiki/Glossary_of_systems_theory#Fee...
Ask HN: Do you use TLA+?
It's surprising to me that so many crypto projects don't use TLA+. Besides the ones mentioned here (https://lamport.azurewebsites.net/tla/industrial-use.html), does your company use TLA+? Why? Why not?
## TLA+
Wikipedia: https://en.wikipedia.org/wiki/TLA%2B
Src: https://github.com/tlaplus/tlaplus
Awesome: https://github.com/tlaplus/awesome-tlaplus
### Dr TLA+
Web: https://github.com/tlaplus/DrTLAPlus
Src: https://github.com/tlaplus/DrTLAPlus :
> Dr. TLA+ series - learn an algorithm and protocol, study a specification; [Byzantine] Paxos, Raft, Cosmos,
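What TLC (the TLA+ model checker) does - exhaustively exploring the reachable state space and checking an invariant in every state - can be sketched in a few lines of Python for Lamport's canonical HourClock spec. An illustrative analogy, not the TLC tool or TLA+ syntax:

```python
# HourClock, after Lamport: HCini == hr \in 1..12, HCnxt == hr' = IF hr = 12
# THEN 1 ELSE hr + 1, invariant HCini (hr stays in 1..12).

def init_states():
    return list(range(1, 13))  # every hour is a valid initial state

def next_states(hr):
    return [1 if hr == 12 else hr + 1]  # the clock ticks forward

def invariant(hr):
    return 1 <= hr <= 12

# Exhaustive exploration of reachable states, as a model checker does:
seen, frontier = set(), init_states()
while frontier:
    s = frontier.pop()
    if s in seen:
        continue
    assert invariant(s), f"invariant violated in state {s}"
    seen.add(s)
    frontier.extend(next_states(s))
print(sorted(seen))  # all 12 reachable states satisfy the invariant
```

Real specs have vastly larger state spaces, which is why TLC's symmetry reduction and TLA+'s temporal operators matter; this only shows the core idea.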
##
"Concurrency: The Works of Leslie Lamport" ( https://g.co/kgs/nx1BaB
##
https://westurner.github.io/hnlog/#comment-27442819 :
> Can there still be side channel attacks in formally verified systems? Can e.g. TLA+ help with that at all?
New material that can absorb and release enormous amounts of energy
Not quite the same, but it got me thinking: could we theoretically power and refuel a car in the following way:
- Take a cube of X elastic material and squish it really dense with a big machine.
- Power the car via the pressure of the material trying to expand.
- Once it's nearly depleted (fully expanded), take it to another squishing station.
I imagine you couldn't store anywhere near enough power that way today, but then that's also the kind of problem the linked material is trying to solve, right?
Compressed air energy storage is already a thing!
The energy stored is limited by the tensile strength of the container. The best capacity for a unit weight is from laminated carbon fibre tanks, but this still doesn’t even approach the energy density of ordinary hydrocarbon fuels.
You’ll find that there’s lots of interesting ways to store energy — like flywheels or chemical cells — but one way or another they’re all inherently limited by chemical bond strengths.
Fundamentally all energy storage is some sort of stored “tension” in chemical bonds that can be released to do useful work.
The reason fuels are so good is that this release needs a second component (oxygen) that is kept separated. This makes high energy densities safe.
No separation — like with compressed gas — means that the energy storage is a bomb waiting to go off. It would be too dangerous to use.
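A back-of-the-envelope check of the energy-density argument above, assuming ideal isothermal compression of air to ~300 bar and a typical gasoline lower heating value (round numbers, my assumptions, and real tanks do worse than the isothermal ideal):

```python
import math

# Ideal isothermal compression work per kg of air: w = R_specific * T * ln(p/p0)
R_air = 287.0           # J/(kg*K), specific gas constant of dry air
T = 300.0               # K, roughly room temperature
pressure_ratio = 300.0  # ~300 bar tank vs. 1 bar ambient (assumed)

air_mj_per_kg = R_air * T * math.log(pressure_ratio) / 1e6
gasoline_mj_per_kg = 46.0  # typical lower heating value of gasoline

print(round(air_mj_per_kg, 2))                    # ~0.49 MJ/kg
print(round(gasoline_mj_per_kg / air_mj_per_kg))  # gasoline ~94x denser
```

And that ignores the tank's own mass, which dominates for compressed gas; the real gap is even wider.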
There are various forms of energy and various forms of energy storage, including chemical energy and gravitational potential energy.
We probably think of energy as entropy like gases because e.g Maxwell?
Examples of gravitational potential energy:
- A water tower or a pulley with weight suspended a greater distance from the most local mass/graviton centroid.
Potential energy: https://en.wikipedia.org/wiki/Potential_energy
WebVM: Server-less x86 virtual machines in the browser
Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"? The current pyodide CPython Jupyter kernel takes like ~25s to start at present, and can load Python packages precompiled to WASM or unmodified Python packages with micropip: https://pyodide.org/en/latest/usage/loading-packages.html#lo...
Does WebVM solve for workload transparency, CPU overutilization by one tab, or end-to-end code signing maybe with W3C ld-proofs and whichever future-proof signature algorithm with a URL?
The VM cannot have a full TCP/IP stack, so any data research tasks are likely to need special code paths and support for downloads. No SQL databases, etc.
From "Hosting SQLite Databases on GitHub Pages" https://news.ycombinator.com/item?id=28021766 https://westurner.github.io/hnlog/#comment-28021766 :
DuckDB can query [and page] Parquet from GitHub, sql.js-httpvfs, sqltorrent, File System Access API (Chrome only so far; IDK about resource quotas and multi-GB datasets), serverless search with WASM workers
https://github.com/phiresky/sql.js-httpvfs :
> sql.js is a light wrapper around SQLite compiled with EMScripten for use in the browser (client-side).
> This [sql.js-httpvfs] repo is a fork of and wrapper around sql.js to provide a read-only HTTP-Range-request based virtual file system for SQLite. It allows hosting an SQLite database on a static file hoster and querying that database from the browser without fully downloading it.
> The virtual file system is an emscripten filesystem with some "smart" logic to accelerate fetching with virtual read heads that speed up when sequential data is fetched. It could also be useful to other applications, the code is in lazyFile.ts. It might also be useful to implement this lazy fetching as an SQLite VFS [*] since then SQLite could be compiled with e.g. WASI SDK without relying on all the emscripten OS emulation.
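The lazy, chunked range-fetching idea from lazyFile.ts can be sketched without emscripten at all. A toy stand-in: a local bytes object plays the role of the static file host, and the `fetch_range` callable plays the role of an HTTP Range request (no read-ahead speedup logic here):

```python
class LazyFile:
    """Read a remote file in cached fixed-size chunks, fetching on demand."""

    def __init__(self, fetch_range, size, chunk=4096):
        self.fetch_range = fetch_range  # callable(start, end) -> bytes
        self.size = size
        self.chunk = chunk
        self.cache = {}  # chunk index -> bytes, fetched at most once

    def read(self, offset, length):
        first = offset // self.chunk
        last = (offset + length - 1) // self.chunk
        out = bytearray()
        for i in range(first, last + 1):
            if i not in self.cache:
                start = i * self.chunk
                self.cache[i] = self.fetch_range(start, min(start + self.chunk, self.size))
            out += self.cache[i]
        lo = offset - first * self.chunk
        return bytes(out[lo:lo + length])

data = bytes(range(256)) * 64  # 16 KiB stand-in for a remote SQLite file
lf = LazyFile(lambda s, e: data[s:e], len(data))
assert lf.read(5000, 16) == data[5000:5016]
print(len(lf.cache))  # only the chunk covering that read was "fetched"
```

An SQLite query over such a file touches only the pages it needs, which is why the database never has to be fully downloaded.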
Also, I'm not sure if jupyterlab/jupyterlab-google-drive works in JupyterLite yet? Is it yet possible to save notebooks and other files from JupyterLite running in WASM in the browser to one or more cloud storage providers?
https://github.com/jupyterlab/jupyterlab-google-drive/issues...
The VM could have its own TCP/IP stack, possibly with a SLIRP layer for translation of connections to the outside. Internet connectivity can be done by limiting it to AJAX, or forwarding the packets to a proxy (something like http://artemyankov.com/tcp-client-for-browsers/), or including a Tor client that connects to a Tor bridge, etc.
StackBlitz' WebContainers have in-browser TCP/IP stack, I think from MirageOS.
How does it get raw ip packets out from inside the browser?
Does it need to reframe packet structs and then fix fragmentation issues, or can it set the initial MSS (because MTU discovery likely won't work quite right) so that it doesn't have to ~shrink and re-fragment tunneled {TCP,} packets? https://en.wikipedia.org/wiki/Maximum_segment_size#MSS_and_M...
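For the MSS question, the clamping arithmetic is simple; the 60-byte tunnel overhead below is a made-up figure for illustration, since the actual overhead depends on the encapsulation used:

```python
# Standard IPv4/TCP header sizes (without options); overhead is assumed.
MTU = 1500
IP_HDR, TCP_HDR = 20, 20
TUNNEL_OVERHEAD = 60  # assumption: whatever the encapsulating proxy adds

mss_plain = MTU - IP_HDR - TCP_HDR          # 1460: the usual Ethernet MSS
mss_tunneled = mss_plain - TUNNEL_OVERHEAD  # 1400: advertise this at SYN
print(mss_plain, mss_tunneled)              # so segments never need re-fragmenting
```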
Plant-based epoxy enables recyclable carbon fiber
FWIU flax and hemp are more sustainable and are potential substitutes for carbon fiber, which is industrially hazardous to produce?
The branching structure makes e.g. hemp bast fiber ideal for supercapacitor (~battery) anodes. How does this plant-based epoxy affect conductivity and resistivity in various materials in various outdoor conditions with and without a coating?
If it's plant-based, is it safe next to plenum cable? Can it be used for Hemp structural framing lumber, such as HempWood?
> potential substitutes
When I can order sheets of hemp fiber pre-preg and a recyclable epoxy I’ll get my company to use it same day.
Here I am holding my breath, because I'm not sure what you heard, but an organic fiber will never be as thin, strong, or uniform as carbon thread that was produced specially for this purpose.
There are Hemp alloys.
From https://motonetworks.com/worlds-greenest-car-somebody-made-c... :
> It’s actually made using hemp fibers that have been woven together and then sealed and finished with as super hard resin. And once it’s completed you have an insanely strong material that makes steel look weak and brittle. It’s reported to be as much as ten times stronger than steel, yet is lighter than fiberglass.
> The Renew Sports Car was recently featured on an episode of Jay Leno’s Garage [...]
> Dietzen got the idea from none other than Henry Ford who back in 1941 advocated that Ford should build everything they possibly could out of plant material. Which makes sense because that’s obviously going to drastically reduce material costs which should in turn, reduce the overall price of the car. And thanks to modern technology, Dietzen managed to figure out how to make it all work. If this catches on to the mainstream it could revolutionize the automotive industry across the board. He says that it takes roughly 100lbs of cannabis plants to make a car, which sounds like a lot, but when compared to how much steel and other metals used to create current automobiles that’s just a drop in the bucket.
"Is Hemp Really Stronger Than Steel? How?" re: Tensile Strength and Compression Strength exceeded that of {steel, aluminum, } https://hempfoundation.net/is-hemp-really-stronger-than-stee...
Do high carbon steel rockers rust after a couple years of average road salt?
Which carbon fibers are least health-hazardous to produce?
> It’s actually made using hemp fibers that have been woven together and then sealed and finished with as super hard resin. And once it’s completed you have an insanely strong material that makes steel look weak and brittle.
You want to know how you can quickly tell this is all bullshit?
Steel isn’t brittle. Not in any relative sense to composites. Saying something is weak and brittle… that’s dry spaghetti. Does steel failure remind you of that? Or does composite failure seem closer?
Carbon is brittle. Composites are brittle. Hemp with epoxy is going to be brittle, all relative to steel, which has plenty of pliability to it.
Are you sure you know enough about composites to repeat these hemp claims? Because if “makes steel look weak and brittle” didn’t immediately flag for you, you either know a lot more than I’ve learned in 20 years of working around composites and manufacturing or you don’t and just want hemp to be cool.
Hemp might be amazing. But you are posting nonsense that it seems like you have some desire to believe for some reason.
Sure maybe it’ll get out of the lab just around the same time graphene does.
Are you contesting the claims here that hemp biocomposite (fiber with resin) has greater tensile strength and compressive strength than steel and aluminum (and than carbon fiber with a sustainable binder)?
If this were OT, we could reference ScholarlyArticles which describe experiments which return scalar intervals for: Tensile and Compressive Strength, Melting point / deployed heat resistance, production cost in real dollars, carbon cost (reduction in carbon tax credits because unnecessary with alternate sustainable inputs), resistance to corrosion due to sodium chloride, [in-space without water] repairability, magnetizability, water transport and filtration cost, and other factors; with a fair standard panel for relative comparison.
Fyi, fiber melting points:
Hemp: 150 °C
Flax, saline treated: 153 °C
Basalt: 1,500 °C
Carbon: 3,652 °C
Ask HN: Is it worth it to learn C to better understand Python?
For those who learned C after learning Python, was it worth it? Did it help you to better understand Python?
In college I used to tutor students in computer science. Mid-way through my college career my school switched the curriculum to teach Python instead of Java. I found that the students who took Python lacked what I considered a basic appreciation for and understanding of how the language actually ‘does’ something.
For instance, when you only need to write two curly braces to create a “dictionary” (of course behind the scenes this is a hash table) many of the nuances of that data structure are hidden from you. You have no idea that accesses are O(1) amortized but worst case are O(n). Even if you do, maybe you don’t understand why. You have no idea that there is reallocation and collision handling code working behind the scenes constantly.
Python also lacks a number of nuances I consider it important to be aware of. You live your entire Python life blissfully unaware that everything is a pointer, and the impact that has on performance. You never come across static typing, or that it is possible to check your code for some level of syntactic correctness before it runs (i.e. Python is interpreted rather than compiled).
I think from this standpoint, it is worth learning a language like C to gain an appreciation for how truly complex a lot of the things Python allows you to do, are. I think that anything which helps you better understand what it is you are doing, how you are doing it, and why you are doing it that way, will make you better at doing that thing.
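To make the hidden machinery concrete: here's a small sketch (not CPython's actual internals, just an illustration) of why dict lookups are O(1) amortized but degrade toward O(n) when every key collides in the hash table:

```python
# A dict hashes keys into buckets; a pathological hash forces every key
# into the same bucket, so lookups must probe a long collision chain.
import timeit

class BadHash:
    """Every instance hashes to the same bucket, forcing collisions."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42              # constant hash -> every insert collides
    def __eq__(self, other):
        return self.n == other.n

N = 500
good = {i: None for i in range(N)}          # distinct int hashes: O(1) lookups
bad = {BadHash(i): None for i in range(N)}  # all colliding: O(n) probes

t_good = timeit.timeit(lambda: good[N - 1], number=1000)
t_bad = timeit.timeit(lambda: bad[BadHash(N - 1)], number=1000)
assert t_bad > t_good  # the collision chain forces a linear scan
```

In C you'd have to build (or pick) the hash table, the reallocation policy, and the collision handling yourself, which is exactly the appreciation the `{}` syntax hides.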
From https://news.ycombinator.com/item?id=28709239 :
> From "Ask HN: Is it worth it to learn C in 2020?" https://news.ycombinator.com/item?id=21878372 : (which discusses [bounded] memory management)
> There are a number of coding guidelines e.g. for safety-critical systems where bounded running time and resource consumption are essential. *These coding guidelines and standards are basically only available for C, C++, and Ada.*
> awesome-safety-critical > Software safety standards: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
> awesome-safety-critical > Coding Guidelines: https://awesome-safety-critical.readthedocs.io/en/latest/#co...
Rancher Desktop 1.0
From https://rancherdesktop.io/ :
> Kubernetes and Container Management on the Desktop:
> An open-source desktop application for Mac, Windows and Linux.
> Rancher Desktop runs Kubernetes and container management on your desktop. You can choose the version of Kubernetes you want to run. You can build, push, pull, and run container images using either containerd or Moby (dockerd). The container images you build can be run by Kubernetes immediately without the need for a registry.
How does Rancher Desktop compare to Docker Desktop in terms of e.g. k8s support?
I think they ship k3s, which is sort of vanilla. It works great in my (limited) experience.
A key point around Rancher Desktop's k8s (actually k3s) support that stands out is that it allows you to use any k3s version and swap between them. This is great for those that want to test their workloads on different k8s versions right on their desktop.
Systemd by Example
This is outstanding... I really wish I had had this when learning systemd, as it would have saved me _hours_. Let's be real - a lot of Linux fundamentals material is still pretty terse. Learning tools like this really smooth it out and shorten how long it takes to wrap your head around a concept. Also, I am a _huge_ advocate for hands-on learning in regards to retention.
Adding to my favorites and will be passing this on over the years - thank you for such a great resource.
If anything the biggest problem the systemd docs have is verbosity. It's all classic UNIX man pages, a billion pages of detail on every possible setting with no useful examples anywhere. Fortunately the core system is simple enough that the learning curve isn't too steep but I'd really hate to try and learn it from the official docs. They don't even apply CSS to the HTML versions of the docs.
Wikipedia: https://en.wikipedia.org/wiki/Systemd
Web: https://systemd.io/
Src: https://github.com/systemd/systemd
Systemd manpage index: https://www.freedesktop.org/software/systemd/man/
https://www.freedesktop.org/software/systemd/man/systemd.htm... :
man 1 systemd
man systemd
man init
man systemctl
man journalctl
man systemd.timer
man systemd-resolved
The Arch Linux wiki docs for systemd are succinct:
https://wiki.archlinux.org/title/systemd
Fedora docs > "Understanding and administering systemd": https://docs.fedoraproject.org/en-US/quick-docs/understandin...
RHEL7 System Administrator’s Guide > "Chapter 10. Managing Services with SystemD" https://access.redhat.com/documentation/en-us/red_hat_enterp...
This isn't a knock but it's exactly what I was talking about and why I find OP's learning tool to be so valuable.
Lots of folks learn by example and hands-on labs. Personally, I'd much rather learn the basic ropes by jumping into a tool like OP's vs. finding/digging through all of these resources. I'll also point out that you likely already know a lot about systemd, and were able to pull/filter these resources much more easily than someone completely new to the concepts could.
To illustrate further: vim is another tool that has outstanding learning resources, everything from very quick "hey get started" examples docs all the way up to adventure games. If I had to go back and relearn vim I would absolutely do it this way vs. digging on man pages like when I was a kid in the 90's. Personally, I learn by doing.
---
Overall - OP's thingy is what I would call a "rich interactive learning tool." It's anecdotal, and obvious projection - but _for me_, interactive learning tools optimize the time it takes to fully "grok" a subject from scratch vs. jumping into a bunch of docs/man pages.
I often find the `## Usage Examples` heading in manpages to be most helpful, too.
~Examples as Integration Tests as executable notebooks with output and state assertions may incur less technical debt.
How to manage containers with [MicroShift] k8s/k3d with systemd might be a good example.
Read the Source, Read the Docs, Write the Docs, post the Link to the actual Source of the docs: https://github.com/systemd/systemd/blob/main/docs/index.md
Systemd man pages source: https://github.com/systemd/systemd/tree/main/man
Also https://www.freedesktop.org/software/systemd/man/systemd.mou... https://github.com/systemd/systemd/blob/main/man/systemd-mou... :
man systemd.mount
Well, we also have a very nice manpage viewer.
I think systemd.directives(7) is an often overlooked manpage.
FWIW, the man.vim vim plugin does grep and some syntax highlighting. https://github.com/vim-utils/vim-man
Pwnkit: Local Privilege Escalation in polkit's pkexec (CVE-2021-4034)
What a glorious little bug. They're trying to scan arguments, and have a loop that starts with (effectively) argv[1]. But if argv is NULL, the loop terminates immediately --- with the maximum argument still set to 1, an out-of-bounds dereference to argv[1] that ends up pointing into the environment. Just beautiful.
I'm surprised that the Linux kernel doesn't prevent execve with argc == 0. I know a lot of programs that would break on that too.
Changing that now would break the core tenet of "never break userspace".
I actually wonder what it could actually break.
argv[0] always exists under normal circumstances: it's either the physical path of the executable or the symlink path. Barring accidents like exec'ing a program from code and forgetting to pass argv, it couldn't happen at all. And in fact, most cross-platform programs and shells just assume argv[0] is the same as the executable path (and don't give you an option to change it, because it isn't possible on all platforms).
I have personally written (lazy) code that calls execve with argv=NULL. So it'd break that.
So, does all C/C++ code on {Linux,} but not {OpenBSD,} that handles argc/argv need to at least exit() under the conditions added by the roughly 8 lines in main() in the patch for this issue?: https://gitlab.freedesktop.org/polkit/polkit/-/commit/a2bf5c...
if (argc < 1) {
    exit(126);
}
if (argv[n] == NULL) {
    exit(128);
}
// But this could never ever happen?
if (argv == NULL) {
    exit(127);
}
MicroShift
> Functionally, MicroShift repackages OpenShift core components into a single binary that weighs in at a relatively tiny 160MB executable (without any compression/optimization).
Sorry, 160meg isn't 'relatively tiny'.
I have many thousands of machines running in multiple datacenters and even getting a ~4mb binary distributed onto them without saturating the network (100mbit) and slowing everything else down, is a bit of a challenge.
Edit: It was just over 4mb using .gz and I recently changed to use xz, which decreased the size by about 1mb and I was excited about that given the scale that I operate at.
> I have many thousands of machines running in multiple datacenters and even getting a ~4mb binary distributed onto them without saturating the network (100mbit) and slowing everything else down, is a bit of a challenge.
May I suggest CAS (content-addressable storage) or something similar for distributing it instead? I've had good success using torrents to distribute large binaries to a large fleet of servers (that were also in close physical proximity to each other, in clusters with some more distance between them) relatively easily.
Thanks for the suggestion, but as weird as this sounds, we also don't have central servers to use at the data centers for this.
The machines don't all need to be running the same version of the binary at the same time, so I took a simpler approach, which is that each machine checks for updates on a random schedule over a configurable amount of time. This distributes the load evenly and everything becomes eventually consistent. After about an hour everything is updated without issues.
I use CloudFlare workers to cache at the edge. On a push to master, the binary is built in Github CI and a release is made after all the tests pass. There is a simple json file where I can define release channels for a specific version for a specific CIDR (also on a release ci/cd as well, so I can validate the json with tests). I can upgrade/downgrade and test on subsets of machines.
The machines on their random schedule hit the CF worker which checks the cached json file and either returns a 304 or the binary to install depending on the parameters passed in on the query string (current version, ip address, etc.). Binary is downloaded, installed and then it quits and systemd restarts the new version.
Works great.
Notes re: distributed temporal Data Locality, package mirroring, and CAS such as IPFS: "Draft PEP: PyPI cost solutions: CI, mirrors, containers, and caching to scale" (2020) https://groups.google.com/g/pypa-dev/c/Pdnoi8UeFZ8 https://discuss.python.org/t/draft-pep-pypi-cost-solutions-c...
apt-transport-ipfs: https://github.com/JaquerEspeis/apt-transport-ipfs
Gx: https://github.com/whyrusleeping/gx
IPFS: https://wiki.archlinux.org/title/InterPlanetary_File_System
Systemd service sandboxing and security hardening (2020)
This is a pretty lax policy IMHO, you can go much farther. These days I usually start with this, it's much more strict:
https://news.ycombinator.com/item?id=29976096
Or simply follow whatever `systemd-analyze security` recommends, just make sure you run it on a system with recent systemd.
Which distro has the best out-of-the-box output for:
systemd-analyze security
Is there a tool like `audit2allow` for systemd units?
selinux/python/audit2allow/audit2allow: https://github.com/SELinuxProject/selinux/blob/master/python...
https://stopdisablingselinux.com/
> Which distro has the best out-of-the-box output
I haven't seen any difference between distributions with the same systemd version. Anything with a recent one should do fine. More recent than RHEL8, mind you (which is on systemd 239): for example, a syscall allow/deny analysis is buggy there and asks you to enable some protections, and then disable them. The same unit is analyzed correctly on my desktop with v250 (I use the popular rolling release distribution).
I haven't seen anything like audit2allow. It's probably not especially necessary because of the difference in philosophies: SELinux is deny by default, while in systemd you're playing whack-a-mole anyway, and are expected to add directives one by one until the application stops working. Unit logs usually make it obvious if something was denied.
The usual way I've seen (and do myself) is to just let the process be killed and have its coredump taken, then `coredumpctl gdb $process_name -A '-ex "print $rax" -ex "quit"'` to get the syscall number, then check `systemd-analyze syscall-filter` for whether I want to allow just that one syscall or the whole group it's in.
> The usual way I've seen (and do myself) is to just let the process be killed and have its coredump taken, then `coredumpctl gdb $process_name -A '-ex "print $rax" -ex "quit"'` to get the syscall number, then check `systemd-analyze syscall-filter` for whether I want to allow just that one syscall or the whole group it's in.
Another approach would be to set SystemCallLog= to be the opposite of SystemCallFilter= (negate each group with ~) and then you'll see the call (and caller) in the journal.
FWIU, e.g. sysdig is justified atop whichever MAC system.
In the SELinux MAC system on RHEL and Debian, in /etc/selinux/config, you have SELINUXTYPE=minimal|targeted|mls. RHEL (CentOS and Rocky Linux) and Fedora ship SELINUXTYPE=targeted out of the box. The compiled rulesets in /etc/selinux/targeted are generated when [...].
With e.g. gnome-system-monitor on a machine with SELINUX=permissive|enforcing, you can right-click the column header in the process table to also display the 'Security context' column, which is also visible with e.g. `ps -Z`. The stopdisablingselinux video is a good SELinux tutorial.
I'm out of date on Debian/Ubuntu's policy set, which could also probably almost just be sed'ed from the current RHEL policy set.
> * SELinux is deny by default, while in systemd you're playing whack-a-mole anyway, and are expected to add directives one by one until the application stops working. Unit logs usually make it obvious if something was denied.*
DENY if not unconfined is actually the out-of-the-box `targeted` config on RHEL and Fedora. For example, Firefox and Chrome currently run as unconfined processes. While decent browsers do do their own process sandboxing, SELinux and/or AppArmor and/or 'containers' with a shared X socket file (and drop-privs and setcap and cgroups and namespaces fwtw) are advisable atop really any process sandboxing?
Given that the task is to generate a hull of rules that allows the observed computational workload to complete with least privilege: if you enable every rule and log every process hitting every rung on the way down while running integration tests that approximate the workload, you should end up with enough rule violations in the log to generate a rule/policy set even dumbly, without the application developer's expertise around to advise on which access violations to allow.
From https://github.com/draios/sysdig :
> "Sysdig instruments your physical and virtual machines at the OS level by installing into the Linux kernel and capturing system calls and other OS events. Sysdig also makes it possible to create trace files for system activity, similarly to what you can do for networks with tools like tcpdump and Wireshark.
Probably also worth mentioning: "[BETA] Auditing Sysdig Platform Activities" https://docs.sysdig.com/en/docs/developer-tools/beta-auditin...
A bit of SELinux:
# /etc/selinux/config
SELINUXTYPE=targeted
SELINUX=permissive

sudo touch /.autorelabel  # `restorecon /` at next boot
sudo reboot
sudo setenforce 1  # redundant

sudo aureport --avc
journalctl --system -u auditd
journalctl --system -o json-seq --reverse
journalctl --system --grep "AVC" --reverse
journalctl -fa _TRANSPORT=audit
journalctl -fa _TRANSPORT=audit --grep AVC
journalctl -a _TRANSPORT=audit --grep 'AVC avc: denied' -o json | pyline -m json 'json.dumps(json.loads(l), indent=2, sort_keys=True)'
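The last one-liner can also be done with the stdlib alone. A hedged sketch (pyline is a separate tool; the exact AVC message text here is an invented example, real denials follow a similar but not identical shape):

```python
# Filter SELinux AVC denials out of `journalctl -o json` output:
# each line is one JSON journal record with a MESSAGE field.
import json

def avc_denials(json_lines):
    """Yield parsed journal records whose MESSAGE looks like an AVC denial."""
    for line in json_lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if "avc:  denied" in record.get("MESSAGE", ""):
            yield record

# Fake journal line for illustration; real input would come from
#   journalctl -a _TRANSPORT=audit -o json
sample = ['{"MESSAGE": "AVC avc:  denied  { read } for pid=123", "_TRANSPORT": "audit"}']
assert [r["_TRANSPORT"] for r in avc_denials(sample)] == ["audit"]
```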
Why isn't there a universal data format for résumés?
Looks like there's an hresume microformat but https://schema.org/Person and https://schema.org/Occupation are probably the way to mark up resumes as JSONLD.
From https://stackoverflow.com/questions/51315725/resume-work-his... :
> Schema.org's Occupation example 4, illustrates how to use Role and hasOccupation to associate an array of work history, like so:
There's already a https://schema.org/JobPosting .
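A minimal sketch of what that markup could look like as JSON-LD, following the Role/hasOccupation pattern from schema.org's Occupation example 4 (the names and dates are invented placeholders):

```python
# Build a schema.org Person with one work-history entry: the Role node
# carries the dates, and wraps the Occupation it applies to.
import json

resume = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "hasOccupation": [
        {
            "@type": "Role",
            "startDate": "2019-03",
            "endDate": "2022-01",
            "hasOccupation": {
                "@type": "Occupation",
                "name": "Site Reliability Engineer",
            },
        }
    ],
}

# This string would go in a <script type="application/ld+json"> block.
jsonld = json.dumps(resume, indent=2, sort_keys=True)
assert '"@type": "Role"' in jsonld
```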
LAN-port-scan forbidder, browser addon to protect private network
And also "Private Network Access: introducing preflights" (2022) https://developer.chrome.com/blog/private-network-access-pre...
> Chrome is deprecating direct access to private network endpoints from public websites as part of the Private Network Access (PNA) specification.
> Chrome will start sending a CORS preflight request ahead of any private network request for a subresource, which asks for explicit permission from the target server. This preflight request will carry a new header, `Access-Control-Request-Private-Network: true`, and the response to it must carry a corresponding header, `Access-Control-Allow-Private-Network: true`
> The aim is to protect users from cross-site request forgery (CSRF) attacks targeting routers and other devices on private networks. These attacks have affected hundreds of thousands of users, allowing attackers to redirect them to malicious servers.
What would a browser setting to just block all PNA requests (`DENY * TO *`, i.e. to {192.168.0.1, .1.1, .100.1,}), regardless of the appropriate new HTTP headers, actually prevent a normal user from doing?
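For reference, the preflight handshake quoted above can be sketched from the server side. The header names are the ones from the PNA proposal; the handler function and its return shape are hypothetical:

```python
# A device on the private network opts in to Private Network Access by
# answering the OPTIONS preflight with the Allow-Private-Network header.
def pna_preflight_response(request_headers):
    """Return response headers approving a private-network preflight, or {}."""
    if request_headers.get("Access-Control-Request-Private-Network") == "true":
        return {
            "Access-Control-Allow-Origin": request_headers.get("Origin", "*"),
            "Access-Control-Allow-Private-Network": "true",
        }
    return {}  # not a PNA preflight; fall back to normal CORS handling

headers = pna_preflight_response(
    {"Access-Control-Request-Private-Network": "true",
     "Origin": "https://public.example"})
assert headers["Access-Control-Allow-Private-Network"] == "true"
```

A router that never sends the allow header would simply have its responses blocked by the browser, which is the point of the deprecation.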
Ask HN: What are the best books for professional effectiveness?
What books have helped you be more effective at work that apply to most “knowledge work” jobs?
Getting Things Done by David Allen was very helpful to me. It's barely a book - if you skim over the cruft it's more like a pamphlet. I don't follow it religiously but some of his ideas have made a big difference in my productivity.
There's a flowchart of the GTD procedures with something like standard flowchart symbols in [1] PDF, [2] Search query (/?) [3] PNG / MP4 gif. But not yet a meme MP4 GIF, FWIU.
Given that URLs are the workflow improvement of the web, see which of these citations is most useful to you:
1: Heylighen, Francis, and Clément Vidal. "Getting things done: the science behind stress-free productivity." Long Range Planning 41, no. 6 (2008): 585-605.
2: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=Get...
3: https://www.researchgate.net/profile/Francis-Heylighen/publi...
4: https://westurner.github.io/wiki/workflow
5: https://github.com/westurner/wiki/blob/master/workflow.rest
5 is editable by the author in their fork (the original), which may include local and remote branches, Pull Requests, and ("#DevOpsSec") Continuous Integration build task automation that help collaborators collaborate on [open source] software in free public git repos.
HTTP Message Signatures
Is this intended to include the featureset of Signed HTTP Exchanges (SXG) as well?
I was wondering how much overlap the two technologies have too.
What I'd really like to see supported is a way for a web app to specify a public key and a version number, and for the browser to then prompt you every subsequent time you access that page if the key or version number has changed. This would effectively allow web apps to use the same security model as desktop apps, i.e. TOFU.
Arguably, though, auto-updating desktop apps are less secure than even TOFU, but I don't know that users can be expected to judge whether v1.2.3 is more secure than v1.2.2, so perhaps the prompt doesn't provide any practical benefit.
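The pinning model described above can be sketched as trust-on-first-use in a few lines; the function name and in-memory store are hypothetical, and a browser would persist the pins and surface the "CHANGED" case as the prompt:

```python
# TOFU pinning: remember the first key fingerprint seen per origin,
# and flag any subsequent change for the user to approve or reject.
import hashlib

pins = {}  # origin -> first-seen key fingerprint

def check_key(origin, public_key_bytes):
    """Return 'first-use', 'ok', or 'CHANGED' for this origin's key."""
    fp = hashlib.sha256(public_key_bytes).hexdigest()
    if origin not in pins:
        pins[origin] = fp
        return "first-use"
    return "ok" if pins[origin] == fp else "CHANGED"

assert check_key("app.example", b"key-v1") == "first-use"
assert check_key("app.example", b"key-v1") == "ok"
assert check_key("app.example", b"key-v2") == "CHANGED"  # prompt the user
```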
Is this solveable by including a cache key in the URL or other client-side storage?
Well, there is the SecureBookmarks trick[0], which uses integrity hashes and Data URLs, so you're right that signatures aren't strictly needed (especially at the HTTP level). I think the difficult part is the UX, and how to handle the failure modes, which is why browsers might reasonably be reluctant to support something like this.
Are there additional HTTP status code-like error codes that need to be specified?
What [same origin, SXG,] failure modes are there?
From "HTTP 402: Payment Required" https://news.ycombinator.com/item?id=22214156 :
> The new W3C Payment Request API [4] makes it easy for browsers to offer a standard (and probably(?) already accessible) interface for the payment data entry screen, at least.
> https://www.w3.org/TR/payment-request/
The failure modes I was imagining are things like what happens if you visit a site and it is no longer serving signed content any more, or it's serving content signed by a key you haven't seen before? Either the user needs to be able to decide (based on information received out-of-band) which keys to trust, or you have to trust a site to never lose control of its keys (which reintroduces all the problems of HPKP, like ransoming).
Even in the happy path, there are questions of how you present to the user that the content has been signed (and by which key), separately from the guarantees that the TLS transport gives. Does there need to be a UI for viewing the key details, and for approving/rejecting upgrades to the web app?
You said HPKP and I misread it
Re: HKP, WKD, GPG Linked Data signatures, Keybase+Zoom and ACME https://westurner.github.io/hnlog/#comment-28814802
https://westurner.github.io/hnlog/#comment-26127879
Ctrl-F "trillian" re: CT cert grant and revocation events on a blockchain
> Does there need to be a UI for viewing the key details, and for approving/rejecting upgrades to the web app?
blockcerts/cert-verifier-js ?
> Does there need to be a UI for viewing the key details, and for approving/rejecting upgrades to the web app?
There do need to be standards for displaying such errors and requesting key verification in the browser. e.g. DNSSEC and DoH/DoT errors should also propagate up to the browser eh? How do web3 browsers handle keychains?
https://github.com/blockchain-certificates/cert-verifier-js#...
From (an obscure comment with pictures on) "Roadmap update for TUF support " https://github.com/pypa/warehouse/issues/5247#issuecomment-9... :
> Only users with package release permissions can create a new SoftwareRelease record for that project
You can log hashes to sigstore now, which is a centralized db supported by The Linux Foundation. https://sigstore.dev/ :
> How sigstore works: sigstore is a set of tools developers, software maintainers, package managers and security experts can benefit from. Bringing together free-to-use open source technologies like Fulcio, Cosign and Rekor, it handles digital signing, verification and checks for provenance needed to make it safer to distribute and use open source software.
> A standardized approach: This means that open source software uploaded for distribution has a stricter, more standardized way of checking who’s been involved, that it hasn’t been tampered with. There’s no risk of key compromise, so third parties can’t hijack a release and slip in something malicious.
> Building for future integrations: With the help of a working partnership that includes Google, the Linux Foundation, Red Hat and Purdue University, we’re in constant collaboration to find new ways to improve the sigstore technology, to make it easy to adopt, integrate and become a long-lasting standard.
But then DIDs and ld-proofs (with at least the current trust root in a trustless DLT of some sort) are even more standardized.
Software Releases, [Academic, Professional, Medical,] Credentials, Legal Documents, Server Certs, ScholarlyArticles: all of these things can be signed and may already be listed in the Use Cases documents for W3C DID Decentralized Identifiers [1] and W3C VC Verifiable Credentials [2] which are summarized in context to Keybase here: https://news.ycombinator.com/item?id=28814802
[1] https://www.w3.org/TR/did-use-cases/
[2] https://www.w3.org/TR/vc-use-cases/
Notably: DNSSEC errors do not propagate up to the browser; when DNSSEC fails, sites simply fall off the Internet as if they never existed. It's a major fault of the DNSSEC design.
Unfortunately, because not all domains have DNSSEC DNS records yet, a strict resolver that requires DNSSEC signatures will fail to resolve those domains unless it's configured to e.g. "allow-downgrade" to non-DNSSEC-signed records for any lookup.
FWIR, the same is basically true of DoH and DoT: if the resolver isn't configured to ~allow-downgrade, DNS will fail to resolve when connected to e.g. a captive portal hotspot; and there's no indication in the browser that DoH/DoT aren't working.
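For systemd-resolved specifically, the allow-downgrade behavior mentioned above is an actual resolved.conf setting; a minimal fragment (the commented alternatives show the strict modes):

```ini
# /etc/systemd/resolved.conf
[Resolve]
# Validate DNSSEC where zones are signed, but fall back to plain DNS
# for unsigned zones and broken upstreams (e.g. captive portals):
DNSSEC=allow-downgrade
#DNSSEC=yes                 # strict: lookups fail if validation fails
DNSOverTLS=opportunistic    # use DoT when the upstream supports it
#DNSOverTLS=yes             # strict: refuse plaintext DNS entirely
```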
How do DNS resolution APIs need to change to accommodate basic DNSSEC and DoH/DoT error handling?
From https://news.ycombinator.com/item?id=28543911 :
```
From https://blog.cloudflare.com/automatic-signed-exchanges/ :
> The broader implication of SXGs is that they make content portable: content delivered via an SXG can be easily distributed by third parties while maintaining full assurance and attribution of its origin. Historically, the only way for a site to use a third party to distribute its content while maintaining attribution has been for the site to share its SSL certificates with the distributor. This has security drawbacks. Moreover, it is a far stretch from making content truly portable.
> In the long-term, truly portable content can be used to achieve use cases like fully offline experiences. In the immediate term, the primary use case of SXGs is the delivery of faster user experiences by providing content in an easily cacheable format. Specifically, Google Search will cache and sometimes prefetch SXGs. For sites that receive a large portion of their traffic from Google Search, SXGs can be an important tool for delivering faster page loads to users.
> It’s also possible that all sites could eventually support this standard. Every time a site is loaded, all the linked articles could be pre-loaded. Web speeds across the board would be dramatically increased.
"Signed HTTP Exchanges" draft-yasskin-http-origin-signed-responses https://wicg.github.io/webpackage/draft-yasskin-http-origin-...
"Bundled HTTP Exchanges" draft-yasskin-wpack-bundled-exchanges https://wicg.github.io/webpackage/draft-yasskin-wpack-bundle... :
> Web bundles provide a way to bundle up groups of HTTP responses, with the request URLs and content negotiation that produced them, to transmit or store together. They can include multiple top-level resources with one identified as the default by a primaryUrl metadata, provide random access to their component exchanges, and efficiently store 8-bit resources.
From https://web.dev/web-bundles/ :
> Introducing the Web Bundles API. A Web Bundle is a file format for encapsulating one or more HTTP resources in a single file. It can include one or more HTML files, JavaScript files, images, or stylesheets.
> Web Bundles, more formally known as Bundled HTTP Exchanges, are part of the Web Packaging proposal.
> HTTP resources in a Web Bundle are indexed by request URLs, and can optionally come with signatures that vouch for the resources. Signatures allow browsers to understand and verify where each resource came from, and treats each as coming from its true origin. This is similar to how Signed HTTP Exchanges, a feature for signing a single HTTP resource, are handled.
```
Asmrepl: REPL for x86 Assembly Language
This could be implemented with Jupyter notebooks as a Jupyter kernel or maybe with just fancy use of explicitly returned objects that support the (Ruby-like, implicit) IPython.display.display() magic.
IRuby is the Jupyter kernel for Rubylang:
iruby/display: https://github.com/SciRuby/iruby/blob/master/lib/iruby/displ...
iruby/formatter: https://github.com/SciRuby/iruby/blob/master/lib/iruby/forma...
More links to how Jupyter kernels and implicit display() and DAP: Debug Adapter Protocol work: "Evcxr: A Rust REPL and Jupyter Kernel" https://news.ycombinator.com/item?id=25923123
"ENH: Mixed Python/C debugging (GDB,)" https://github.com/jupyterlab/debugger/issues/284
... "Ask HN: How did you learn x86-64 assembly?" https://news.ycombinator.com/item?id=23930335
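To make the display() idea concrete, here's a minimal sketch of the rich-repr protocol that Jupyter front-ends use; `RegisterDump` is a hypothetical object an asm-REPL kernel might return instead of plain text:

```python
# Jupyter front-ends render returned objects through "rich repr" methods:
# IPython.display.display() probes for e.g. _repr_html_() and falls back
# to __repr__() in plain-text contexts. No IPython import is needed to
# implement the protocol.
class RegisterDump:
    def __init__(self, regs):
        self.regs = regs

    def __repr__(self):                      # plain-text fallback
        return " ".join(f"{k}={v:#x}" for k, v in self.regs.items())

    def _repr_html_(self):                   # picked up by display()
        rows = "".join(
            f"<tr><td>{k}</td><td>{v:#x}</td></tr>"
            for k, v in self.regs.items())
        return f"<table>{rows}</table>"

dump = RegisterDump({"rax": 0x2A, "rbx": 0x0})
assert "rax=0x2a" in repr(dump)
assert "<table>" in dump._repr_html_()
```

A kernel that returns such objects gets HTML tables in notebooks and readable text in a terminal REPL for free.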
On yak shaving and <md-block>, a new HTML element for Markdown
Oh dear. Markdown has no standard grammar or standard way of rendering.
This website lags while I scroll on mobile.
I remember when the internet used to be fast and responsive. Probably just getting old.
there is the commonmark specification https://spec.commonmark.org/0.30/
I work a lot with CommonMark and Hoedown for work. There's no formal grammar for parsing Markdown, unfortunately. The closest thing to what the parent comment is talking about is Pandoc, which most people are afraid of because Haskell is scary and the markdown format it uses is strange.
Also all of the parsers for md are very complicated.
Jupyter-book depends upon MyST-NB (MyST Notebooks) which depends upon myst-parser which depends upon markdown-it-py (which is a port of markdown-it (TypeScript)) for parsing [MyST] Markdown.
https://github.com/executablebooks/myst-parser :
> MyST is a rich and extensible flavor of Markdown meant for technical documentation and publishing.
> MyST is a flavor of markdown that is designed for simplicity, flexibility, and extensibility. This repository serves as the reference implementation of MyST Markdown, as well as a collection of tools to support working with MyST in Python and Sphinx. It contains an extended CommonMark-compliant parser using markdown-it-py, as well as a Sphinx extension that allows you to write MyST Markdown in Sphinx.
https://github.com/executablebooks/markdown-it-py :
> Follows the CommonMark spec for baseline parsing; Configurable syntax: you can add new rules and even replace existing ones; Pluggable: Adds syntax extensions to extend the parser (see the plugin list), High speed (see our benchmarking tests) ; Safe by default
https://github.com/markdown-it/markdown-it :
> Follows the CommonMark spec + adds syntax extensions & sugar (URL autolinking, typographer), Configurable syntax!; You can add new rules and even replace existing ones; High speed; Safe by default; Community-written plugins and other packages on npm
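As a small illustration of the pluggability described above, markdown-it-py's basic API looks roughly like this (a sketch that degrades gracefully if the package isn't installed; the rule name `"table"` is one of its built-in optional syntax rules):

```python
try:
    from markdown_it import MarkdownIt  # pip install markdown-it-py

    md = MarkdownIt("commonmark")  # baseline: strict CommonMark parsing
    md.enable("table")             # opt in to a syntax extension (GFM tables)
    html = md.render("# Hello\n\nSome *emphasis*.")
except ImportError:
    html = None  # markdown-it-py not available in this environment
```

The "configurable syntax" claim above is exactly this `enable`/`disable` mechanism: you start from a preset (e.g. `"commonmark"`) and toggle individual parsing rules on or off.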
Thoughts on “E-Readers” (2009)
> But MOST importantly the way the user interacts with the interface would re-create the ease of use rendered by our interaction with print media. This last part, about the user interface, is why I believe Apple is best positioned to tackle this problem.
> In one year's time we'll see. 2010 should be the year paper learns of it's destiny, the year we reach a tipping point on our way towards bits from the momentum of atoms.
This is of course a prescient prediction of the iPad. But what it also reminded me of is a device that had already been shown publicly at the time: The Microsoft Courier.
Microsoft was working on a dual-screen tablet that was essentially a notebook in computer form, complete with custom OS.[1] In the end it never happened.
But I think why this hasn't happened (although there are a few color e-ink devices available from Chinese companies) is that the B&W e-ink devices like the Kindle handle reading normal books just fine. The only cases where color would really be needed would be for comic books, art books, and to some extent scientific papers with color figures.
Small monochrome e-readers are great for reading pure text, e.g fiction.
But they’re quite hopeless for textbooks with lots of images/diagrams/tables and more complex layouts. Zooming and scrolling are hopeless on an e-ink screen.
That's why we don't use ink devices for those kinds of documents. I have a range of tools that I use for various use cases. In depth reading fiction - Kindle. Casual browsing of large documents with complex layouts and images - iPad. Working on documents - laptop/desktop.
I would almost never mix linear reading with those other kinds of documents so I don't need devices that handle both kinds of documents.
https://en.m.wikipedia.org/wiki/Adam_tablet
> The Adam is set to be the first Android device marketed to contain Pixel Qi's low-power, dual-mode display
Was it that there wasn't a market for Pixel Qi dual-mode displays?
IDK how well you can scroll a (monochrome) PDF textbook on a PineNote development unit yet, given that the eInk display driver didn't work yet last I heard. https://goodereader.com/blog/electronic-readers/high-end-e-i...
Nobody reads the whole PDF on even a large phone in landscape mode. There's a new server-side liquid PDF reflowing solution that could help solve the fixed font sizes and margin issues on mobile devices. ePub is basically just HTML, by comparison. But certain walled gardens feel it's necessary to convert ePub to MOBI, rather than just integrate one of a number of existing open source e-reader apps with support for multiple document, eBook, and textbook formats.
Ask HN: Why don’t startups share their cap table and/or shares outstanding?
Start-ups expect you to join as the 4th employee, work incredibly hard for years and not have a clear picture of your compensation? Am I meant to take a 409a valuation at face value?
Is there a legal reason? If not, founders who are hiring, please know that withholding information like this is the easiest way to lose a potential hire.
From "Ask HN: Cap Table Service Recommendations" https://news.ycombinator.com/item?id=27044479 :
> Here are the "409a valuation" reviews on FounderKit: https://founderkit.com/legal/409a-valuation/reviews
> https://en.wikipedia.org/wiki/Capitalization_table
And from "Stock dilution" https://en.wikipedia.org/wiki/Stock_dilution :
> Stock dilution, also known as equity dilution, is the decrease in existing shareholders' ownership percentage of a company as a result of the company issuing new equity.[1] New equity increases the total shares outstanding which has a dilutive effect on the ownership percentage of existing shareholders. This increase in the number of shares outstanding can result from a primary market offering (including an initial public offering), employees exercising stock options, or by issuance or conversion of convertible bonds, preferred shares or warrants into stock. This dilution can shift fundamental positions of the stock such as ownership percentage, voting control, earnings per share, and the value of individual shares.
It is reasonable for shareholders to request - in dated written form - transparency, with what-if scenarios, for "let's just issue more shares to raise capital".
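The dilution mechanics quoted above reduce to simple arithmetic; a minimal sketch of such a what-if scenario (all share counts here are hypothetical):

```python
def ownership_pct(shares_owned: int, total_outstanding: int) -> float:
    """Ownership percentage given shares owned and total shares outstanding."""
    return 100.0 * shares_owned / total_outstanding

# Hypothetical: employee #4 holds 40,000 of 10,000,000 outstanding shares...
employee = 40_000
outstanding = 10_000_000
before = ownership_pct(employee, outstanding)              # 0.40%

# ...then the company issues 2,500,000 new shares to raise capital.
after = ownership_pct(employee, outstanding + 2_500_000)   # 0.32%

assert after < before  # new issuance dilutes existing holders
```

This is why the shares-outstanding number (and any authorized-but-unissued pool) matters as much as the raw option grant: the numerator is meaningless without the denominator.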
From "Ask HN: Value of “Shares of Stock options” when joining a startup" https://news.ycombinator.com/item?id=19789785 :
> There are a number of options/equity calculators:
> https://tldroptions.io/ ("~65% of companies will never exit", "~15% of companies will have low exits", "~20% of companies will make you money")*
> https://comp.data.frontapp.com/ "Compensation and Equity Calculator"
> http://optionsworth.com/ "What are my options worth?"
> http://foundrs.com/ "Co-Founder Equity Calculator"
From foundrs:
> The equity numbers assume a typical 4-year vesting for all founders including the CEO, with no cliff. It also assumes that no significant salary is provided to any of the co-founders (if that is wrong, you are entering into an employee relationship, not a co-founder relationship). If a founder leaves, vesting applies and they forfeit the shares that have not vested yet.
Vesting > Ownership in startup companies: https://en.wikipedia.org/wiki/Vesting#Ownership_in_startup_c...
Toxiproxy is a framework for simulating network conditions
I wrote the initial version of Toxiproxy back in 2014, but Jacob Wirth took it way beyond during his internship at Shopify. It came out of a need for writing integration tests for resiliency work we did at Shopify back then. [1] We didn't want someone to suddenly re-introduce a hard dependency on e.g. Redis on Shopify's storefronts. The initial prototype was a shell script that used lsof(1) and gdb(1) to close the file descriptor of the various connections. But, besides being dodgy, we needed to also simulate latency and make sure it worked on everyone's macOS laptops (otherwise e.g. tc(1) would have been intriguing). I wrote a little bit more of the history of Toxiproxy on Twitter. [2] It's stable and has proxied everything in dev and CI at Shopify for over half a decade.
[1]: https://shopify.engineering/building-and-testing-resilient-r...
[2]: https://twitter.com/Sirupsen/status/1455622640727728137
What a useful tool for resilience engineering.
https://github.com/dastergon/awesome-chaos-engineering#notab... does list toxiproxy.
Any general pointers for handling network connectivity issues (from any OSI layer) in client and server apps?
Many apps lack 'pending in outbox' functionality that we expect from e.g. email clients.
- [ ] Who could develop a set of reference toxiproxy 'test case mutators' (?) for simulating typical #DisasterRelief connectivity issues?
(In Python, Pytest + Hypothesis + Toxiproxy-python would be useful.)
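A sketch of what such a disaster-relief test mutator could look like against Toxiproxy's HTTP control API (the payload shapes follow the Toxiproxy README; the proxy name, ports, and latency numbers are hypothetical, and nothing is actually sent here):

```python
import json

TOXIPROXY_API = "http://localhost:8474"  # Toxiproxy's default control port

def make_proxy(name: str, listen: str, upstream: str) -> dict:
    """Body for POST /proxies: create a TCP proxy in front of a dependency."""
    return {"name": name, "listen": listen, "upstream": upstream, "enabled": True}

def latency_toxic(latency_ms: int, jitter_ms: int = 0, toxicity: float = 1.0) -> dict:
    """Body for POST /proxies/{name}/toxics: add latency to downstream traffic."""
    return {
        "type": "latency",
        "stream": "downstream",
        "toxicity": toxicity,  # fraction of connections affected
        "attributes": {"latency": latency_ms, "jitter": jitter_ms},
    }

# Hypothetical degraded-uplink scenario: 2s latency with 500ms jitter
# in front of a local Redis.
proxy = make_proxy("redis_dr", "127.0.0.1:26379", "127.0.0.1:6379")
toxic = latency_toxic(2000, jitter_ms=500)
print(json.dumps(toxic))
```

With a Toxiproxy server running, a Hypothesis strategy could then generate `latency`/`toxicity` values and assert that the client's outbox/retry behavior holds across the whole range.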
Report on Stablecoins [pdf]
It may not be immediately obvious that all banks run their own "stablecoin": LD: "Ledger Dollars". How do "internal bank ledger dollars" differ from stablecoins?
As well, very many countries of the world "peg" their currency to the USD.
From https://www.investopedia.com/terms/c/currency-peg.asp :
> Countries will experience a particular set of problems when a currency is pegged at an overly low exchange rate. On the one hand, domestic consumers will be deprived of the purchasing power to buy foreign goods. Suppose that the Chinese yuan is pegged too low against the U.S. dollar. Then, Chinese consumers will have to pay more for imported food and oil, lowering their consumption and standard of living. On the other hand, the U.S. farmers and Middle East oil producers who would have sold them more goods lose business. This situation naturally creates trade tensions between the country with an undervalued currency and the rest of the world.
> Another set of problems emerges when a currency is pegged at an overly high rate. A country may be unable to defend the peg over time. Since the government set the rate too high, domestic consumers will buy too many imports and consume more than they can produce. These chronic trade deficits will create downward pressure on the home currency, and the government will have to spend foreign exchange reserves to defend the peg. The government's reserves will eventually be exhausted, and the peg will collapse.
> When a currency peg collapses, the country that set the peg too high will suddenly find imports more expensive. That means inflation will rise, and the nation may also have difficulty paying its debts. The other country will find its exporters losing markets, and its investors losing money on foreign assets that are no longer worth as much in domestic currency.
https://en.wikipedia.org/wiki/List_of_countries_by_exchange_...
Is this the game:
(A USD / B USD) * X = (C LD / D LD)
Who decides what the monetary bases - B USD and D LD - are? Should online games just keep issuing in-game currency? Are gift cards also stablecoins?
Perhaps "All of the World’s Money and Markets in One Visualization" could be updated to indicate which of the depicted assets are stablecoins and which are derivatives? https://www.visualcapitalist.com/all-of-the-worlds-money-and...
Are there some historical examples of centralized economic planning resulting in currency devaluation and subsequent directly resultant unrest?
Intel Extension for Scikit-Learn
https://github.com/intel/scikit-learn-intelex
CuML is similar to Intel Extension for Scikit-Learn in function? https://github.com/rapidsai/cuml
> cuML is a suite of libraries that implement machine learning algorithms and mathematical primitives functions that share compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn. For large datasets, these GPU-based implementations can complete 10-50x faster than their CPU equivalents. For details on performance, see the cuML Benchmarks Notebook.
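Both projects work by presenting (or patching in) the scikit-learn API. For the Intel extension, the documented pattern is to call `patch_sklearn()` before importing estimators; a guarded sketch (falls back to stock scikit-learn, and skips entirely if neither package is available):

```python
try:
    from sklearnex import patch_sklearn  # pip install scikit-learn-intelex
    patch_sklearn()  # swap Intel-optimized implementations in place
except ImportError:
    pass  # fall back to stock scikit-learn

try:
    # Import estimators AFTER patching so accelerated classes are picked up.
    from sklearn.cluster import KMeans
    import numpy as np

    X = np.random.default_rng(0).random((100, 2))
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    n_found = len(set(labels))
except ImportError:
    n_found = None  # scikit-learn not available in this environment
```

cuML takes the other route: instead of patching, you import the same-shaped estimator from `cuml` (e.g. `cuml.KMeans`) and the rest of the code stays unchanged.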
The Metaverse Was Lame Even Before Facebook
Weren't BBS (Bulletin Board Systems) lame before Facebook, too?
BBS > List of features https://en.wikipedia.org/wiki/Bulletin_board_system
Metaverse > History https://en.wikipedia.org/wiki/Metaverse
TIL there's a VR version of Flight Simulator 2020 and it has the best Earth model of any game. Is that a metaverse? https://en.wikipedia.org/wiki/Microsoft_Flight_Simulator_(20...
> Flight Simulator simulates the topography of the entire Earth using data from Bing Maps. Microsoft Azure's artificial intelligence (AI) generates the three-dimensional representations of Earth's features, using its cloud computing to render and enhance visuals, and real-world data to generate real-time weather and effects. Flight Simulator has a physics engine to provide realistic flight control surfaces, with over 1,000 simulated surfaces, as well as realistic wind modelled over hills and mountains
AFAIU, e.g. Microsoft Planetary Computer data is not yet integrated into any Virtual Game World Metaverses? An in-game focus on real world sustainability would help us understand that online worlds are very much connected to the real world. https://planetarycomputer.microsoft.com/applications
https://github.com/TomAugspurger/scalable-sustainability-pyd... describes how that can be done with Python code.
You are being downvoted but I agree with your point. Just because past attempts at something were “lame” does not mean the next attempt will be lame. You could just as well say that smartphones were lame before the iPhone. After all, Nokia had Symbian smartphones for some time. HTC had Windows Mobile smartphones with a touchscreen before the iPhone. But when the iPhone happened, everything fell into place and it changed the world. I am a massive skeptic of the metaverse concept, and definitely wary of any big technology company having a lot of influence/control over it (particularly Facebook). But I also don’t think a backwards-looking retrospective on something that hasn’t been done before in the same way is useful or a convincing dismissal.
Is college worth it? A return-on-investment analysis
It makes my flesh crawl to see college reduced purely to ROI, but at least they're honest that that's what they're doing.
I just hope people at least consider all of the other "outputs" of going to college when reading something like this. The ROI analysis is good data, it answers an important question. But there are many, many other important questions worth answering. (And most of them aren't quantifiable, so it's not like you could do a study on them if you wanted to.)
College can be an awful experience for some people even if they end up making good money. And it can be an excellent life experience for people who end up making dirt. I got lucky -- I had a great experience, I learned a lot of important non-academic things, I broke out of my shell, and it's pretty directly responsible for a large portion of my earnings. (And yes, the numbers for my university and major from the study match my current income and age quite well.)
But now I have a kid in high school, and I'm facing all the questions about what futures to keep open and what ones to sacrifice in the service of others. College is much more expensive now. My family was dirt poor and so I had massive financial aid; any financial aid my kid gets will have to be merit-based. Degree inflation has sucked away a lot of the value of having a degree. I can afford to support a less secure (low-risk) path through life if I think it would be better for my kid as a human being. These are not easy questions to be facing.
Increasingly I see the idea of thinking of college in any other terms as a deliberate meme designed to make people pay extremely high prices and put them in a debt trap, and that the sources of that meme like it that way.
Before college can be creating well-rounded citizenry or provide "excellent life experiences" or any of the other things people want of it, it must first provide a good ROI. This is a necessary foundation. If it is not doing that, then all the other fancy things are merely digging the debt trap in deeper because you're paying for all these things while failing to obtain a method of paying them back.
As a culture, in decades past we were used to college generally being so cheap that providing a good ROI wasn't that big a deal. It's much easier to get a good ROI out of something cheap. So we focused on the higher layers and were able to neglect the fact that the higher layers were always built on a foundation of the fact that college in general was a good ROI. Consequently we have come to misinterpret those higher level things as the purpose of college.
But it is necessary that these aspirational benefits be setting on a good foundation of good ROI to not be abusive to the customer.
It is not and must not be considered some sort of betrayal to be worried about ROI for college. It must simply be seen as an understanding of the fact that as the costs have changed, the way we must analyze college has also changed. It was always true, it's just now it has manifested in a larger way.
College wouldn’t need to provide a good roi if it were much less expensive. Just like, strictly speaking, going to a movie doesn’t provide a good roi.
It’s one thing to take four years of your life deepening your thinking ability and stretching your mind without a good roi if you come out the other end without crushing debt and prospects for employment that can sustain you, even if those prospects have nothing to do with your college experience. This is what college was like until the last four decades or so.
It’s entirely another to come out the other side in crippling debt and no prospects for any kind of sustaining job.
If it provides the same value, but costs less, then by definition, the ROI is better. OTOH, if it puts you in crippling debt, but you have no prospects of a job to support that - the ROI becomes bad or even negative. So doesn't it capture all of that already?
In some sense, twice the cost for twice the benefit might be worth it to those who can afford it, because the time-wise investment (4 years or so of your life) remain constant.
No, it doesn't capture all of that already.
A "better" ROI doesn't necessarily mean a positive ROI.
Getting negative 10% return on a $1,000 investment doesn't matter as much as getting a negative 10% return on a $100,000 investment.
Magnitude certainly is relevant to vector comparisons; but, if we define ROI as nominal rate of return, gross returns are not relevant to a comparison by that metric.
Return on Investment: https://en.wikipedia.org/wiki/Return_on_investment
From https://en.wikipedia.org/wiki/Vector_(mathematics_and_physic... :
> A Euclidean vector is thus an equivalence class of directed segments with the same magnitude (e.g., the length of the line segment (A, B)) and same direction (e.g., the direction from A to B).[3] In physics, Euclidean vectors are used to represent physical quantities that have both magnitude and direction, but are not located at a specific place, in contrast to scalars, which have no direction.[4] For example, velocity, forces and acceleration are represented by vectors
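The point about magnitude vs. the nominal-ROI metric can be made concrete with the numbers from the comment above: two investments with identical ROI but a 100x difference in absolute loss.

```python
def roi(gain: float, cost: float) -> float:
    """Nominal return on investment, as a fraction of the amount invested."""
    return (gain - cost) / cost

# Two investments that both lose 10%:
small = {"cost": 1_000, "gain": 900}
large = {"cost": 100_000, "gain": 90_000}

assert roi(**small) == roi(**large) == -0.10  # identical by the ROI metric

loss_small = small["cost"] - small["gain"]    # $100
loss_large = large["cost"] - large["gain"]    # $10,000: 100x the absolute loss
```

Which is exactly the debt-trap concern: the ROI ratio alone cannot distinguish a recoverable loss from a crippling one.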
Quantitatively and qualitatively assess the direct and external benefits of {college, other alternatives}, with criteria in addition to real monetary ROI?
From https://en.wikipedia.org/wiki/Welfare_economics
> Welfare economics also provides the theoretical foundations for particular instruments of public economics, including cost–benefit analysis,
Except that the grandparent post that you responded to was all about the magnitude of the loss rather than just ROI. If the magnitude of the loss is manageable, then the ROI becomes less important for life decisions.
So sure! If all we are looking at is ROI then you are right! By definition! As long as you restrict your refutation ("by that metric") in a way that ignores the additional metric I had been trying to add to the conversation. I was trying to point out that there are other metrics that matter.
Direct or External Loss?
Is the unique loss you identify not accounted for in the traditional ROI expression?
[deleted]
From https://news.ycombinator.com/item?id=18833730 :
>> Why would people make an investment with insufficient ROI (Return on Investment)?
> Insufficient information.
> College Scorecard [1] is a database with a web interface for finding and comparing schools according to a number of objective criteria. CollegeScorecard launched in 2015. It lists "Average Annual Cost", "Graduation Rate", and "Salary After Attending" on the search results pages. When you review a detail page for an institution, there are many additional statistics; things like: "Typical Total Debt After Graduation" and "Typical Monthly Loan Payment".
> The raw data behind CollegeScorecard can be downloaded from [2]. The "data_dictionary" tab of the "Data Dictionary" spreadsheet describes the data schema.
> [1] https://collegescorecard.ed.gov/
> [2] https://collegescorecard.ed.gov/data/
> Khan Academy > "College, careers, and more" [3] may be a helpful supplement for funding a full-time college admissions counselor in a secondary education institution
> [3] https://www.khanacademy.org/college-careers-more
https://www.khanacademy.org/college-careers-more/college-adm... :
- [ ] Video & exercise / Jupyter notebook under Exploring college options for Return on Investment (according to e.g. CollegeScorecard data)
Notes from the Meeting on Python GIL Removal Between Python Core and Sam Gross
Just came in here briefly to opine that there is a very real risk of a fork if the Python core community does not at least offer a viable alternative expeditiously.
The economic pressures surrounding the benefits of Gross’s changes will likely influence this more than any tears shed over subtle backwards incompatibility.
I believe it was Dropbox that famously released their own private internal Python build a while back and included some concurrency patches.
Many teams might go the route of working from Sam Gross’ work and if we see subtle changes in underlying runtime concurrency semantics or something else backwards incompatible that’s it- either that adoption will roll downhill to a new standard or Python core will have to answer with a suitable GIL-less alternative.
I for one do not want to think about “ANSI Python” runtimes or give the MSFTs etc of the world an opening to divide the user base.
> I believe it was Dropbox that famously released their own private internal Python build a while back and included some concurrency patches.
Google also had their Unladen Swallow version, but it seems they lost interest at some point.
"PEP 3146 -- Merging Unladen Swallow into CPython" > Future Work (2010) https://www.python.org/dev/peps/pep-3146/#future-work
Perhaps Google/Grumpy could be updated to compile Python 3.x+ to Go with e.g. the RustPython version of the CPython Python Standard Library modules?
"Inside cpyext: Why emulating CPython C API is so Hard" (2018) https://news.ycombinator.com/item?id=18040664
Today, conda-forge compiles CPython to relocatable platform+architecture-specific binaries with LLVM. https://github.com/conda-forge/python-feedstock/blob/master/...
conda-forge also compiles PyPy Python to relocatable platform+architecture-specific binaries with LLVM. conda-forge/pypy3.6-feedstock (3.7) https://github.com/conda-forge/pypy3.6-feedstock/blob/master...
https://github.com/conda-forge/pypy-meta-feedstock/blob/mast... :
> summary: Metapackage to select pypy as python implementation
Pyodide (JupyterLite) compiles CPython to WASM (or LLVM IR?) with LLVM/emscripten IIRC. Hopefully there's a clear way to implement the new GIL-less multithreading support with Web Workers in WASM, too?
The https://rapids.ai/ org has a bunch of fast Python for HPC and Cloud; with Dask and a scheduler of your choice. Less process overhead and less need for interprocess locking of memory handles that cross contexts, due to a new GIL removal approach, would be even faster than debuggable one-process-per-core Python.
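For context on why the GIL drives those one-process-per-core designs: splitting pure-Python CPU-bound work across threads stays correct, but under the current GIL only one thread executes bytecode at a time, so wall-clock time barely improves. A stdlib-only sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """CPU-bound work: sum a half-open range of integers."""
    lo, hi = bounds
    return sum(range(lo, hi))

N = 1_000_000
chunks = [(i, i + 250_000) for i in range(0, N, 250_000)]

# Threads interleave under the GIL: the answer is right, but for pure-Python
# CPU work the wall-clock time is roughly the same as a single thread.
# (Swapping in ProcessPoolExecutor, or a no-GIL build, is what changes that.)
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

assert total == N * (N - 1) // 2  # sum of 0..N-1
```

Gross's proposal aims to make exactly this kind of threaded code scale across cores without the process-per-core workaround and its serialization costs.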
Grumpy is unmaintained. There is similar work (py2many) that transpiles Python 3 to 8 different languages.
The approach focuses on functional programming, does away with extensions completely.
For that approach to be successful, a pure python implementation of stdlib in the transpileable subset of python 3 would be super helpful.
Show HN: OtterTune – Automated Database Tuning Service for RDS MySQL/Postgres
Yo. OtterTune is a database optimization service. It uses machine learning to automatically tune your MySQL and Postgres configuration (i.e., RDS parameter groups) to improve performance and reduce costs. It does this by only looking at your database's runtime metrics (e.g., INNODB_METRICS, pg_stat_database, CloudWatch). We don't need to examine sensitive queries or user tables. We spun this project out of my research group at Carnegie Mellon University in 2020.
This week we've announced that OtterTune is now available to the public. We are offering everyone a starter account to try it out on their Postgres RDS or MySQL RDS databases (all versions, AWS US AZs only). We have seen OtterTune achieve 2-4x performance improvements and 50% cost reductions for these databases compared to using Amazon's default RDS configuration.
I am happy to answer any questions that you may have about how OtterTune works here.
-- Andy
================
More Info:
* 5min Demo Video: https://ottertune.com/blog/ottertune-explained-in-five-minutes
* Free Account Sign-up: https://ottertune.com/try
I know this Show HN is about RDS specifically, but the site suggests this works elsewhere, too; any issues with pointing OtterTune at DBs making heavy use of extensions (e.g. Timescale, Citus) or nonstandard deployment approaches (k8s, patroni, vitess)?
We have deployed OtterTune for on-prem databases (baremetal, containers). But we are not offering that to everyone right now because organizations manage their configurations for these deployments in a bunch of different ways (Github actions, Chef/Puppet, Terraform). The nice thing about RDS is that they have a single API for updating configurations (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_...).
As for the Postgres-derivates that you mentioned (Timescale, Citus), we have not tested OtterTune for them yet. But there is no reason they shouldn't also work because they expose the same metrics API (pg_stat_database) and config knobs. There are just way more people using Postgres (RDS, Aurora), so we are focusing on them right now.
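That single RDS API is ModifyDBParameterGroup; a sketch of the request shape a tuning service could issue via boto3 (the group name and knob values below are illustrative, not recommendations, and the call itself is not executed here):

```python
# Shape of a boto3 rds.modify_db_parameter_group(**kwargs) request that a
# tuning service could issue; values are hypothetical, not recommendations.
kwargs = {
    "DBParameterGroupName": "my-postgres-tuned",  # hypothetical group name
    "Parameters": [
        {
            "ParameterName": "shared_buffers",
            "ParameterValue": "{DBInstanceClassMemory/32768}",
            "ApplyMethod": "pending-reboot",  # static knob: needs a restart
        },
        {
            "ParameterName": "work_mem",
            "ParameterValue": "65536",        # in kB
            "ApplyMethod": "immediate",       # dynamic knob: applies live
        },
    ],
}
# With boto3 installed and credentials configured, this would be:
#   boto3.client("rds").modify_db_parameter_group(**kwargs)
```

The static-vs-dynamic split (`ApplyMethod`) is part of why a single cloud API is attractive: the tuner can tell which recommendations apply immediately and which require a maintenance window.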
What about OpenStack Trove DBaaS? OpenStack Trove is like an open source self-hosted Amazon RDS or Google CloudSQL. https://docs.openstack.org/trove/latest/
FWIU, Trove supports 10+ databases including MySQL and PostgreSQL.
AFAIU, there are sound reasons to host containers with e.g. OpenStack VMs instead of a k8s scheduler: with a proper SAN, you still need to figure out how to redundantly and resiliently sync - replicate, synchronize, primary/secondary, nodeprocd - and tune, given the CAP theorem and the given DB implementation(s)?
Here's the official Ansible role for Trove, which provisions various e.g. SQL databases on an OpenStack cloud: https://github.com/openstack/openstack-ansible-os_trove
Despite having just 5.8% sales, over 38% of bug reports come from Linux
Notably, of those bug reports, fewer than 1% (only 3 bugs) were specific to the Linux version of the game. That is, over 99% of the bugs reported by Linux gamers also affected players on other platforms. Moreover (quoting from the OP):
> The report quality [from Linux users] is stellar. I mean we have all seen bug reports like: “it crashes for me after a few hours”. Do you know what a developer can do with such a report? Feel sorry at best. You can’t really fix any bug unless you can replicate it, see it with your own eyes, peek inside and finally see that it’s fixed. And with bug reports from Linux players is just something else. You get all the software/os versions, all the logs, you get core dumps and you get replication steps. Sometimes I got with the player over discord and we quickly iterated a few versions with progressive fixes to isolate the problem. You just don’t get that kind of engagement from anyone else.
Are there any good articles on writing helpful bug reports? Whenever I submit a bug I try to give as much detail and be as specific as possible but I always imagine some dev somewhere reading it rolling their eyes at the unnecessary paragraphs I’ve submitted.
From "Post-surgical deaths in Scotland drop by a third, attributed to a checklist" https://westurner.github.io/hnlog/#comment-19684376 https://news.ycombinator.com/item?id=19686470 :
> GitHub and GitLab support task checklists in Markdown and also project boards [...]
> GitHub and GitLab support (multiple) Issue and Pull Request templates:
> Default: /.github/ISSUE_TEMPLATE.md || Configure in web interface
> /.github/ISSUE_TEMPLATE/Name.md || /.gitlab/issue_templates/Name.md
> Default: /.github/PULL_REQUEST_TEMPLATE.md || Configure in web interface
> /.github/PULL_REQUEST_TEMPLATE/Name.md || /.gitlab/merge_request_templates/Name.md
> There are template templates in awesome-github-templates [1] and checklist template templates in github-issue-templates [2].
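A minimal bug-report issue template in the style those collections gather (the field names here are illustrative, not from any particular repo) could live at /.github/ISSUE_TEMPLATE/bug_report.md:

```markdown
---
name: Bug report
about: Help us reproduce and fix the problem
---

**Steps to reproduce**
1. ...

**Expected vs. actual behavior**

**Environment**
- OS / distro and version:
- App/game version:
- GPU and driver version:

**Logs / core dump**
- [ ] Attached logs
- [ ] Attached core dump or stack trace
```

Templates like this are largely what turns "it crashes for me after a few hours" into the replication-steps-plus-core-dump reports praised in the quote above.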
Arrow DataFusion includes Ballista, which does SIMD and GPU vectorized ops
From the Ballista README:
> How does this compare to Apache Spark? Ballista implements a similar design to Apache Spark, but there are some key differences.
> - The choice of Rust as the main execution language means that memory usage is deterministic and avoids the overhead of GC pauses.
> - Ballista is designed from the ground up to use columnar data, enabling a number of efficiencies such as vectorized processing (SIMD and GPU) and efficient compression. Although Spark does have some columnar support, it is still largely row-based today.
> - The combination of Rust and Arrow provides excellent memory efficiency and memory usage can be 5x - 10x lower than Apache Spark in some cases, which means that more processing can fit on a single node, reducing the overhead of distributed compute.
> - The use of Apache Arrow as the memory model and network protocol means that data can be exchanged between executors in any programming language with minimal serialization overhead.
Previous article from when Ballista was a separate repo from arrow-datafusion: "Ballista: Distributed compute platform implemented in Rust using Apache Arrow" https://news.ycombinator.com/item?id=25824399
Parsing gigabytes of JSON per second
The project described in the paper is https://simdjson.org/.
Source: https://github.com/simdjson/simdjson
PyPI: https://pypi.org/project/pysimdjson/
There's a rust port: https://github.com/simd-lite/simd-json
... From ijson https://pypi.org/project/ijson/#id3 which supports streaming JSON:
> Ijson provides several implementations of the actual parsing in the form of backends located in ijson/backends: [yajl2_c, yajl2_cffi, yajl2, yajl, python]
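pysimdjson advertises a (roughly) drop-in `json`-module-style API; a guarded sketch that falls back to the stdlib when it isn't installed:

```python
try:
    import simdjson as jsonlib  # pip install pysimdjson; json-like API
except ImportError:
    import json as jsonlib      # stdlib fallback: same interface, slower

payload = b'{"results": [{"id": 1}, {"id": 2}], "ok": true}'
doc = jsonlib.loads(payload)
ids = [r["id"] for r in doc["results"]]  # -> [1, 2]
```

The contrast with ijson above is the access pattern: simdjson parses the whole buffer (very fast, lazily materialized), while ijson's backends stream events incrementally and never need the full document in memory.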
Fed to ban policymakers from owning individual stocks
The relatively low salaries compared to the huge amounts of power is almost begging for insider trading, influence peddling, etc.
> The salary of a Congress member varies based on the job title of the congressman or senator. Most senators, representatives, delegates and the resident commissioner from Puerto Rico make a salary of $174,000 per year.
https://www.indeed.com/career-advice/pay-salary/congressman-...
Then again, some people have near insatiable greed and even at a $1M/year salary, some would be looking for ways to further boost their income at the boundaries of ethics, or beyond.
This is about the Fed banning its own policymakers from owning them, not about Congress.
Would the Fed actually have the authority to even do that, regulate what securities Congress members are allowed to hold?
“The Fed” as in the Federal Reserve: no.
“The Fed” as in the Federal Government: yes.
"Blind Trust" > "Use by US government officials to avoid conflicts of interest" https://en.wikipedia.org/wiki/Blind_trust
... If you want to help, you must throw all of your startup equity away.
... No, you may not co-brand with that company (which is not complicit with your agenda).
... Besides, I'm not even eligible for duty: you can't hire me.
... Maybe I could be more helpful from competitive private industry.
... How can a government hire prima donna talent like Iron Man?
... Is it criminal to start a solvent, sustainable business to solve government problems, for that one customer?
... Which operations can a government - operating with or without competition - solve most energy-efficiently and thus cost-effectively? Looks like single-payer healthcare and IDK what else?
(Edit)
US Digital Services Playbook: https://github.com/usds/playbook
From https://www.nist.gov/itl/applied-cybersecurity/nice/nice-fra... :
> "NIST Special Publication 800-181 revision 1, the Workforce Framework for Cybersecurity (NICE Framework), provides a set of building blocks for describing the tasks, knowledge, and skills that are needed to perform cybersecurity work performed by individuals and teams. Through these building blocks, the NICE Framework enables organizations to develop their workforces to perform cybersecurity work, and it helps learners to explore cybersecurity work and to engage in appropriate learning activities to develop their knowledge and skills."
From "NIST Special Publication 800-181 Revision 1: Workforce Framework for Cybersecurity (NICE Framework)" (2020) https://doi.org/10.6028/NIST.SP.800-181r1:
> 3.1 Using Existing Task, Knowledge, and Skill (TKS) Statements
(Edit) FedNOW should - like mCBDC - really consider implementing Interledger Protocol (ILP) for RTGS "Real-Time Gross Settlement" https://interledger.org/developer-tools/get-started/overview...
From https://interledger.org/rfcs/0032-peering-clearing-settlemen... :
> Peering, Clearing and Settling; The Interledger network is a graph of nodes (connectors) that have peered with one another by establishing a means of exchanging ILP packets and a means of paying one another for the successful forwarding and delivery of the packets.
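As a toy sketch of that graph-of-connectors model (not the Interledger reference implementation; the peering table below is made up), forwarding an ILP packet reduces to finding a path through the graph of peered nodes:

```python
from collections import deque

# Hypothetical peering graph (made-up topology, for illustration only):
PEERS = {
    "alice": {"connectorA"},
    "connectorA": {"alice", "connectorB"},
    "connectorB": {"connectorA", "bob"},
    "bob": {"connectorB"},
}

def route(src: str, dst: str, peers: dict):
    """Breadth-first search for a forwarding path of peered nodes."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in peers.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = route("alice", "bob", PEERS)
print(path)
```

Real connectors also exchange payment and settlement obligations per hop, which this sketch omits.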
Fed or no, wouldn't you think there'd be money in solving for the https://performance.gov Goals ( https://www.usaspending.gov/ ) and the #GlobalGoals (UN Sustainable Development Goals) -aligned GRI Corporate Sustainability Report? #CSR #ESG #SustyReporting
Hardened wood as a renewable alternative to steel and plastic
From "Hemp Wood: A Comprehensive Guide" https://www.buildwithrise.com/stories/hempwood-the-sustainab... :
> HempWood is priced competitively to similar cuts of black walnut. You can purchase 72" HempWood boards for between $13 and $40 as of the date of publishing. HempWood also sells carving blocks, cabinets, and kits to make your own table. Prices for table kits range from $175 to $300. Jul 5, 2021 […]
> Is Hemp Wood Healthy? Due to its organic roots and soy-based adhesive, hemp wood is naturally non-toxic and doesn't contain VOCs, making it a healthier choice for interior building.
> Hemp wood has also been tested to have a decreased likelihood of warping and twisting. Its design is free of any of the knots common in other hardwoods to reduce wood waste.
FWIU, hempcrete - hemp hurds and sustainable limestone - must be framed; possibly with Hemp Wood, which is stronger than spec lumber of the same dimensions.
FWIU, Hemp batting insulation is soaked in sodium to meet code.
Hopefully the production and distribution processes for these carbon sinks keep net negative carbon in the black.
I want to like hempwood but the price needs to come down. Hopefully it will as production increases.
What are the limits? Input costs, current economy of scale?
Scale, reportedly. https://www.youtube.com/watch?v=3Qy6awPeric
Investors use AI to analyse CEOs’ language patterns and tone
This might be the best NewsArticle headline on HN I've ever seen.
Why, what does it say? Can you log that in a reproducible Notebook with Docs and Test assertions please?
Or are we talking about maybe a ScholarlyArticle CreativeWork with a https://schema.org/funder property or just name and url.
Graph of Keybase commits pre and post Zoom acquisition
Oh boy. Is it too early to say RIP?
The github repo looks ripe for a fork.
The servers are not FOSS and would need reimplementing.
Likely easy enough for a client based on E2E encryption principles; the backend is in many ways a (fancy) dumb pipe. (It could still require complex infrastructure, but at least there'd be relatively little "feature" code on the backend to be rewritten.)
FWIU, Cyph does Open Source E2E chat, files, and unlimited-length social posts to circles or to the public; but it doesn't yet do encrypted git repos, which could be solved with something like git-crypt. https://github.com/cyph/cyph
It would be wasteful to throw away the Web of Trust (people with handles to keys) that everyone entered into Keybase. Hopefully, Zoom will consider opening up the remaining pieces of Keybase if not just spinning the product back out to a separate entity?
From https://news.ycombinator.com/item?id=19185998 https://westurner.github.io/hnlog/#comment-19185998 :
> There's also "Web Key Directory"; which hosts GPG keys over HTTPS from a .well-known URL for a given user@domain identifier: https://wiki.gnupg.org/WKD
> GPG presumes secure key distribution
> Compared to existing PGP/GPG keyservers [HKP], WKD does rely upon HTTPS.
Blockcerts can be signed when granted to a particular identity entity:
> Here are the open sources of blockchain-certificates/cert-issuer and blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates
CT Certificate Transparency logs for key grants and revocations may depend upon a centralized or a decentralized Merkleized datastore: https://en.wikipedia.org/wiki/Certificate_Transparency
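A Merkleized log can be sketched in a few lines. This is a simplified binary SHA-256 hash tree (RFC 6962's actual tree prefixes leaf vs. interior hashes and handles odd-sized levels differently), just to show the idea:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries):
    """Root hash of a binary hash tree over log entries (simplified)."""
    level = [sha256(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"grant:alice:key1", b"revoke:bob:key2", b"grant:carol:key3"])
print(root.hex())
```

Any change to any logged grant or revocation changes the root, which is what lets auditors detect tampering whether the datastore is centralized or decentralized.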
How do I specify the correct attributes of my schema.org/Person record (maybe on my JAMstack site) in order to approximate the list of identities that e.g. Keybase lets one register and refer to a cryptographic proof of?
Do I generate a W3C DID and claim my identities by listing them in a JSON-LD document signed with W3C ld-proofs (ld-signatures)? Which of the key directory and Web of Trust features of Keybase are covered by existing W3C spec Use Cases?
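A hedged sketch of the schema.org side of this: the sameAs property lists other profile URLs for the same Person, which approximates Keybase's identity list but carries no cryptographic proof of control. All URLs below are placeholders.

```python
import json

# Hypothetical schema.org/Person record with claimed identities in sameAs:
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Example Person",
    "url": "https://example.com",
    "sameAs": [
        "https://github.com/example",
        "https://news.ycombinator.com/user?id=example",
    ],
}
print(json.dumps(person, indent=2))
```

Proving control of each sameAs URL is the part schema.org alone doesn't cover; that's where DIDs, ld-proofs, or Keybase-style posted proofs would come in.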
From https://news.ycombinator.com/item?id=28701355:
> "Use Cases and Requirements for Decentralized Identifiers" https://www.w3.org/TR/did-use-cases/
>> 2. Use Cases: Online shopper, Vehicle assemblies, Confidential Customer Engagement, Accessing Master Data of Entities, Transferable Skills Credentials, Cross-platform User-driven Sharing, Pseudonymous Work, Pseudonymity within a supply chain, Digital Permanent Resident Card, Importing retro toys, Public authority identity credentials (eIDAS), Correlation-controlled Services
> And then, IIUC W3C Verifiable Credentials / ld-proofs can be signed with W3C DID keys - that can also be generated or registered centrally, like hosted wallets or custody services. There are many Use Cases for Verifiable Credentials: https://www.w3.org/TR/vc-use-cases/ :
>> 3. User Needs: Education, Retail, Finance, Healthcare, Professional Credentials, Legal Identity, Devices
>> 4. User Tasks: Issue Claim, Assert Claim, Verify Claim, Store / Move Claim, Retrieve Claim, Revoke Claim
>> 5. Focal Use Cases: Citizenship by Parentage, Expert Dive Instructor, International Travel with Minor and Upgrade
>> 6. User Sequences: How a Verifiable Credential Might Be Created, How a Verifiable Credential Might Be Used
Is there an ACME-like thing to verify online identity control like Keybase still does?
> Is there an ACME-like thing to verify online identity control like Keybase still does?
From https://news.ycombinator.com/item?id=28926739 :
> NIST SP 800-63 https://pages.nist.gov/800-63-3/ :
> SP 800-63-3: Digital Identity Guidelines https://doi.org/10.6028/NIST.SP.800-63-3
> SP 800-63A: Enrollment and Identity Proofing https://doi.org/10.6028/NIST.SP.800-63a
FWIU, NIST SP 800-63A Enrollment and Identity Proofing specifies something sort of like ACME, but for offline identity.
"Key server (cryptographic)" https://en.wikipedia.org/wiki/Key_server_(cryptographic)
> The last IETF draft for HKP also defines a distributed key server network, based on DNS SRV records: to find the key of someone@example.com, one can ask it by requesting example.com's key server.
> Keyserver examples: These are some keyservers that are often used for looking up keys with `gpg --recv-keys`.[6] These can be queried via https:// (HTTPS) or hkps:// (HKP over TLS) respectively: keys.openpgp.org, pgp.mit.edu, keyring.debian.org, keyserver.ubuntu.com
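FWIW, an HKP lookup over HTTPS is just a GET on /pks/lookup; a sketch of the URL shape such clients use (op=get retrieves a key, op=index searches; the key ID here is illustrative):

```python
# Build an HKP lookup URL as used by `gpg --recv-keys`-style clients.
def hkp_lookup_url(host: str, query: str, op: str = "get") -> str:
    return f"https://{host}/pks/lookup?op={op}&search={query}"

url = hkp_lookup_url("keyserver.ubuntu.com", "0xE9CB06E71794A713")
print(url)
```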
"Linked Data Signatures for GPG" https://gpg.jsld.org/
npm i @transmute/lds-gpg2020 -g
gpg2020 sign -u "3BCAC9A882DEFE703FD52079E9CB06E71794A713" $(pwd)/docs/example/doc.json did:btcr:xxcl-lzpq-q83a-0d5#yubikey
From https://gpg.jsld.org/contexts/#GpgSignature2020 :
> GpgSignature2020: A JSON-LD Document has been signed with GpgSignature2020, when it contains a proof field with type GpgSignature2020. The proof must contain a key signatureValue with value defined by the signing algorithm described here. Example:
{
  "@context": [
    "https://gpg.jsld.org/contexts/lds-gpg2020-v0.0.jsonld",
    {
      "schema": "http://schema.org/",
      "name": "schema:name",
      "homepage": "schema:url",
      "image": "schema:image"
    }
  ],
  "name": "Manu Sporny",
  "homepage": "https://manu.sporny.org/",
  "image": "https://manu.sporny.org/images/manu.png",
  "proof": {
    "type": "GpgSignature2020",
    "created": "2020-02-16T18:21:26Z",
    "verificationMethod": "did:web:did.or13.io#20a968a458342f6b1a822c5bfddb584bdf141f95",
    "proofPurpose": "assertionMethod",
    "signatureValue": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEIKlopFg0L2sagixb/dtYS98UH5UFAl5JiCYACgkQ/dtYS98U\nH5U8TQf/WS92hXkdkdBQ0xJcaSkoTsGspshZ+lT98N2Dqu6I1Q01VKm+UMniv5s/\n3z4VX83KuO5xtepFjs4S95S4gLmr227H7veUdlmPrQtkGpvRG0Ks5mX7tPmJo2TN\nDwm1imm+zvJ+MXr3Ld24qaRJA9dI+AoZ5HXqNp96Yncj3oWD+DtVIZmC/ZiUw43a\nLpMYy94Hie7Ad86hEoqsdRxrwq7O6KZ29TAKi5T/taemayyXY7papU28mGjVEcvO\na7M3XNBflMcMEB+g6gjrANsgFNO6tOuvOQ2+4v6yMfpJ0ji4ta7q2d4QKqGi5YhE\nsRUORN+7HJrkmSTaT7gBpFQ+YUnyLA==\n=Uzp1\n-----END PGP SIGNATURE-----\n"
  }
}
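A minimal structural check for such a document (shape check only, not actual signature verification, which requires GPG and JSON-LD canonicalization) might look like:

```python
# Check that a dict has the assumed GpgSignature2020 proof shape.
def looks_like_gpg_signature_2020(doc: dict) -> bool:
    proof = doc.get("proof", {})
    sig = proof.get("signatureValue", "")
    return (proof.get("type") == "GpgSignature2020"
            and sig.startswith("-----BEGIN PGP SIGNATURE-----"))

doc = {
    "name": "Manu Sporny",
    "proof": {
        "type": "GpgSignature2020",
        "signatureValue": "-----BEGIN PGP SIGNATURE-----\n...\n-----END PGP SIGNATURE-----\n",
    },
}
print(looks_like_gpg_signature_2020(doc))
```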
Single sign-on: What we learned during our identity alpha
Reading documents like passports with an NFC reader does indeed work, and does indeed produce verifiable material. Specifically the passport has proof (via a digital signature) that it was issued by a specific authority, and in turn, proof that the contents of the passport (name, date of birth, a picture and so on) are as issued.
But, the problem here is that the issuer is the British government, so, what are you proving? "Here, you issued this passport". "Oh yes, so we did". I presume the British government does own a database of the passports they issued, so this isn't news to them.
A modestly smart device, such as a Yubico device, is capable of providing fresh proof of its identity. My Security Key doesn't prove "The security key that enrolled me with GitHub in fact exists", which would be redundant, but rather "I am still the same security key that you enrolled". However, the passport can't do that: your passport is inert, and the fact that Sarah Smith existed isn't the thing you presumably want to prove to a single-sign-on service. You want to prove that you are Sarah Smith, something the passport doesn't really do.
I think the GDS ignores this problem, which is to be fair no worse than lots of other systems, but the result isn't actually what it seems to be, all the digital technology isn't actually proving anybody's identity in this space.
It reminds me of the bad old days of the Web PKI where it was found that the "email validation" being used would accept automated "virus checking" of email. A CA sends the "Are you sure you want to issue a cert for mycorp.example?" message to somebody@mycorp.example and even though Somebody is on vacation in Barbados for two weeks, the automatic "virus" check reads the URL out of the email, follows it, ignores the page saying "Success, your certificate has been issued" and passes it to Somebody's inbox... All the "security" is doing what it was designed to do, but, what it was designed to do isn't what it should have been designed to do, and so it's futile.
Aren't there two things going on here? A single sign-on service, and also an identity check.
A yubico device can say "I'm still the same security key that was enrolled" but it doesn't say at enrolment "I am being used by Fred Jones of <address> with passport no 123, NiNo ABC".
GDS verify / government gateway do support 'authenticators' (typically password + totp) to provide a level of assurance that the person logging into the account is the person the account belongs to.
The "document check" is part of the identity bit. That is, how confident are we that the person creating this account / performing this action is who they say they are and can do this thing or see that information. The document check is part of their solution but other parts are supposed to layer on top of it to provide a proportionate level of assurance. e.g. a video of you holding up the passport to check that it's you using it or asking you to provide answers to details you already have like 'how much tax did you pay last year' or checking multiple documents.
> I presume the British government does own a database of the passports they issued, so this isn't news to them.
The passport office (part of the Home Office) own that database. Part of this work is basically a web service to that database for the rest of the government to use.
This is the difference between 'identity assurance' (at enrollment) and 'authentication assurance' in the relevant NIST standard, SP 800-63 https://pages.nist.gov/800-63-3/
A remote passport check might be suitable to claim an identity assurance level of 2, say, while getting to identity assurance level 3 would need an in-person visit with multiple forms of other government identification records.
In that way you might then constrain some actions to be taken remotely only by users who have provided strong assurance of their identity at enrollment (even if they login with strong non-phishable authentication... doesn't matter that the auth is ironclad if the user's identity is muddy).
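That constraint can be sketched as code, using the SP 800-63-3 terms IAL (identity assurance at enrollment) and AAL (authentication assurance at login); the actions and thresholds here are hypothetical, not taken from the standard:

```python
# Hypothetical per-action minimums: (min IAL, min AAL). Not from SP 800-63.
REQUIRED = {
    "view_balance": (1, 1),
    "change_address": (2, 2),
    "remote_notarize": (3, 2),
}

def may_perform(action: str, ial: int, aal: int) -> bool:
    req_ial, req_aal = REQUIRED.get(action, (3, 3))  # unknown actions: strictest
    return ial >= req_ial and aal >= req_aal

# Ironclad auth can't compensate for muddy identity at enrollment:
print(may_perform("change_address", ial=1, aal=3))
```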
Thx. NIST SP 800-63* https://pages.nist.gov/800-63-3/ :
> SP 800-63-3: Digital Identity Guidelines https://doi.org/10.6028/NIST.SP.800-63-3
> SP 800-63A: Enrollment and Identity Proofing https://doi.org/10.6028/NIST.SP.800-63a
> SP 800-63B: Authentication and Lifecycle Management https://doi.org/10.6028/NIST.SP.800-63b
> SP 800-63C: Federation and Assertions https://doi.org/10.6028/NIST.SP.800-63c
Five things we still don’t know about water
Also interesting:
https://sciencenordic.com/chemistry-climate-denmark/the-eart...
> The Earth has lost a quarter of its water
> In its early history, the Earth's oceans contained significantly more water than they do today. A new study indicates that hydrogen from split water molecules has escaped into space.
But where did the water come from? Neptune? Europa? Comet(s)? Is it just the distance to our nearest star in our habitable zone here that results in liquid water being likely?
From the article:
> But the exact mechanism for how water evaporates isn’t completely understood. The evaporation rate is traditionally represented in terms of a rate of collision between molecules, multiplied by a fudge factor called the evaporation coefficient, which varies between zero and one. Experimental determination of this coefficient, spanning several decades, has varied over three orders of magnitude.
From https://en.wikipedia.org/wiki/Evaporation :
> Evaporation is a type of vaporization that occurs on the surface of a liquid as it changes into the gas phase.[1] The surrounding gas must not be saturated with the evaporating substance. When the molecules of the liquid collide, they transfer energy to each other based on how they collide with each other. When a molecule near the surface absorbs enough energy to overcome the vapor pressure, it will escape and enter the surrounding air as a gas.[2] When evaporation occurs, the energy removed from the vaporized liquid will reduce the temperature of the liquid, resulting in evaporative cooling.[3]
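That "rate of collision times a coefficient" formulation is the Hertz-Knudsen picture; a sketch with rough numbers (the saturation pressure of water near 298 K is approximately 3.17 kPa):

```python
import math

# Hertz-Knudsen evaporative mass flux:
#   J = alpha * P_sat * sqrt(m / (2 * pi * k_B * T))
# where alpha is the evaporation coefficient ("fudge factor") from the article.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def hertz_knudsen_mass_flux(alpha, p_sat, molecule_mass, temperature):
    """Evaporative mass flux in kg/m^2/s (alpha=1 gives the upper bound)."""
    return alpha * p_sat * math.sqrt(molecule_mass / (2 * math.pi * K_B * temperature))

m_h2o = 18.015e-3 / 6.02214076e23  # kg per water molecule
flux = hertz_knudsen_mass_flux(alpha=1.0, p_sat=3.17e3, molecule_mass=m_h2o, temperature=298.0)
print(f"{flux:.2f} kg/m^2/s")
```

With experimentally determined alpha spanning three orders of magnitude, the predicted flux spans three orders of magnitude too, which is the crux of the problem the article describes.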
New Optical Switch Up to 1000x Faster Than Transistors
Does this work at normal temperatures? Or does it need impractical cooling setups?
Show HN: I built a sonar into my surfboard
I wonder if there is some way you could pull data from the sound of the waves breaking instead of an active ping? That might be the only reasonable way to get accurate readings in that zone.
FWIU, EM backscatter can be used for e.g. gesture recognition, heartbeat detection, and metal detection. https://en.wikipedia.org/wiki/Backscatter
Entropy of wave noises may or may not be the issue.
Edit: (NASA spinoff) "Radar Device Detects Heartbeats Trapped under Wreckage" https://spinoff.nasa.gov/Spinoff2018/ps_1.html
> The Edgewood, Maryland-based company is developing a line of such remote sensing devices to aid search and rescue teams, based on advanced radar technologies developed by NASA and refined for this purpose at the Agency’s Jet Propulsion Laboratory (JPL).
> NASA has long analyzed weak radio signals to identify slight physical movements, such as seismic activity seen from low-Earth orbit or minor alterations in a satellite’s path around another planet that might indicate gravity fluctuations, explains Jim Lux, JPL’s task manager for the FINDER project. However, to pick out such faint patterns in the data, these devices must cancel out huge amounts of noise. “The core technology here is measuring a small signal in the context of another larger signal that’s confusing you,” Lux says.
(FWIW, some branches may have helicopters with infrared that they can fly over disaster areas for relief.)
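A toy version of "measuring a small signal in the context of another larger signal that's confusing you": recover a weak tone underneath a much larger interferer by masking the known interferer in the spectrum (all frequencies and amplitudes here are made up):

```python
import numpy as np

fs, seconds = 50.0, 40.0                      # sample rate (Hz), duration (s)
t = np.arange(0, seconds, 1 / fs)
big = 100.0 * np.sin(2 * np.pi * 0.25 * t)    # large confusing signal
small = 0.5 * np.sin(2 * np.pi * 1.2 * t)     # weak "heartbeat" of interest
x = big + small

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
mask = np.abs(freqs - 0.25) > 0.1             # notch out the known interferer
peak_hz = freqs[mask][np.argmax(spectrum[mask])]
print(peak_hz)
```

Real radar returns need far more than a spectral notch (clutter cancellation, motion compensation), but the principle of subtracting out the dominant, known signal is the same.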
Cortical Column Networks
Hey, Cortical Columns!
From "Jeff Hawkins Is Finally Ready to Explain His Brain Research" https://news.ycombinator.com/item?id=18214707 https://westurner.github.io/hnlog/#comment-18218504
What does (parallel) spreading activation have to do with Cortical Column Networks maybe and redundancy? https://en.wikipedia.org/wiki/Spreading_activation
I got the impression that anything transformers could be applied to would benefit. So now I want everything to use them.
Doesn't sound like these CCNs necessarily reuse a lot of information across object categories.
From https://medium.com/syncedreview/google-replaces-bert-self-at... :
> New research from a Google team proposes replacing the self-attention sublayers with simple linear transformations that “mix” input tokens to significantly speed up the transformer encoder with limited accuracy cost. Even more surprisingly, the team discovers that replacing the self-attention sublayer with a standard, unparameterized Fourier Transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs.
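That mixing sublayer (the FNet idea) can be sketched in a few lines: an unparameterized Fourier Transform applied along the sequence and hidden dimensions, keeping the real part. Shapes here are arbitrary.

```python
import numpy as np

def fourier_mix(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, hidden) -> Re(FFT along seq of FFT along hidden)."""
    return np.fft.fft(np.fft.fft(x, axis=-1), axis=0).real

x = np.arange(8 * 16, dtype=float).reshape(8, 16)  # toy "token embeddings"
y = fourier_mix(x)
print(y.shape)
```

In the full encoder this replaces each self-attention sublayer while the feed-forward sublayers stay as-is.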
Would Transformers (with self-attention) make what things better? Maybe QFT? There are quantum chemical interactions in the brain. Are they necessary or relevant for what fidelity of emulation of a non-discrete brain?
Startup Ideas
Bring back the dumb terminal. Place mobile phone on a docking pad and see a standard PC OS on the screen - open apps, access files or use a browser. When done just grab the phone and go.
IIUC, in 2021, you can dock a PineTab or a PinePhone with a USB-C PD hub that has HDMI, USB, and Ethernet and use any of a number of Linux Desktop operating systems on a larger screen with full size keyboard and mouse.
The PineTab has a backlit keyboard and IIUC the PinePhone has a keyboard & aux battery case that doesn't yet also include the fingerprint sensor or wireless charging. https://www.pine64.org/blog/
It is easier to educate a Do-er than to motivate the educated
~ "Imagine that one could give you a copy of all of their knowledge. If you do not choose to apply and learn on your own, you can never."
This is about regimen, this is about stamina, this is about sticktoitiveness; and if you don't want it, you don't need it, you'll never. And I mean never.
The Grit article on Wikipedia mentions persistence, tenacity, and stick-to-it-iveness as roughly synonymous; and notes that grit may not be that distinct from other Big Five personality traits, but we're not about to listen to that, we're not going with that, because Grit is a predictor of success. https://en.wikipedia.org/wiki/Grit_(personality_trait)
To the original point,
> In psychology, grit is a positive, non-cognitive trait based on an individual's perseverance of effort combined with the passion for a particular long-term goal or end state (a powerful motivation to achieve an objective). This perseverance of effort promotes the overcoming of obstacles or challenges that lie on the path to accomplishment and serves as a driving force in achievement realization. Distinct but commonly associated concepts within the field of psychology include "perseverance", "hardiness", "resilience", "ambition", "need for achievement" and "conscientiousness". These constructs can be conceptualized as individual differences related to the accomplishment of work rather than talent or ability.
Are software engineering “best practices” just developer preferences?
> The parallel he drew was to another friend who’s a Civil Engineer. His friend had to be state certified and build everything to certain codes that stand up to specific stressors and inspections.
> I gave him the usual answer about how Software Engineers deal with low stakes and high iterability compared to Civil Engineers, but, honestly, he has a point.
I've argued for a while now that Software Engineering with a big E should be licensed and regulated the same as any other Engineering discipline. Not all software is low stakes and fast changing. In fact, I'd argue the most important software is never that: software for control systems, avionics, cars, etc. is very high stakes and has no reason to iterate beyond what is needed to interface with changing hardware. I think if software engineers had to be licensed to work on such things, then those 737s wouldn't have fallen out of the sky and Tesla wouldn't be allowed to beta test self-driving cars on public roads.
To those who ask what you would now call software engineers who don't work on those things: you are programmers. Or, if you prefer a less formal term, coders. Engineer is a powerful word, and I don't like how we in the IT industry have appropriated it for less critical tasks.
In a way there are standards for software but those standards are not expressed in software terms.
Firstly, most software is harmless. If it goes wrong people may be annoyed, but no one is harmed. But if I write software to control an aircraft, then it would have to abide by aviation standards. If I write financial software then I would have financial regulations to follow. Same for medical devices. So, there are standards for software but they are indirect and expressed in terms of the wider domain in which it runs.
Critical systems: https://en.wikipedia.org/wiki/Critical_system :
> There are four types of critical systems: safety critical, mission critical, business critical and security critical.
Safety-critical systems > "Software engineering for safety-critical systems" https://en.wikipedia.org/wiki/Safety-critical_system#Softwar... :
> By setting a standard for which a system is required to be developed under, it forces the designers to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry, in general, (IEC 61508) and automotive (ISO 26262), medical (IEC 62304) and nuclear (IEC 61513) industries specifically. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system, a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements.[11] All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors.
awesome-safety-critical lists very many resources for safety critical systems: https://awesome-safety-critical.readthedocs.io/en/latest/
There are many ['Engineering'] certification programs for software and other STEM fields. One test to qualify applicants does not qualify as a sufficient set of controls for safety critical systems that must be resilient, fault-tolerant, and redundant.
A real Engineer knows from reviewing even very little documentation when there are insufficient process controls; that's process wisdom from experience. An engineer starts with this premise: "There are insufficient controls to do this safely", because [test scenario parameter set n] would result in the system state - the output of what is probably a complex nonlinear dynamic system - being unacceptable: outside of acceptable parameters for safe operation.
Are there [formal] Engineering methods that should be requisite to "Computer Science" degrees? What about "Applied Secure Coding Practices in [Language]"? Is that sufficient to teach theory and formal methods?
From "How We Proved the Eth2 Deposit Contract Is Free of Runtime Errors" https://news.ycombinator.com/item?id=28513922 :
>> From "Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification" https://news.ycombinator.com/item?id=27442273 :
>> [Coq, VST, CompCert]
>> Formal methods: https://en.wikipedia.org/wiki/Formal_methods
>> Formal specification: https://en.wikipedia.org/wiki/Formal_specification
>> Implementation of formal specification: https://en.wikipedia.org/wiki/Anti-pattern#Software_engineer...
>> Formal verification: https://en.wikipedia.org/wiki/Formal_verification
>> From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :
>>> Which universities teach formal methods?
>>> - q=formal+verification https://www.class-central.com/search?q=formal+verification
>>> - q=formal+methods https://www.class-central.com/search?q=formal+methods
>>> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs? https://news.ycombinator.com/item?id=28513922
From "Ask HN: Is it worth it to learn C in 2020?" https://news.ycombinator.com/item?id=21878372 :
> There are a number of coding guidelines e.g. for safety-critical systems where bounded running time and resource consumption are essential. These coding guidelines and standards are basically only available for C, C++, and Ada.
awesome-safety-critical > Software safety standards: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
awesome-safety-critical > Coding Guidelines: https://awesome-safety-critical.readthedocs.io/en/latest/#co...
Major Quantum Computing Strategy Suffers Serious Setbacks
Inevitably, this article (and implications about quantum computing) is going to be vastly misinterpreted due to the title. It would be like saying “Major Shuttle Strategy Suffers Serious Setbacks” but the strategy is using slingshots to get us into orbit, ignoring the work of, say, NASA and SpaceX.
This quantum computing strategy neither was nor is practiced by the vast majority of quantum institutions—commercial or otherwise. It was attempted by a group at Microsoft (and a small collection of other university groups) and was known from the start that it would be a search for essentially fundamentally new observations.
Other quantum computing players, like Rigetti, Google, HRL Laboratories, IBM, Amazon, Honeywell, and others are doing an approach that is nothing like the article, and have demonstrated significant results.
It should really be "A quantum computing strategy suffers..."
Or: "One idea in quantum computing bears no fruit"
"Quantized Majorana conductance not actually observed within indium antimonide nanowires"
"Quantum qubit substrate found to be apparently insufficient" (Given the given methods and probably available resources)
And then - in an attempt to use terminology from Constructor Theory https://en.m.wikipedia.org/wiki/Constructor_theory :
> In constructor theory, a transformation or change is described as a task. A constructor is a physical entity which is able to carry out a given task repeatedly. A task is only possible if a constructor capable of carrying it out exists, otherwise it is impossible. To work with constructor theory everything is expressed in terms of tasks. The properties of information are then expressed as relationships between possible- and impossible tasks. Counterfactuals are thus fundamental statements and the properties of information may be described by physical laws.[4] If a system has a set of attributes, the set of permutations of these attributes is seen as a set of tasks. A computation medium is a system whose attributes permute to always produce a possible task. The set of permutations, and hence of tasks, is a computation set. If it is possible to copy the attributes in the computation set, the computation medium is also an information medium.
> Information, or a given task, does not rely on a specific constructor. Any suitable constructor will serve. This ability of information to be carried on different physical systems or media is described as interoperability, and arises as the principle that the combination of two information media is also an information medium.[4] Media capable of carrying out quantum computations are called superinformation media, and are characterised by specific properties. Broadly, certain copying tasks on their states are impossible tasks. This is claimed to give rise to all the known differences between quantum and classical information.[4]
"Subsequent attempts to reproduce [Quantized Majorana conductance (topological qubits of arranged electrons) within indium antimonide nanowires] eventually as a (quantum) computation medium for the given tasks failed"
"Quantum computation by Majorana zero-mode (MZM) quasiparticles in indium antimonide nanowires not actually apparently possible"
... "But what about in DDR5?" Which leads us to a more generally interesting: "Rowhammer for qubits", which is already an actual Quantum on Silicon (QoS) thing.
Attempts to scientifically “rationalize” policy may be damaging democracy
First, not having read the article:
#EvidenceBasedPolicy is a worthwhile objective even if only because the alternative is to just blow money without measuring ROI at all [because government expenditures are the actual key to feeding the beast, the economic beast, the...].
What are some examples of policy failures where Systematic review and Meta-analysis could have averted loss, harms, waste, catastrophe, long-term costs? Is that cherry picking? The other times we can just throw a dart and that's better than, ahem, these idiots we afford trying to do science?
Wouldn't it be fair to require that constituent ScholarlyArticles (and other CreativeWorks) be kept on file with e.g. the Library of Congress?
Non-federal governments usually have very similar IT and science policy review needs. Should adapting one system for non-federal governments be more complex than specifying a different String or URL in the token_name field in a transaction?
When experts review ScholarlyArticles on our behalf, they should share their structured and unstructured annotations in such a way that their cryptographically signed reviews - and highlights that identify and extract structured facts, like summary statistics (e.g. sample size) and IRB-reviewed study controls - become part of a team-focused, collaborative systematic meta-analysis. That meta-analysis should be kept on file and regularly reviewed in regard to e.g. retractions, typical cognitive biases, failures in experimental design and implementation, and general insufficiencies that should cause us to re-evaluate our beliefs, given all available information which meets our established inclusion criteria.
We have a process for peer review of PDFs - and hopefully datasets with locality for reproducibility and unitarity which purportedly helps us work through something like this sequence:
Data / Information / Knowledge / Experience / Wisdom
We often have gaps in our processes to support such progress in developing wisdom from knowledge that should be predicated upon sound information and data; and then bias creeps in.
Basic principles restricting the powers of the government should prevent the government - us, we - from specifically violating the protected rights of persons; but we have allowed "Science" to cloud our judgement in application of our most basic principles of justice - i.e. Life, Liberty, and the pursuit of Happiness; and Equality and Equitability - and should we chalk the unintended consequences up to ignorance or malice?
More science all around: more Data Literacy - awareness of how many bad statistical claims are made every day, all around the world - is good, necessary, and essential to Media Literacy; which is how we would be forming our opinions if we didn't have better tools for truth and belief in science.
"What does it mean to know?" etc.
Logic, Inference, Reasoning and Statistics probably predicated upon classical statistical mechanics are supposed to bring us closer to knowing: to bring our beliefs closer to the most widely observed truths.
Which Verifiable Claims do we trust? What studies do we admit into our personal and community meta-analyses according to our shared inclusion criteria?
"Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)" is one standard for meta-analyses, for example. http://www.prisma-statement.org/ . Could the bad guys or the dumb good guys lie with that control in place, too? Can knowing our rights - and upholding oaths to uphold values - protect us from meta-analytical group failure?
Perhaps STEM/STEAM (Science, Technology, Engineering, Art, and Math/Medicine) majors and other interested parties can help develop solutions for #EvidenceBasedPolicy?
This one fell flat. Maybe it was the time of day? The question should be asked every year, at least, eh? "Ask HN: Systems for supporting Evidence-Based Policy?" https://news.ycombinator.com/item?id=22920613
>> What tools and services would you recommend for evidence-based policy tasks like meta-analysis, solution criteria development, and planned evaluations according to the given criteria?
>> Are they open source? Do they work with linked open data?
> I suppose I should clarify that citizens, consumers, voters, and journalists are not acceptable answers
"#LinkedMetaAnalyses", "#StructuredPremises"; Ctrl-F "linkedmeta", "linkedrep", "#LinkedResearch": https://westurner.github.io/hnlog/
Alright, my fair biases disclosed, on to reading the actual article: /1
Response to 'Call for Review: Decentralized Identifiers (DIDs) v1.0'
Better imho to ask: what user problem is DID solving?
Because: if it's truly decentralized, then there's no need to publish it. But publication is a core aspect of DID. The requirement that it must be published is a jedi mind trick that lets "on a blockchain" in. Now we see the real problem being solved (not a user problem): the need for something published on a blockchain.
The user problem - for those who see it that way - is that right now, digital stuff related to a person is tied up primarily with the person's email address, or in some cases the person's phone number.
(For the purposes of this discussion, call the email address or phone number an "identifier".)
Why is this a problem? Two reasons:
1. People don't "control" those identifiers
Many email addresses used in this context are controlled by employers, and the person's right to use them ends when they leave employment. Or they are controlled through expensive commercial arrangement between the person and the platform, and the person may lose access if they are no longer able to afford the platform. Or, they are free, offered by large data harvesting/advertising platforms, who mine data stored on the platform, and as below, linked to those identifiers from other platforms, to create advertising and propaganda targeting profiles.
2. Those identifiers are used by others to contact the person, and are therefore long-lived, which means they are also a vehicle for correlating an individual's activity across internet platforms who are necessarily presented with those identifiers by the person when they engage with the platform.
DIDs are an attempt to have identifiers, controlled by people, that:
* are inexpensive
* can be short-lived and "rotated"
* can be specific to the relationship between a person and a particular platform
* can support more tailored association of personal data to identifier
* can better support the person's management and correlation of their platform relationships, while minimizing, if desired, the correlation of identifiers back to the person by the platforms themselves
* and support other use cases
In terms of "decentralization" and "publishing" - there is definitely a need to publish identifiers in some cases. People want to find others, and want to be found. Whether that publishing constitutes centralization is nuanced. But the key issue is that right now it is hard to impossible for a normal person to engage with a small or large platform without a widely used identifier.
[EDIT: whether normal users consider this to be a problem is an open question, as is whether they would if a solution to the problem existed...]
Somebody introduces a new technology to address these concerns every couple years and it doesn't go anywhere. These aren't actually problems to a lot of users. That's the real problem that needs to be solved - awareness. And that's a lot harder than taking the identity solutions we came up with in the Identity 2.0 days and adding a blockchain.
> Somebody introduces a new technology to address these concerns every couple years and it doesn't go anywhere. These aren't actually problems to a lot of users.
"Use Cases and Requirements for Decentralized Identifiers" https://www.w3.org/TR/did-use-cases/
> 2. Use Cases: Online shopper, Vehicle assemblies, Confidential Customer Engagement, Accessing Master Data of Entities, Transferable Skills Credentials, Cross-platform User-driven Sharing, Pseudonymous Work, Pseudonymity within a supply chain, Digital Permanent Resident Card, Importing retro toys, Public authority identity credentials (eIDAS), Correlation-controlled Services
And then, IIUC W3C Verifiable Credentials / ld-proofs can be signed with W3C DID keys - that can also be generated or registered centrally, like hosted wallets or custody services. There are many Use Cases for Verifiable Credentials: https://www.w3.org/TR/vc-use-cases/ :
> 3. User Needs: Education, Retail, Finance, Healthcare, Professional Credentials, Legal Identity, Devices
> 4. User Tasks: Issue Claim, Assert Claim, Verify Claim, Store / Move Claim, Retrieve Claim, Revoke Claim
> 5. Focal Use Cases: Citizenship by Parentage, Expert Dive Instructor, International Travel with Minor and Upgrade
> 6. User Sequences: How a Verifiable Credential Might Be Created, How a Verifiable Credential Might Be Used
IIRC DHS funded some of the W3C DID and Verified Credentials specification efforts. See also: https://news.ycombinator.com/item?id=26758099
There's probably already a good way to bridge between sub-SKU GS1 schema.org/identifier on barcodes and QR codes and with DIDs. For GS1, you must register a ~namespace prefix and then you can use the rest of the available address space within the barcode or QR code IIUC.
DIDs could replace ORCIDs - of which you can also just generate a new one - for academics seeking to group their ScholarlyArticles by a better identifier than a transient university email address.
The new UUID formats may or may not be optionally useful in conjunction with W3C DID, VC, and Verifiable News, etc. https://news.ycombinator.com/item?id=28088213
When would a DID be a better choice than a UUID?
Apple didn't revolutionize power supplies; new transistors did (2012)
While obviously Jobs’ claim was false, I will say that Apple is the only company I am aware of that manufactures power supplies which are reliably, completely free of perceptible inductor whine. I have very acute high-frequency hearing, and I often have to replace non-Apple USB(-C) switching power supplies with Apple ones so I don’t go crazy from the whining. Teardowns of Apple PSUs typically reveal very favorable electronic and industrial design as well.
It’s extremely frustrating, and I was always surprised when I returned mid-range USB chargers because of whine, only to receive a replacement with the same problem and hundreds of reviews that failed to mention it. I’ve never had an issue with Apple chargers, and the extra cost is money well spent.
Buy genuine Apple chargers, if not for you, then for your dog.
What does my engineering manager do all day?
I’ve been in those meetings; we’ve all been, at different responsibility levels. We know that people are slacking at least 80% of the time and that most of those meetings could be efficiently replaced by emails.
So if your work is 90% meetings I am sorry but you have a bullshit job, nothing more, nothing less.
I kind of agree.
In my last role I ended up in a fantastic place where the teams under me could operate fairly autonomously and were happy. My only recurring meetings were:
- 1-1s fortnightly and only with senior members
- My 1-1 with my boss, again fortnightly
- Senior leadership team, once a week
I kept 1-1s to 30mins unless someone had an important issue, in which case I was completely flexible.
Most of my time went on planning/strategy, customer research, and acting as an ad-hoc coach for the teams beneath me.
The key things I found were:
- many meetings can be replaced by an update email
- decisions often work better through docs + feedback than big meetings
- you don't need frequent contact with the team if the goals and constraints are communicated very clearly
I realise none of this is groundbreaking. But it seems quite rare, and also hard to achieve in practice.
> - many meetings can be replaced by an update email
Highlights from the feed(s); GitLab has the better activity view IMHO but I haven't tried the new GitHub Issues beta yet.
3 questions, from 5-minute Stand-Up Meetings (because everyone's actually standing there trying to leave), adapted for Digital Stand-Up Meetings: Since, Before, Obstacles:
## 2021-09-28
### @teammembername
#### Since
#### Before
#### Obstacles
Since: What have you done since last reporting back? Before: What do you plan to do before our next meeting? Obstacles: What needs which other team resources in order to solve the obstacles?

You can do cool video backgrounds for any video conferencing app with PipeWire.
You can ask team members to prep a .txt with their 3 questions and drop it in the chat, such that the team can reply to individual #fragments of your brief status report / continued employment justification argument.
> - decisions often work better through docs + feedback than big meetings
So, ah, asynchronous communication doesn't require transcription for the "Leader Assistant" that does the mandatory quarterly minutes from the team chat logs, at least.
6 Patterns of Collaboration: GRCOEB: Generate, Reduce, Clarify, Organize, Evaluate, Build Consensus [Six Patterns]; voting on specific Issues, and ideally Chat - [x] lineitems, and https://schema.org/SocialMediaPosting with emoji reactions
[Six Patterns]: http://wrdrd.github.io/docs/consulting/team-building#six-pat... , Text Templates, Collaboration Checklist: Weighted Criteria, Ranked-choice Voting.
Docs and posts with URLs and in-text pull-quotes do better than another list of citations at the end.
> - you don't need frequent contact with the team if the goals and constraints are communicated very clearly
Metrics: OKRs, KPIs, #GlobalGoals Goals Targets and Indicators
Tools / Methods; Data / Information / Knowledge / Experience / Wisdom:
- Issues: Title, - [ ] Description, Labels, Assignee, - [ ] Comments, Emoji Reactions;
- Pull Requests, - [ ] [Optional] [Formal] Reviews, Labels & "Codelabels", label:SkipPreflight, CI Build Logs, and Signed Deployed Documented Applications; code talks, the tests win again, docs sell
- Find and Choose - with Consensus - a sufficiently mature Component that already testably does: unified Email notifications (with inbound replies,) and notifications on each and every Chat API and the web standard thing finally, thanks: W3C Web Notifications.
- Contribute Tests for [open source] Components.
- [ ] Create a workflow document with URLs and Text Templates
- [ ] Create a daily running document with my 3 questions and headings and indented markdown checkbox lists; possibly also with todotxt/todo.txt / TaskWarrior & BugWarrior -style lineitem markup.
What does an engineering manager do all day?
A polite answer would be, continuously reevaluate the tests of the product and probably also the business model if anyone knew what they were up to in there
Using two keyboards at once for pain relief
> I tried a few of those Kinesis split keyboards. Too squishy for me. Not far enough apart. The Cherry MX Kinesis split keyboard was too clicky for calls and screenshares. Muscle memory made it difficult to switch.
This is a really cool hack, and I’m happy that the author found a solution for their pain that works for them, but this bit confused me.
Kinesis are keyboards with separated key clusters, but not split keyboards. When one says split keyboard I think they are normally talking about things like the Ergodox EZ/Moonlander which have two physically separate bodies, one for each hand. There are many different models of these with various shapes and sizes, and you can separate them as much as you like. The normal advice is to set them up around shoulder width apart so you aren’t rounding your back to bring your arms together.
Most of these kinds of keyboards also support whatever key switches you prefer, and there are plenty of options that are sufficiently quiet for zoom (pretty much anything linear should do the trick)
I have been using a Moonlander for a couple of years now, and an EZ before that. They are expensive at around $400 but I don’t think I can ever go back. Most of these split keyboards also run QMK so you can setup binds, layers, and generally configure them however you like.
It took me years to find a split, ergo-style keyboard with mechanical keys, but I finally did. The Freestyles have them, but I don't like the super flat layout.
I am loving the split, angled setup with my preferred Cherry MX Blues.
The MS Natural split keyboards are easy to find, but they don't have satisfyingly clicky mechanical keys like olden times.
How long do these last?
Edit: "Ergonomic keyboard" https://en.wikipedia.org/wiki/Ergonomic_keyboard > #Split_keyboard:
> Split keyboards group keys into two or more sections. Ergonomic split keyboards can be fixed, where you cannot change the positions of the sections, or adjustable. Split keyboards typically change the angle of each section, and the distance between them. On an adjustable split keyboard, this can be tailored exactly to the user. People with a broad chest will benefit from an adjustable split keyboard's ability to customize the distance between the two halves of the board. This ensures the elbows are not too close together when typing. [2]
Waydroid – Run Android containers on Ubuntu
Doubts:
- Is it better/faster/more compatible than Anbox?
- Runs on ARM?
- Does it allow me to watch DRM streaming services on my Linux box?
- Can I install google play on it?
Waydroid has Android use your actual Linux kernel, so on an x86-64 host you’ll run x86-64 Android, and on an ARM host, ARM Android. This means that there will be some apps that won’t run on your Intel/AMD computer. I have no idea at all how common it is for Android apps to be tied to ARM, but I imagine that ARM64 will have helped with architecture-neutrality.
> This means that there will be some apps that won’t run on your Intel/AMD computer.
Should be able to still run them if you use binfmt_misc. Of course it will be slower but it's possible.
> binfmt_misc
https://en.wikipedia.org/wiki/Binfmt_misc
> binfmt_misc can also be combined with QEMU to execute programs for other processor architectures as if they were native binaries.[9]
QEMU supported [ARM guest] machines: https://wiki.qemu.org/Documentation/Platforms/ARM#Supported_...
Edit: from "Running and Building ARM Docker Containers on x86" (which also describes how to get CUDA working) https://www.stereolabs.com/docs/docker/building-arm-containe... :
sudo apt-get install qemu binfmt-support qemu-user-static # Install the qemu packages
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes # Execute the registering scripts
docker run --rm -t arm64v8/ubuntu uname -m # Test the emulation environment
https://github.com/multiarch/qemu-user-static :

> multiarch/qemu-user-static is to enable an execution of different multi-architecture containers by QEMU [1] and binfmt_misc [2]. Here are examples with Docker [3].
Why the heck isn't there just an official Android container and/or a LineageOS container?
It's not a certified device, so.
There are a number of ways to build "multi-arch docker images" e.g. for both x86 and ARM: OCI, docker build, podman build, buildx, buildah.
Containers are testable.
Here's this re: whether the official OpenWRT container should run /sbin/init in order to run procd, ubusd, etc.: https://github.com/docker-library/official-images/pull/7975#...
AFAIU, from a termux issue thread re: repackaging everything individually, latest Android requires binaries to be installed from APKs to get the SELinux context label necessary to run?
Biologists Rethink the Logic Behind Cells’ Molecular Signals
I think (hope) that the deterministic model known as lock and key has been known to be a flawed view for quite some time. Books published in the early 00s (notably "Ni Dieu ni gène", by Kupiec and Sonigo) were already making this point in a popular science format, and explaining that even the concept of cells exchanging signals was flawed.
A cell has no evolutionary reason to transmit signals. It will however eat molecules it can use, and excrete molecules that are no longer needed, because this allows the cell to survive and reproduce. A white blood cell eating a bacterium doesn't do so with an intention to protect some organ somewhere in the body; it does so as a predator eats prey. Leukocytes that eat well, i.e. face bacteria they can eat, then multiply, and end up eating all the bacteria, before dying off when there's nothing more to eat. So they protect the body and then remove themselves, not because they have the intention to do it for the greater good, or because they received a signal, but because they have evolved to prey on bacteria within the ecosystem of the body.
Thinking of the body as a well ordered mechanism is a flawed view, there are no locks and keys, and most likely very few signals if any. Thinking of the body as a dynamically balanced ecosystem seems much closer to how cells behave and to the fantastically complex feedback systems that have evolved over eons and are now balancing the populations of cells in our bodies.
We may be ecosystems of individually oblivious and dumb little cells, but isn't it wondrous that from this emerges a complexity that can say "I" and has consciousness of self?
But reproduction is a fundamental aspect of evolution, and (most?) cells in the body don't reproduce on their own, but rather they are manufactured. T cells for example are manufactured by bone marrow and the thymus multiplies them if I'm understanding correctly. So I'm not sure how that fits in with the view you explained in your post.
In other words the T cell doesn't get feedback on its own fitness. The fitness feedback is at the level of the reproductive success of a human being (healthy humans can have more kids than sick ones), not the reproductive success of a T cell (because it has no reproductive capabilities and therefore cannot be subject to selection pressures). Correct me if I'm wrong.
Most cells or matter in the body?
From https://www.nature.com/articles/nature.2016.19136 :
> A 'reference man' (one who is 70 kilograms, 20–30 years old and 1.7 metres tall) contains on average about 30 trillion human cells and 39 trillion bacteria, […] Those numbers are approximate — another person might have half as many or twice as many bacteria, for example — but far from the 10:1 ratio commonly assumed.
Symbiosis https://en.wikipedia.org/wiki/Symbiosis :
> Symbiosis […] is any type of a close and long-term biological interaction between two different biological organisms, be it mutualistic, commensalistic, or parasitic. […]
> Symbiosis can be obligatory, which means that one or more of the symbionts depend on each other for survival, or facultative (optional), when they can generally live independently. […]
> Symbiosis is also classified by physical attachment. When symbionts form a single body it is called conjunctive symbiosis, while all other arrangements are called disjunctive symbiosis.[3] When one organism lives on the surface of another, such as head lice on humans, it is called ectosymbiosis; when one partner lives inside the tissues of another, such as Symbiodinium within coral, it is termed endosymbiosis.
Endosymbiont: https://en.wikipedia.org/wiki/Endosymbiont :
> Two major types of organelle in eukaryotic cells, mitochondria and plastids such as chloroplasts, are considered to be bacterial endosymbionts.[6] This process is commonly referred to as symbiogenesis.
Symbiogenesis: https://en.wikipedia.org/wiki/Symbiogenesis #Secondary_endosymbiosis ... Viral eukaryogenesis: https://en.wikipedia.org/wiki/Viral_eukaryogenesis :
> A number of precepts in the theory are possible. For instance, a helical virus with a bilipid envelope bears a distinct resemblance to a highly simplified cellular nucleus (i.e., a DNA chromosome encapsulated within a lipid membrane). In theory, a large DNA virus could take control of a bacterial or archaeal cell. Instead of replicating and destroying the host cell, it would remain within the cell, thus overcoming the tradeoff dilemma typically faced by viruses. With the virus in control of the host cell's molecular machinery, it would effectively become a functional nucleus. Through the processes of mitosis and cytokinesis, the virus would thus recruit the entire cell as a symbiont—a new way to survive and proliferate.
T-Cell # Activation: https://en.wikipedia.org/wiki/T_cell#Activation
> Both are required for production of an effective immune response; in the absence of co-stimulation, T cell receptor signalling alone results in anergy. […]
> Once a T cell has been appropriately activated (i.e. has received signal one and signal two) it alters its cell surface expression of a variety of proteins.
T-cell receptor § Signaling pathway: https://en.wikipedia.org/wiki/T-cell_receptor#Signaling_path...
Co-stimulation : https://en.wikipedia.org/wiki/Co-stimulation :
> Co-stimulation is a secondary signal which immune cells rely on to activate an immune response in the presence of an antigen-presenting cell.[1] In the case of T cells, two stimuli are required to fully activate their immune response. During the activation of lymphocytes, co-stimulation is often crucial to the development of an effective immune response. Co-stimulation is required in addition to the antigen-specific signal from their antigen receptors.
Anergy: https://en.wikipedia.org/wiki/Clonal_anergy :
> [Clonal] Anergy is a term in immunobiology that describes a lack of reaction by the body's defense mechanisms to foreign substances, and consists of a direct induction of peripheral lymphocyte tolerance. An individual in a state of anergy often indicates that the immune system is unable to mount a normal immune response against a specific antigen, usually a self-antigen. Lymphocytes are said to be anergic when they fail to respond to their specific antigen. Anergy is one of three processes that induce tolerance, modifying the immune system to prevent self-destruction (the others being clonal deletion and immunoregulation ).[1]
Clonal deletion: https://en.wikipedia.org/wiki/Clonal_deletion :
> There are millions of B and T cells inside the body, both created within the bone marrow and the latter matures in the thymus, hence the T. Each of these lymphocytes express specificity to a particular epitope, or the part of an antigen to which B cell and T cell receptors recognize and bind. There is a large diversity of epitopes recognized and, as a result, it is possible for some B and T lymphocytes to develop with the ability to recognize self.[4] B and T cells are presented with self antigen after developing receptors while they are still in the primary lymphoid organs.[3][4] Those cells that demonstrate a high affinity for this self antigen are often subsequently deleted so they cannot create progeny, which helps protect the host against autoimmunity.[2][3] Thus, the host develops a tolerance for this antigen, or a self tolerance.[3]
"DNA threads released by activated CD4+ T lymphocytes provide autocrine costimulation" (2019) https://www.pnas.org/content/116/18/8985
> A growing body of literature has shown that, aside from carrying genetic information, both nuclear and mitochondrial DNA can be released by innate immune cells and promote inflammatory responses. Here we show that when CD4+ T lymphocytes, key orchestrators of adaptive immunity, are activated, they form a complex extracellular architecture composed of oxidized threads of DNA that provide autocrine costimulatory signals to T cells. We named these DNA extrusions “T helper-released extracellular DNA” (THREDs).
FWIU, there's also a gut-brain pathway? Or is that also this "signaling method" for feedback in symbiotic complex dynamic systems?
From https://en.wikipedia.org/wiki/Complex_system :
> Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of their independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and links to their interactions.
Graph, Hypergraph, Property graph, Linked Data, AtomSpace, RDF* + SPARQL*, ONNX, {...}
> The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment.[1] The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.
A multi-digraph of probably nonlinear relations may not be the best way to describe the fields of even just a few electroweak magnets?
> As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the study of self-organization and critical phenomena from physics, that of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.
... Glossary of Systems Theory: https://en.wikipedia.org/wiki/Glossary_of_systems_theory
The Shunting-yard algorithm converts infix notation to RPN
RosettaCode has examples of the Shunting-Yard algorithm for parsing infix notation like ((1+2)*3)^4 to an AST, or to just a stack of data and operators such as RPN:
Parsing/Shunting-yard algorithm: https://rosettacode.org/wiki/Parsing/Shunting-yard_algorithm
Parsing/RPN to infix conversion: https://rosettacode.org/wiki/Parsing/RPN_to_infix_conversion...
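Alongside the RosettaCode examples, here is a minimal sketch of the algorithm in Python. The precedence table and the operator set are simplified assumptions (single-character tokens only, no functions or unary minus):

```python
# Minimal shunting-yard: infix token list -> RPN (postfix) token list.
# PREC maps operator -> (precedence, associativity); extend as needed.
PREC = {"+": (2, "L"), "-": (2, "L"), "*": (3, "L"), "/": (3, "L"), "^": (4, "R")}

def shunting_yard(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            p, assoc = PREC[tok]
            # Pop higher-precedence (or equal, if left-associative) operators
            while ops and ops[-1] in PREC and (
                PREC[ops[-1]][0] > p or (PREC[ops[-1]][0] == p and assoc == "L")
            ):
                out.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                out.append(ops.pop())
            ops.pop()  # discard the matching "("
        else:  # operand
            out.append(tok)
    out.extend(reversed(ops))  # flush remaining operators
    return out

print(shunting_yard(list("((1+2)*3)^4")))  # ['1', '2', '+', '3', '*', '4', '^']
```

Note the associativity check: `^` is right-associative, so `2^3^2` yields `2 3 2 ^ ^`, i.e. `2^(3^2)`.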
Applications: testing all combinations of operators with and without term grouping (parentheses); e.g. evolutionary algorithms or universal function approximators that explore the space.
For example: https://github.com/westurner/notebooks/blob/gh-pages/maths/b... :
> This still isn't the complete set of possible solutions
[deleted]
How should logarithms be taught?
As one shape of a curve; in a notebook that demonstrates multiple methods of curve fitting with and without a logarithmic transform.
Logarithm: https://simple.wikipedia.org/wiki/Logarithm ; https://en.wikipedia.org/wiki/Logarithm :
> In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised, to produce that number x.
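That inverse relationship, and the change-of-base identity, can be checked with a short stdlib-only snippet (the values x = 81, b = 3 are just illustrative):

```python
import math

x, b = 81.0, 3.0
e = math.log(x, b)              # the exponent e such that b**e == x
assert math.isclose(e, 4.0)     # log_3(81) == 4
assert math.isclose(b ** e, x)  # exponentiation undoes the logarithm

# Change of base: log_b(x) == ln(x) / ln(b)
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))
```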
List of logarithmic identities: https://en.wikipedia.org/wiki/List_of_logarithmic_identities
List of integrals of logarithmic functions: https://en.wikipedia.org/wiki/List_of_integrals_of_logarithm...
As functions in a math library or a CAS that should implement the correct axioms correctly:
Sympy Docs > Functions > Contents: https://docs.sympy.org/latest/modules/functions/index.html#c...
sympy.functions.elementary.exponential. log(x, b) == log(x)/log(b), exp(), LambertW(), exp_polar() https://docs.sympy.org/latest/modules/functions/elementary.h...
"Exponential, Logarithmic and Trigonometric Integrals" sympy.functions.special.error_functions. Ei: exponential integral, li: logarithmic integral, Li: offset logarithmic integral https://docs.sympy.org/latest/modules/functions/special.html...
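A small check of how SymPy represents logs with an explicit base (assuming SymPy is installed; `positive=True` keeps the simplifications valid over the reals):

```python
import sympy as sp

x = sp.symbols("x", positive=True)

# log with an explicit base is stored as a ratio of natural logs
assert sp.log(x, 2) == sp.log(x) / sp.log(2)

# exp and log are inverse functions for positive real x
assert sp.simplify(sp.log(sp.exp(x)) - x) == 0
```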
numpy.log. log() base e, log2(), log10(), log1p(x) == log(1 + x) https://numpy.org/doc/stable/reference/generated/numpy.log.h...
numpy.exp. exp(), expm1(x) == exp(x) - 1, exp2(x) == 2**x https://numpy.org/doc/stable/reference/generated/numpy.exp.h...
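These identities can be spot-checked numerically (assuming NumPy is installed; the sample values are arbitrary):

```python
import numpy as np

x = np.array([1e-12, 0.5, 2.0])

# log1p/expm1 stay accurate for tiny x, where log(1 + x) and exp(x) - 1
# would lose precision to floating-point cancellation
assert np.allclose(np.log1p(x), np.log(1.0 + x))
assert np.allclose(np.expm1(x), np.exp(x) - 1.0)

# base-2 pair: exp2 and log2 are inverses
assert np.allclose(np.exp2(x), 2.0 ** x)
assert np.allclose(np.log2(np.exp2(x)), x)
```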
Khan Academy > Algebra 2 > Unit: Logarithms: https://www.khanacademy.org/math/algebra2/x2ec2f6f830c9fb89:...
Khan Academy > Algebra (all content) > Unit: Exponential & logarithmic functions https://www.khanacademy.org/math/algebra-home/alg-exp-and-lo...
3blue1brown: "Logarithm Fundamentals | Lockdown math ep. 6", "What makes the natural log "natural"? | Lockdown math ep. 7" https://www.youtube.com/playlist?list=PLZHQObOWTQDP5CVelJJ1b...
Feynman Lectures 22-6: Algebra > Imaginary Exponents: https://www.feynmanlectures.caltech.edu/I_22.html#Ch22-S6
Power law functions: https://en.wikipedia.org/wiki/Power_law#Power-law_functions
In a two-body problem, of the 4-5 fundamental interactions: Gravity, Electroweak interaction, Strong interaction, Higgs interaction, a fifth force; which have constant exponential terms in their symbolic field descriptions? https://en.wikipedia.org/wiki/Fundamental_interaction#The_in...
Natural logs in natural systems:
Growth curve (biology) > Exponential growth: https://en.wikipedia.org/wiki/Growth_curve_(biology)#Exponen...
Basic reproduction number: https://en.wikipedia.org/wiki/Basic_reproduction_number
(... Growth hacking; awesome-growth-hacking: https://github.com/bekatom/awesome-growth-hacking )
Metcalfe's law: https://en.wikipedia.org/wiki/Metcalfe%27s_law
Moore's law; doubling time: https://en.wikipedia.org/wiki/Moore's_law
A block reward halving is a doubling of difficulty. What block reward difficulty schedule would be a sufficient inverse of Moore's law?
A few queries:
logarithm cheatsheet https://www.google.com/search?q=logarithm+cheatsheet
logarithm on pinterest https://www.pinterest.com/search/pins/?q=logarithm
logarithm common core worksheet https://www.google.com/search?q=logarithm+common+core+worksh...
logarithm common core autograded exercise (... Khan Academy randomizes from a parametrized (?) test bank for unlimited retakes for Mastery Learning) https://www.google.com/search?q=logarithm+common+core+autogr...
If only I had started my math career with a binder of notebooks or at least 3-hole-punched notes.
- [ ] Create a git repo with an environment.yml that contains e.g. `mamba install -y jupyter-book jupytext jupyter_contrib_extensions jupyterlab-git nbdime jupyter_console pandas matplotlib sympy altair requests-html`, build a container from said repo with repo2docker, and git commit and push changes made from within the JupyterLab instance that repo2docker layers on top of your reproducible software dependency requirement specification ("REES"). {bash/zsh, git, docker, repo2docker, jupyter, [MyST] markdown and $$ mathTeX $$; Google Colab, Kaggle Kernels, ml-workspace, JupyterLite}
"How I'm able to take notes in mathematics lectures using LaTeX and Vim" https://news.ycombinator.com/item?id=19448678
Here's something like MyST Markdown or Rmarkdown for Jupyter-Book and/or jupytext:
## Log functions
Log functions in the {PyData} community
### LaTeX
#### sympy2latex
What e.g. sympy2latex parses that LaTeX into, in terms of symbolic objects in an expression tree:
### numpy
see above
### scipy
### sympy
see above
### sagemath
### statsmodels
### TensorFlow
### PyTorch
## Logarithmic and exponential computational complexity
- Docs: https://www.bigocheatsheet.com/
- [ ] DOC: Rank these with O(1) first: O(n log n), O(log n), O(1), O(n), O(2^n) +growthcurve +exponential
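A minimal sketch of one way to answer that exercise (the function table and the sample n are my own): evaluate each growth curve at a fixed n and sort ascending.

```python
import math

# Evaluate each growth function at n = 64 and rank ascending; the ordering
# illustrates O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n).
growth = {
    "O(1)": lambda n: 1,
    "O(log n)": lambda n: math.log2(n),
    "O(n)": lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)": lambda n: n ** 2,
    "O(2^n)": lambda n: 2 ** n,
}

n = 64
ranked = sorted(growth, key=lambda name: growth[name](n))
print(ranked)
# ['O(1)', 'O(log n)', 'O(n)', 'O(n log n)', 'O(n^2)', 'O(2^n)']
```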
## Combinatorics, log, exp, and Shannon classical entropy and classical Boolean bits
https://www.google.com/search?q=formula+for+entropy :
S = k_B \ln \Omega
Entropy > Statistical mechanics: https://en.wikipedia.org/wiki/Entropy#Statistical_mechanics
SI unit for entropy: joules per kelvin (J·K⁻¹)
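As a quick sanity check on that formula (the function names here are mine), both the Boltzmann form and the Shannon classical-bits form fit in a few lines of Python:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def boltzmann_entropy(omega):
    """S = k_B * ln(Omega), in joules per kelvin (J·K⁻¹)."""
    return k_B * math.log(omega)

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), in classical bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one classical bit of entropy:
print(shannon_entropy([0.5, 0.5]))  # 1.0
# Two equally likely microstates: S = k_B ln 2
print(boltzmann_entropy(2))
```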
*****
In terms of specifying tasks for myself in order to learn {Logarithms,} I could use e.g. todo.txt markup to specify tasks with [project and concept] labels and contexts; but todo.txt doesn't support nested lists like markdown checkboxes do with todo.txt markup and/or code labels (if it's software math)
- [ ] Read the Logarithms wikipedia page <url> and take +notes +math +logarithms @workstation
- [o] Read
- [x] BLD: mathrepo: generate from cookiecutter or nbdev
- [ ] DOC: mathrepo: logarithm notes
- [ ] DOC,ART: mathrepo: create exponential and logarithmic charts +logarithms @workstation
- [ ] ENH,TST,DOC: mathrepo: logarithms with stdlib math, numpy, sympy (and *pytest* or at least `assert` assertion expressions)
- [ ] ENH,TST,DOC: mathrepo: logarithms and exponents with NN libraries (and *pytest*)
Math (and logic; ultimately thermodynamics) transcends disciplines. Not to bikeshed - a name can be sed-replaced later - but choose a good variable name now:
Is 'mathrepo' the best scope for this project? Smaller dependency sets (i.e. a simpler environment.yml) seem to result in fewer version conflicts. `conda env export --from-history; mamba env export --from-history; pip freeze; pipenv -h; poetry -h`

### LaTeX
$$ \log_{b} x = y \iff b^{y} = x $$
$$ 2^3 = 8 $$
$$ \log_{2} 8 = 3 $$
$$ \ln e = 1 $$
$$ \log_b(xy)=\log_b(x)+\log_b(y) $$
$$ \begin{align}
\textit{(1) } \log_b(xy) & = \log_b(x)+\log_b(y)
\end{align} $$
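These identities can be checked symbolically with SymPy (assuming positive symbols, which `expand_log` requires to apply the product rule):

```python
import sympy
from sympy import expand_log, log, simplify, symbols

# expand_log only applies log(x*y) = log(x) + log(y) for positive symbols
x, y, b = symbols('x y b', positive=True)
lhs = log(x * y, b)
rhs = log(x, b) + log(y, b)
assert simplify(expand_log(lhs) - rhs) == 0

# And the worked example above: log_2 8 = 3
assert abs(float(log(8, 2)) - 3) < 1e-12
```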
Sources: https://en.wikipedia.org/w/index.php?title=List_of_logarithm...

#### sympy2latex
What e.g. sympy2latex parses that LaTeX into, in terms of symbolic objects in an expression tree:
# install
#!python -m pip install antlr4-python3-runtime sympy
#!mamba install -y -q antlr-python-runtime sympy
import sympy
from sympy.parsing.latex import parse_latex
def displaylatexexpr(latex):
    expr = parse_latex(latex)
    display(str(expr))  # display() is provided by IPython/Jupyter
    display(expr)
    return expr
displaylatexexpr(r'\log_{2} 8')
# 'log(8, 2)'
displaylatexexpr(r'\log_{2} 8 = 3')
# 'Eq(log(8, 2), 3)'
displaylatexexpr(r'\log_b(xy) = \log_b(x)+\log_b(y)')
# 'Eq(log(x*y, b), log(x, b) + log(y, b))'
displaylatexexpr(r'\log_{b} (xy) = \log_{b}(x)+\log_{b}(y)')
# 'Eq(log(x*y, b), log(x, b) + log(y, b))'
displaylatexexpr(r'\log_{2} (xy) = \log_{2}(x)+\log_{2}(y)')
# 'Eq(log(x*y, 2), log(x, 2) + log(y, 2))'
### python standard library
https://docs.python.org/3/library/operator.html#operator.pow
https://docs.python.org/3/library/math.html#power-and-logari...
math. exp(x), expm1(x), log(x[, base]) (base defaults to e), log1p(x), log2(x), log10(x), pow(x, y) : float; assert math.sqrt(x) == math.pow(x, 0.5)
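A few assertions exercising those stdlib functions, including why `expm1`/`log1p` exist (they avoid catastrophic cancellation near zero):

```python
import math

# exp/log are inverses; log2/log10 are the common special bases
assert math.isclose(math.log(math.exp(5.0)), 5.0)
assert math.isclose(math.log2(8), 3.0)
assert math.isclose(math.log10(1000), 3.0)
# math.log takes an optional second positional base argument:
assert math.isclose(math.log(8, 2), 3.0)
# expm1/log1p stay accurate near zero, where exp(x)-1 and log(1+x)
# computed naively would lose almost all significant digits:
x = 1e-10
assert math.isclose(math.expm1(x), x, rel_tol=1e-9)
assert math.isclose(math.log1p(x), x, rel_tol=1e-9)
assert math.isclose(math.sqrt(2), math.pow(2, 0.5))
```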
### scipy
https://docs.scipy.org/doc/scipy/reference/generated/scipy.s... scipy.special. xlog1py()
https://docs.scipy.org/doc/scipy/reference/generated/scipy.s...
### sagemath
https://doc.sagemath.org/html/en/reference/functions/sage/fu...
### statsmodels
### TensorFlow
https://www.tensorflow.org/api_docs/python/tf/math tf.math. log(), log1p(), log_sigmoid(), exp(), expm1()
https://keras.io/api/layers/activations/
SmoothReLU ("softplus"), ln(1 + e^x), adds ln to the ReLU activation function: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#So...
E.g. Softmax & LogSumExp also include natural logarithms in their definitions: https://en.wikipedia.org/wiki/Softmax_function
### PyTorch
https://pytorch.org/docs/stable/generated/torch.log.html torch. log(), log10(), log1p(), log2(), exp(), exp2(), expm1(); logaddexp() , logaddexp2(), logsumexp(), torch.special.xlog1py()
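Why the libraries above all ship `logsumexp()`/`logaddexp()` variants: a naive `log(sum(exp(a)))` overflows for large inputs, while shifting by the max does not. A minimal NumPy stand-in (my own sketch, not the PyTorch or TensorFlow implementation):

```python
import numpy as np

def logsumexp(a):
    """Numerically stable log(sum(exp(a))): shift by the max before exponentiating."""
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

a = np.array([1000.0, 1000.0])
with np.errstate(over='ignore'):
    naive = np.log(np.sum(np.exp(a)))  # exp(1000) overflows, so this is inf
stable = logsumexp(a)                  # 1000 + log(2), computed safely
print(naive, stable)
```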
***
Regarding this learning process and these tools: now I have a few replies to myself (!) in not-quite-markdown and with various headings. I should consolidate this information into a [MyST] markdown Jupyter Notebook and re-edit the whole thing. If this had been decent markdown from the start, I'd have less markup work to do to create a ScholarlyArticle / Notebook.
Automatic cipher suite ordering in Go’s crypto/tls
Summary: The Golang team is deciding what ranked order TLS cipher suites should be used in. You are not able to decide what cipher suites to use, the Golang team sets that in the code and will update it as they see fit.
My take on this is that Filippo is taking a heavy handed approach here. This works for the majority of "dev write code fast ship it over the wall" scenarios. But it also falls apart in a couple scenarios:
* Companies/govt agencies which mandate the use of certain algorithms, such as RSA, and both sides of the connection are running Golang code. Now you need to vendor the TLS library to allow for what the company wants.
* If an issue is discovered with an algorithm it would be really nice to be able to turn off the algorithm in production until the issue can be patched. I'm not sure if this is possible with the current set of changes.
Any government agency that mandates use of specific TLS algorithms (like a whitelist) almost certainly requires FIPS cryptography (or classified cryptography), so you won’t be using crypto/TLS anyway.
As a security/cryptography engineer, I love this change: for cryptography, it’s clear that more knobs == more problems.
As a developer, however, I dislike it: more knobs make it easier to get the code to do what I need it to do.
Personally, I think it’s a good choice by Filippo.
From "Go Crypto and Kubernetes — FIPS 140–2 and FedRAMP Compliance" (2021) https://gokulchandrapr.medium.com/go-crypto-and-kubernetes-f... :
> If a vendor wants to supply cloud-based services to the US Federal Government, then they have to get FedRAMP approval. This certification process covers a whole host of security issues, but is very specific about its requirements on cryptography: usage of FIPS 140–2 validated modules wherever cryptography is needed, these encryption standards protect the cryptographic module from being cracked, altered, or otherwise tampered with. FIPS 140–2 validated encryption is a prerequisite for FedRAMP. [...]
> [...] Go Cryptography and Kubernetes — FIPS 140–2 Kubernetes is a Go project, as are most of the Kubernetes subcomponents and ecosystem. Golang has a crypto standard library, Golang Crypto which fulfills almost all the application crypto needs (TLS stack implementation for HTTPS servers and clients all the way to HMAC or any other primitive that are needed to make signatures to verify hashes, encrypt messages.). Go has made a different choice compared to most languages, which usually come with links or wrappers for OpenSSL or simply don’t provide any cryptography in the standard library (Rust doesn’t have standard library cryptography, JavaScript only has web crypto, Python doesn’t come with a crypto standard library). [...]
> The native go crypto is not FIPS compliant and there are few open proposals to facilitate Go code to meet FIPS requirements. Users can use prominent go compilers/toolsets backed by FIPS validated SSL libraries provided by Google or Redhat which enables Go to bypass the standard library cryptographic routines and instead call into a FIPS 140–2 validated cryptographic library. These toolsets are available as container images, where users can use the same to compile any Go based applications. [...]
> When a RHEL system is booted in FIPS mode, Go will instead call into OpenSSL via a new package that bridges between Go and OpenSSL. This also can be manually enabled by setting `GOLANG_FIPS=1`. The Go Toolset is available as a container image that can be downloaded from Red Hat Container Registry. Red Hat mentions that this as a new feature built on top of existing upstream work (BoringSSL). [...]
> To be FIPS 140–2 compliant, the module must use FIPS 140–2 compliant algorithms, ciphers, key establishment methods, and other protection profiles.
> FIPS-approved algorithms do change at times; not extremely frequently, but more often than they come out with a new version of FIPS 140. [...]
> Some of the fundamental requirements (not limited to) are as follows:
> [...] Support for TLS 1.0 and TLS 1.1 is now deprecated (only allowed in certain cases). TLS 1.3 is the preferred option, while TLS 1.2 is only tolerated.
> [...] DSA/RSA/ECDSA are only approved for key generation/signature.
> [...] The 0-RTT option in TLS 1.3 should be avoided.
Was there lag between the release of TLS 1.3 and an updated release of FIPS 140? @18f @DefenseDigital Can those systems be upgraded as easily?
NIST just moves slowly is all.
Common Criteria (NIAP) still only allows TLSv1.2 and TLSv1.1.
Scikit-Learn Version 1.0
There are scikit-learn (sklearn) API-compatible wrappers for e.g. PyTorch and TensorFlow.
Skorch: https://github.com/skorch-dev/skorch
tf.keras.wrappers.scikit_learn: https://www.tensorflow.org/api_docs/python/tf/keras/wrappers...
AFAIU, there are no Yellowbrick visualizers for PyTorch or TensorFlow; though PyTorch and TensorFlow work with TensorBoard for visualizing CFG execution.
> Many machine learning libraries implement the scikit-learn `estimator API` to easily integrate alternative optimization or decision methods into a data science workflow. Because of this, it seems like it should be simple to drop in a non-scikit-learn estimator into a Yellowbrick visualizer, and in principle, it is. However, the reality is a bit more complicated.
> Yellowbrick visualizers often utilize more than just the method interface of estimators (e.g. `fit()` and `predict()`), relying on the learned attributes (object properties with a single underscore suffix, e.g. `coef_`). The issue is that when a third-party estimator does not expose these attributes, truly gnarly exceptions and tracebacks occur. Yellowbrick is meant to aid machine learning diagnostics reasoning, therefore instead of just allowing drop-in functionality that may cause confusion, we’ve created a wrapper functionality that is a bit kinder with its messaging.
Looks like there are Yellowbrick wrappers for XGBoost, CatBoost, CuML, and Spark MLlib; but not for NNs yet. https://www.scikit-yb.org/en/latest/api/contrib/wrapper.html...
From the RAPIDS.ai CuML team: https://docs.rapids.ai/api/cuml/stable/ :
> cuML is a suite of fast, GPU-accelerated machine learning algorithms designed for data science and analytical tasks. Our API mirrors Sklearn’s, and we provide practitioners with the easy fit-predict-transform paradigm without ever having to program on a GPU.
> As data gets larger, algorithms running on a CPU becomes slow and cumbersome. RAPIDS provides users a streamlined approach where data is initially loaded in the GPU, and compute tasks can be performed on it directly.
CuML is not an NN library; but there are likely performance optimizations from CuDF and CuML that would accelerate performance of NNs as well.
Dask ML works with models with sklearn interfaces, XGBoost, LightGBM, PyTorch, and TensorFlow: https://ml.dask.org/ :
> Scikit-Learn API
> In all cases Dask-ML endeavors to provide a single unified interface around the familiar NumPy, Pandas, and Scikit-Learn APIs. Users familiar with Scikit-Learn should feel at home with Dask-ML.
dask-labextension for JupyterLab helps to visualize Dask ML CFGs which call predictors and classifiers with sklearn interfaces: https://github.com/dask/dask-labextension
Well, this is really the reason I asked my question.
It is a pain to move in and out of scikit and write all these wrappers and converters. I would prefer to do more in one framework.
For example, doing hyperparameter optimization in pytorch using scikit can be a bit painful sometimes
Ctrl-F automl https://westurner.github.io/hnlog/
> /? hierarchical automl "sklearn" site:github.com : https://www.google.com/search?q=hierarchical+automl+%22sklea...
https://westurner.github.io/hnlog/#comment-18798244
> Dask-ML works with {scikit-learn, xgboost, tensorflow, TPOT,}. ETL is your responsibility. Loading things into parquet format affords a lot of flexibility in terms of (non-SQL) datastores or just efficiently packed files on disk that need to be paged into/over in RAM. (Edit)
scale-scikit-learn https://examples.dask.org/machine-learning/scale-scikit-lear... -> dask.distributed parallel prediction: https://examples.dask.org/machine-learning/parallel-predicti...
"Hyperparameter optimization with Dask" https://examples.dask.org/machine-learning/hyperparam-opt.ht...
> Sklearn.pipeline.Pipeline API: {fit(), transform(), predict(), score(),} https://scikit-learn.org/stable/modules/generated/sklearn.pi... :

```
decision_function(X) # Apply transforms, and decision_function of the final estimator
fit(X[, y]) # Fit the model
fit_predict(X[, y]) # Applies fit_predict of last step in pipeline after transforms.
fit_transform(X[, y]) # Fit the model and transform with the final estimator
get_params([deep]) # Get parameters for this estimator.
predict(X, *predict_params) # Apply transforms to the data, and predict with the final estimator
predict_log_proba(X) # Apply transforms, and predict_log_proba of the final estimator
predict_proba(X) # Apply transforms, and predict_proba of the final estimator
score(X[, y, sample_weight]) # Apply transforms, and score with the final estimator
score_samples(X) # Apply transforms, and score_samples of the final estimator.
set_params(**kwargs) # Set the parameters of this estimator
```
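A hedged end-to-end sketch of that Pipeline API on a toy dataset (the dataset choice, step names, and estimators here are mine, not from the docs above):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
pipe = Pipeline([
    ('scale', StandardScaler()),                 # transform step
    ('clf', LogisticRegression(max_iter=1000)),  # final estimator
])
pipe.fit(X, y)                 # fit() applies transforms, then fits the estimator
print(pipe.score(X, y))        # score() applies transforms, then scores
proba = pipe.predict_proba(X)  # transforms are applied before predict_proba
```

Every step except the last must implement `fit`/`transform`; the final estimator only needs `fit`, which is what lets a whole pipeline drop in anywhere a single estimator is expected (e.g. in a grid search).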
> https://docs.featuretools.com can also minimize ad-hoc boilerplate ETL / feature engineering :
>> Featuretools is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning
From https://featuretools.alteryx.com/en/stable/guides/using_dask... :
> Creating a feature matrix from a very large dataset can be problematic if the underlying pandas dataframes that make up the entities cannot easily fit in memory. To help get around this issue, Featuretools supports creating Entity and EntitySet objects from Dask dataframes. A Dask EntitySet can then be passed to featuretools.dfs or featuretools.calculate_feature_matrix to create a feature matrix, which will be returned as a Dask dataframe. In addition to working on larger than memory datasets, this approach also allows users to take advantage of the parallel and distributed processing capabilities offered by Dask
Signed Exchanges on Google Search
From https://blog.cloudflare.com/automatic-signed-exchanges/ :
> The broader implication of SXGs is that they make content portable: content delivered via an SXG can be easily distributed by third parties while maintaining full assurance and attribution of its origin. Historically, the only way for a site to use a third party to distribute its content while maintaining attribution has been for the site to share its SSL certificates with the distributor. This has security drawbacks. Moreover, it is a far stretch from making content truly portable.
> In the long-term, truly portable content can be used to achieve use cases like fully offline experiences. In the immediate term, the primary use case of SXGs is the delivery of faster user experiences by providing content in an easily cacheable format. Specifically, Google Search will cache and sometimes prefetch SXGs. For sites that receive a large portion of their traffic from Google Search, SXGs can be an important tool for delivering faster page loads to users.
> It’s also possible that all sites could eventually support this standard. Every time a site is loaded, all the linked articles could be pre-loaded. Web speeds across the board would be dramatically increased.
"Signed HTTP Exchanges" draft-yasskin-http-origin-signed-responses https://wicg.github.io/webpackage/draft-yasskin-http-origin-...
"Bundled HTTP Exchanges" draft-yasskin-wpack-bundled-exchanges https://wicg.github.io/webpackage/draft-yasskin-wpack-bundle... :
> Web bundles provide a way to bundle up groups of HTTP responses, with the request URLs and content negotiation that produced them, to transmit or store together. They can include multiple top-level resources with one identified as the default by a primaryUrl metadata, provide random access to their component exchanges, and efficiently store 8-bit resources.
From https://web.dev/web-bundles/ :
> Introducing the Web Bundles API. A Web Bundle is a file format for encapsulating one or more HTTP resources in a single file. It can include one or more HTML files, JavaScript files, images, or stylesheets.
> Web Bundles, more formally known as Bundled HTTP Exchanges, are part of the Web Packaging proposal.
> HTTP resources in a Web Bundle are indexed by request URLs, and can optionally come with signatures that vouch for the resources. Signatures allow browsers to understand and verify where each resource came from, and treats each as coming from its true origin. This is similar to how Signed HTTP Exchanges, a feature for signing a single HTTP resource, are handled.
AlphaGo documentary (2020) [video]
I'm curious to ask what's next for this series of AI at Deepmind? Is there other more challenging problems they are tackling at the moment? I read they have already master StarCraft 2 even.
Is it the case that they have stopped this series of AI and going all in on protein folding at the moment?
This blog post mentions testing on Atari and being applied to chemistry and quantum physics
https://deepmind.com/blog/article/muzero-mastering-go-chess-...
AlphaFold 2 solved the CASP protein folding problem that, AFAIU, e.g. Folding@home et al. have been churning at for a while. From November 2020: https://deepmind.com/blog/article/alphafold-a-solution-to-a-...
https://en.wikipedia.org/wiki/AlphaFold#SARS-CoV-2 :
> AlphaFold has been used to predict structures of proteins of SARS-CoV-2, the causative agent of COVID-19 [...] The team acknowledged that though these protein structures might not be the subject of ongoing therapeutical research efforts, they will add to the community's understanding of the SARS-CoV-2 virus.[74] Specifically, AlphaFold 2's prediction of the structure of the ORF3a protein was very similar to the structure determined by researchers at University of California, Berkeley using cryo-electron microscopy. This specific protein is believed to assist the virus in breaking out of the host cell once it replicates. This protein is also believed to play a role in triggering the inflammatory response to the infection (... Berkeley ALS and SLAC beamlines ... S309 & Sotrovimab: https://scitechdaily.com/inescapable-covid-19-antibody-disco... )
Is there yet an open implementation of AlphaFold 2? edit: https://github.com/search?q=alphafold ... https://github.com/deepmind/alphafold
How do I reframe this problem in terms of fundamental algorithmic complexity classes (and thus the Quantum Algorithm Zoo thing that might optimize the currently fundamentally algorithmically computationally hard part of the hot loop that is the cost driver in this implementation)?
To cite in full from the MuZero blog post from December 2020: https://deepmind.com/blog/article/muzero-mastering-go-chess-... :
> Researchers have tried to tackle this major challenge in AI by using two main approaches: lookahead search or model-based planning.
> Systems that use lookahead search, such as AlphaZero, have achieved remarkable success in classic games such as checkers, chess and poker, but rely on being given knowledge of their environment’s dynamics, such as the rules of the game or an accurate simulator. This makes it difficult to apply them to messy real world problems, which are typically complex and hard to distill into simple rules.
> Model-based systems aim to address this issue by learning an accurate model of an environment’s dynamics, and then using it to plan. However, the complexity of modelling every aspect of an environment has meant these algorithms are unable to compete in visually rich domains, such as Atari. Until now, the best results on Atari are from model-free systems, such as DQN, R2D2 and Agent57. As the name suggests, model-free algorithms do not use a learned model and instead estimate what is the best action to take next.
> MuZero uses a different approach to overcome the limitations of previous approaches. Instead of trying to model the entire environment, MuZero just models aspects that are important to the agent’s decision-making process. After all, knowing an umbrella will keep you dry is more useful to know than modelling the pattern of raindrops in the air.
> Specifically, MuZero models three elements of the environment that are critical to planning:
> * The value: how good is the current position?
> * The policy: which action is the best to take?
> * The reward: how good was the last action?
> These are all learned using a deep neural network and are all that is needed for MuZero to understand what happens when it takes a certain action and to plan accordingly.
> Illustration of how Monte Carlo Tree Search can be used to plan with the MuZero neural networks. Starting at the current position in the game (schematic Go board at the top of the animation), MuZero uses the representation function (h) to map from the observation to an embedding used by the neural network (s0). Using the dynamics function (g) and the prediction function (f), MuZero can then consider possible future sequences of actions (a), and choose the best action.
> MuZero uses the experience it collects when interacting with the environment to train its neural network. This experience includes both observations and rewards from the environment, as well as the results of searches performed when deciding on the best action.
> During training, the model is unrolled alongside the collected experience, at each step predicting the previously saved information: the value function v predicts the sum of observed rewards (u), the policy estimate (p) predicts the previous search outcome (π), the reward estimate r predicts the last observed reward (u). This approach comes with another major benefit: MuZero can repeatedly use its learned model to improve its planning, rather than collecting new data from the environment. For example, in tests on the Atari suite, this variant - known as MuZero Reanalyze - used the learned model 90% of the time to re-plan what should have been done in past episodes.
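A schematic (and deliberately toy) sketch of the three learned functions the post names - representation h, dynamics g, prediction f. Every body below is an illustrative stub, not DeepMind's implementation:

```python
# Schematic stand-ins for MuZero's three learned functions as described above;
# in the real system each is a deep neural network.

def representation_h(observation):
    """h: map a raw observation to a latent state s0."""
    return ("latent", observation)

def dynamics_g(state, action):
    """g: advance the latent state given an action; also predict a reward."""
    next_state = ("latent", (state, action))
    predicted_reward = 0.0  # learned reward estimate in the real model
    return next_state, predicted_reward

def prediction_f(state):
    """f: predict a policy (action prior) and a value from a latent state."""
    policy = {0: 0.5, 1: 0.5}  # learned action probabilities
    value = 0.0                # learned position value
    return policy, value

# Planning unrolls g and f from the current latent state over candidate actions:
s0 = representation_h("observation")
s1, r1 = dynamics_g(s0, action=0)
policy, value = prediction_f(s1)
```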
FWIU, from what's going on over there:
AlphaGo => AlphaGo {Fan, Lee, Master, Zero} => AlphaGoZero => AlphaZero => MuZero
AlphaGo Zero: https://en.wikipedia.org/wiki/AlphaGo_Zero
AlphaZero: https://en.wikipedia.org/wiki/AlphaZero
MuZero: https://en.wikipedia.org/wiki/MuZero
AlphaFold {1,2}: https://en.wikipedia.org/wiki/AlphaFold
IIRC, there is not an official implementation of e.g. AlphaZero or MuZero with e.g. openai/gym (and openai/retro) for comparing reinforcement learning algorithms? https://github.com/openai/gym
What are the benchmarks for Applied RL?
From https://news.ycombinator.com/item?id=28499001 :
> AFAIU, while there are DLTs that cost CPU, RAM, and Data storage between points in spacetime, none yet incentivize energy efficiency by varying costs depending upon whether the instructions execute on a FPGA, ASIC, CPU, GPU, TPU, or QPU? [...]
> To be 200% green - to put a 200% green footer with search-discoverable RDFa on your site - I think you need PPAs and all directly sourced clean energy.
> (Energy efficiency is very relevant to ML/AI/AGI, because while it may be the case that the dumb universal function approximator will eventually find a better solution, "just leave it on all night/month/K12+postdoc" in parallel is a very expensive proposition with no apparent oracle; and then to ethically filter solutions still costs at least one human)
I'm seeing a lot of cool stuff that people start to build on AlphaFold lately:
- ChimeraX: https://www.youtube.com/watch?v=le7NatFo8vI
Libraries.io indexes software dependencies; but no Dependent packages or Dependent repositories are yet listed for the pypi:alphafold package: https://libraries.io/pypi/alphafold
The GitHub network/dependents view currently lists one repo that depends upon deepmind/alphafold: https://github.com/deepmind/alphafold/network/dependents
(Linked citations for science: How to cite a schema:SoftwareApplication in a schema:ScholarlyArticle , How to cite a software dependency in a dependency specification parsed by e.g. Libraries.io and/or GitHub. e.g. FigShare and Zenodo offer DOIs for tags of git repos, that work with BinderHub and repo2docker and hopefully someday repo2jupyterlite. https://westurner.github.io/hnlog/#comment-24513808 )
/?gscholar alphafold: https://scholar.google.com/scholar?q=alphafold
On a Google Scholar search result page, you can click "Cited by [ ]" to check which documents contain textual and/or URL citations gscholar has parsed and identified as indicating a relation to a given ScholarlyArticle.
/?sscholar alphafold: https://www.semanticscholar.org/search?q=alphafold
On a Semantic Scholar search result page, you can click the "“" to check which documents contain textual and/or URL citations Semantic Scholar has parsed and identified as indicating a relation to a given ScholarlyArticle.
/?smeta alphafold: https://www.meta.org/search?q=t---alphafold
On a Meta.org search result page, you can click the article title and scroll down to "Citations" to check which documents contain textual and/or URL citations Meta has parsed and identified as indicating a relation to a given ScholarlyArticle.
Do any of these use structured data like https://schema.org/ScholarlyArticle ? (... https://westurner.github.io/hnlog/#comment-28495597 )
Interpretable Model-Based Hierarchical RL Using Inductive Logic Programming
I don't work in the field, but I sort of passively follow it.
A year ago I made this comment, in another ML thread:
https://news.ycombinator.com/item?id=23315739
"I often wonder about whether neural networks might need to meet at a crossroads with other techniques."
"Inductive Logic/Answer Set Programming or Constraints Programming seems like it could be a good match for this field. Because from my ignorant understanding, you have a more "concrete" representation of a model/problem in the form of symbolic logic or constraints and an entirely abstract "black box" solver with neural networks. I have no real clue, but it seems like they could be synergistic?"
I can't interpret the paper -- is this roughly in this vein?

I've been thinking along the same lines; it seems like logic + ML would complement each other well. Acquiring trustworthy labeled data is "THE" problem in ML, and figuring out which predicates to string together is "THE" problem in logic programming, seems like a perfect match.
A logic program can produce a practically infinite number of perfectly consistent test cases for the ML model to learn from, and the ML model can predict which problem should be solved. I'd like to see a conversational interface that combines these two systems, ML generates logic statements and observes the results, repeat. That might help to keep it from going off the rails like a long GPT-3 session tends to do.
AutoML is RL? The entire exercise of publishing and peer review is an exercise in cybernetics?
https://en.wikipedia.org/wiki/Probabilistic_logic_network :
> The basic goal of PLN is to provide reasonably accurate probabilistic inference in a way that is compatible with both term logic and predicate logic, and scales up to operate in real time on large dynamic knowledge bases.
> The goal underlying the theoretical development of PLN has been the creation of practical software systems carrying out complex, useful inferences based on uncertain knowledge and drawing uncertain conclusions. PLN has been designed to allow basic probabilistic inference to interact with other kinds of inference such as intensional inference, fuzzy inference, and higher-order inference using quantifiers, variables, and combinators, and be a more convenient approach than Bayesian networks (or other conventional approaches) for the purpose of interfacing basic probabilistic inference with these other sorts of inference. In addition, the inference rules are formulated in such a way as to avoid the paradoxes of Dempster–Shafer theory.
Has anybody already taught / reinforced an OpenCog [PLN, MOSES] AtomSpace hypergraph agent to do Linked Data prep and also convex optimization with AutoML, and something better than grid search (so, gradients)?
Perhaps teaching users to bias analyses with e.g. Yellowbrick and the sklearn APIs would be a good curriculum traversal?
openai/baselines "Logging and vizualizing learning curves and other training metrics" https://github.com/openai/baselines#logging-and-vizualizing-...
https://en.wikipedia.org/wiki/AlphaZero
There's probably an awesome-automl by now? Again, the sklearn interfaces.
TIL that SymPy supports NumPy, PyTorch, and TensorFlow [Quantum; TFQ?]; and with a Computer Algebra System something for mutating the AST may not be necessary for symbolic expression trees without human-readable comments or symbol names? Lean mathlib: https://github.com/leanprover-community/mathlib , and then reasoning about concurrent / distributed systems (with side channels in actual physical component space) with e.g. TLA+.
There are new UUID formats that are timestamp-sortable; for when blockchain cryptographic hashes aren't enough entropy. "New UUID Formats – IETF Draft" https://news.ycombinator.com/item?id=28088213
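A hand-rolled sketch of the timestamp-sortable layout from that draft - a 48-bit Unix-millisecond prefix followed by random bits - noting that the stdlib `uuid` module has no such constructor, and that this sketch omits the version/variant bits a real UUIDv7 would set:

```python
import os
import time
import uuid

def uuid7_like():
    """Sketch of a UUIDv7-style id: 48-bit unix-ms timestamp + 80 random bits.
    (Layout per the IETF new-UUID-formats draft; not a stdlib API, and the
    version/variant bits are omitted for brevity.)"""
    ts_ms = time.time_ns() // 1_000_000            # 48-bit ms timestamp prefix
    rand = int.from_bytes(os.urandom(10), 'big')   # 80 random bits
    return uuid.UUID(int=((ts_ms << 80) | rand) & ((1 << 128) - 1))

a = uuid7_like()
time.sleep(0.002)
b = uuid7_like()
assert a < b  # the timestamp prefix makes ids sort by creation time
```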
... You can host online ML algos through SingularityNet, which also does PayPal now for the RL.
Our visual / auditory biological neural networks do appear to be hierarchical and relatively highly plastic as well.
If you're planning to mutate, crossover, and select expression trees, you'll need a survival function (~cost function) in order to reinforce; RL.
Blockchains cost immutable data storage with data integrity protections by the byte.
Smart contracts cost CPU usage with costed opcodes. eWASM (Ethereum WebAssembly) has costed opcodes for redundantly-executed smart contracts (that execute on n nodes of a shard) https://ewasm.readthedocs.io/en/mkdocs/determining_wasm_gas_...
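A toy illustration of costed opcodes in that spirit (the opcode names and gas costs below are invented for illustration, not eWASM's actual schedule):

```python
# Toy gas metering: each instruction debits a fixed (made-up) cost from a gas
# budget, and execution aborts when the budget is exhausted.
GAS_COSTS = {"push": 1, "add": 3, "mul": 5, "store": 20}

class OutOfGas(Exception):
    pass

def run(program, gas_limit):
    """Execute a list of opcodes, charging gas per instruction."""
    gas = gas_limit
    for opcode in program:
        cost = GAS_COSTS[opcode]
        if cost > gas:
            raise OutOfGas(f"needed {cost}, had {gas}")
        gas -= cost
    return gas_limit - gas  # total gas consumed

print(run(["push", "push", "add", "store"], gas_limit=100))  # 25
```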
AFAIU, while there are DLTs that cost CPU, RAM, and Data storage between points in spacetime, none yet incentivize energy efficiency by varying costs depending upon whether the instructions execute on a FPGA, ASIC, CPU, GPU, TPU, or QPU?
To be 200% green - to put a 200% green footer with search-discoverable RDFa on your site - I think you need PPAs and all directly sourced clean energy.
(Energy efficiency is very relevant to ML/AI/AGI, because while it may be the case that the dumb universal function approximator will eventually find a better solution, "just leave it on all night/month/K12+postdoc" in parallel is a very expensive proposition with no apparent oracle; and then to ethically filter solutions still costs at least one human)
> Perhaps teaching users to bias analyses with e.g. Yellowbrick and the sklearn APIs would be a good curriculum traversal?
Yellowbrick > Third-Party Estimators (yellowbrick.contrib.wrapper): https://www.scikit-yb.org/en/latest/api/contrib/wrapper.html
From https://www.scikit-yb.org/en/latest/quickstart.html#using-ye... :
> The Yellowbrick API is specifically designed to play nicely with scikit-learn. The primary interface is therefore a Visualizer – an object that learns from data to produce a visualization. Visualizers are scikit-learn Estimator objects and have a similar interface along with methods for drawing. In order to use visualizers, you simply use the same workflow as with a scikit-learn model, import the visualizer, instantiate it, call the visualizer’s fit() method, then in order to render the visualization, call the visualizer’s show() method.
> For example, there are several visualizers that act as transformers, used to perform feature analysis prior to fitting a model. The following example visualizes a high-dimensional data set with parallel coordinates:
from yellowbrick.features import ParallelCoordinates

# X, y: a feature matrix and target, e.g. loaded via yellowbrick.datasets
visualizer = ParallelCoordinates()
visualizer.fit_transform(X, y)
visualizer.show()
> As you can see, the workflow is very similar to using a scikit-learn transformer, and visualizers are intended to be integrated along with scikit-learn utilities. Arguments that change how the visualization is drawn can be passed into the visualizer upon instantiation, similarly to how hyperparameters are included with scikit-learn models.

IIRC, some automl tools - which test various combinations of, stacks of, and ensembles of e.g. Estimators - do test hierarchical ensembles? Are those 'piecewise' and ultimately not the unified theory we were looking for here either (but often a good-enough, fast-enough approximate solution with a sufficiently low error term)?
/? hierarchical automl "sklearn" site:github.com : https://www.google.com/search?q=hierarchical+automl+%22sklea...
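For a concrete starting point, a hierarchical ("stacked") ensemble - the kind of structure automl tools search over - can be built directly with scikit-learn's StackingClassifier. The dataset and base estimators below are arbitrary choices for illustration:

```python
# Sketch of a two-level stacked ensemble with plain scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# First level: heterogeneous base estimators; second level: a meta-learner
# fit on their cross-validated predictions.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(random_state=0))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_train, y_train)
print(round(stack.score(X_test, y_test), 2))
```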
Ship / Show / Ask: A modern branching strategy
> The reason you’re reliant on a lot of “Asking” might be that you have a trust issue. “All changes must be approved” or “Every pull request needs 2 reviewers” are common policies, but they show a lack of trust in the development team.
I'm not following. We do reviews because a second pair of eyes can spot things the author missed.
I've been labelled controversial (amongst other things..) in the past for stating this but hey ho, I'll take another leap:
Aside from the "small changes, fast fixes" type of mantra, and "pairing is better than reviewing" that I suspect Martin Fowler is leaning on (and that I both support strongly):
There is a difference between having reviews, and enforcing reviews.
In my multi-decade career, I have yet to have a single instance of the thought "thank god we prohibit changes that haven't been reviewed by two other people"
But I have lost count the number of times I've had the thought "I need to bypass this policy because shit's on fire and I've got a fix"
Then comes the question of quality of review when they are mandated.
I'll interject with: Code reviews are not inherently bad. Feedback is good. Collaboration is good. There are better methods of providing feedback than code reviews (I strongly object to the post-facto nature of code reviews).
Mandated peer reviews less so. Feedback is hurried, if not entirely absent, as it becomes another task that people _have_ to do. The time spent reviewing is rarely accounted for. There's a lot of knowledge transfer required for any non-trivial review to be effective, further increasing the demand on the participants.
Code reviews are another tool in the box, but they should not be used for every job.
Where I currently work, we have "skip review" and "skip preflight" labels for this. The mergers have the power to merge anything anyway, the labels are only to make it an official request.
> Where I currently work, we have "skip review" and "skip preflight" labels for this. The mergers have the power to merge anything anyway, the labels are only to make it an official request.
From the OP:
> Changes are categorized as either Ship (merge into mainline without review), Show (open a pull request for review, but merge into mainline immediately), or Ask (open a pull request for discussion before merging).
Right, it's like saying checklists are a sign of distrust. No, it's an acknowledgement that good intentions don't fix human nature. Have an agreed-upon process and stick to it.
Checklists are often a good thing; and an opportunity to optimize processes with team feedback!
"Post-surgical deaths in Scotland drop by a third, attributed to a checklist" https://news.ycombinator.com/item?id=19684376 https://westurner.github.io/hnlog/#comment-19684376
Show HN: TweeView – A Tree Visualisation of Twitter Conversations
It would be more useful if one could either follow a link to the replies, or read the full tweet text of the corresponding node by clicking on it.
Also the node div should have a background color, for when an image doesn't load.
As an aside, even though @paulgb says he's not bothered with the use of his library that way, it's so similar that I think common OSS etiquette would've been to acknowledge the original project more prominently.
Cheers. It is all open source on https://github.com/edent/TweeView - where I also credit Paul.
The main 2D view is based on his work - but the interactive view (and 3D views) are based on other projects.
Wireless Charging Power Side-Channel Attacks
The severity and number of side-channel attacks are slowly becoming cripplingly ridiculous. At what point do we just give up with side-channel attacks on consumer devices and assume the mentality that all consumer devices connected to the internet should be treated as insecure by default?
> assume the mentality that all consumer devices connected to the internet should be treated as insecure by default.
"Zero trust security model" https://en.wikipedia.org/wiki/Zero_trust_security_model :
> The main concept behind zero trust is that devices should not be trusted by default, even if they are connected to a managed corporate network such as the corporate LAN and even if they were previously verified.
Checked the power draw a couple years ago on reddit's new design, which was a huge increase from the old design. Should have written a paper with lots of clever words in it, could have been my first scientific contribution!
I didn't because this research has been done long ago and with rsa key operations, which are both more severe if you can capture it and much more difficult to measure. That you can see the difference between multi-megabyte JavaScript pages is a given at that point.
From https://planetfriendlyweb.org/mental-model :
> When you think about how a digital product or website creates an environmental impact, you can think of it creating it in three main ways - through the Packets of data it sends to users, the Platform the product runs on, and the Process used to make the product itself.
From https://sustainableux.com/talks/2018/how-to-build-a-planet-f... :
> SustainableUX: design vs. climate change. Online, Worldwide, Free. The online event for UX, front-end, and product people who want to make a positive impact—on climate-change, social equality, and inclusion
How We Proved the Eth2 Deposit Contract Is Free of Runtime Errors
You know, I'm not really into cryptocurrency, but it does seem like it's contributing to a resurgence of interest in formal methods. So - thank you, cryptocurrency community!
From "Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification" https://news.ycombinator.com/item?id=27442273 :
> [Coq, VST, CompCert]
> Formal methods: https://en.wikipedia.org/wiki/Formal_methods
> Formal specification: https://en.wikipedia.org/wiki/Formal_specification
> Implementation of formal specification: https://en.wikipedia.org/wiki/Anti-pattern#Software_engineer...
> Formal verification: https://en.wikipedia.org/wiki/Formal_verification
> From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :
>> Which universities teach formal methods?
>> - q=formal+verification https://www.class-central.com/search?q=formal+verification
>> - q=formal+methods https://www.class-central.com/search?q=formal+methods
>> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?
To clarify, they implemented the algorithm in Dafny, and then proved that version correct. They did not verify code that will actually run in production.
From the paper:
> Dafny is a practical option for the verification of mission-critical smart contracts, and a possible avenue for adoption could be to extend the Dafny code generator engine to support Solidity … or to automatically translate Solidity into Dafny. We are currently evaluating these options
https://github.com/dafny-lang/dafny
Dafny Cheat Sheet: https://docs.google.com/document/d/1kz5_yqzhrEyXII96eCF1YoHZ...
Looks like there's a Haskell-to-Dafny converter.
Physics-Based Deep Learning Book
"Physics-based" Deep Learning seems like a misnomer. From the abstract "Deep Learning Applications for Physics" sounds more apt. There definitely is value in transferring standard terminology and methods from physics to deep learning. But from the preview it's unclear if that is the focus.
Indeed, “Deep Learning Based Physics” seems a little more correct.
The common terminology is Physics Informed Neural Networks (PINNs)
"Physics-informed neural networks" https://en.wikipedia.org/wiki/Physics-informed_neural_networ...
But what about statistical thermodynamics and information theory? What about thin film?
What are some applications for PINNs and for {DL, RL,} in physics?
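As a toy illustration of the PINN idea (not a real PINN, which would train a neural network against this loss by gradient descent): the "physics-informed" part is a loss term penalizing a candidate function's violation of the governing equation. A minimal sketch for the ODE u'(x) = -u(x), u(0) = 1, whose exact solution is exp(-x):

```python
import numpy as np

def physics_loss(u, x, h=1e-4):
    """Mean squared residual of u' + u, via central finite differences."""
    du = (u(x + h) - u(x - h)) / (2 * h)
    return np.mean((du + u(x)) ** 2)

x = np.linspace(0.0, 2.0, 50)
exact = lambda t: np.exp(-t)   # satisfies the ODE, so residual ~0
wrong = lambda t: 1.0 - t      # a poor candidate, large residual

print(physics_loss(exact, x))  # ~0
print(physics_loss(wrong, x))  # large
```

In a real PINN, `u` is a neural network and this residual (plus boundary/data terms) is the training loss; automatic differentiation replaces the finite differences.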
Ask HN: Books that teach you programming languages via systems projects?
Looking for a book/textbook that teaches you a programming language through systems (or vice versa). For example, a book that teaches modern C++ by showing you how to program a compiler; a book that teaches operating systems and the language of choice in the book is Rust; a book that teaches database internals through Golang; etc. Basically, looking for a fun project-based book that I can walk through and spend my free time working through.
Any recommendations?
From "Ask HN: What are some books where the reader learns by building projects?" https://news.ycombinator.com/item?id=26042447 :
> "Agile Web Development with Rails [6]" (2020) teaches TDD and agile in conjunction with a DRY, CoC, RAD web application framework: https://g.co/kgs/GNqnWV
And:
> "ugit – Learn Git Internals by Building Git in Python" https://www.leshenko.net/p/ugit/
How you can track your personal finances using Python
> We take the output of the previous step, pipe everything over to our .beancount file, and "balance" transactions.
> Recall that the flow of money in double-entry accounting is represented using transactions involving at least two accounts. When you download CSVs from your bank, each line in that CSV represents money that's either incoming or outgoing. That's only one leg of a transaction (credit or debit). It's up to us to provide the other leg.
> This act is called "balancing".
Balance (accounting) https://en.wikipedia.org/wiki/Balance_(accounting)
Are unique record IDs necessary for this [financial] application? FWICS, https://plaintextaccounting.org/ just throws away the (probably per-institution) transaction IDs; like a non-reflexive logic that eschews Law of identity? Just grep and wc?
> What does the ledger look like?
> I wrote earlier that one of the main things that Beancount provides is a language specification for defining financial transactions in a plain-text format.
> What does this format look like? Here's a quick example:
option "title" "Alice"
option "operating_currency" "EUR"
; Accounts
2021-01-01 open Assets:MyBank:Checking
2021-01-01 open Expenses:Rent
2021-01-01 * "Landlord" "Thanks for the rent"
Assets:MyBank:Checking -1000.00 EUR
Expenses:Rent 1000.00 EUR
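The "balancing" rule described earlier - the postings of a double-entry transaction must sum to zero per currency - can be sketched in a few lines of Python (the account names and amounts mirror the example ledger above):

```python
from decimal import Decimal

# Postings from the example transaction above: (account, amount, currency)
postings = [
    ("Assets:MyBank:Checking", Decimal("-1000.00"), "EUR"),
    ("Expenses:Rent", Decimal("1000.00"), "EUR"),
]

def is_balanced(postings):
    """A transaction balances iff its postings sum to zero per currency."""
    totals = {}
    for _account, amount, currency in postings:
        totals[currency] = totals.get(currency, Decimal("0")) + amount
    return all(t == 0 for t in totals.values())

print(is_balanced(postings))  # True
```

When you import a one-legged bank CSV line, supplying the other leg is exactly what makes this check pass.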
What does the `*` do?

The star is just a one-character "flag" field. I don't think there are any strictly defined semantics for it, but by convention the star means "no special flags on this transaction".
People use other flags to indicate, for instance, whether a transaction has been reconciled, or cleared their bank, or whatnot.
a REST API such as: https://plaid.com/
Plaid has serious privacy and security concerns, so I would be careful:
From https://news.ycombinator.com/item?id=28203393 :
> No, your personal data is not sold or rented or given away or bartered to parties that are not Plaid, your bank, or the connected app. We talk about all of this in our privacy policy, including ways that data could be used — for example, with data processors/service providers (like AWS which hosts our services) for the purposes of running Plaid’s services or for a user’s connected app to provide their services.
>> I saw that. Thank you for your patience and persistence in responding to so many pointed questions.
>> For any interested, here is a link to relevant section of the referenced privacy policy: https://plaid.com/legal/#consumers
>> I am also impressed by the Legal Changelog on the same page that clearly lays out a log of changes made to privacy & other published legal documents.
The comments in those threads were more negative than positive, and the fact that Plaid paid a $58 million settlement for allegedly sharing personal banking data with third parties without consent is telling enough. I am not going to give my banking usernames and passwords to Plaid in plaintext, when their employee is arguing on HN over what the word "sold" means:
Are you making claims without evidence? Settling is not admission of guilt.
Banks should implement read-only OAuth APIs, so that users are not required to store their u/p/sqa answers.
From "Canada calls screen scraping ‘unsecure,’ sets Open Banking target for 2023" https://news.ycombinator.com/item?id=28229957 :
> AFAIU, there are still zero (0) consumer banking APIs with Read-Only e.g. OAuth APIs in the US as well?
Looks like there may be less than 3 so far.
> Banks could save themselves CPU, RAM, bandwidth, and liability by implementing read-only API tokens and methods that need only return JSON - instead of HTML or worse, monthly PDF tables for a fee - possibly similar to the Plaid API: https://plaid.com/docs/api/
> There is competition in consumer/retail banking, but still the only way to do e.g. budget and fraud analysis with third party apps is to give away all authentication factors: u/p/sqa; and TBH that's unacceptable.
> Traditional and distributed ledger service providers might also consider W3C ILP: Interledger Protocol (in starting their move to quantum-resistant ledgers by 2022 in order to have a 5 year refresh cycle before QC is a real risk by 2027, optimistically, for science) when reviewing the entropy of username+password_hash+security_question_answer strings in comparison to the entropy of cryptoasset account public key hash strings: https://interledger.org/developer-tools/get-started/overview...
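A minimal sketch of the server-side check such a read-only OAuth API would perform: a bearer token carries granted scopes, and mutating methods are refused for tokens holding only a read scope. The scope names ("transactions:read" etc.) are invented for illustration, not from any real bank API:

```python
# Hypothetical read-only authorization check for a bank API.
READ_METHODS = {"GET", "HEAD"}

def authorize(method, required_scope, token_scopes):
    """Allow the request only if the token grants the scope, and refuse
    mutating methods when the granted scope is read-only."""
    if required_scope not in token_scopes:
        return False
    if method not in READ_METHODS and required_scope.endswith(":read"):
        return False
    return True

scopes = {"transactions:read", "balance:read"}
print(authorize("GET", "transactions:read", scopes))   # True
print(authorize("POST", "transactions:read", scopes))  # False
```

With tokens like this, a budgeting app never sees the user's u/p/sqa and can be revoked without a password change.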
> Are you making claims without evidence?
No. Plaid did agree to pay the $58 million, and the lawsuit was for alleged data sharing with third parties without user consent. I don't care if they admit guilt or not. They agreed to pay $58 million to end the lawsuit, and that does not engender trust. Shifting the blame to banks doesn't make Plaid any more reputable.
Providing usernames and passwords of sensitive accounts to a third party is a privacy and security risk, and Plaid has not earned enough trust from me to justify the risk I would need to assume to use their services.
How did their policies change before and after said settlement?
From https://my.plaid.com/help/360043065354-does-plaid-have-acces... :
> Does Plaid have access to my credentials?
> The type of connection Plaid has to your financial institution determines whether or not we have access to the login credentials for your financial account: your username and password.
> In many cases, when you link a financial institution to an app via Plaid, you provide your login credentials to us and we securely store them. We use those credentials to access and obtain information from your financial institution in order to provide that information, at your direction, to the apps and services you want to use. For more information on how we use your data, please refer to our End User Privacy Policy.
> In other cases, after you request that we link your financial institution to an app or service you want to use, you will be prompted to provide your login credentials directly to your financial institution––not to Plaid––and, upon successful authentication, your financial institution will then return your data to Plaid. In these cases, Plaid does not access or store your account credentials. Instead, your financial institution provides Plaid with a type of security identifier, which permits Plaid to securely reconnect to your financial institution at regularly scheduled intervals to keep your apps and services up-to-date.
> Regardless of which type of connection is made, we do not share your credentials with the apps or services you’ve linked to your financial institution via Plaid. You can read more about how Plaid handles data here.
What do you think this should say instead?
Do you think they use the same key to securely store all accounts, like ACH? Or no key, like the bank ledger that you're downloading a window of as CSV through hopefully a read-only SQL account, hopefully with data encrypted at rest and in motion.
When you download a CSV or a OFX to a local file, is the data then still encrypted at rest?
Again, US Banks can eliminate the need for {Plaid, Mint, } as the account data access middlemen by providing a read-only OAuth API. Because banks do not have a way to allow users to grant read-only access to their account ledgers, the only solution is to securely store the u/p/sqa. If you write a script to fetch your data and call it from cron, how can you decrypt the account credentials after an unattended reboot? When must a human enter key material to decrypt the stored u/p/sqa?
Here, we realize that banks should really have infosec people - people who comprehend symmetric and asymmetric cryptography - doing audits to point out these sorts of vulnerabilities and risks. And if they had kept current with the times, we would have a very different banking and finance information system architecture with fewer single points of failure.
I'm not interested in what Plaid puts in a help page, since Plaid's $58 million settlement is for alleged data sharing with third parties without consent, meaning that Plaid is accused of not properly communicating the alleged data sharing to its users or obtaining permission.
And Plaid's terms of service (https://plaid.com/legal/#how-we-use-your-information) contains vague catch-alls such as:
> We share your End User Information for a number of business purposes:
> With our data processors and other service providers, partners, or contractors in connection with the services they perform for us or developers
Sure, it would be great if banks offered different authentication systems, but that has nothing to do with my lack of trust for Plaid. A different authentication system wouldn't eliminate the data sharing concerns I have with Plaid.
CISA Lays Out Security Rules for Zero Trust Clouds
"Cloud Security Technical Reference Architecture (TRA)" (2021) https://cisa.gov/publication/cloud-security-technical-refere...
> The Cloud Security TRA provides agencies with guidance on the shared risk model for cloud service adoption (authored by FedRAMP), how to build a cloud environment (authored by USDS), and how to monitor such an environment through robust cloud security posture management (authored by CISA).
> Public Comment Period - NOW OPEN! CISA is releasing the Cloud Security TRA for public comment to collect critical feedback from agencies, industry, and academia to ensure the guidance fully addresses considerations for secure cloud migration. The public comment period begins Tuesday, September 7, 2021 and concludes on Friday, October 1, 2021. CISA is interested in gathering feedback focused on the following key questions: […]
"Zero Trust Maturity Model" (2021) https://cisa.gov/publication/zero-trust-maturity-model
> CISA’s Zero Trust Maturity Model is one of many roadmaps for agencies to reference as they transition towards a zero trust architecture. The goal of the maturity model is to assist agencies in the development of their zero trust strategies and implementation plans and present ways in which various CISA services can support zero trust solutions across agencies.
> The maturity model, which include five pillars and three cross-cutting capabilities, is based on the foundations of zero trust. Within each pillar, the maturity model provides agencies with specific examples of a traditional, advanced, and optimal zero trust architecture.
> Public Comment Period – NOW OPEN! CISA drafted the Zero Trust Maturity Model in June to assist agencies in complying with the Executive Order. While the distribution was originally limited to agencies, CISA is excited to release the maturity model for public comment.
> CISA is releasing the Zero Trust Maturity Model for public comment beginning Tuesday, September 7, 2021 and concludes on Friday, October 1, 2021. CISA is interested in gathering feedback focused on the following key questions: […]
"Zero trust security model": https://en.wikipedia.org/wiki/Zero_trust_security_model
Show HN: Heroku Alternative for Python/Django apps
I have been following this space a lot. Heroku-like deployment platforms have exploded recently, partly because people miss that experience, partly because buildpacks are open source [1], and partly because we have gone through a tech shift that makes it pretty easy to emulate. There are two broad categories: 1. the Dokkus of the world - one VM (they very recently tried adding multi-node) - and 2. more recently, the getporter.dev [2] and okteto [3] of the world, which try to replicate Heroku on Kubernetes.
One of the things I noticed is that it looks really cool at first glance, but every organisation has its own complexity in deploying stuff. It is integration hell: you have to provide white-glove treatment to all customers, and eventually you have a hard time scaling. We tried it too, and what we noticed is that if you have to go to an enterprise and sell, you start leaking abstractions. It works really well for early-stage startups, but once they have the money they prefer a more customizable and robust solution.
This project looks cool. I do not want to discourage anyone but again, it is a red ocean out there.
The problem is the layering of the abstractions. Having PaaS on top of Kubernetes in a way that you move down to a more powerful primitive is going to be the most scalable.
What happened in the public cloud space was a disparate set of services which made the choices really costly. Azure went in heavy on PaaS while AWS was focused on VMs. The easy adoption and migration was on AWS. K8s seems like a reasonable abstraction that reduces the cost of these decisions because people seem to agree that K8s is a reasonable base.
Being able to slide on a compatible PaaS layer shouldn't negate that K8s investment, and make it easier to adopt for some organizations, or parts of those organizations. But, I wouldn't argue that we are the point where that layer is mature.
dokku-scheduler-kubernetes https://github.com/dokku/dokku-scheduler-kubernetes#function...
> The following functionality has been implemented: Deployment and Service annotations, Domain proxy support via the Nginx Ingress Controller, Environment variables, Letsencrypt SSL Certificate integration via CertManager, Pod Disruption Budgets, Resource limits and reservations (reservations == kubernetes requests), Zero-downtime deploys via Deployment healthchecks, Traffic to non-web containers (via a configurable list)
SPDX Becomes Internationally Recognized Standard for Software Bill of Materials
From OP:
> Between eighty and ninety percent (80%-90%) of a modern application is assembled from open source software components. An SBOM accounts for the software components contained in an application — open source, proprietary, or third-party — and details their provenance, license, and security attributes. SBOMs are used as a part of a foundational practice to track and trace components across software supply chains. SBOMs also help to proactively identify software issues and risks and establish a starting point for their remediation.
> SPDX results from ten years of collaboration from representatives across industries, including the leading Software Composition Analysis (SCA) vendors – making it the most robust, mature, and adopted SBOM standard.
https://en.wikipedia.org/wiki/Software_Package_Data_Exchange
Show HN: Arxiv.org on IPFS
Ah cool… I also took a stab at something similar several years ago: https://github.com/ecausarano/heron
Also at the time I was considering IPFS.
But I guess the real trick is implementing a WoT (web of trust) to support peer review and filter out the inevitable junk that will be published.
"Help compare Comment and Annotation services: moderation, spam, notifications, configurability" executablebooks/meta#102 https://github.com/executablebooks/meta/discussions/102 :
> jupyter-comment supports a number of commenting services [...]. In helping users decide which commenting and annotation services to include on their pages and commit to maintaining, could we discuss criteria for assessment and current features of services?
> Possible features for comparison:
> * Content author can delete / hide
> * Content author can report / block
> * Comments / annotations are screened by spam-fighting service
> * Content / author can label as e.g. toxic
> * Content author receives notification of new comments
> * Content author can require approval before user-contributed content is publicly-visible
> * Content author may allow comments for a limited amount of time (probably more relevant to BlogPostings)
> * Content author may simultaneously denounce censorship in all its forms while allowing previously-published works to languish
#ForScience
FWIW, archiving repo2docker-compatible git repos with a DOI attached to a git tag, is possible with JupyterLite:
> JupyterLite is a JupyterLab distribution that runs entirely in the browser built from the ground-up using JupyterLab components and extensions
With JupyterLite, you can build a static archive of a repo2docker-like environment so that the ScholarlyArticle notebook or computer modern latex css, its SoftwareRelease dependencies, and possibly also the Datasets can be run in a browser tab with WASM. HTML + JS + WASM
New Texas Abortion Law Likely to Unleash a Torrent of Lawsuits Against Education
All this talk about Austin becoming the new SV - but then you see regressive laws like this getting pushed through. No thanks.
Migration of people from SV to Austin will change the political landscape and stop stuff like this from being passed.
You don't have to pay prisoners for work in Texas.
Unfortunately, I find it necessary to help elucidate some moral issues that recent legal developments in one state of the Union have brought to light.
Such a God is correctly assessed as hostile to the patrons of Earth.
Such a creation myth involving seven (7) days to create Heaven, Hell Lite, and Hell (Heaven, Earth, and Hell) is what we must interpret.
At issue is whether the singular, supernatural God in Heaven is Benevolent (good) and Omnipotent (all-powerful). Why would a loving God create Heaven without suffering on the first day, and then create Earth etc. with suffering on the succeeding days? Why not create Heaven for all? Why send infants into a life of suffering on Earth (Hell Lite) when they could be sent directly to Heaven with no suffering?
Ergo, God's plan is to force babies to suffer on Earth for awhile; but if they die sinless, then they go to Heaven where parents aren't needed. Ergo, Heaven is full of parentless dead babies.
You've made asses of yourselves in trying to manipulate others into your golden candlestick tax districts, where the whole congregation does not have healthcare.
God created a world in need of healthcare, and isn't getting healthcare to it.
Ergo, because God created a world of suffering for us - and babies - god is not Friendly. And if god is not friendly, god is a very poor basis for our morality, and for our legal system.
Case in point: did god ask for Mary's (or even Joseph's) consent?
A benevolent god would design in required consent.
In the Christian bible, furthermore, god tells them to kill even the enemies' women and unborn children. Ergo, god does not value the sanctity of infants' lives over other concerns.
For citations of scripture, see e.g. Freedom from Religion Foundation > What Does the Bible Really Say About Abortion? https://ffrf.org/component/k2/item/25602-abortion-rights
A very poor basis for morality and law, indeed.
What is "Original Sin" if not a suggestion that we should choose very wisely?
And when they rolled that [silver] rock from the tomb, behold what did they find? No dead baby; for the dead baby of god was in heaven, gentlemen. Sinless dead babies all go to heaven, gentlemen.
I think we can all agree that preventing unintended pregnancy is a worthwhile objective; though we have very differing views on whether abstinence-based education works. States with higher rates of religiosity (by church attendance) have higher rates of teen pregnancy: there is a somewhat strong positive correlation.
A standing prayer request: Pray that we might get healthcare to our congregation.
IDK, what do we say here? We're going to start needing to be making some changes?
Roman society context on this one:
Vestal virgins: https://en.wikipedia.org/wiki/Vestal_Virgin
Baiae: https://en.wikipedia.org/wiki/Baiae
https://pbsinternational.org/programs/underwater-pompeii/ :
> Baiae: an ancient Roman city lost to the same volcanoes that entombed Pompeii. But unlike Pompeii, Baiae sits under water, in the Bay of Naples. Nearly 2,000 years ago, the city was an escape for Rome’s rich and powerful elite, a place where they were free of the social restrictions of Roman society. But then the city sank into the ocean, to be forgotten in the annals of history. Now, a team of archaeologists is mapping the underwater ruins and piecing together what life was like in this playground for the rich. What made Baiae such a special place? And what happened to it?
Woe! Woe unto the obviously promiscuous.
DARPA grant to work on sensing and stimulating the brain noninvasively [video]
Won't this mean that you cannot think of anything else while operating the equipment?
Do you need to not think of anything else to move your arms?
Ex-BCI research student here.
The nerves that control your arm are wired directly into your central nervous system and can activate your muscles in direct response to action potentials fanned out from a single neuron.
These noninvasive technologies pick up aggregate signals from hundreds of thousands of neurons all firing pseudo-randomly, filtered spatially by the dura (thick layer of membrane that protects the brain), skull, flesh, skin and hair.
A more "HN" analogy would be like comparing a directly wired Ethernet connection (nerves connected to the CNS) to a side channel attack on an air-gapped system (noninvasive EEG). While randomly running background processes (normally) won't affect the data transmitted to the network, they will absolutely foil most side channel attacks.
What about with realtime NIRS with an (inverse?) scattering matrix? From https://www.openwater.cc/technology :
> Below are examples of the image quality we have achieved with our breakthrough scanning systems that use just red and near-infrared light and ultrasound pings.
https://en.wikipedia.org/wiki/Near-infrared_spectroscopy
Another question: is it possible to do ah molecular identification similar to idk quantum crystallography with photons of any wavelength, such as NIRS? Could that count things in samples?
https://twitter.com/westurner/status/1239012387367387138 :
> ... quantum crystallography: https://en.wikipedia.org/wiki/Quantum_crystallography There's probably some limit to infrared crystallography that anyone who knows anything about particles and lattices would know about ?
> What about with realtime NIRS with an (inverse?) scattering matrix?
I've purged a lot of the knowledge from that time from my brain, but from what I recall fNIRS takes a long time (on the order of multiple seconds) to take a single reading.
It also only really shows which regions of the brain are receiving more blood supply. While a huge improvement in spatial precision over EEG, it's still not anywhere near the same level of precision that a directly-wired nerve has.
That doesn't mean it's useless technology, just probably not viable for the low-latency, high-accuracy control you'd want for a military drone.
>Another question: is it possible to do ah molecular identification similar to idk quantum crystallography with photons of any wavelength, such as NIRS? Could that count things in samples?
I have absolutely no idea about molecular identification or quantum crystallography, so probably can't give you a good answer on that one.
The newer systems can sample much faster (10-100 Hz), but they're still limited by the fact that they're measuring a hemodynamic signal that lags behind neural activity.
Which other strong and weak forces could [photonic,] sensors detect?
IIUC, they're shooting for realtime MRI resolution with NIRS; to be used during surgery to assist surgeons in realtime.
edit: https://en.wikipedia.org/wiki/Neural_oscillation#Overview says brainwaves are 1-150 Hz? IIRC compassion is achievable on a bass guitar.
A couple of studies linked in Wikipedia for this use (one retracted):
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4049706/
Retracted: https://journals.plos.org/plosbiology/article?id=10.1371/jou...
Table with resolution differences between different techniques:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8116487/table/T...
Most NIRS systems are continuous-wave (CW) systems detecting blood oxygenation/chromophore concentration. I say this because, in the time domain, you're looking at 5-7 seconds to see changes in cerebral oxygenation.
It's also noteworthy that you're dealing with huge amounts of noise, very low resolution, and are limited to the cortical surface. This limits the applications in the BCI-domain.
I'm currently researching the feasibility of fast optical imaging. This has to be done in the frequency domain, but may yield temporal resolution in the milliseconds. The downside is finding an incoherent light source that's able to be modulated fast enough to detect the scattering changes.
You mentioned "time-domain", and I recalled "time-polarization".
From https://twitter.com/westurner/status/1049860034899927040 :
https://web.archive.org/web/20171003175149/https://www.omnis...
"Mind Control and EM Wave Polarization Transductions" (1999)
> To engineer the mind and its operations directly, one must perform electrodynamic engineering in the *time domain*, not in the *3-space EM energy density domain.*
Could be something there.
Topological Axion antiferromagnet https://phys.org/news/2021-07-layer-hall-effect-2d-topologic... :
> Researchers believe that when it is fully understood, TAI can be used to make semiconductors with potential applications in electronic devices, Ma said. The highly unusual properties of Axions will support a new electromagnetic response called the topological magneto-electric effect, paving the way for realizing ultra-sensitive, ultrafast, and dissipationless sensors, detectors and memory devices.
Optical topological antennas https://engineering.berkeley.edu/news/2021/02/light-unbound-... :
> The new work, reported in a paper published Feb. 25 in the journal Nature Physics, throws wide open the amount of information that can be multiplexed, or simultaneously transmitted, by a coherent light source. A common example of multiplexing is the transmission of multiple telephone calls over a single wire, but there had been fundamental limits to the number of coherent twisted light waves that could be directly multiplexed.
Rydberg sensor https://phys.org/news/2021-02-quantum-entire-radio-frequency... :
> Army researchers built the quantum sensor, which can sample the radio-frequency spectrum—from zero frequency up to 20 GHz—and detect AM and FM radio, Bluetooth, Wi-Fi and other communication signals.
> The Rydberg sensor uses laser beams to create highly-excited Rydberg atoms directly above a microwave circuit, to boost and hone in on the portion of the spectrum being measured. The Rydberg atoms are sensitive to the circuit's voltage, enabling the device to be used as a sensitive probe for the wide range of signals in the RF spectrum.
> "All previous demonstrations of Rydberg atomic sensors have only been able to sense small and specific regions of the RF spectrum, but our sensor now operates continuously over a wide frequency range for the first time,"
Sometimes people make posters or presentations for new tech, in medicine.
The xMed Exponential Medicine conference / program is in November this year: https://twitter.com/ExponentialMed
Space medicine also presents unique constraints that more rigorously select from possible solutions: https://en.wikipedia.org/wiki/Space_medicine
There is no progress in medicine without volunteers for clinical research trials. https://en.wikipedia.org/wiki/Phases_of_clinical_research
New Ways to Be Told That Your Python Code Is Bad
> Python programmers in general have an irrational aversion to if-expressions, a.k.a. the “ternary” operator.
Because the Python ternary operator is fucking backwards! Why on earth would you put the condition in the middle!?
x = 4 if condition() else 5
vs.: condition() ? 4 : 5
vs. the author's own lisp example: (setq x (if (condition) 4 5))
I love ternary operators, and Lisp/ML-style if/else blocks that return things, but Python puts the 4 and the 5 so far away from each other that even I hate using it, especially when the condition is a chained mess rather than just a function call.

As I recall, object? and object?? are valid and work in IPython because the Python mailing list said that the ternary operator was not reserved. (IIRC there was not yet a formal grammar, or collections.abc, or maybe even datetime or json, at the time.)
Ternary expressions on one line require branch coverage to be enabled in e.g. pytest; otherwise it'll look like the whole line is covered by tests even when each branch on that line hasn't actually been tested.
.get() -> Union[None, T]
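For comparison, a minimal sketch of both idioms; the `condition()` function here is just a placeholder for whatever predicate you'd actually use:

```python
def condition():
    # placeholder predicate; stands in for "a chained mess"
    return True

# Python's conditional expression: the condition sits in the middle
x = 4 if condition() else 5          # -> 4

# For dict lookups, .get() sidesteps the ternary entirely: it returns
# None (or a supplied default) instead of raising KeyError
d = {"a": 1}
value = d.get("b")                   # -> None
value_or_default = d.get("b", 0)     # -> 0
```

Note that the `4 if condition() else 5` form is a single line, so branch coverage has to be enabled to see whether both arms were actually exercised.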
Web-based editor
It looks like this is actually just the editor from Visual Studio Online unless I've missed something.
It's great, but if it were really Visual Studio Code that would be awesome and I'd be pleasantly surprised.
(The difference being that if it's just the editor it misses out on all the compiler/analysis integrations. If Github were providing Linux containers in the cloud for working on projects - essentially the SSH feature from Visual Studio Code - it would be absolutely brilliant.)
I think what you might want is GitHub Codespaces?: https://github.com/features/codespaces. Was on HN recently too.
Hmm. $0.18 per hour for 4GB of RAM is rather steep. A c6g.large on AWS would run you less than half of that, for the same number of CPUs and the same memory; a quarter if you're happy to use spot instances. Might be neat to have an OSS tool that spins up an EC2 (or Azure or GCP) instance running Code-Server[1] (VS Code running natively on the server, but presenting the UI in the browser), with a given git repo and credentials.
Admittedly, the storage costs would be higher.
The ml-workspace docker image includes Git, Jupyter, VS Code, SSH, and "many popular data science libraries & tools" https://github.com/ml-tooling/ml-workspace
docker run -p 8080:8080 -v "${PWD}:/workspace" mltooling/ml-workspace
Cocalc-docker also includes Git, Jupyter, SSH, a collaborative LaTeX editor, a time slider, but no code-server or VScode out of the box:
https://github.com/sagemathinc/cocalc-docker

docker run --name=cocalc -d -v ~/cocalc:/projects -p 443:443 sagemathinc/cocalc
GitHub Copilot Generated Insecure Code in 40% of Circumstances During Experiment
For comparison, what percentage of human-generated code is secure?
> For comparison, what percentage of human-generated code is secure?
Yeah how did they measure? Did static and dynamic analysis find design bugs too?
Maybe - as part of a Copilot-assisted DevSecOps workflow involving static and dynamic analysis run by GitHub Actions CI - create Issues with CWE "Common Weakness Enumeration" URLs from e.g. the CWE Top 25 in order to train the team, and Pull Requests to fix each issue?: https://cwe.mitre.org/top25/
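As a sketch of what such auto-filed Issues could carry, a tiny hypothetical helper; the function name and summary text are made up, but the cwe.mitre.org URL scheme is the real one:

```python
# Hypothetical helper: build the title and URL for an auto-filed issue
# referencing a CWE entry. CWE-79 is a real entry (Cross-site Scripting);
# the function name and the summary string are illustrative.
def cwe_issue(cwe_id: int, summary: str) -> dict:
    return {
        "title": f"CWE-{cwe_id}: {summary}",
        "url": f"https://cwe.mitre.org/data/definitions/{cwe_id}.html",
    }

issue = cwe_issue(79, "Unescaped template output")
# issue["url"] -> "https://cwe.mitre.org/data/definitions/79.html"
```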
Which bots send PRs?
AAS Journals Will Switch to Open Access
Publication charges for authors and students will probably rise by a factor of 2 or more... open access journals in technical fields are approaching $500 per page.
This will basically lock out people in the developing world from publishing in these journals.
> JOSS (Journal of Open Source Software) has managed to get articles indexed by Google Scholar [rescience_gscholar]. They publish their costs [joss_costs]: $275 Crossref membership, DOIs: $1/paper:
>> Assuming a publication rate of 200 papers per year this works out at ~$4.75 per paper
> [joss_costs]: https://joss.theoj.org/about#costs
^^ from https://news.ycombinator.com/item?id=24517711 and this log of my non-markdown, non-W3C-Web-Annotation threaded comments with URIs: https://westurner.github.io/hnlog/#comment-24517711
Does anyone here have thoughts on JOSS? I've reviewed for them once, had the impression the editors take their job seriously, and think I'll review again in the future. The review approach that the journal facilitates has a strong focus on engineering aspects, i.e. it addresses a weakness of other venues, where it often does not matter how messy, unstable, and poorly documented the code is (or even if it compiles). On the other hand, the JOSS reviewers are typically not experts on the problem that the software is solving.
I'm currently reviewing for JOSS, and have done so before. In many ways they're a very strange journal: the paper is nearly an afterthought, and the review is focused on the code. But I like them. As you say, the editors take their role seriously. And it seems to have two valuable contributions.
Firstly, encouraging and structuring code review in academia. My own code is almost entirely solo (and messy), so a venue for structured review and an incentive to robustify public code is good. Secondly, the culture in some disciplines is that code is not citable, only papers - and JOSS is an end-run around this. I hope this second situation is changing, but we're not there yet so JOSS has a valuable role for the moment in simply being a 'journal' assigning DOIs basically for code packages.
[Scholarly] Code review tools; criteria and implementations?
Does JOSS specify e.g. ReviewBoard, GitHub Pull Request reviews, or Gerrit for code reviews?
The reviews for JOSS happen on github[0] but the journal's not prescriptive about how you develop your package as long as the code is public. The criteria for the JOSS review are very clear[1].
I don't want to oversell the depth of the code review possible; not all of the reviewers will be fully expert in whatever tiny cutting-edge area the package is for (making correctness checks difficult beyond the test suite), and most of us are academics-who-code rather than research software engineers. But the fact it's happening at all is a great step forward.
[0]: https://github.com/openjournals/joss-reviews/issues [1]: https://joss.readthedocs.io/en/latest/review_criteria.html
Thanks for the citations. Looks like Wikipedia has "software review" and "software peer review":
https://en.wikipedia.org/wiki/Software_review
https://en.wikipedia.org/wiki/Software_peer_review
I'd add "Antipatterns" > "Software" https://en.wikipedia.org/wiki/Anti-pattern#Software_design
and "Code smells" > "Common code smells" https://en.wikipedia.org/wiki/Code_smell#Common_code_smells
and "Design smells" for advanced reviewers: https://en.wikipedia.org/wiki/Design_smell
and the CWE "Common Weakness Enumeration" numbers and thus URLs for Issues from the CWE Top 25 and beyond: https://cwe.mitre.org/top25/
FWIW, many or most scientists are not even trying to be software engineers: they just write slow code without reusing already-tested components and expect someone else to review Pull Requests after their PDF is considered impactful. They know enough coding to push the bar for their domain a bit higher each time.
Are there points for at least in-writing planning for the complete lifecycle and governance of an ongoing thesis defense of open source software for science: after we publish, what becomes of this code?
From https://joss.theoj.org/about#costs :
> Income: JOSS has an experimental collaboration with AAS publishing where authors submitting to one of the AAS journals can also publish a companion software paper in JOSS, thereby receiving a review of their software. For this service, JOSS receives a small donation from AAS publishing. In 2019, JOSS received $200 as a result of this collaboration.
A low cost of $1/page just shows how much of a scam even prestigious journals have become for all involved parties.
Referees work for free. Submitting authors pay through the nose for presumably typesetting and proofs whose true cost is closer to $1/page rather than hundreds. Editors are overworked.
Sci-Hub was supposed to disrupt that industry, but it seems all it's done is shift burden to authors and created pay-for publishing.
Moderation costs money, too.
Additional ScholarlyArticle "Journal" costs: moderation, BinderHub / JupyterLite white label SaaS?, hosting data and archived reproducible container images on IPFS and academictorrents and Git LFS, hosting {SQL, SPARQL, GraphQL,} queries and/or a SOLID HTTPS REST API and/or RSS feeds with dynamic content but static feed item URIs and/or ActivityStreams and/or https://schema.org/Action & InteractAction & https://schema.org/ReviewAction & ClaimReview fact check reviews, W3C Web Notifications, CRM + emailing list, keeping a legit cohort of impactful peer reviewers,
#LinkedData for #LinkedResearch: Dokieli, parsing https://schema.org/ScholarlyArticle citation styles,
> keeping a legit cohort of impactful peer reviewers, [who are time-constrained and unpaid, as well]
"Ask HN: How are online communities established?" https://news.ycombinator.com/item?id=24443965 re: building community, MCOS Marginal Cost of Service, CLV Customer Lifetime Value, etc
White House Launches US Digital Corps
I've worked with state government as a volunteer advisor. They're still developing everything with waterfall. Only contracting out to big firms, even if it's a small project. Lawmakers and aides sit in a room and write down what is to be done.
That's changing at the federal level. They know they've got a problem. Why shouldn't federal software be as easy to use as the best web software? If you've ever tried to use it you will quickly learn that isn't the case.
Some sites will only work with IE and no other browser. Developers in two years can make a huge difference for making the government be more agile and operate better.
I always suggest joining a local Code For America brigade. Work on a local project and see if it is for you. If you find yourself drawn to it then consider applying for a two year stint with the federal government. You can really make a difference!
> I've worked with state government as a volunteer advisor. They're still developing everything with waterfall. Only contracting out to big firms, even if it's a small project. Lawmakers and aides sit in a room and write down what is to be done.
The US Digital Services Playbook likely needs few modifications for use at state and local levels? https://github.com/usds/playbook#readme
"PLAY 1: Understand what people need" https://playbook.cio.gov/#play1
"PLAY 4: Build the service using agile and iterative practices" https://playbook.cio.gov/#play4
Do [lawmakers and aides] make good "Product Owners", stakeholders, [incentivized, gamified] app feedback capability utilizers? GitLab has Service Desk: you can email the service desk address without having an account, as necessary to create and follow up on [software] issues in GitHub/BitBucket/GitLab/Gitea project management systems.
> That's changing at the federal level. They know they've got a problem. Why shouldn't federal software be as easy to use as the best web software? If you've ever tried to use it you will quickly learn that isn't the case.
"PLAY 3: Make it simple and intuitive" https://playbook.cio.gov/#play3
> Some sites will only work with IE and no other browser. Developers in two years can make a huge difference for making the government be more agile and operate better.
US Web Design Standards https://designsystem.digital.gov/
From https://github.com/uswds/uswds#browser-support :
>> We’ve designed the design system to support older and newer browsers through progressive enhancement. The current major version of the design system (2.0) follows the 2% rule: we officially support any browser above 2% usage as observed by analytics.usa.gov. Currently, this means that the design system version 2.0 supports the newest versions of Chrome, Firefox, Safari, and Internet Explorer 11 and up.
> I always suggest joining a local Code For America brigade. Work on a local project and see if it is for you. If you find yourself drawn to it then consider applying for a two year stint with the federal government. You can really make a difference!
From https://en.wikipedia.org/wiki/Code_for_America :
>> [...] described Code for America as "the technology world's equivalent of the Peace Corps or Teach for America". The article goes on to say, "They bring fresh blood to the solution process, deliver agile coding and software development skills, and frequently offer new perspectives on the latest technology—something that is often sorely lacking from municipal government IT programs. This is a win-win for cities that need help and for technologists that want to give back and contribute to lower government costs and the delivery of improved government service."
Launch HN: Litnerd (YC S21) – Teaching kids to read with the help of live actors
Hi HN, my name is Anisa and I am the founder of Litnerd (https://litnerd.com/), an online reading program designed to teach elementary school students in America how to read.
There are 37M elementary school students in America. Schools spend $20B on reading and supplemental education programs. Yet 42% of 4th grade students are reading at a 1st or 2nd grade proficiency level! The #1 reason students aren’t reading? They say it’s boring. We change that by bringing books to life. Think your favorite book turned into a tv-show style episode-by-episode reenactment, coupled with a complete curriculum and lesson plans.
1 in 8 Americans is functionally illiterate. Like any skill, reading is a habit. If you grew up in a household where you did not see your parents reading, you likely will not develop the habit. This correlates to the socio-economic divide. Two thirds of American students who lack reading skills by the end of fourth grade will rely on welfare as adults. To impact this, research suggests that we need to start at the earliest years.
I am passionate about the research in support of art and theatre as well as story-telling to improve childhood learning. Litnerd is the marriage of these interests. The inspiration comes from Sesame Street and Hamilton The Musical. In the late 60s, Joan Cooney decided to produce a children’s TV show that would influence children across America to learn to read—it became Sesame Street. Cooney researched her idea extensively, consulting with sociologists and scientists, and found that TV’s stickiness can be an important tool for education. Lin-Manuel Miranda took the story of Alexander Hamilton and brought it to life as a musical. Kids have learned more about Hamilton’s history thanks to Hamilton the Musical than any of their textbooks. In fact, this was the case so much that a program called EduHam is used to teach history in middle schools across the nation. When I heard that, the lightbulb went off and I decided to go all in on starting Litnerd.
We hire art and theatre professionals to recreate scenes directly from books in episode style format to bring the book to life, in a similar fashion to watching your favorite TV shows. We literally lead 'read out loud' in the classroom while the teacher/actor is acting out the main character in the book. We have a weekly designated Litnerd period in the schools/classes we serve and we live-stream in our teachers/actors for an interactive session (the students participate and read live with the actor as well as complete written lesson plans, phonetic exercises etc). We are currently serving 14,000 students in this manner.
The format of our program is such that if you don't complete the assigned reading and worksheets, you will feel like you are missing out on what is happening in later episodes. In this way, reading is layered in as a fundamental core to the program. Our program is part of scheduled classroom time.
A big part of our business involves curating content and materials that capture the interest and coolness-factor for elementary school students. We’ve found that students love choose-your-own-adventure style stories, especially ones involving mythical creatures—something about being able to have autonomy on the outcomes. So far, it seems to be working. We've even received fan mail from students! But we are obsessed with staying cool/relevant in our content.
Teachers like our product because it eases the burden placed on them. US teachers typically spend 4 to 10 hours a week (unpaid) planning their curriculum and $400-800 of their own money for classroom supplies. That's outrageous! When designing Litnerd, we wanted to ensure our product was not adding more work to their plate. Our programs are led by our own Resident Teaching Artists, who are live streamed into the classroom and remain in character to the episode as they teach the Litnerd curriculum built on top of the books. Our programs come with lesson plans, activity packets, curriculum correlations, educator resources, and complete ebooks.
Traditional K-12 education has extremely long sale cycles and is hard to break into. It can take years to become a contracted vendor, especially with large districts like NYC Department of Education. Because of my experience with my first YC backed startup that sold to government and nonprofits, coupled with my experience working at a large edtech company that built content for Higher Ed, I understand this sector and how to navigate the budget line item process.
Since launching in January, we have become contracted vendors with the New York City Department of Education (the largest education district in America). As a result, we’ve been growing at 60% MoM, are currently used by over 14k students in their classrooms and hit $110K in ARR. Our program is part of scheduled classroom time for elementary schools—not homework, and not extracurricular. Here’s a walkthrough video from a teacher’s perspective: https://www.loom.com/share/9ffc59f0d7ed4a66964003703bba7b94.
I am so grateful for the opportunity to share our story and mission with you. If you loved or struggled with reading as a kid, what factors do you think contributed? Also, if you have experience teaching elementary school or if you are a parent, I would love to hear your thoughts and ideas on how you foster reading amongst your students/children! I am excited to hear your feedback and ideas to help us inspire the next generation of readers.
This looks like a really thoughtful resource, and I can tell you are hitting all the right marks when it comes to getting this into classrooms. As a first grade teacher, however, I am curious how you see Litnerd fitting into a balanced literacy program. In browsing your programs, I see language comprehension and social emotional learning but no phonics or phonemic awareness. Do you have plans to support these crucial reading skills as well?
I'm thrilled to have our first elementary school teacher comment!! Yes! We have a team of educators and curriculum writers (broken up by Grades PreK-K, 1-2 and 3-5) that follow Common Core standards and build custom ELA curriculum on top of each book and Litnerd program. We provide 2 lesson plans per week and a total of 8 lesson plans per program. We also create SEL lessons (again, based on the book) that follow NYSED competencies and 'Leader in Me' program language (since that is what most of our schools currently use for SEL). We hope to continue to improve over time and I would definitely welcome any feedback as we grow in this area!
TIL a new acronym word symbol lexeme: SEL: Social and Emotional Learning
> Social Emotional Learning (SEL) is an education practice that integrates social emotional skills into school curriculum. SEL is otherwise referred to as "socio-emotional learning" or "social-emotional literacy." When in practice, social emotional learning has equal emphasis on social and emotional skills to other subjects such as math, science, and reading.[1] The five main components of social emotional learning are self-awareness, self management, social awareness, responsible decision making, and relationship skills.
https://en.wikipedia.org/wiki/Social_and_Emotional_Learning
For good measure, Common Core English Language Arts standards: https://en.wikipedia.org/wiki/Common_Core_State_Standards_In...
Khan Academy has 2nd-9th Grade ELA exercises: English & Language Arts: https://www.khanacademy.org/ela
Unfortunately AFAIU there's not a good way to explore the Khan Academy Kids curriculum graph, which definitely does include reading: https://learn.khanacademy.org/khan-academy-kids/
> The app engages kids in core subjects like early literacy, reading, writing, language, and math, while encouraging creativity and building social-emotional skills
In terms of phonemic awareness and phonological awareness, is there a good survey of US and world reading programs and their evidence base, if any?
From https://en.wikipedia.org/wiki/Phonemic_awareness :
> Phonemic awareness is a subset of phonological awareness in which listeners are able to hear, identify and manipulate phonemes, the smallest mental units of sound that help to differentiate units of meaning (morphemes). Separating the spoken word "cat" into three distinct phonemes, /k/, /æ/, and /t/, requires phonemic awareness. The National Reading Panel has found that phonemic awareness improves children's word reading and reading comprehension and helps children learn to spell.[1] Phonemic awareness is the basis for learning phonics.[2]
> Phonemic awareness and phonological awareness are often confused since they are interdependent. Phonemic awareness is the ability to hear and manipulate individual phonemes. *Phonological awareness includes this ability, but it also includes the ability to hear and manipulate larger units of sound, such as onsets and rimes and syllables.*
What are some of the more evidence-based (?) (early literacy,) reading curricula? OTOH: LETRS, Heggerty, PAL: https://www.google.com/search?q=site%3Aen.wikipedia.org+%22l...
Looks like Cambium acquired e.g. Kurzweil Education in 2005?
More context:
Reading readiness in the United States: https://en.wikipedia.org/wiki/Reading_readiness_in_the_Unite...
Emergent literacies: https://en.wikipedia.org/wiki/Emergent_literacies
An interactive IPA chart with videos and readings linked with RDF (e.g. ~WordNet RDF) would be great. From "Duolingo's language notes all on one page" https://westurner.github.io/hnlog/#comment-26430146 :
> An IPA (International Phonetic Alphabet) reference would be helpful, too. After taking linguistics in college, I found these Sozo videos of US english IPA consonants and vowels that simultaneously show {the ipa symbol, example words, someone visually and auditorily producing the phoneme from 2 angles, and the spectrogram of the waveform} but a few or a configurable number of [spaced] repetitions would be helpful: https://youtu.be/Sw36F_UcIn8
> IDK how cartoonish or 3d of an "articulatory phonetic" model would reach the widest audience. https://en.wikipedia.org/wiki/Articulatory_phonetics
> IPA chart: https://en.wikipedia.org/wiki/International_Phonetic_Alphabe...
> IPA chart with audio: https://en.wikipedia.org/wiki/IPA_vowel_chart_with_audio
> All of the IPA consonant chart played as a video: "International Phonetic Alphabet Consonant sounds (Pulmonic)- From Wikipedia.org" https://youtu.be/yFAITaBr6Tw
> I'll have to find the link of the site where they playback youtube videos with multiple languages' subtitles highlighted side-by-side along with the video.
>> [...] Found it: https://www.captionpop.com/
>> It looks like there are a few browser extensions for displaying multiple subtitles as well; e.g. "YouTube Dual Subtitles", "Two Captions for YouTube and Netflix"
Phonics programs really could reference IPA from the start: there are different sounds for the same letters; IPA is the most standard way to indicate how to pronounce words: it's in the old school dictionary, and now it's in the Google "define:" or just "define word" dictionary.
UN Sustainable Development Goal 4: Quality Education: https://www.globalgoals.org/4-quality-education
> Target 4.6: Universal Literacy and Numeracy
> By 2030, ensure that all youth and a substantial proportion of adults, both men and women, achieve literacy and numeracy.
https://sdgs.un.org/goals/goal4 :
> Indicator 4.6.1: Percentage of population in a given age group achieving at least a fixed level of proficiency in functional (a) literacy and (b) numeracy skills, by sex
... Goals, Targets, and Indicators.
Which traversals of a curriculum graph are optimal or sufficient?
You can add https://schema.org/about and https://schema.org/educationalAlignment Linked Data to your [#OER] curriculum resources to increase discoverability, reusability.
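As a sketch of what that markup might look like: `about` and `educationalAlignment` are real schema.org properties and `AlignmentObject` is a real schema.org type, but the resource name and target values below are made up for illustration:

```python
import json

# Hypothetical OER resource tagged with schema.org Linked Data (JSON-LD).
# The property names are real schema.org terms; the values are illustrative.
resource = {
    "@context": "https://schema.org",
    "@type": "LearningResource",
    "name": "Phonemic Awareness Unit 1",
    "about": {"@type": "Thing", "name": "Phonemic awareness"},
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "educationalSubject",
        "educationalFramework": "Common Core ELA",
        "targetName": "CCSS.ELA-LITERACY.RF.1.2",
    },
}
print(json.dumps(resource, indent=2))
```

This JSON-LD could be embedded in the resource's HTML in a `<script type="application/ld+json">` tag for crawlers to discover.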
Aarne–Thompson–Uther Index code URN URIs could be helpful: https://en.wikipedia.org/wiki/Aarne%E2%80%93Thompson%E2%80%9...
> The Aarne–Thompson–Uther Index (ATU Index) is a catalogue of folktale types used in folklore studies.
Are there competencies linked to maybe a nested outline that we typically traverse in depth-first order? https://github.com/todotxt/todo.txt : Todo.txt format has +succinct @context labels. Some way to record and score our own paths objectively would be great.
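For reference, a minimal sketch of pulling the +project and @context tags out of a todo.txt line (the task text itself is made up):

```python
import re

# Minimal todo.txt tag extraction. The todo.txt format marks projects
# with "+" and contexts with "@"; the example task line is illustrative.
line = "(A) Review phonics unit +curriculum @school due:2021-09-01"
projects = re.findall(r"\+(\S+)", line)   # -> ['curriculum']
contexts = re.findall(r"@(\S+)", line)    # -> ['school']
```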
> What are some of the more evidence-based (?) (early literacy,) reading curricula? OTOH: LETRS, Heggerty, PAL
Looks like there are only 21 search results for: "LETRS" "Fundation" "Heggerty": https://www.google.com/search?q="LETRS"+"fundation"+"heggert...
What is the name for this category of curricula?
Perhaps the US Department of Education or similar could compare early reading programs in a wiki[pedia] page, according to criteria to include measures of evidence-basedness? Just like https://collegescorecard.ed.gov/data/ has "aggregate data for each institution [&] Includes information on institutional characteristics, enrollment, student aid, costs, and student outcomes."
From YouTube, it looks like there are cool hand motions for Heggerty.
Nimforum: Lightweight alternative to Discourse written in Nim
awesome-selfhosted > "Communication - Social Networks and Forums" doesn't yet link to Nimforum: https://github.com/awesome-selfhosted/awesome-selfhosted#com...
An Opinionated Guide to Xargs
Wanting verbose logging from xargs, years ago I wrote a script called `el` (edit lines) that basically does `xargs -0` with logging. https://github.com/westurner/dotfiles/blob/develop/scripts/e...
It turns out that e.g. -print0 and -0 are the only safe way: newlines in filenames aren't escaped:
find . -type f -print0 | el -0 --each -x echo
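A quick demo of the failure mode, assuming standard find/xargs (this only uses POSIX-ish tools; the temp directory and filename are throwaway):

```shell
# Why -print0/-0 is the only safe way: a filename containing a newline
# is split into two bogus arguments by a plain `find | xargs` pipeline.
workdir=$(mktemp -d)
cd "$workdir"
touch "$(printf 'bad\nname.txt')"   # one file whose name contains a newline

# newline-delimited: the single file is seen as two arguments,
# so `sh -c 'echo arg'` runs twice
unsafe=$(find . -type f | xargs -n1 sh -c 'echo arg' _ | grep -c arg)

# NUL-delimited: the file survives as a single argument
safe=$(find . -type f -print0 | xargs -0 -n1 sh -c 'echo arg' _ | grep -c arg)

echo "unsafe=$unsafe safe=$safe"   # unsafe=2 safe=1
```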
GNU Parallel is a much better tool: https://en.wikipedia.org/wiki/GNU_parallel

(author here) Hm, I don't see either of these points, because:
GNU xargs has --verbose which logs every command. Does that not do what you want? (Maybe I should mention its existence in the post)
xargs -P can do everything GNU parallel does, which I mention in the post. Any counterexamples? GNU parallel is a very ugly DSL IMO, and I don't see what it adds.
--
edit: Logging can also be done with by recursively invoking shell functions that log with the $0 Dispatch Pattern, explained in the post. I don't see a need for another tool; this is the Unix philosophy and compositionality of shell at work :)
Parallel's killer feature is how it spools subprocess output, ensuring that it doesn't get jumbled together. xargs can't do that. I use parallel for things like shelling out to 10000 hosts and getting some statistics. If I use xargs the output stomps all over itself.
Ah OK thanks, I responded to this here: https://news.ycombinator.com/item?id=28259473
As far as I'm aware, xargs still has the problem of multiple jobs being able to write to stdout at the same time, potentially causing their output streams to be intermingled. Compare this with parallel's --group.
Also parallel can run some of those jobs on remote machines. I don't believe xargs has an equivalent job management function.
In your examples you fail to put 'xargs -P' in the middle of a pipeline: You only put it at the end.
In other words:
some command | xargs -P other command | third command
This is useful if 'other command' is slow. If you buffer on disk, you need to clean up after each task: maybe there is not enough free disk space to buffer the output of all tasks.
UNIX is great in that you can pipe commands together, but due to the interleaving issue 'xargs -P' fails here. It does not live up to the UNIX philosophy, which is probably why you unconsciously only use it at the end of a pipeline.
You can find a different counterexample at https://unix.stackexchange.com/questions/405552/using-xargs-... I will be impressed if you can implement that using xargs, especially if you can make it cleaner than the parallel version.
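The interleaving problem can be sketched with a toy model (GNU parallel's --group actually spools job output via temp files; this simplified version just shows why per-line writes from concurrent jobs can mix, and how per-job buffering prevents it):

```python
# Toy model of output interleaving: concurrent "jobs" write multi-line
# output to one shared stream. Ungrouped writes (like `xargs -P`) can
# interleave line-by-line; grouping (like GNU parallel's --group)
# buffers each job's full output and flushes it atomically.
import io
import threading

def run_jobs(names, group):
    out = io.StringIO()
    lock = threading.Lock()
    def job(name):
        lines = [f"{name}: line {i}\n" for i in range(3)]
        if group:
            with lock:                 # flush the whole job's output at once
                out.write("".join(lines))
        else:
            for line in lines:         # each line races the other jobs
                with lock:
                    out.write(line)
    threads = [threading.Thread(target=job, args=(n,)) for n in names]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out.getvalue()

grouped = run_jobs(["host1", "host2"], group=True)
lines = grouped.splitlines()
# With --group-style buffering, each job's lines stay contiguous:
assert lines.index("host1: line 2") == lines.index("host1: line 0") + 2
assert lines.index("host2: line 2") == lines.index("host2: line 0") + 2
```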
[deleted]
Yeah but xargs doesn't refuse to run until I have agreed to a EULA stating I will cite it in my next academic paper.
parallel doesn't either, it just nags. I agree about how silly and annoying it is. Imagine if every time the parallel author opened Firefox he got a message reminding him to personally thank me if he uses his web browser for research, or if every time his research program calls malloc he has to acknowledge and cite Ulrich Drepper. Very very silly.
Parallel is the better tool but the nagware impairs its reputation.
Enhanced Support for Citations on GitHub
> CITATION.cff files are plain text files with human- and machine-readable citation information. When we detect a CITATION.cff file in a repository, we use this information to create convenient APA or BibTeX style citation links that can be referenced by others.
https://schema.org/ScholarlyArticle RDFa and JSON-LD can be parsed with a standard Linked Data parser. Looks like YAML-LD requires quoting e.g. "@context": and "@id":
From https://docs.github.com/en/github/creating-cloning-and-archi... ; in your repo's /CITATION.cff:
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Lisa"
given-names: "Mona"
orcid: "https://orcid.org/0000-0000-0000-0000"
- family-names: "Bot"
given-names: "Hew"
orcid: "https://orcid.org/0000-0000-0000-0000"
title: "My Research Software"
version: 2.0.4
doi: 10.5281/zenodo.1234
date-released: 2017-12-18
url: "https://github.com/github/linguist"
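As a rough sketch of what GitHub derives from those fields, here's an APA-style software citation string built from the same metadata (the exact format GitHub emits may differ; this is an approximation, not GitHub's code):

```python
# Build an APA-style citation from (a dict version of) the CITATION.cff above.
meta = {
    "authors": [{"family-names": "Lisa", "given-names": "Mona"},
                {"family-names": "Bot", "given-names": "Hew"}],
    "title": "My Research Software",
    "version": "2.0.4",
    "date-released": "2017-12-18",
    "url": "https://github.com/github/linguist",
}
names = ", ".join(f"{a['family-names']}, {a['given-names'][0]}."
                  for a in meta["authors"])
apa = (f"{names} ({meta['date-released'][:4]}). {meta['title']} "
       f"(Version {meta['version']}) [Computer software]. {meta['url']}")

assert apa.startswith("Lisa, M., Bot, H. (2017). My Research Software")
```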
https://citation-file-format.github.io/
Canada calls screen scraping ‘unsecure,’ sets Open Banking target for 2023
It's about time. When I learned that applications like YNAB (You Need A Budget) use services like Plaid to connect to my bank account, and that these services literally take my username and password and impersonate me to get my banking data, I was a little sketched out. I use YNAB every day, and having it connected to my bank account is incredibly useful, but if something goes wrong and Plaid loses my money somehow, is there any recourse?
Hopefully individuals will be able to use the Open Banking APIs to access their own data directly, but it looks like accreditation will be required, so probably not.
Here's the full text of the report: https://www.canada.ca/en/department-finance/programs/consult...
Plaid is only one security breach away from being utterly destroyed. And they will take out the financial lives of all their customers with them.
It’s utterly irresponsible and I have no idea how Plaid hasn’t been shut down. You have no recourse if they are breached. The TOS of your online banking probably says that if you disclose your username and password to any third party then you have no liability protections.
This is FUD. Lots of Plaid-based connections only allow reads. This is a regulated industry, and the fallout reputationally might be tough, but consumers are well-protected.
How is that enforced? What is the technical basis that enforces read-only access using user/password auth? Especially since that user/password auth is used by an end user to do "write"-type actions?
Read-only access is not possible. By handing over the credentials you are handing over write access. You are correct.
A couple of my banking institutions let me generate a read-only set of credentials for this sort of purpose.
Citi and Capital One have OAuth flows that Plaid supports, too, which tends to make me angrier at the banks than Plaid; the need for this stuff has been clear for a decade now, but only a few have added OAuth or similar.
AFAIU, there are still zero (0) read-only (e.g. OAuth) consumer banking APIs in the US as well?
Banks could save themselves CPU, RAM, bandwidth, and liability by implementing read-only API tokens and methods that need only return JSON - instead of HTML or worse, monthly PDF tables for a fee - possibly similar to the Plaid API: https://plaid.com/docs/api/
There is competition in consumer/retail banking, but still the only way to do e.g. budget and fraud analysis with third party apps is to give away all authentication factors: u/p/sqa; and TBH that's unacceptable.
Traditional and distributed ledger service providers might also consider W3C ILP: Interledger Protocol (in starting their move to quantum-resistant ledgers by 2022, in order to have a 5-year refresh cycle before QC is a real risk by 2027, optimistically, for science), and compare the entropy of username+password_hash+security_question_answer strings with the entropy of cryptoasset account public key hash strings: https://interledger.org/developer-tools/get-started/overview...
> Sender – Initiates a value transfer.
> Router (Connector) – Applies currency exchange and forwards packets of value. This is an intermediary node between the sender and the receiver. {MSB: KYC, AML, 10k reporting requirement, etc}
> Receiver – Receives the value
Multifactor authentication: Something you have, something you know, something you are
Multisig: n-of-m keys required to approve a transaction
Edit: from "Fed announces details of new interbank service to support instant payments" https://news.ycombinator.com/item?id=24109576 :
> For purposes of Interledger, we call all settlement systems ledgers. These can include banks, blockchains, peer-to-peer payment schemes, automated clearing house (ACH), mobile money institutions, central-bank operated real-time gross settlement (RTGS) systems, and even more. […]
> You can envision the Interledger as a graph where the points are individual nodes and the edges are accounts between two parties. Parties with only one account can send or receive through the party on the other side of that account. Parties with two or more accounts are connectors, who can facilitate payments to or from anyone they're connected to.
> Connectors [AKA routers] provide a service of forwarding packets and relaying money, and they take on some risk when they do so. In exchange, connectors can charge fees and derive a profit from these services. In the open network of the Interledger, connectors are expected to compete among one another to offer the best balance of speed, reliability, coverage, and cost.
W3C ILP: Interledger Protocol > Peering, Clearing and Settling: https://interledger.org/rfcs/0032-peering-clearing-settlemen...
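The graph model quoted above (parties as nodes, accounts as edges, connectors charging forwarding fees) can be sketched as a shortest-path search. The node names and fees here are invented, and real ILP routing is packet-based and far more involved; this only illustrates how connectors compete on cost:

```python
# Toy Interledger-style routing: find the cheapest connector route.
import heapq

accounts = {                       # party -> [(counterparty, forwarding fee)]
    "alice": [("conn1", 0.01), ("conn2", 0.02)],
    "conn1": [("bob", 0.02)],
    "conn2": [("bob", 0.00)],
    "bob": [],
}

def cheapest_route(src, dst):
    # Dijkstra over cumulative forwarding fees.
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        fee, node, path = heapq.heappop(heap)
        if node == dst:
            return fee, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, f in accounts[node]:
            heapq.heappush(heap, (fee + f, nxt, path + [nxt]))
    return None

fee, path = cheapest_route("alice", "bob")
assert path == ["alice", "conn2", "bob"] and abs(fee - 0.02) < 1e-9
```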
> Hopefully individuals will be able to use the Open Banking APIs to access their own data directly, but it looks like accreditation will be required, so probably not.
When you loan your money to a bank by depositing ledger dollars or cash - and they, since GLBA in 1999, invest it and offer less than a 1% checking interest rate - they won't even give you the record of all of your transactions as CSV/OFX (`SELECT * FROM transactions WHERE account_id=?`); instead you have to pay $20/mo per autogenerated PDF containing a table of transactions to scrape with e.g. PDFMiner (because they don't keep all account history data online).
Seemingly OT, but not. APIs for comparison here:
FinTS / HBCI: Home Banking Computer Information protocol https://en.wikipedia.org/wiki/FinTS
E.g. GNUcash (open source double-entry accounting software) supports HBCI (and QIF (Quicken format), and OFX (Open Financial Exchange)). https://www.gnucash.org/features.phtml
HBCI/FinTS has been around in Germany for quite a while, but nowhere else has comparable banking standards. I.e. Plaid may (unfortunately, due to the lack of read-only tokens across the entire US consumer banking industry) be the most viable option for implementing HBCI-like support in GNUcash.
OpenBanking API Specifications: https://standards.openbanking.org.uk/api-specifications/
Web3 (Ethereum,) APIs: https://web3py.readthedocs.io/en/stable/web3.main.html#rpc-a...
ISO20022 is "A single standardisation approach (methodology, process, repository) to be used by all financial standards initiatives" https://www.iso20022.org/
Brazil's PIX is one of the first real implementers of ISO20022. A note regarding such challenges: https://news.ycombinator.com/item?id=24104351
What data format does the SEC's CAT (Consolidated Audit Trail) expect to receive mandatory financial reporting information in? Could ILP simplify banking and financial reporting at all?
FWIU, RippleNet (?) is the only network that supports attachments of e.g. line-item invoices (that we'd all like to see in the interest of transparency and accountability in government spending).
W3C ILP: Interledger Protocol. See links above.
Of the specs in this loose category, only cryptoledgers do not depend upon (DNS or) TLS/SSL - at the protocol layer, at least - and thus upon every CA in the kept-up-to-date trusted CA cert bundle (which could be built from a CT (Certificate Transparency) log of cert issuance and revocation events kept in a blockchain or in e.g. centralized google/trillian, for which Google holds the sole trusted root and backup responsibilities).
Though, the DNS dependency has probably crept back into e.g. the bitcoind software by now (which used to bootstrap its list of peer nodes (~UNL) from an IRC IP address instead of a DNS domain).
FWIU, each trusted ACH (US 'Direct Deposit') party has a (one) GPG key that they use to sign transaction documents - on behalf of all of their customers' accounts - sent over what is now (S)FTP, on scout's honor.
Interactive Linear Algebra (2019)
For those interested in these kinds of interactive math experiences, I have been keeping track of them for a while. Here is my list so far:
• https://www.intmath.com/ - Interactive Mathematics Learn math while you play with it
• http://worrydream.com/LadderOfAbstraction/ - up and down the ladder of abstraction
• https://betterexplained.com/ - Intuitive guides to various things in math
• https://www.math3ma.com/blog/matrices-probability-graphs - Viewing Matrices & Probability as Graphs
• http://immersivemath.com/ila/index.html - immersive linear alg
https://github.com/topics/linear-algebra?l=jupyter+notebook lists "Computational Linear Algebra for Coders" https://github.com/fastai/numerical-linear-algebra
"site:GitHub.com inurl:awesome linear algebra jupyter" lists a few awesome lists with interactive linear algebra resources: https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
3blue1brown's "Essence of linear algebra" playlist has some excellent tutorials with intuition-building visualizations built with manim: https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFit...
Git password authentication is shutting down
I'm fine with this change for my usage; I don't think I've used password auth for myself or any automated service I've set up for years now. However, this will introduce more confusion for newcomers who already have to figure out what Git, GitHub, etc. are. I just spent some time last weekend teaching someone the basics of how to create a new project. Such a simple idea required introducing the terminal, basic terminal commands, GitHub, git and its most common commands. It took about 3 hours for us to get through just the most basic pieces. Adding on ssh-keygen, key management adds even more friction.
It's certainly a difficult problem. How can we offer a more gentle learning curve for budding developers while still requiring "real" projects to use best practices for security and development?
I also have found teaching someone how to be even marginally capable of contributing to a Github project from scratch to be a very time consuming and frustrating thing. Think, having your graphics designer able to make commits, or having someone who only wants to update docs.
The worst part is the "easier" solutions are actually just footguns in disguise, as soon as they accidentally click the wrong thing and end up with a detached HEAD, a few commits ahead and behind the REMOTE, and halfway through a botched merge, you have to figure out how to bail them out of that using a GUI you've never actually used. Knowing all this, you either teach them git (high short term pain, high chance of them just giving up immediately) or you tell them to download the first result for "windows foss git gui" and pray that history won't repeat itself.
The solution that everyone actually uses until they learn the unnecessary details is "take a backup of relevant files and blow away & redownload the repo".
wait there is another way?
git stash
git reset --hard master
git pull
git stash pop
Uh oh, you've now merged remote master into your local master which was two commits ahead and 3 behind!
My git config has fast-forward only for pulls and I habitually use --ff-only for any pull :)
The command you want is "git pull --rebase". There are configuration settings to make "git pull" rebase by default, and I'd recommend always turning that on (which may be why the person above omitted it from his pull command).
I actually thought "git pull" did "git pull --rebase" by default (this may be what you get from running `git --configure` without modifications?), but maybe I've just been configuring it that way. You can achieve this in your global Git configuration by setting "pull.rebase" to "true".
I don't think it's sane behavior for "git pull" to do anything else besides rebase the local change onto the upstream branch, so I'm surprised it's not the command's default behavior. Has the project not changed the CLI for compatibility reasons or something?
When do you ever want a "git pull" that's not a rebase? That generates a merge commit saying "Merge master into origin/master" (or something similar) which is stupid. If you really want to use actual branches for some reason, that's fine, but "merge master into master" commits are an anti-pattern that if I ever see in a Git repository I'm working on or responsible for, results in me having a conversation with the author about how to use Git correctly.
I actually don't want to rebase. Rebase can rewrite your local history and make it impossible for you to push without doing a force push (or some other more complicated maneuvering). The only time pull with rebase is okay (on a shared branch such as master) is when you know that your local branch is strictly behind remote. That's exactly what --ff-only does. It fast-forwards only if you are behind remote, and declines to do anything if you are ahead, so you can sort your shit out without rewriting history.
If "git pull --rebase" succeeds without reporting a merge conflict, then it's tantamount to having executed the command with "--ff-only".
I believe you are incorrect, however. "git pull --rebase" will not ever rewrite history from the origin repository. It will only ever modify your local unpublished commits to account for new commits from the origin branch. If your change and the new commits from origin aren't touching the same lines or files, then the update is seamless [1].
If there is a conflict because your change and a new commit from origin both changed the same line in the same file, this can result in a merge conflict that you need to resolve. But you resolve this locally, by updating your (unpushed, local) commit(s) to account for the new history. When you complete the merge, it will not show up as a merge commit in the repository -- you will simply have amended your local unpublished commit(s) to account for the changes in upstream, and you will have seen every conflict and resolved each one yourself. When the process is complete, and you've resolved the merge, you'll have a nice linear branch where your commit(s) are on the tip of the origin branch.
The flag "--ff-only" basically just means "refuse to start a local merge while performing this operation, and instead fail".
Because of the potential for these merge conflicts, it's a best practice to "git pull" frequently, so that if there are conflicts you can deal with them incrementally (and possibly discuss the software design with coworkers/project partners if the design is beginning to diverge -- and so people can coordinate what they're working on and try to stay out of each other's way), instead of working through a massive pile of conflicts at the end of your "local 'branch'" (i.e. the code in your personal Git repo that constitutes unpushed commits to the upstream branch).
Additionally, all of the central Git repositories I've worked in in a professional context were also configured to disallow "git push --force" (a command that can rewrite history), for all "real" branches. These systems gave users their own private Git repo sandbox (just like you can fork a repo on GitHub) where they can force push if they want to (but is backed up centrally like GitHub to avoid the loss of work). This personal repo was very useful for saving my work. I was in the habit of committing and pushing to the private repo about once every 30m-1h (to eliminate the chance of major work loss due to hardware failure). Almost always I'd squash all of these commits into one before rebasing onto master, so that the change comes in as a single commit, unless it would be too large to review.
In the occasional circumstances where I've legitimately needed to rewrite history for some reason -- say credentials got into the repository, or someone generated one of these "merge master into master" commits -- then I would change HEAD from master to another branch, delete master, and then recreate it with the desired state. (And even that operation would show up in the system's logs, so in the case of something like credentials you'd additionally contact the security or source control team to make sure the commit objects containing the credentials were actually deleted out of the repo history completely, including stuff you can find only via the reflog.) Then contact the team working on the repo to let them know that you had to rewrite history to correct a problem.
I would recommend disabling "git push --force" for all collaborative projects. If you're operating the repository, you can do this with `git config --system receive.denyNonFastForwards true`. In GitHub there's probably a repository setting switch somewhere.
Once professional software engineers start working with Git all day long, they quickly get past the beginner stage and the need to do this kind of stuff is very rare.
[1] It doesn't mean the software will work though, even if both changes would have worked in isolation. You still need to inspect and test the results of "git pull (--ff-only)", since even if there are no conflicts like commits that modify the same lines as yours, or there are conflicts that Git can resolve automatically, it's possible for the resultant software logic to be defective, since Git has no semantic understanding of the code.
`git pull --rebase` usually is what I need to do. To save local changes and rebase to the git remote's branch:
# git branch -av;
# git remote -v;
# git reflog; git help reflog; man git-reflog
# git show HEAD@{0}
# git log -n5 --graph;
git add -A; git status;
git stash; git stash list;
git pull --rebase;
#git pull --rebase origin develop
# git fetch origin develop
# git rebase origin/develop
git stash pop;
git stash list;
git status;
# git commit
# git rebase -i HEAD~5 # squash
# git push
HubFlow does branch merging correctly because I never can. Even when it's just me and I don't remember how I was handling tags of releases on which branch, I just reach for HubFlow now and it's pretty much good.
There's a way to default to --rebase for pulls: is there a reason not to set that in a global gitconfig? Edit: From https://stackoverflow.com/questions/13846300/how-to-make-git... :
> There are now 3 different levels of configuration for default pull behaviour. From most general to most fine grained they are: […]
git config --global pull.rebase true
Maybe start here: http://learngitbranching.js.org/
GitHub Learning Lab: https://lab.github.com/
https://learnxinyminutes.com/docs/git/ #Further_resources
A future for SQL on the web
What's kind of bonkers here is that IndexedDB uses sqlite as its backend. So, this is sqlite (WASM) -> IndexedDB -> sqlite (native).
The Internet is a wild place...
I'm going to build a business that offers SQLite as a web service. It will be backed by a P2P network of browser instances storing data in IndexedDB. Taking investment now.
TIL, about Graph "Protocol for building decentralized applications quickly on Ethereum" https://github.com/graphprotocol
https://thegraph.com/docs/indexing
> Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobb-Douglas Rebate Function.
> GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network.
> Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing.
It's Ethereum though, so it's LevelDB, not SQLite on IndexedDB on SQLite.
Show HN: Python Source Code Refactoring Toolkit via AST
A similar type project. Though I haven't seen much activity recently:
I haven't used bowler, but from what I can see it is using lib2to3 (through fissix), which can't parse newer Python code (parenthesized context managers, the new pattern matching statement, etc.) due to it using an LL(1) parser. The regular ast module, on the other hand, is always compatible with the current grammar, since that is what CPython internally consumes. It is also handier to work with, since the majority of linters already use it, so it would be very easy to port rules.
(Author of Bowler/maintainer of fissix)
The main concern with using the ast module from stdlib is that you lose all formatting/whitespace/comments in your syntax tree, so writing any changes back to the original sources requires doing a lot more extra work to preserve the original formatting, put back comments, etc. This is the entire point of lib2to3/fissix and LibCST, allowing large-scale CST manipulation while preserving all of those comments and formatting. We do recognize the limitations of lib2to3/fissix, though, so there have been some backburner plans to move Bowler onto LibCST, as well as building a PEG-based parser for LibCST specifically to enable support for 3.10 and future syntax/grammar changes. But of course, this is very difficult to give any ETA or target for release.
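The trade-off described above is easy to demonstrate: a parse/unparse round-trip through the stdlib `ast` module discards comments and formatting, which is exactly what lib2to3/fissix and LibCST are designed to preserve.

```python
# stdlib `ast` loses comments and formatting on a round-trip.
import ast

source = """\
x = 1  # an important comment
y = ( 2 )
"""
regenerated = ast.unparse(ast.parse(source))  # ast.unparse: Python 3.9+

assert "# an important comment" not in regenerated  # comment is gone
assert "x = 1" in regenerated                       # the code itself survives
assert "( 2 )" not in regenerated                   # formatting normalized
```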
Did you consider PyCQA/RedBaron (which is based upon PyCQA/baron, an AST implementation which preserves comments and whitespace)? https://redbaron.readthedocs.io/en/latest/
It was considered, but the initial goals of Bowler were to a) build on top of lib2to3 in an attempt to get broader upstream support for maintaining it as an "official" cst module, which fizzled out, and b) to use some of the more complex matching semantics that lib2to3 enabled, which baron and other cst alternatives don't really attempt to cover.
Things like "find any function that uses kwargs" or "find any class that defines a method named `bar`" can be easily expressed with lib2to3's matching grammar, and no other CST that I'm aware of (that isn't itself based on lib2to3) has equivalent functionality. This is something we wanted to add to LibCST, but haven't had the time to focus on given other priorities. Meanwhile, we used LibCST to write a safer alternative to isort: https://usort.readthedocs.io
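For the *finding* half of such queries, the stdlib `ast` module can express the same idea, though unlike lib2to3/LibCST matchers it loses formatting, so it works for locating code but not for rewriting it in place. A sketch for "find any class that defines a method named `bar`":

```python
# Find classes defining a method named `bar` with a plain ast visitor.
import ast

code = """
class A:
    def bar(self): ...

class B:
    def baz(self): ...
"""
found = [node.name
         for node in ast.walk(ast.parse(code))
         if isinstance(node, ast.ClassDef)
         and any(isinstance(item, ast.FunctionDef) and item.name == "bar"
                 for item in node.body)]
assert found == ["A"]
```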
Rog. I think CodeQL (GitHub acquired Semmle and QL in 2019) supports those types of queries; probably atop lib2to3 as well. https://codeql.github.com/docs/writing-codeql-queries/introd...
From https://news.ycombinator.com/item?id=24511280 :
> Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools […]
Emacs' org-mode gets citation support
FWIW, Jupyter-book handles Citations and bibliographies with sphinxcontrib-bibtex: https://jupyterbook.org/content/citations.html
Some notes about Zotero and Schema.org RDFa for publishing [CSL with citeproc] citations: references of Linked Data resources in a graph, with URIs all: https://wrdrd.github.io/docs/tools/index#zotero-and-schema-o...
Compared to trying to parse beautifully typeset bibliographies in PDFs built from LaTeX with a Computer Modern font, search engines can more easily index e.g. https://schema.org/ScholarlyArticle linked data as RDFa, Microdata, or JSON-LD.
Scholarly search engines: Google Scholar, Semantic Scholar, Meta.org,
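A minimal sketch of the point above: schema.org/ScholarlyArticle metadata expressed as JSON-LD, which a search engine can index directly instead of parsing typeset PDF bibliographies. (The field values here are made up for illustration.)

```python
# A schema.org/ScholarlyArticle as JSON-LD.
import json

article = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "An Example Article Title",
    "author": [{"@type": "Person", "name": "Mona Lisa"}],
    "datePublished": "2017-12-18",
    "citation": [{"@type": "ScholarlyArticle", "name": "A Cited Work"}],
}
doc = json.dumps(article, indent=2)
assert json.loads(doc)["@type"] == "ScholarlyArticle"
```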
NSA Kubernetes Hardening Guidance [pdf]
- Scan containers and Pods for vulnerabilities or misconfigurations.
- Run containers and Pods with the least privileges possible.
- Use network separation to control the amount of damage a compromise can cause.
- Use firewalls to limit unneeded network connectivity and encryption to protect confidentiality.
- Use strong authentication and authorization to limit user and administrator access as well as to limit the attack surface.
- Use log auditing so that administrators can monitor activity and be alerted to potential malicious activity.
- Periodically review all Kubernetes settings and use vulnerability scans to help ensure risks are appropriately accounted for and security patches are applied.
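The "least privileges" item above can be checked mechanically. A sketch against a simplified container spec: the securityContext field names are real Kubernetes fields, but the particular rule set is this example's choice, not the NSA document's checklist.

```python
# Check a (simplified) container spec against least-privilege settings.
def violations(container: dict) -> list:
    rules = {
        "runAsNonRoot": True,              # don't run as root
        "allowPrivilegeEscalation": False, # no setuid-style escalation
        "readOnlyRootFilesystem": True,    # immutable root filesystem
        "privileged": False,               # never a privileged container
    }
    sc = container.get("securityContext", {})
    return sorted(k for k, want in rules.items() if sc.get(k, None) != want)

hardened = {"securityContext": {
    "runAsNonRoot": True, "allowPrivilegeEscalation": False,
    "readOnlyRootFilesystem": True, "privileged": False}}
defaults = {}  # an empty spec fails every rule

assert violations(hardened) == []
assert violations(defaults) == sorted(
    ["allowPrivilegeEscalation", "privileged",
     "readOnlyRootFilesystem", "runAsNonRoot"])
```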
> and encryption to protect confidentiality
Probably the hardest part about this. Private networks with private domains. Who runs the private CA, updates DNS records, issues certs, revokes, keeps the keys secure, embeds the keychain in every container's cert stack, and enforces validation?
That is a shit-ton of stuff to set up (and potentially screw up) which will take a small team probably months to complete. How many teams are actually going to do this, versus just terminating at the load balancer and everything in the cluster running plaintext?
> That is a shit-ton of stuff to set up (and potentially screw up) which will take a small team probably months to complete.
Agree! This is why that "Kubernetes Hardening Guidance" is for NSA, not for startups.
Resource needs aside, keeping basic AppSec/InfoSec hygiene is a strong recommendation. Also there are tons of startups that are trying to provide solutions/services to solve that also. A lot of times, it's worth the money.
This guidance is provided by the NSA, not for the NSA.
From the doc:
>It includes hardening strategies to avoid common misconfigurations and guide system administrators and developers of National Security Systems on how to deploy Kubernetes...
Also:
> Purpose > NSA and CISA developed this document in furtherance of their respective cybersecurity missions, including their responsibilities to develop and issue cybersecurity specifications and mitigations. This information may be shared broadly to reach all appropriate stakeholders.
NSA has multiple mandates and many stakeholders.
Looks like there's actually a "summary of the key recommendations from each section" on page 2.
> Works cited:
> [1] Center for Internet Security, "Kubernetes," 2021. [Online]. Available: https://cisecurity.org/resources/?type=benchmark&search=kube... .
> [2] DISA, "Kubernetes STIG," 2021. [Online]. Available: https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_Kubernetes_V1R1_STIG.zip. [Accessed 8 July 2021]
> [3] The Linux Foundation, "Kubernetes Documentation," 2021. [Online]. Available: https://kubernetes.io/docs/home/ . [Accessed 8 July 2021].
> [4] The Linux Foundation, "11 Ways (Not) to Get Hacked," 18 07 2018. [Online]. Available: https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hac... . [Accessed 8 July 2021].
> [5] MITRE, "Unsecured Credentials: Cloud Instance Metadata API." MITRE ATT&CK, 2021. [Online]. Available: https://attack.mitre.org/techniques/T1552/005/. [Accessed 8 July 2021].
> [6] CISA, "Analysis Report (AR21-013A): Strengthening Security Configurations to Defend Against Attackers Targeting Cloud Services." Cybersecurity and Infrastructure Security Agency, 14 January 2021. [Online]. Available: https://us-cert.cisa.gov/ncas/analysis-reports/ar21-013a [Accessed 8 July 2021].
How can k8s and zero-trust cooccur?
> CISA encourages administrators and organizations to review NSA’s guidance on Embracing a Zero Trust Security Model to help secure sensitive data, systems, and services.
"Embracing a Zero Trust Security Model" (2021, as well) https://media.defense.gov/2021/Feb/25/2002588479/-1/-1/0/CSI...
In addition to "zero [trust]", I also looked for the term "SBOM". From p. 32/39:
> As updates are deployed, administrators should also keep up with removing any old components that are no longer needed from the environment. Using a managed Kubernetes service can help to automate upgrades and patches for Kubernetes, operating systems, and networking protocols. *However, administrators must still patch and upgrade their containerized applications.*
"Existing artifact vuln scanners, databases, and specs?" https://github.com/google/osv/issues/55
Hosting SQLite Databases on GitHub Pages
That's one lovely trick.
If I may suggest one thing... instead of range requests on a single huge file, how about splitting the file into 1-page fragments in separate files and fetching them individually? This buys you caching (e.g. on CDNs) and compression (which you could also perform ahead of time), both things that are somewhat tricky with a single giant file and range requests.
With the reduction in size you get from compression, you can also use larger pages almost for free, potentially further decreasing the number of roundtrips.
There's also a bunch of other things that could be tried later, like using a custom dictionary for compressing the individual pages.
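The fragment idea above amounts to slicing the database image into fixed-size page files that can be fetched and CDN-cached individually. A sketch (file names and page size are arbitrary choices here):

```python
# Split a database image into individually fetchable page fragments.
def split_pages(blob: bytes, page_size: int = 4096) -> dict:
    return {f"db/page-{i:06d}": blob[off:off + page_size]
            for i, off in enumerate(range(0, len(blob), page_size))}

fragments = split_pages(b"\x00" * 10000, page_size=4096)
assert len(fragments) == 3                       # ceil(10000 / 4096)
assert len(fragments["db/page-000002"]) == 1808  # the final partial page
```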
I think that's the core innovation here, smart HTTP block storage.
I wonder if there has been any research into optimizing all http range requests at the client level in a similar way. i.e. considering the history of requests on a particular url and doing the same predictive exponential requests, or grabbing the full file asynchronously at a certain point.
> Methods for remotely accessing/paging data in from a client when a complete download of the dataset is unnecessary:
> - Query e.g. parquet on e.g. GitHub with DuckDB: duckdb/test_parquet_remote.test https://github.com/duckdb/duckdb/blob/6c7c9805fdf1604039ebed...
> - Query sqlite on e.g. GitHub with SQLite: [Hosting SQLite databases on Github Pages - (or any static file hoster) - phiresky's blog](...)
>> The above query should do 10-20 GET requests, fetching a total of 130 - 270KiB, depending on if you ran the above demos as well. Note that it only has to do 20 requests and not 270 (as would be expected when fetching 270 KiB with 1 KiB at a time). That’s because I implemented a pre-fetching system that tries to detect access patterns through three separate virtual read heads and exponentially increases the request size for sequential reads. This means that index scans or table scans reading more than a few KiB of data will only cause a number of requests that is logarithmic in the total byte length of the scan. You can see the effect of this by looking at the “Access pattern” column in the page read log above.
> - bittorrent/sqltorrent https://github.com/bittorrent/sqltorrent
>> Sqltorrent is a custom VFS for sqlite which allows applications to query an sqlite database contained within a torrent. Queries can be processed immediately after the database has been opened, even though the database file is still being downloaded. Pieces of the file which are required to complete a query are prioritized so that queries complete reasonably quickly even if only a small fraction of the whole database has been downloaded.
>> […] Creating torrents: Sqltorrent currently only supports torrents containing a single sqlite database file. For efficiency the piece size of the torrent should be kept fairly small, around 32KB. It is also recommended to set the page size equal to the piece size when creating the sqlite database
Would BitTorrent be faster over HTTP/3 (UDP) or is that already a thing for web seeding?
> - https://web.dev/file-system-access/
> The File System Access API: simplifying access to local files: The File System Access API allows web apps to read or save changes directly to files and folders on the user’s device
Hadn't seen wilsonzlin/edgesearch, thx:
> Serverless full-text search with Cloudflare Workers, WebAssembly, and Roaring Bitmaps https://github.com/wilsonzlin/edgesearch
>> How it works: Edgesearch builds a reverse index by mapping terms to a compressed bit set (using Roaring Bitmaps) of IDs of documents containing the term, and creates a custom worker script and data to upload to Cloudflare Workers
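A rough sketch of that reverse index, with plain Python sets standing in for the compressed Roaring Bitmaps that edgesearch actually uses (the documents and function names are illustrative):

```python
# Sketch: build a term -> document-ID bit set reverse index, then answer
# an AND query as a set intersection. Roaring Bitmaps would compress
# these sets; plain sets show the same logic.
from collections import defaultdict

docs = {
    1: "serverless full text search",
    2: "full text search with workers",
    3: "roaring bitmaps compress sets",
}

index: dict[str, set[int]] = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(*terms: str) -> set[int]:
    """Return IDs of documents containing all the given terms."""
    result = index[terms[0]].copy()
    for t in terms[1:]:
        result &= index[t]
    return result
```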
> Would BitTorrent be faster over HTTP/3 (UDP) or is that already a thing for web seeding?
The BT protocol itself runs on both TCP and UDP, but it has preferred the UDP variant for many years already.
Thanks. There likely are relative advantages to HTTP/3 QUIC. Here's this from Wikipedia:
> Both HTTP/1.1 and HTTP/2 use TCP as their transport. HTTP/3 uses QUIC, a transport layer network protocol which uses user space congestion control over the User Datagram Protocol (UDP). The switch to QUIC aims to fix a major problem of HTTP/2 called "head-of-line blocking": because the parallel nature of HTTP/2's multiplexing is not visible to TCP's loss recovery mechanisms, a lost or reordered packet causes all active transactions to experience a stall regardless of whether that transaction was impacted by the lost packet. Because QUIC provides native multiplexing, lost packets only impact the streams where data has been lost.
And HTTP Pipelining / Multiplexing isn't specified by just UDP or QUIC:
> HTTP/1.1 specification requires servers to respond to pipelined requests correctly, sending back non-pipelined but valid responses even if server does not support HTTP pipelining. Despite this requirement, many legacy HTTP/1.1 servers do not support pipelining correctly, forcing most HTTP clients to not use HTTP pipelining in practice.
> The technique was superseded by multiplexing via HTTP/2,[2] which is supported by most modern browsers.[3]
> In HTTP/3, the multiplexing is accomplished through the new underlying QUIC transport protocol, which replaces TCP. This further reduces loading time, as there is no head-of-line blocking anymore https://en.wikipedia.org/wiki/HTTP_pipelining
Ask HN: Any good resources on how to be a great technical advisor to startups?
Bumping up https://news.ycombinator.com/item?id=27600539
## Codelabels: Component: title
### ENH,UBY: HN: linkify URIs in descriptions
## User Stories
Users {__, __, } can ___ in order to ___.
Given-When-Then
~ Who-What-Wow
~ {Marketing, Training, Support, Service} Curriculum Competencies
### Users can click on links in descriptions in order to review referenced off-site resources.
Costs/Benefits: Linkspam?
The URL from this {item,} description: https://news.ycombinator.com/item?id=27600539
Teaching other teachers how to teach CS better
git and HTML and Linked Data should be requisite: https://learngitbranching.js.org/
Pedagogy#Modern_pedagogy: https://en.wikipedia.org/wiki/Pedagogy#Modern_pedagogy
Evidence-based_education: https://en.wikipedia.org/wiki/Evidence-based_education
Computational_thinking#Characteristics: https://en.wikipedia.org/wiki/Computational_thinking#Charact... (Abstraction, Automation, Analysis)
Learning: https://en.wikipedia.org/wiki/Learning
Autodidacticism: https://en.wikipedia.org/wiki/Autodidacticism
Design of Experiments; Hypotheses, troubleshooting, debugging, automated testing, Formal Methods, actual Root Cause Analysis: https://en.wikipedia.org/wiki/Design_of_experiments
Critical Thinking; definitions, Logic and Rationality, Logical Reasoning: Deduction, Abduction and Induction: https://en.wikipedia.org/wiki/Critical_thinking#Logic_and_ra...
Doesn't this all derive from [Quantum] Information Theory? It's actually fascinating to start at Information Theory; who knows what that curriculum would look like without reinforcement and [3D] videos: https://en.wikipedia.org/wiki/Information_theory
Stone, James V. "Information theory: a tutorial introduction." (2015). https://scholar.google.com/scholar?q=%22Information+Theory:+...
It used to be that we had to start engines with a turn of a crank: that initial energy to overcome inertia was enough for the system to feed-forward without additional reinforcement. Effective CS instruction may motivate the unmotivated to care about learning the way folks who are receiving reinforcement do: intrinsically.
Ask HN: Best online speech / public speaking course?
Hi HN - Has anyone taken an online course to help them with public speaking, speech and voice skills that they’d highly recommend? Thanks!
"TED Talks: The Official TED Guide to Public Speaking" https://smile.amazon.com/TED-Talks-Official-Public-Speaking-...
TED Masterclass: https://masterclass.ted.com/
"Power Talk: Using Language to Build Authority and Influence" https://smile.amazon.com/Power-Talk-Language-Authority-Influ...
Re: Clean Language and Symbolic Modeling; listening to metaphors and asking clean questions may be a more effective way to facilitate change: https://westurner.github.io/hnlog/#comment-15471868
/? greatest speeches: https://m.youtube.com/results?sp=mAEA&search_query=Greatest+...
"Lend Me Your Ears: Great Speeches in History" by William Safire. https://a.co/8svyoUw
E.g. "The Prosperity Bible: The Greatest Writings of All Time on the Secrets to Wealth and Prosperity" (Napoleon Hill, PT Barnum, Dale Carnegie, Gibran, Benjamin Franklin; 5000+ pages). https://a.co/b8Ej6o7
Talking points: Peaceful coexistence, #GlobalGoals 1-17 (UN SDGs), "Limits to Growth: The 30-Year Update" by Donella H. Meadows. https://a.co/7MgO0bv
Google sunsets the APK format for new Android apps
I was just trying to explain this the other day. Not sure whether to be disappointed: is this a regression? No, bros, you may not just `repack it` and re-sign the package for me. That's not how it should work unless I trust their build server to sign for me; and I don't, and we shouldn't. I'll just CC this here from https://westurner.github.io/hnlog/#comment-27410978 :
```
> Unfortunately all packages aren't necessarily signed either; "Which package managers require packages to be cryptographically signed?" is similar to "Which DNS clients can operate DNS resolvers that require DNSSEC signatures on DNS records to validate against the distributed trust anchors?".
> FWIW, `delv pkg.mirror.server.org` is how you can check DNSSEC:
man systemd-resolved # nmcli
man delv
man dnssec-trust-anchors.d
delv pkg.mirror.server.org
> Sigstore is a free and open Linux Foundation service for asset signatures: https://sigstore.dev/what_is_sigstore/
> The TUF Overview explains some of the risks of asset signature systems; key compromise: there's one key for everything that we all share and can't log the revocation of in a CT (Certificate Transparency) log distributed like a DLT. https://theupdateframework.io/overview/
> Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency
> Yeah, there's a channel to secure there at that layer of the software supply chain as well.
> "PEP 480 -- Surviving a Compromise of PyPI: End-to-end signing of packages" (2014-) https://www.python.org/dev/peps/pep-0480/
>> Proposed is an extension to PEP 458 that adds support for end-to-end signing and the maximum security model. End-to-end signing allows both PyPI and developers to sign for the distributions that are downloaded by clients. The minimum security model proposed by PEP 458 supports continuous delivery of distributions (because they are signed by online keys), but that model does not protect distributions in the event that PyPI is compromised. In the minimum security model, attackers who have compromised the signing keys stored on PyPI Infrastructure may sign for malicious distributions. The maximum security model, described in this PEP, retains the benefits of PEP 458 (e.g., immediate availability of distributions that are uploaded to PyPI), but additionally ensures that end-users are not at risk of installing forged software if PyPI is compromised.
> One W3C Linked Data way to handle https://schema.org/SoftwareApplication ( https://codemeta.github.io/user-guide/ ) cryptographic signatures of a JSON-LD manifest with per-file and whole package hashes would be with e.g. W3C ld-signatures/ld-proofs and W3C DID (Decentralized Identifiers) or x.509 certs in a CT log.
```
FWIU, the Fuchsia team is building package signing on top of TUF.
A from-scratch tour of Bitcoin in Python
This reminds me of Ken Shirriff's 2014 "Bitcoins the Hard Way" blog post that also used Python to build a Bitcoin transaction from scratch: http://www.righto.com/2014/02/bitcoins-hard-way-using-raw-bi...
(The subtitle of the blog is "Computer history, restoring vintage computers, IC reverse engineering, and whatever" and it is full of fascinating articles, several of which have been featured here on HN)
> The 'dumbcoin' jupyter notebook is also a good reference: "Dumbcoin - An educational python implementation of a bitcoin-like blockchain" https://nbviewer.jupyter.org/github/julienr/ipynb_playground...
https://github.com/yjjnls/awesome-blockchain#implementation-... and https://github.com/openblockchains/awesome-blockchains#pytho... list a few more ~"blockchain from scratch" [in Python] examples.
... FWIU, Ethereum has the better Python story. There was a reference implementation of Ethereum in Python? https://ethereum.org/en/developers/docs/programming-language...
An Omega-3 that’s poison for cancer tumors
If you are interested in adding Omega-3 to your stack, make sure it is molecularly distilled fish oil. Otherwise there is the risk of excess lead and mercury.
You can also eat fish that are low on the food chain like sardines.
Fish don't synthesize Omega PUFAs, they eat algae (which unfortunately and inopportunely stains teeth)
From "Warning: Combination of Omega-3s in Popular Supplements May Blunt Heart Benefits" https://scitechdaily.com/warning-combination-of-omega-3s-in-... :
> Now, new research from the Intermountain Healthcare Heart Institute in Salt Lake City finds that higher EPA blood levels alone lowered the risk of major cardiac events and death in patients, while DHA blunted the cardiovascular benefits of EPA. Higher DHA levels at any level of EPA, worsened health outcomes.
> Results of the Intermountain study, which examined nearly 1,000 patients over a 10-year-period,
> “Based on these and other findings, we can still tell our patients to eat Omega-3 rich foods, but we should not be recommending them in pill form as supplements or even as combined (EPA + DHA) prescription products,” he said. “Our data adds further strength to the findings of the recent REDUCE-IT (2018) study that EPA-only prescription products reduce heart disease events.”
Now they're sayin'; so I go look for an EPA-only supplement, and TIL about re-esterified triglyceride and it says it's molecularly distilled anchovies in blister packages. Which early land mammals probably ate, so.
I found this comment too late... Already ordered a "kids" supplement that has a 2:1 ratio of DHA:EPA.
I do not think that you should be much concerned about this, because other studies show that more DHA than EPA is preferable when using other criteria.
So the conclusion is that nobody knows for sure if you should eat more DHA than EPA, to have a better brain, or less DHA than EPA, to have a better heart.
What is certain is that you need some minimum quantity of both DHA and EPA.
Especially for children, who do not yet have to worry about cardiovascular problems, a supplement with more DHA than EPA actually seems a good choice.
Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification
[Coq, VST, CompCert]
Formal methods: https://en.wikipedia.org/wiki/Formal_methods
Formal specification: https://en.wikipedia.org/wiki/Formal_specification
Implementation of formal specification: https://en.wikipedia.org/wiki/Anti-pattern#Software_engineer...
Formal verification: https://en.wikipedia.org/wiki/Formal_verification
From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :
> Which universities teach formal methods?
> - q=formal+verification https://www.class-central.com/search?q=formal+verification
> - q=formal+methods https://www.class-central.com/search?q=formal+methods
> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?
Can there still be side channel attacks in formally verified systems? Can e.g. TLA+ help with that at all?
Formal methods could be used to prevent side-channel attacks. However, preventing each type of attack requires specifying the property that must be proven to assure the attack is not possible. I am not very familiar with TLA+, but I think in general it is not guaranteed to prevent side-channel attacks, such as those based on timing, CPU state, etc.
Anatomy of a Linux DNS Lookup
This predates systemd's decision to get involved via systemd-resolved. So now it's got another step :)
Edit: Well, and also browsers doing DNS over https. It's pretty confusing these days to know which path your app took to resolve a name.
Which is super annoying when you actually want to run a DNS server and have to try and convince systemd-resolved to stay in its own lane.
Is there a good example of a Linux package that does this correctly?
I suspect the big part is putting "DNSStubListener=no" into the [Resolve] section of /etc/systemd/resolved.conf if you want to have something else listening on port 53.
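That `DNSStubListener=no` change is a two-line config fragment (assuming the stock file location):

```
# /etc/systemd/resolved.conf
[Resolve]
DNSStubListener=no
```

Then `systemctl restart systemd-resolved` applies it and frees port 53 for another listener.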
Or you can just mask the service (better than disabling):
systemctl mask systemd-resolved.service
Other services to consider masking if you are not using systemd for network configuration, time sync, etc.:
systemctl mask systemd-hostnamed.service
systemctl mask systemd-timesyncd.service
systemctl mask systemd-networkd.service
systemctl mask systemd-homed.service
Yeah, but if you regress to 'legacy DNS' by removing systemd-resolved then there's no good way to do per-interface DNS (~client-split DNS), or (optionally) validate DNSSEC, or do DoH/DoT; and then nothing respawns and logs consistently-timestamped process events of substitute network service processes.
FWIU, per-user DNS configs are still elusive. Per-user DNS would make it easier to use family-safe DNS (that redirects to family-safe e.g. SafeSearch domains) by default; some forums are essential for system administration.
Since nothing in the real world is or ever will be DNSSEC signed anyways, it's not that much of a loss. Meanwhile, the most important DoH user --- your browser --- has DoH baked in and doesn't need systemd-resolved's help.
Your system may also depend upon one or more package managers that do all depend upon DNS (and hopefully e.g. DNSSEC and DoH/DoT)
Package managers don't rely on DNSSEC for package security (it would be deeply problematic if they did). But you can also just go look and see that, for instance, Ubuntu.com isn't signed, nor are most of the Pacman mirrors for Arch, nor is alpinelinux.org, &c. DNSSEC is simply not a factor in package management security.
People have weird ideas about how this stuff works.
Unfortunately all packages aren't necessarily signed either; "Which package managers require packages to be cryptographically signed?" is similar to "Which DNS clients can operate DNS resolvers that require DNSSEC signatures on DNS records to validate against the distributed trust anchors?".
FWIW, `delv pkg.mirror.server.org` is how you can check DNSSEC:
man systemd-resolved # nmcli
man delv
man dnssec-trust-anchors.d
delv pkg.mirror.server.org
Sigstore is a free and open Linux Foundation service for asset signatures:
https://sigstore.dev/what_is_sigstore/
The TUF Overview explains some of the risks of asset signature systems; key compromise: there's one key for everything that we all share and can't log the revocation of in a CT (Certificate Transparency) log distributed like a DLT. https://theupdateframework.io/overview/
Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency
Yeah, there's a channel to secure there at that layer of the software supply chain as well.
"PEP 480 -- Surviving a Compromise of PyPI: End-to-end signing of packages" (2014-) https://www.python.org/dev/peps/pep-0480/
> Proposed is an extension to PEP 458 that adds support for end-to-end signing and the maximum security model. End-to-end signing allows both PyPI and developers to sign for the distributions that are downloaded by clients. The minimum security model proposed by PEP 458 supports continuous delivery of distributions (because they are signed by online keys), but that model does not protect distributions in the event that PyPI is compromised. In the minimum security model, attackers who have compromised the signing keys stored on PyPI Infrastructure may sign for malicious distributions. The maximum security model, described in this PEP, retains the benefits of PEP 458 (e.g., immediate availability of distributions that are uploaded to PyPI), but additionally ensures that end-users are not at risk of installing forged software if PyPI is compromised.
One W3C Linked Data way to handle https://schema.org/SoftwareApplication ( https://codemeta.github.io/user-guide/ ) cryptographic signatures of a JSON-LD manifest with per-file and whole package hashes would be with e.g. W3C ld-signatures/ld-proofs and W3C DID (Decentralized Identifiers) or x.509 certs in a CT log.
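A minimal sketch of the hash layer of such a manifest in Python; the JSON-LD context and the actual ld-proofs/DID signature step are omitted, and the field names here are illustrative assumptions, not from any spec:

```python
# Sketch: per-file SHA-256 hashes plus a whole-package hash over the
# canonicalized per-file map. Signing the resulting manifest (ld-proofs,
# x.509 in a CT log, etc.) is the step this sketch leaves out.
import hashlib
import json

def manifest(files: dict[str, bytes]) -> dict:
    """Build a hash manifest for a mapping of file name -> file bytes."""
    per_file = {name: hashlib.sha256(data).hexdigest()
                for name, data in sorted(files.items())}
    whole = hashlib.sha256(
        json.dumps(per_file, sort_keys=True).encode()).hexdigest()
    return {"@type": "SoftwareApplication",
            "fileHashes": per_file,
            "packageHash": whole}
```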
JupyterLite – WASM-powered Jupyter running in the browser
Although Pyolite has miserable performance (a 20 MB download), the overall project direction is correct.
I said this already 10 years ago: we don't need more cloud computing; we need to empower users' end devices again. Jupyter is typically operated on powerful notebooks, not on mobile devices.
Isn't that just Java all over again, but this time with JavaScript?
There are crucial differences between Java applets and JS.
- Applets tried to render their own GUI, Wasm doesn't and defers to the browser.
- Applets needed a big, slow-to-start, and resource-hungry VM. Wasm runs in the same thread your JS is also running in, it's light, and it loads faster than JS
- Java and flash were plugins, which needed to be installed and kept up to date separately. Wasm is baked into your browser's JS engine
- Wasm code is very fast and can achieve near native execution speeds. It can make use of advanced optimisations. SIMD has shipped in Chrome, and will soon in Firefox
- The wasm spec is very, very good, and really quite small. This means that implementing it is comparatively cheap, and this should make it easy to see it implemented by different vendors.
- Java was just Java. Wasm can serve as a platform for any language. See my earlier point about the spec
So it's apples and oranges. The need to have something besides JS hasn't gone away, so their use cases might be similar. The two technologies couldn't be more distinct, though.
You must view the browser with JS and WASM as a unit.
The browser renders its own GUI too; it's not OS-native.
The browser uses lots of resources too.
The browser is kind of a plugin to the OS and must be updated separately.
Java nowadays is pretty fast too.
Java VM serves a platform for multiple languages like Scala, Kotlin, Clojure.
Let's face it, the browser is the new JVM, and as soon as it gets the same permissions as the JVM to access the file system and such, we get the same problems.
From https://news.ycombinator.com/item?id=24052393 re: Starboard:
> https://developer.mozilla.org/en-US/docs/Web/Security/Subres... : "Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match."
> There's a new Native Filesystem API: "The new Native File System API allows web apps to read or save changes directly to files and folders on the user's device." https://web.dev/native-file-system/
> We'll need a way to grant specific URLs specific, limited amounts of storage.
[...]
> https://github.com/deathbeds/jyve/issues/46 :
> Would [Micromamba] and conda-forge build a WASM architecture target?
Accenture, GitHub, Microsoft and ThoughtWorks Launch the GSF
> With data centers around the world accounting for 1% of global electricity demand, and projections to consume 3-8% in the next decade, it’s imperative we address this as an industry.
> To help in that endeavor, we’re excited to announce the formation of The Green Software Foundation – a nonprofit founded by Accenture, GitHub, Microsoft and ThoughtWorks established with the Linux Foundation and the Joint Development Foundation Projects LLC to build a trusted ecosystem of people, standards, tooling and leading practices for building green software. The Green Software Foundation was born out of a mutual desire and need to collaborate across the software industry. Organizations with a shared commitment to sustainability and an interest in green software development principles are encouraged to join the foundation to help grow the field of green software engineering, contribute to standards for the industry, and work together to reduce the carbon emissions of software. The foundation aims to help the software industry contribute to the information and communications technology sector’s broader targets for reducing greenhouse gas emissions by 45% by 2030, in line with the Paris Climate Agreement.
Here's to now having hand-optimized, efficient EC, SHA-256, SHA-3, and Scrypt routines due to incentives. See also The Crypto Climate Accord, which is also inspired by the Paris Agreement: https://cryptoclimate.org/
... "Thermodynamics of Computation Wiki" https://news.ycombinator.com/item?id=18146854
Is 100% offset by PPAs always 200% Green?
From "Ask HN: What jobs can a software engineer take to tackle climate change?" https://news.ycombinator.com/item?id=20015801 :
> [ ] We should create some sort of a badge and structured data (JSONLD, RDFa, Microdata) for site headers and/or footers that lets consumers know that we're working toward '200% green' so that we can vote with our money.
>inspired by the Paris Agreement
So we’re just going to let China do the polluting? Maybe you didn’t know but the Paris Agreement exempted China from all standards.
No, under the Paris Agreement, countries set voluntary targets for themselves and regularly reassess.
And when China says it’s doing nothing everybody cheered.
TBF, the glut of [Chinese,] solar panels has significantly helped lower the cost of renewables; which is in everyone's interest.
They’re cheap because the secret ingredient is slavery and Uighur blood.
Rocky Linux releases its first release candidate
One of our clients pushed us toward AlmaLinux instead of CentOS 7. At the time, we vaguely suggested Rocky Linux, but noted we were kind of still waiting for the future (hence our suggestion of going with CentOS 7 and migrating later to the "true successor" of CentOS).
What is the HN community's perception of this? Is AlmaLinux a good choice? Do you believe Rocky Linux will succeed?
I’d say “neither”. CentOS would have failed had Red Hat not taken over. My guess is that supporting, for free, all the companies who do not want to pay Red Hat is going to push developers away pretty quickly.
Even if I’m wrong, I’d still want to see at least two or three solid releases before using either in production.
Just use Red Hat or switch to Ubuntu. Then you can re-evaluate in five years’ time.
> I’d say “neither”. CentOS would have failed had Red Hat not taken over.
While it's possible that CentOS would have failed (I'm not sure slow initial releases actually indicate that), it also wasn't the only RHEL clone in popular use. Scientific Linux (developed by Fermilab) was also in fairly wide use. I doubt Red Hat would have made the change to CentOS that spurred all this if Scientific Linux hadn't decided in 2019 not to continue and to just use CentOS instead, because instead of a "crap, we have to start a new distro to provide what CentOS did" movement there would have been a much quicker and easier mass exodus to Scientific Linux.
The Scientific Linux team decided to not make a Scientific Linux 8, and instead switch over to CentOS 8[0]. Note this was announced prior to the announcement about CentOS 8 going away.
CERN et al are still deciding what to do[1].
[0]: https://listserv.fnal.gov/scripts/wa.exe?A2=SCIENTIFIC-LINUX...
[1]: https://linux.web.cern.ch/#update-on-centos-linux-strategy
USB-C is about to go from 100W to 240W, enough to power beefier laptops
What are the costs to add a USB PD module to an electronic device? https://hackaday.com/2021/04/21/easy-usb-c-power-for-all-you...
- [ ] Create an industry standard interface for charging and using [power tool,] battery packs; and adapters
Half-Double: New hammering technique for DRAM Rowhammer bug
From "Rowhammer for qubits: is it possible?" https://amp.reddit.com/r/quantum/comments/7osud4/rowhammer_f... :
> Sometimes bits just flip due to "cosmic rays"; or, logically, also due to e.g. neutron beams and magnetic fields.
> With rowhammer, there are read/write (?) access patterns which cause predictable-enough information "leakage" to be useful for data exfiltration and privilege escalation.
> With the objective of modeling qubit interactions using quantum-mechanical properties of fields of electrons in e.g. DRAM: is there a way to use DRAM electron "soft errors" to model quantum interactions; to build a quantum computer from what we currently see as errors in DRAM?
> If not with current DRAM, could one apply a magnetic field to DRAM in order to exploit quantum properties of electrons moving in a magnetic field?
https://en.wikipedia.org/wiki/DRAM
https://en.wikipedia.org/wiki/Row_hammer
https://en.wikipedia.org/wiki/Soft_error
https://en.wikipedia.org/wiki/Crosstalk
> [...] are there DRAM read/write patterns which cause errors due to interference which approximate quantum logic gates? Probably not, but maybe; especially with an applied magnetic field (which then isn't the DRAM sitting on our desks, it's then DRAM + a constant or variable field).
> I suppose to test this longshot theory, one would need to fuzz low-level RAM loads and search for outputs that look like quantum gate outputs. Or, monitor normal workloads which result in RAM faults which approximate quantum logic gate outputs and train a network to recognize the features.
> I am reminded of a recent approach to in-RAM computing that's not memristors.
> Soft errors caused by cosmic rays are obviously more frequent at higher altitudes (and outside of the Van Allen radiation belt).
Thought I'd ask this here as well.
Quantum tunneling was the perceived barrier at like DDR5 and higher densities FWIU? Barring new non-electron-based tech, how can we prevent adjacent electrons from just flipping at that gate grid gap size?
Other Quantum-on-Silicon approaches have coherence issues, too
Setting up a Raspberry Pi with 2 Network Interfaces as a simple router
Old but relevant: I have used an espressobin as a router for years, and the performance I am getting is similar to much more expensive semi-professional routers. https://blog.tjll.net/building-my-perfect-router/
For those who wish to go through less trouble to get similar performance/security, there are a number of Single Board Computers (SBCs) with OpenWrt support[1], sometimes with official support like those from FriendlyElec[2].
[1]https://openwrt.org/toh/views/toh_single-board-computers
[2]https://www.friendlyarm.com/index.php?route=product/category...
I'm constantly looking for a well-priced SBC with cellular connectivity - that chart shows only 2/3 with modems, and all appear to be via external modules.
Anyone know of any?
> This page shows devices which have a LTE modem built in and are supported by OpenWrt.
https://openwrt.org/toh/views/toh_lte_modem_supported
It looks like this table is neither current nor complete though. And there's a different table of OpenWRT compatible devices that have a battery as well.
> [The Amarok (GL-X1200) Industrial IoT Gateway has] 2x SIM card slots for 2x 4G LTE modems (probably miniPCI-E so maybe upgradeable to 5G later), external antenna connectors for the LTE modems, MicroSD, #OpenWRT: https://store.gl-inet.com/collections/4g-smart-router/produc...
The Turris Omnia also has 4G LTE SIM card support (and LXC in their OpenWRT build). https://openwrt.org/toh/turris/turris_omnia
There's also a [Dockerized] x86 build of OpenWRT that probably also supports Mini PCI-E modules for 4G LTE, LoRa, and 5G. Route metrics determine which [gateway] route is tried first.
From "How much total throughput can your wi-fi router really provide?" https://news.ycombinator.com/item?id=26596395 :
> In 2021, most routers - even with OpenWRT and hardware-offloading - cannot actually push 1 Gigabit over wired Ethernet, though the port spec does say 1000 Mbps
What to do about GPU packages on PyPI?
“Our current CDN “costs” are ~$1.5M/month and not getting smaller. This is generously supported by our CDN provider but is a liability for PyPI’s long-term existence.”
Wow
Bear in mind this isn't just end-users installing on their machines, it also includes continuous integration scripts that run quite frequently.
[Huge GPU] packages can be cached locally: persist ~/.cache/pip between builds with e.g. Docker, run a PyPI caching proxy,
"[Discussions on Python.org] [Packaging] Draft PEP: PyPI cost solutions: CI, mirrors, containers, and caching to scale" https://discuss.python.org/t/draft-pep-pypi-cost-solutions-c...
> Continuous Integration automated build and testing services can help reduce the costs of hosting PyPI by running local mirrors and advising clients in regards to how to efficiently re-build software hundreds or thousands of times a month without re-downloading everything from PyPI every time.
[...]
> Request from and advisory for CI Services and CI Implementors:
> Dear CI Service,
> - Please consider running local package mirrors and enabling use of local package mirrors by default for clients’ CI builds.
> - Please advise clients regarding more efficient containerized software build and test strategies.
> Running local package mirrors will save PyPI (the Python Package Index, a service maintained by PyPA, a group within the non-profit Python Software Foundation) generously donated resources. (At present (March 2020), PyPI costs ~ $800,000 USD a month to operate; even with generously donated resources).
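The client side of the mirror advice can be a one-line pip config; a minimal sketch, assuming a local caching index (the mirror hostname is a placeholder for whatever devpi/bandersnatch/proxy instance the CI runs):

```ini
# /etc/pip.conf or ~/.config/pip/pip.conf — route installs through a local
# caching mirror so repeated CI builds don't re-download from PyPI.
[global]
index-url = https://pypi-mirror.internal/simple/
```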
Looks like the current figure is significantly higher than $800K/mo for science.
How to persist ~/.cache/pip between builds with e.g. Docker in order to minimize unnecessary GPU package re-downloads:
RUN --mount=type=cache,target=/root/.cache/pip
RUN --mount=type=cache,target=/home/appuser/.cache/pip
Wow, that RUN trick is exactly what I've been looking for! I've spent hours and hours in Docker documentation and hadn't seen that functionality.
Looks like it might be buildkit-specific?
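FWIU, yes: `--mount=type=cache` requires BuildKit (`DOCKER_BUILDKIT=1`, or a recent Docker where it's the default). A minimal sketch of a Dockerfile using it (image tag and requirements file are placeholder assumptions):

```dockerfile
# syntax=docker/dockerfile:1
# BuildKit cache mount: the pip cache persists on the build host across builds,
# so GPU-sized wheels are downloaded once instead of on every rebuild.
FROM python:3.11-slim
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```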
Markdown Notes VS Code extension: Navigate notes with [[wiki-links]]
> Syntax highlighting for #tags.
What's the best way to search for #tags with VS Code? Are #tags indexed into an e.g. ctags file within a project or a directory?
> @bibtex-citations: Use pandoc-style citations in your notes (eg @author_title_year) to get syntax highlighting, autocompletion and go to definition, if you setup a global BibTeX file with your references.
Open VS Code search across files (press Ctrl+Shift+F on Windows) and type the tag that you are searching for: #tag
Simple as that https://code.visualstudio.com/docs/editor/codebasics#_search...
Ask HN: Choosing a language to learn for the heck of it
I'm a technical manager, which means I do a lot of administrative stuff and a little coding. The coding has become a nice distraction when I need to take a break.
For "real work" I write mostly Python, a lot of SQL, a little bit of Go, and some shell scripting to glue it together. I'd like to learn something I have no need of for work. If it becomes useful later, that is OK, but not a goal. The goal is in creating something just for fun. That something is undefined, so general purpose languages are the population.
I have become curious lately about Nim, Crystal, and Zig. Small, modern, high-performance languages. Curiosity comes from the cases when they are mentioned here, sometimes for similar reasons to those I list above.
Nim is on top of the list: Sort of Python like, supported on Windows (I use Win/Mac/Linux), appears to have libraries for the things I do: Process text for insights, play projects would use interesting data instead of business data.
Crystal does not support Windows (yet), but appears to be closer to Ruby. Its performance may be a bit better.
Zig came on my radar recently, I know less about it, compared to the little I know of the others.
Suggestions on choosing one as a hobby language?
> Suggestions on choosing one as a hobby language?
IDK how much of a hobby it'd remain, but: Rust compiles to WASM, C++ now has auto and coroutines (and real live memory management)
"Ask HN: Is it worth it to learn C in 2020?" https://news.ycombinator.com/item?id=21878664
Show HN: Django SQL Dashboard
This looks great! In a similar vein - does anyone know of a project that will allow for a Django shell in the browser?
I know Jupyter exists - but a solution like this with the permissions would be valuable.
This launches the web-based Werkzeug debugger on Exception:
pip install django-extensions
python manage.py runserver_plus
https://django-extensions.readthedocs.io/en/latest/runserver...
This should run IPython Notebook with database models already imported:
python manage.py shell_plus --notebook
But writing fixtures, tests and (celery / dask-labextension) tasks is probably the better way to do things. Django-rest-assured is one way to get a tested REST API with DRF and e.g. factory_boy for generating test data.
Interactive IPA Chart
Is there a [Linked Data] resource with the information in this interactive IPA chart (which is from Wikipedia FWICS) in addition to the following?:
- phoneme, ns:"US English letter combinations", []
- phoneme, ns:"schema.org/CreativeWorks which feature said phoneme", []
AFAIU, WordNet RDF doesn't have links to any IPA RDFS/OWL vocabulary/ontology yet.
Google Dataset Search
Information on how to annotate datasets: https://developers.google.com/search/docs/data-types/dataset
> We can understand structured data in Web pages about datasets, using either schema.org Dataset markup, or equivalent structures represented in W3C's Data Catalog Vocabulary (DCAT) format. We also are exploring experimental support for structured data based on W3C CSVW, and expect to evolve and adapt our approach as best practices for dataset description emerge. For more information about our approach to dataset discovery, see Making it easier to discover datasets.
For more info on those:
- W3C's Data Catalog Vocabulary: https://www.w3.org/TR/vocab-dcat-3/
- Schema.org dataset: https://schema.org/Dataset
- CSVW Namespace Vocabulary Terms: https://www.w3.org/ns/csvw
- Generating RDF from Tabular Data on the Web (examples on how to use CSVW): https://www.w3.org/TR/csv2rdf/
Use cases for such [LD: Linked Data] metadata:
1. #StructuredPremises:
> (How do I indicate that this is a https://schema.org/ScholarlyArticle predicated upon premises including this Dataset and these logical propositions?)
2. #LinkedMetaAnalyses; #LinkedResearch "#StudyGraph"
3. [CSVW (Tabular Data Model),] schema.org/Dataset(s) with per column (per-feature) physical quantity and unit URIs with e.g. QUDT and/or https://schema.org/StructuredValue metadata for maximum data reusability.
4. JupyterLab notebooks:
4a. JupyterLab Metadata Service extension: https://github.com/jupyterlab/jupyterlab-metadata-service :
> - displays linked data about the resources you are interacting with in JupyterLab.
> - enables other extensions to register as linked data providers to expose JSON LD about an entity given the entity's URL.
> - exposes linked data to the user as a Linked Data viewer in the Data Browser pane.
4b. JupyterLab Data Explorer: https://github.com/jupyterlab/jupyterlab-data-explorer :
> - Data changing on you? Use RxJS observables to represent data over time.
> - Have a new way to look at your data? Create React or lumino components to view a certain type.
> - Built-in data explorer UI to find and use available datasets.
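For use case 3, a hedged sketch of per-variable unit metadata as schema.org JSON-LD (the dataset name, column name, and contentUrl are illustrative; the unit URI is from the QUDT units vocabulary):

```json
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Example sensor readings",
  "variableMeasured": {
    "@type": "PropertyValue",
    "name": "air_temperature",
    "unitCode": "http://qudt.org/vocab/unit/DEG_C"
  },
  "distribution": {
    "@type": "DataDownload",
    "encodingFormat": "text/csv",
    "contentUrl": "https://example.org/data/readings.csv"
  }
}
```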
Ask HN: Cap Table Service Recommendations
Recent founders, do you have any recommendations for services for managing a cap table? Or do you do it yourself? Any suggestions for how to choose?
Here are the "409a valuation" reviews on FounderKit: https://founderkit.com/legal/409a-valuation/reviews
Hosting SQLite databases on GitHub Pages or any static file hoster
The innovation here is getting sql.js to use http and range requests for file access rather than all being in memory.
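The mapping behind that trick is simple: a SQLite file is an array of fixed-size pages (1-indexed), so a VFS read of page N becomes an HTTP Range request for that page's bytes. A sketch of the arithmetic (the function name is mine; 4096 is SQLite's common default page size):

```python
def page_range(page_number: int, page_size: int = 4096) -> str:
    """Return the HTTP Range header value covering one SQLite page.

    SQLite pages are 1-indexed and fixed-size; HTTP Range byte offsets
    are inclusive on both ends.
    """
    start = (page_number - 1) * page_size
    end = start + page_size - 1
    return f"bytes={start}-{end}"

print(page_range(1))   # bytes=0-4095  (page 1 also contains the database header)
print(page_range(3))   # bytes=8192-12287
```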
I wonder when people using next.js will start using this for faster builds for larger static sites?
See also https://github.com/bittorrent/sqltorrent, same trick but using BitTorrent
Yeah, that was one of the inspirations for this. That one does not work in the browser though, would be a good project to do that same thing but with sqlite in wasm and integrated with WebTorrent instead of a native torrent program.
I actually did also implement a similar thing fetching data on demand from WebTorrent (and in turn helping to host the data yourself by being on the website): https://phiresky.github.io/tv-show-ratings/ That uses a protobufs split into a hashmap instead of SQLite though.
This looks pretty efficient. Some chains can be interacted with without e.g. web3.js? LevelDB indexes aren't SQLite.
Datasette is one application for views of read-only SQLite dbs with out-of-band replication. https://github.com/simonw/datasette
There are a bunch of *-to-sqlite utilities in the corresponding Dogsheep project.
Arrow JS for 'paged' browser client access to DuckDB might be possible and faster but without full SQLite SQL compatibility and the SQLite test suite. https://arrow.apache.org/docs/js/
> Direct Parquet & CSV querying
In-browser notebooks like Pyodide and Jyve have local filesystem access with the new "Filesystem Access API", but downloading/copying all data to the browser for every run of a browser-hosted notebook may not be necessary. https://web.dev/file-system-access/
DuckDB can directly & selectively query Parquet files over HTTP/S3 as well. See here for examples: https://github.com/duckdb/duckdb/blob/6c7c9805fdf1604039ebed...
Wasm3 compiles itself (using LLVM/Clang compiled to WASM)
Self-hosting (compilers) https://en.wikipedia.org/wiki/Self-hosting_(compilers) :
> In computer programming, self-hosting is the use of a program as part of the toolchain or operating system that produces new versions of that same program—for example, a compiler that can compile its own source code
How big of a milestone is self-hosting for a language?
Semgrep: Semantic grep for code
Is there a more complete example of how to call semgrep from pre-commit (which gets called before every git commit) in order to prevent e.g. Python print calls (print(), print \\n(), etc.) from being checked in?
https://semgrep.dev/docs/extensions/ describes how to do pre-commit.
Nvm, here's semgrep's own .pre-commit-config.yml for semgrep itself: https://github.com/returntocorp/semgrep/blob/develop/.pre-co...
I've never used the `pre-commit` framework, but it's really simple to wire up arbitrary shell scripts; check out the
`.git/hooks` directory in your repo for samples, e.g. `.git/hooks/pre-commit.sample`.
You can run any old shell script there, without having to install a python tool.
Yeah, but that git hook will only be installed on that one repo on that one machine. And they may have no or a different version of bash installed (on e.g. macOS or Windows). IMHO, POSIX-compatible portable shell scripts are more trouble than portable Python scripts.
Pre-commit requires Python and pre-commit to be installed (and then it downloads every hook function).
This fetches the latest version of every hook defined in the .pre-commit-config.yml:
pre-commit autoupdate
https://pre-commit.com/#pre-commit-autoupdate
A person could easily `ln -s repo/.hooks/hook*.sh repo/.git/hooks/` after every git clone.
Out of curiosity, Is there value in doing this over (say) running a GitHub Action post commit and failing the build if it finds something nasty?
If you can catch it before the commit is even made then why do/wait for a build?
Fair enough. Guess IDE plugins work even better for that
IDE plugins are not at all consistent from one IDE to another. Pre-commit is great for teams with different IDEs because all everyone needs to do is:
[pip,] install pre-commit
pre-commit install
# git commit
# pre-commit run --all-files
# pre-commit autoupdate
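The hooks themselves are declared in a config file committed to the repo; a minimal sketch (the `rev` tag is a placeholder — pin a real release and let `pre-commit autoupdate` bump it):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```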
https://pre-commit.com/
Ask HN: What to use instead of Bash / Sh for scripting?
I'm at the point where I feel a certain fatigue writing Bash scripts, but I am just not sure of what the alternative is for medium sized (say, ~150-500 LOC) scripts.
The common refrain of "use Python" hasn't really worked fantastically: I don't know what version of Python I'm going to have on the system, installing dependencies is not fun, shelling out when needed is not pleasant, and the size of program always seemingly doubles.
I'm willing to accept something that's not on the system as long as it's one smallish binary that's available in multiple architectures. Right now, I've settled on (ab)using jq, using it whenever tasks get too complex, but I'm wondering if anyone else has found a better way that should also hopefully not be completely a black box to my colleagues?
A configuration management system may have you write e.g. YAML with Jinja2 so that you don't reinvent the idempotent wheel.
It's really easy to write dangerous shell scripts ("${@}" vs ${@} for example) and also easy to write dangerous Python scripts (cmd="{}; {}").
Sarge is one way to use subprocess in Python. https://sarge.readthedocs.io/en/latest/
If you're doing installation and configuration, the most team-maintainable thing is to avoid custom code and work with a configuration management system test runner.
When you think "A shell script will be fine, all I have to do is [...]" and then you realize that you need a portable POSIX shell script, and that to be merged it must have actual automated tests of things that are supposed to run as root - now in a fresh VM/container for testing - and manual verification of `set -xev` output isn't an automated assertion.
> avoid custom code and work with a configuration management system test runner
ansible-molecule is a test runner for Ansible playbooks that can create VMs or containers on local or remote resources.
You can definitely just call shell scripts from Ansible, but the (parallel) script output is only logged after the script returns a return code, unless you pipe the script output somewhere and tail that.
> manual verification of `set -xev` output isn't an automated assertion.
From "Bash Error Handling" https://news.ycombinator.com/item?id=24745833 : you can display the line number in `set -x` output by setting $PS4:
export PS4='+(${BASH_SOURCE}:${LINENO}) '
set -x
But that's no substitute for automated tests and a test runner that produces e.g. TAP output from test runner results: http://testanything.org/producers.html#shell
Estonian Electronic Identity Card and Its Security Challenges [pdf]
Proud e-resident here :)
US citizen here trying to champion such a system in the US. Would you be willing to share pros and cons from your experience using the system day to day?
Are you working with a specific org or initiative? I'm also an American interested in this (and an e-resident who formerly worked for the Estonian government).
Mostly activist citizen efforts from the outside, as legislation is going to be required to appropriate funding and direction from Congress, and the legislators I interface with are busy with arguably more pressing work (unfortunate but entirely understandable, such are the times).
I intend to apply at the USDS for the Login.gov team in some capacity to help on the tech side if the necessary legislation can be put in place to support such an initiative. Their system already supports the DOD CAC (common access card), which is a short walk away from a citizen digital ID card (would be a different org and PKI root to administer and govern citizen cards, to grossly simplify).
Login.gov recently expanded to support city and local gov IAM needs (when they have ties to federal programs) [1] [2], so there is roadmap momentum and executive branch will. "Digital identity is a big deal. [3]" They're already serving 30 million users, and 500k DAUs, really just a matter of scaling up.
[1] https://www.gsa.gov/blog/2021/02/18/logingov-to-provide-auth...
[2] https://www.govloop.com/login-gov-expands-use-to-cities-stat...
I get the spirit and am familiar with the CAC and general benefits, and using that for all .gov stuff is attractive (taxes, bills, FAFSA, etc.)
I think you’re glossing over two major implementation factors that need to be part of the discussion from get-go:
* how much CAC use is dependent on fairly specific govt tech infra to be widely deployed, and how will that work for everyone (ever tried to setup a CAC on a civ computer)
* key control, either extremely decentralized like the iPhone (lock yourself out of your passport?), or extremely centralized and the newest honeypot OPM holds (root cert for all e-citizen PKI). US currently doesn’t have the internal cybersec chops to run that at all (closest equivalent is CISA).
You're right to point this out, but my counterargument is that these are solvable pain points for such an implementation (either done today in competent zero trust security architectures in progressive orgs or at nation-state levels such as Estonia). You won't need a CAC reader on computers, for the most part, if you have mobile apps that can perform the identity proofing (as is already done with examples like Apple's biometrics systems, FIDO2/WebAuthn, Apple Pay, etc). You'd still have the card for interfacing at endpoints (banks, postal service, IRS/SSA offices, other trust anchors and government services endpoints). I already log in to my US CBP Global Entry account with Login.gov and 2FA, why can I not today use the same IAM system to log in to my Social Security account? Or my IRS tax account? Or to attest to my citizenship or other attributes that I'd normally need a certified document for (yuk!).
I'm not arguing for such a system without robust support and reasonable downgrades for failure scenarios (identity reproofing if you lose your digital ID, for example). I'm arguing for, admittedly challenging, digital ID modernization without disenfranchisement. I genuinely appreciate you pointing out the challenges, as they must be addressed.
US DHS CISA is absolutely a resource that needs to be leaned on heavily to implement what I describe, and to ensure a strong security posture throughout the federal government's infrastructure.
FWIU, DHS has funded [1] development of e.g W3C DID Decentralized Identifiers [2] and W3C Verifiable Credentials [3]:
[1] https://www.google.com/search?q=site%3Aw3.org+%22funded+by+t...
[2] https://www.w3.org/TR/did-core/
[3] https://www.w3.org/TR/vc-data-model/
Additional notes regarding credentials (certificates, badges, degrees, honorarial degrees, then-evaluated competencies) and capabilities models: https://news.ycombinator.com/item?id=19813340
westurner/blockchain-credential-resources.md: https://gist.github.com/westurner/4345987bb29fca700f52163c33...
Value storage and transmission networks have developed standards and implementations for identity, authentication, and authorization. ILP (Interledger Protocol) RFC 15 specifies "ILP addresses" for [crypto] ledger account IDs: https://interledger.org/rfcs/0015-ilp-addresses/
From "Verifiable Credentials Use Cases" https://w3c.github.io/vc-use-cases/ :
> A verifiable claim is a qualification, achievement, quality, or piece of information about an entity's background such as a name, government ID, payment provider, home address, or university degree. Such a claim describes a quality or qualities, property or properties of an entity which establish its existence and uniqueness. The use cases outlined here are provided in order to make progress toward possible future standardization and interoperability of both low- and high-stakes claims with the goals of storing, transmitting, and receiving digitally verifiable proof of attributes such as qualifications and achievements. The use cases in this document focus on concrete scenarios that the technology defined by the group should address.
FWIU, the US Department of Education is studying or already working with https://blockcerts.org/ for educational credentials.
Here are the open sources of blockchain-certificates/cert-issuer and blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates
Might a natural-born resident get a government ID card for passing a recycling and environmental sustainability quiz?
Systemd makes life miserable, again, this time by breaking DNS
So, I made the mistake of updating my laptop from Fedora 31 to Fedora 33 last night. Normally this is fairly painless, as my laptop is one of the last machines I perform distribution upgrades. Today while doing some pole survey work out in the field, I tethered my laptop to my phone as has been done hundreds of times before. To my surprise, DNS doesn't work anymore, but only in web browsers. Both Firefox and Chrome can't resolve names anymore. Command line tools like ping and host work normally. WTF?
Why are distributions continuing to allow systemd to extend its tentacles deeper and deeper into more parts of Linux userland with poorly tested subsystem replacements for parts of Linux that have been stable for decades? Does nobody else consider this repeating pattern of rewrite-replace-introduce-new-bugs a problem? Newer is not all that better if you break what is a pretty bog standard and common use-case.
As well, Firefox now defaults to DoH (DNS over HTTPS), which may be bypassing systemd-resolved by doing DNS resolution in the app instead of calling `gethostbyname()` (`man gethostbyname`) and/or `getaddrinfo()`.
`man systemd-resolved` describes why there is new DNS functionality: security; "caching and validating DNS/DNSSEC stub resolver, as well as an LLMNR and MulticastDNS resolver and responder".
From `man systemd-resolved` https://man7.org/linux/man-pages/man8/systemd-resolved.servi... :
> To improve compatibility, /etc/resolv.conf is read in order to discover configured system DNS servers, but only if it is not a symlink to /run/systemd/resolve/stub-resolv.conf, /usr/lib/systemd/resolv.conf or /run/systemd/resolve/resolv.conf
> [...] Note that the selected mode of operation for this file is detected fully automatically, depending on whether /etc/resolv.conf is a symlink to /run/systemd/resolve/resolv.conf or lists 127.0.0.53 as DNS server.
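The manpage's detection rule can be sketched as a small predicate (pure logic only; the function name is mine, and the three target paths are the ones the manpage lists):

```python
# /etc/resolv.conf is treated as resolved-managed only when it is a symlink to
# one of resolved's own files; otherwise resolved reads it to discover the
# configured upstream DNS servers.
RESOLVED_TARGETS = {
    "/run/systemd/resolve/stub-resolv.conf",
    "/usr/lib/systemd/resolv.conf",
    "/run/systemd/resolve/resolv.conf",
}

def resolv_conf_mode(is_symlink: bool, target: str) -> str:
    """Classify how systemd-resolved will treat /etc/resolv.conf."""
    if is_symlink and target in RESOLVED_TARGETS:
        return "managed"  # resolved wrote it; contents are ignored on read
    return "foreign"      # resolved reads it for configured DNS servers

print(resolv_conf_mode(True, "/run/systemd/resolve/stub-resolv.conf"))  # managed
print(resolv_conf_mode(False, "/etc/resolv.conf"))                      # foreign
```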
Is /etc/resolv.conf read on reload and/or restart of the systemd-resolved service (`systemctl restart systemd-resolved`)?
Some examples of validating DNSSEC in `man delv` would be helpful.
NetworkManager (now with systemd-resolved) is one system for doing DNS configuration for zero or more transient interfaces:
man nmcli
nmcli connection help
nmcli c help
nmcli c h
nmcli c show ssid_or_nm_profile | grep -i dns
nmcli c modify help
man systemd-resolved
man delv
man dnssec-trust-anchors.d
it's probably dnssec & your time is probably off.
journalctl -xe should be showing errors if so. probably check 'journalctl -xeu systemd-resolved'
use 'date' to check the time. if it's off, I use ntpdate to update my clock. but since dns isn't resolving I use 'dig pool.ntp.org @9.9.9.9' to resolve an ntp pool server ip address. then ntpdate [that IP].
one of those things that hits me a bunch that I haven't automated quite yet. supposedly newer systemd can detect dnssec being bad in some cases & disable it, after a time, after a bunch of failures, but it either hasn't tripped in a number of cases for me or something else was odd in my envs. manually syncing the clock via ntp usually gets my dns working again.
> manually syncing the clock via ntp usually gets my dns working again.
Why is this necessary?
normally systemd-timedated does a good job keeping things in sync!
but some of my systems seem to have not great real-time clock batteries, and the system will forget the time. i think there might be some other circumstance that sometimes causes my system clock to be way out of whack, but i'm not sure what.
so that's why my clock doesn't work and needs sync.
the dnssec internet protocols are designed to guarantee the user that they have up-to-date, accurate, trustable records. this depends on your system knowing what time it is now. if your system is way ahead or way behind the actual time, the dnssec records it gets don't appear valid. And systemd-resolved will reject them, if it is set up to respect DNSSEC.
Systemd-timedated: https://www.freedesktop.org/software/systemd/man/systemd-tim...
timedatectl set-ntp false && \
timedatectl set-ntp true
Src: https://github.com/systemd/systemd/blob/main/src/timedate/ti...
Ask HN: How bad is proof-of-work blockchain energy consumption?
I'm not a blockchain/crypto expert by any means, but I've been hearing about how much energy the proof-of-work blockchains (Bitcoin, Ethereum, NFTs) consume. Unless I'm mistaken their whole design relies on cranking through more and more CPU cycles. Should we be more concerned about this? Are the concerns overblown? Are there ways to improve it without certain crypto currencies imploding?
A rational market would be choosing an asset that offers value storage and transmission (between points in spacetime) according to criteria: "security" (security theater, infosec, cryptologic competency assessment, software assurances), "future stability" (future switching costs), and "cost".
The externalities of energy production are what must be overcome if we are to be able to withstand wasteful overconsumption of electricity. Eventually, we could all have free clean energy and no lightsabers, right?
So, we do need to minimize wasteful overconsumption. Define wasteful in terms of USD/kWh (regardless of industry)? In terms of behavioral economics, why are they behaving that way when there are alternatives that cost <$0.01/tx and a fairly-aggregated, comprehensive n kWh of electricity?
TIL about these guys, who are deciding to somewhat-responsibly self-regulate in the interest of long-term environmental sustainability for all of the land: "Crypto Climate Accord". https://cryptoclimate.org/
"Crypto Climate Accord Launches to Decarbonize Cryptocurrency Industry Brings together the likes of CoinShares, ConsenSys, Ripple, and the UNFCCC Climate Champions to lead sustainability in blockchain and crypto" (2021) https://bit.ly/CryptoClimateAccord
> What are the objectives of the Crypto Climate Accord? The Accord’s overall objective is to decarbonize the global crypto industry. There are three provisional objectives to be finalized in partnership with Accord supporters:
> - Enable all of the world’s blockchains to be powered by 100% renewables by the 2025 UNFCCC COP Conference
> - Develop an open-source accounting standard for measuring emissions from the cryptocurrency industry
> - Achieve net-zero emissions for the entire crypto industry, including all business operations beyond blockchains and retroactive emissions, by 2040
Similar to the Paris Agreement (2015), stakeholders appear to be setting their own targets for sustainability in accordance with the Crypto Climate Accord (2021). https://cryptoclimate.org/accord/
Someone who's not in renewables could launch e.g. a "Satoshi Nakamoto Clean Energy Fund: SNCEF" to receive donations from e.g. hash pools and connect nonprofits with sustainability managed renewables. How many SNCEFs did you give this year and why?
#CleanEnergy
Thankfully, cheap energy fit for Bitcoin mining is increasingly not found in fossil fuels, but in renewables, wasted and “stranded” sources of energy, as we’ll dive into next. [1]
tl;dr: The perfect competition of bitcoin mining is/will drive bitcoin to eventually only work off the least expensive (which currently is renewable [2]) and even more so on marginally wasted energy [3] or "stranded energy" (like a natural gas flare on a well).
[1]: https://bitcoinmagazine.com/business/bitcoin-will-save-our-e...
[2]: https://www.carbonbrief.org/solar-is-now-cheapest-electricit...
[3]: For example, to illustrate the point, I currently mine BTG on a spare GPU I had because my house needed to be heated anyways. I turn the miner on when I want heat in my house (about 600w) and turn it off when not needed... This is a micro example of what could be done across our entire society. "Recent data suggest that space heating accounts for about 42 percent of energy use in U.S. residences and about 36 percent of energy use in U.S. commercial buildings."[4] Every one of those space heaters could actually be a processor doing computation. In the future the "waster" will not be a miner but someone who turns electricity straight to heat without intermediate chains of value (such as computation or light).
[4]: https://www.epa.gov/rhc/renewable-space-heating#:~:text=Rece...
It's important to note for #3 that a mining GPU is a terribly inefficient space heater.
No, all space heaters are equally efficient. They all have perfect 100% efficiency, because they turn electrical power into heat. When your work product is heat and the waste product is also heat, then there really is no waste.
Technically in the case of cryptocurrency mining, some of the electrical power is turned into information rather than heat. In principle this reduces the amount of heat that you get, but in practice this isn’t even measurable. Most of the information is erased (discarded as useless), which turns it back into heat. Only a few hundred bits of information will be kept after successfully mining a block of transactions, and the amount of heat that costs you is fantastically small. Far smaller than you can measure.
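The "fantastically small" claim can be checked against the Landauer limit, the thermodynamic minimum heat cost of keeping (not erasing) information; a back-of-envelope sketch, assuming room temperature:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact since the 2019 SI)
T = 300.0                   # assumed room temperature, K

# Landauer limit: minimum heat dissipated to irreversibly erase one bit.
e_bit = K_B * T * math.log(2)    # ~2.87e-21 J per bit

# Keeping ~256 bits (one block hash) instead of erasing them "costs" at most:
e_hash = 256 * e_bit             # ~7.4e-19 J

print(f"{e_bit:.2e} J/bit, {e_hash:.2e} J per 256-bit hash")
```

For comparison, a single ASIC hash costs on the order of nanojoules of electricity, more than ten orders of magnitude above this floor, so the retained bits are indeed immeasurably cheap next to the dissipated heat.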
More transistors per unit area, but also more efficient please! There should be demand for more efficient chips (semiconductors) that are fully-utilized while depreciating on your ma's electricity bill (which is not yet (?) really determined by a market-based economy with intraday speculation to smooth over differences in supply and demand in the US). Oversupply of the electrical grid results in damage costs, which is why the price sometimes falls so low where there are intraday prices and supply has been over-subsidized pending the additional load from developing economies and EVs: Electric Vehicles.
New grid renewables (#CleanEnergy) are now less expensive than existing baseload; which makes renewables long term environment-rational and short term price-rational.
"Thermodynamics of Computation Wiki" (2018) https://news.ycombinator.com/item?id=18146854
> No, all space heaters are equally efficient. They all have perfect 100% efficiency, because they turn electrical power into heat. When your work product is heat and the waste product is also heat, then there really is no waste.
This heat must be distributed throughout the room somehow (e.g. a batteryless woodstove fan, or a Stirling engine that does work off the temperature difference while there is one)
> Technically in the case of cryptocurrency mining, some of the electrical power is turned into information rather than heat. In principle this reduces the amount of heat that you get, but in practice this isn’t even measurable. Most of the information is erased (discarded as useless), which turns it back into heat.
See "Thermodynamics of Computation Wiki" re: a possible way to delete known observer-entangled bits while reducing heat/entropy (thus bypassing Landauer's limit for classical computation?)?
> Only a few hundred bits of information will be kept after successfully mining a block of transactions, and the amount of heat that costs you is fantastically small. Far smaller than you can measure.
Each n-symbol sequence in the hash function output does appear to have nearly equal frequency/probability of occurrence. Indeed, is Proof-of-Work worth the heat if you're not reusing the waste heat?
What does a PGP signature on a Git commit prove?
Totally thought this was going to be some snarky article claiming they're worthless. Refreshing to see the article just earnestly answer the headline.
> To my knowledge, there are no effective attacks against sha1 as used by git
Perhaps I'm missing something, but wouldn't a chosen-prefix collision be relevant here? I imagine the real reason is that the cost to pull it off for SHA-1 is somewhere in the $10,000-$100,000 range (but getting cheaper every year), which is a lot of $$$ to attack something without an obvious attack scenario that can justify it.
So the big problem with Git and SHA1 is that in many cases you are giving full control to an untrusted third party over the sha1 hash of something. For example, if you merge a binary file, it'd be quite easy for them to generate two different versions of the same file, with the same SHA1 digest, and then use the second version to cause problems in the future. You may also be able to modify a text file in the same way without getting noticed in review (I'm not up to speed on how advanced sha1 collision techniques are now).
Similarly, the git commits you merge themselves could have that done - the actual git commit serialization gives you a fair bit of ability to append stuff to it that isn't shown in the UI. That wouldn't affect the signed git commits. But it's still dubious to have the ability to change old history in a checkout.
Anyway, Git is apparently moving towards SHA256 support, so hopefully this problem will be fixed soon: https://lwn.net/Articles/823352/
> it'd be quite easy for them to generate two different versions of the same file
Citation needed. When SHA1 was cracked, it cost $110k worth of cloud computing. And there was some restriction on the two files which matched checksums. IIRC it was like the Birthday Paradox — you don’t pick one and find another sharing the same match, but you generate billions of mutations of similar binaries and statistically two would have the same checksum.
Not exactly easy, fast, cheap, or work with all use cases.
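The birthday dynamic described above is easy to see on a truncated hash. A toy sketch (truncating SHA-1 to 24 bits is an assumption purely so it finishes instantly): among sequentially generated inputs, two share a digest after roughly 2^12 attempts, far fewer than the ~2^24 a targeted preimage on the same truncation would need.

```python
import hashlib

def truncated_sha1(data: bytes, bits: int = 24) -> int:
    """SHA-1 truncated to `bits` bits -- a toy stand-in for a weak hash."""
    digest = hashlib.sha1(data).digest()
    return int.from_bytes(digest, "big") >> (160 - bits)

def find_birthday_collision(bits: int = 24):
    """Generate sequential inputs until two share a truncated digest."""
    seen = {}  # truncated digest -> input that produced it
    i = 0
    while True:
        msg = str(i).encode()
        h = truncated_sha1(msg, bits)
        if h in seen:
            return seen[h], msg, i + 1  # colliding pair and attempt count
        seen[h] = msg
        i += 1

a, b, attempts = find_birthday_collision()
print(f"collision after {attempts} attempts: {a!r} vs {b!r}")
```

This is why a 160-bit hash gives only about 80 bits of generic collision resistance: you don't pick one digest and hunt for a match, you accumulate digests until any two coincide.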
That nonce value could be ±\0 or 5,621,964,321e100; though for well-designed cryptographic hash functions it's far less likely that - at maximum difficulty - a low nonce value will result in a hash collision.
? How are nonces even involved in this?
Searching for the value to prepend or append that causes a hash collision is exactly the same as finding a nonce value at maximum difficulty (not less than the difficulty value, exactly equal to the target hash).
Mutate and check.
The term nonce has a broader meaning than how it is used in bitcoin. (Edit: I reworded this sentence from what I originally had)
That said, no, finding a collision and finding a preimage are very different things, and while the collision attacks on sha1 involve a lot of guessing and checking, they are not generic birthday or brute-force attacks but rely on weaknesses in SHA-1 to be practical. They also do not make preimage attacks practical.
Brute forcing to find `hash(data_1+nonce) == hash(data_0)` differs very little from `hash(data_1+nonce) < difficulty_level`. Write each and compare the cost/fitness/survival functions.
If the hash function is reversible - as may be discovered through e.g. mutation and selection - that would help find hashes that are equal and maybe also less than.
Practically, there are "rainbow tables" for very many combinations of primes and stacked transforms: it's not necessary to search the whole space for simple collisions and may not be necessary for preimages; we don't know and it's just a matter of time. "Collision attack" https://en.wikipedia.org/wiki/Collision_attack
Cryptographic nonce > hashing: https://en.wikipedia.org/wiki/Cryptographic_nonce#Hashing
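The two searches compared above can be sketched as the same mutate-and-check loop with a different acceptance predicate. This is a toy illustration on a 16-bit truncation of SHA-1 (an assumption so the loop terminates instantly), not a depiction of real SHA-1 attacks, which exploit structural weaknesses rather than brute force:

```python
import hashlib

def h16(data: bytes) -> int:
    """First 16 bits of SHA-1 -- a toy hash so brute force finishes quickly."""
    return int.from_bytes(hashlib.sha1(data).digest()[:2], "big")

def search(data: bytes, accept) -> int:
    """Mutate-and-check: append increasing nonces until `accept` holds."""
    nonce = 0
    while not accept(h16(data + str(nonce).encode())):
        nonce += 1
    return nonce

target = h16(b"data_0")
# Second-preimage-style search: demand exact equality with a known digest.
n_eq = search(b"data_1", lambda h: h == target)
# Proof-of-work-style search: demand the digest fall below a threshold.
n_lt = search(b"data_1", lambda h: h < 16)
print(n_eq, n_lt)
```

The loops are identical in shape, but the expected work differs: equality with one 16-bit target costs about 2^16 tries, while `h < 16` accepts 16 of the 2^16 values and so costs about 2^12. At SHA-1's full 160 bits the equality search is ~2^160 and out of reach, which is one reason practical SHA-1 attacks rely on cryptanalysis instead.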
> Brute forcing
The attack being discussed is not a brute-force attack (or not purely). If the best attack on SHA-1 were brute force, we would still be using it.
> to find `hash(data_1+nonce) == hash(data_0)` differs very little from `hash(data_1+nonce) < difficulty_level`.
Neither of those is a collision attack (assuming you don't control the data variable). The first is a second preimage and the second (with equality) would be a normal preimage.
The attack for sha1 under discussion (chosen-prefix collision) is finding hash(a+b) == hash(c+d) where you control b and d (but not necessarily a and c)
> Practically, there are "rainbow tables" for very many combinations of primes and stacked transforms:
What do primes or rainbow tables have to do with any of this? Primes especially. Rainbow tables are at least related to reversing hashes, if totally irrelevant to the subject at hand, but how did you get to primes?
Practically, iff browsers still relied upon SHA-1 to fingerprint and pin and verify certificates instead of the actual chain, and there were no file size limits on x.509 certificates, some fields in a cert (e.g. CommonName and SAN) would be chosen and other fields would then potentially be nonce.
In context to finding a valid cert with a known good hash fingerprint, how many prime keypairs could there be to precompute and cache/memoize when brute forcing.
"SHA-1 > Cryptanalysis and validation" does list chosen-prefix collision as one of many weaknesses now identified in SHA-1: https://en.wikipedia.org/wiki/SHA-1#Cryptanalysis_and_valida...
This from 2008 re: the 200 PS3s it took to generate a rogue CA cert with a considered-valid MD5 hash: https://hackaday.com/2008/12/30/25c3-hackers-completely-brea...
... Was just discussing e.g. frankencerts the other day: https://news.ycombinator.com/item?id=26605647
Breakthrough for ‘massless’ energy storage
"Researchers from Chalmers University of Technology have produced a structural battery that performs ten times better than all previous versions"
It's 10 times more everything, just like every other battery breakthrough in the last several decades. AND, you get to use it as a building material?
- Sounds too good to be true: Check
- Sounds like every other "big" breakthrough: Check
- Article light on science and heavy on assertions: Check
- Skepticism engaged: Check
It may really be 10x, but you've mischaracterized what the paper is claiming a 10x breakthrough for. In fairness, the linked article is light on the specifics.
The claim is:
__This battery holds 10x more charge per kilo of material than previous attempts at STRUCTURAL (massless) battery materials__.
To be specific, this material holds only _ONE FIFTH_ the charge of what your smartphone's battery can hold per kilo of battery. No laws of thermodynamics are being broken and this is not at all about battery chemistry. It's all about the 'physics' of construction materials: about how this stuff is made in the factory and layered. The basic battery chemistry going on is not much different from what's been available for years - the interesting part is how this material encases it.
You can't make a car by building the chassis out of smartphone batteries. But the promise of this paper is that you CAN build the car chassis out of this battery, and even if this battery is only 20% as effective, a car chassis is rather large, and you needed it anyway, so every drop of power you can store in the chassis itself was effectively 'free' - hence the somewhat hyperbolic 'massless' terminology.
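The "free energy" argument can be put in rough numbers. All three figures below are assumptions for illustration (a reported ballpark for the Chalmers material, a rough pack-level figure for a phone battery that gives the one-fifth ratio, and a guessed chassis mass), not values from the article:

```python
# Illustrative figures only -- all three numbers are assumptions,
# not values taken from the article.
STRUCTURAL_WH_PER_KG = 24   # reported ballpark for the Chalmers material
PHONE_PACK_WH_PER_KG = 120  # rough pack-level figure for a phone battery

chassis_mass_kg = 300       # guessed mass of a car's structural shell

# Energy stored "for free" if the chassis itself is the battery:
structural_energy_kwh = chassis_mass_kg * STRUCTURAL_WH_PER_KG / 1000

# Mass of dedicated cells that would store the same energy:
equivalent_cell_mass_kg = structural_energy_kwh * 1000 / PHONE_PACK_WH_PER_KG

print(f"{structural_energy_kwh:.1f} kWh in the structure, "
      f"displacing ~{equivalent_cell_mass_kg:.0f} kg of dedicated cells")
```

Even at one fifth the density, every kilo of load-bearing structure that stores charge displaces battery mass you would otherwise carry in addition to that structure.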
> You can't make a car by building the chassis out of smartphone batteries
They're called Structural batteries (or [micro]structural super/ultracapacitors)
"Carmakers want to ditch battery packs, use auto bodies for energy storage" (2020) https://arstechnica.com/cars/2020/11/carmakers-want-to-ditch...
This is literally what TFA is about
OpenSSL Security Advisory
If the cloud companies just paid a team of five people 200K each to spend a year rewriting OpenSSL from scratch, they would save multiple millions in scrambling to deploy bug fixes.
Nope, they'd just create new software with different bugs that have not been discovered yet. Then we'd all be scrambling to fix those bugs.
Not if the implementation is formally verified, like miTLS [0] and EverCrypt [1]. Parts of the latter were integrated into Firefox, which even provided a performance boost (10x in one case) [2].
I think what is needed is something like EverCrypt but for TLS. Or in other words, something like miTLS but which extracts to C and/or Assembly, to avoid garbage collection and for easy interoperation with different programming languages (preferably including an OpenSSL-compatible API for backwards compatibility).
[0]: https://mitls.org/ [1]: https://hacl-star.github.io/HaclValeEverCrypt.html [2]: https://blog.mozilla.org/security/2020/07/06/performance-imp...
Formally verified against what? And what assumptions were made?
miTLS is formally verified against the TLS spec on handshaking, assuming the lower-level crypto routines are good. It is not even free from timing attacks.
EverCrypt has stronger proofs, but it is only safe as in not crashing and correct as in matching the spec on all valid inputs. It is not proven to be free from DoS or invalid-input attacks.
OpenSSL does more than TLS. Lots of interesting things are in the cert format parsing and management.
> Formally verified against what? And what assumptions were made?
Well, tell me again, what is OpenSSL formally verified against? What assumptions were made in OpenSSL?
Formal verification does not need absolutely everything to be verified, timing behavior included, before it eliminates very large classes of bugs. It consistently produces more reliable software, many times even when only certain basic properties are proved (such as memory safety, or the absence of integer overflows and division by zero).
Formal verification can be a continuum, like testing, but proven to work for all inputs (that possibly meet certain conditions, under certain assumptions). The assumptions and properties that are proven can always be strengthened later, as seen in multiple real-world projects (such as seL4 and others).
The result is code that is almost always a lot more bug-free than code that is not formally verified. And as I said, more properties can be proven over time, especially as new classes of attacks are discovered (e.g. timing attacks, speculation attacks in CPUs, etc).
To formally verify an implementation you need a... formal description of what is to be implemented and verified.
Constructing a formal specification from RFC8446 is possible.
Constructing a formal specification for PKIX... is not. PKIX is specified by a large number of RFCs and ITU-T/ISO specs, some of which are more formal than others. E.g., constructing a formal specification for ASN.1 should be possible (though a lot of work), while constructing a formal specification for certificate validation is really hard, especially if you must support complex PKIs like DoD's. Checking CRLs and OCSP, among other things, requires support for HTTP, and even LDAP, so now you have... a bunch more RFCs to construct formal descriptions of.
And there had better be no bugs in the formal descriptions you construct from all these specs! Recall, they're mostly not formal specs at all -- they're written in English, with smatterings of ASN.1 (which is formal, though the specs for it, though they're very very good, mostly aren't formal).
The CVE in question is in the PKIX part of OpenSSL, not the TLS implementation.
What you're asking for is not 5 man-years worth of work, but tens of man-decades. The number of people with deep knowledge of all this stuff is minute as it is -- maybe just a handful, tens at most. The number of people with deep knowledge of a lot of this stuff is larger, but still minute. So you're asking to spend decades' worth of a tiny band of people's time on this project, when there are other valuable things for them to do.
The number of people who can do dev work in this space is much larger, of course -- in the thousands. But very few of them have the right expertise to work on a formal, verified implementation of PKIX.
Plus, it's all a moving target.
Sure, we could... train a lot of people just for such a project, but it takes time to do that, and it still takes time from that tiny band of people who know this stuff really well.
I'm afraid you're asking for unobtanium.
EDIT: Plus, there's probably tens of millions of current dollars' (if not more) worth of development embodied in OpenSSL as it stands. It's probably at least that much to replace it with a verified implementation, and probably much more because the value of programmers expert enough to do it is much more than the $200K/year suggested above (even if you train new ones, it would take years of training, and then they would be just as valuable). I think a proper, formally verified replacement of OpenSSL would probably run into the hundreds of millions, especially if it's one huge project since those tend to fail.
Well, sure, if you start with those premises, then I'm not surprised that you reach the conclusion that the goal is unachievable.
First of all, if constructing a formal specification for PKIX is not possible, then that should be telling you that it either needs to be simplified, better specified or scrapped altogether for something better (the latter would require an extremely large transition period, I'm imagining, so the first two are much preferred in this situation).
Otherwise, how can you be sure that any implementation in fact implements it correctly?
> And there had better be no bugs in the formal descriptions you construct from all these specs!
Well, I don't think that is true. You should in fact allow bugs in the formal description, otherwise how will you ever get anything done in such a project?
You see, having a formal description with a bug is much better than having no formal description at all.
If you have no formal description, you can't really tell if your code has bugs. If you have a buggy formal description, then you are able to catch some bugs in the implementation and the implementation can also catch some bugs in the formal description.
Also, some parts of the formal description can catch bugs in other parts of the formal description.
So the end result can be strictly better than the status quo.
> Plus, it's all a moving target.
Sure, but hopefully it's a moving target moving in the direction of simplification rather than getting more complex, otherwise things will just get worse rather than get better, regardless if we keep with the status quo or not. I'm not a TLS expert by any means but I think TLS 1.3 moved in that direction for some parts of the protocol, at least (if I'm not mistaken).
Also, I think you are not fully appreciating that formal verification can be done incrementally.
You can start by doing the minimum possible, i.e. simply verifying that your code is free of runtime errors, which would eliminate all memory safety-related bugs, including Heartbleed.
This would already be better than reimplementing in Rust, because the latter can protect from memory safety bugs but not other runtime bugs (such as division by zero, unexpected panics, etc).
BTW, this minimal verification effort would already eliminate the second bug in this security advisory.
You can then verify other simple properties, even function by function, no complicated models are even necessary at first.
For example, you could verify that your function that verifies a CA certificate, when passed the STRICT flag, is really more strict than when not passed the STRICT flag.
This would eliminate the first bug in this security advisory and all other similar bugs in the same function call-chain.
BTW, what I just said is really easy to specify, and I'm guessing it's also very easy to prove, since I'm guessing that the strict checks are just additional checks, while the normal checks are shared between the two verification modes.
Many other such properties are also easy to prove. The more difficult properties/models, or even full functional verification, can be implemented by more expert developers or even mathematicians.
I think the problem is also that OpenSSL devs, like the vast majority of devs, probably have no desire/intention/motivation/ability to do formal verification, otherwise you could even do this in the OpenSSL code base itself (although that is not ideal because verifying a program written in C requires more manual work than verifying it in a simpler language whose code extracts to C).
I'm also guessing that your budget estimate is an exaggeration since miTLS and EverCrypt, although admittedly projects whose scope has still not reached your ambitious goals (i.e. full functional verification of all layers in the stack), were probably done with a much smaller budget.
And it's not like you can't build on top of that and incrementally verify more properties over time, e.g. more layers of the stack or whatever.
You don't need a huge mega-project, just a starting point, and sufficient motivation.
> Well, sure, if you start with those premises, then I'm not surprised that you reach the conclusion that the goal is unachievable.
I didn't say unachievable. I said costly.
> First of all, if constructing a formal specification for PKIX is not possible, then that should be telling you that it either needs to be simplified, better specified or scrapped altogether for something better (the latter would require an extremely large transition period, I'm imagining, so the first two are much preferred in this situation).
The specs for the new thing would basically have to be written in Coq or similar. Even if you try real hard to keep it small and not make the... many mistakes made in PKIX's history... it would still be huge. And it would be even less accessible than PKIX already is.
> For example, you could verify that your function that verifies a CA certificate, when passed the STRICT flag, is really more strict than when not passed the STRICT flag.
That's just an argument for better testing. That's not implementation verification.
> This would eliminate the first bug in this security advisory and all other similar bugs in the same function call-chain.
Only if you thought to write that test to begin with. Writing a test for everything is... not really possible. SQLite3, one of the most tested codebases in the world, has a private testsuite that gets 100% branch coverage, and even that is not the same as testing every possible combination of branches.
> BTW, what I just said is really easy to specify, and I'm guessing it's also very easy to prove, since I'm guessing that the strict checks are just additional checks, while the normal checks are shared between the two verification modes.
It's not. The reason the strictness flag was added was that OpenSSL was historically less strict than the spec demanded. It turns out that when you're dealing with more than 30 years of history, you get kinks in the works. It wouldn't be different for whatever thing replaces PKIX.
> I think the problem is also that OpenSSL devs, like the vast majority of devs, probably have no desire/intention/motivation/ability to do formal verification, otherwise you could even do this in the OpenSSL code base itself [...]
You must be very popular at parties.
> I'm also guessing that your budget estimate is an exaggeration since miTLS and EverCrypt, [...]
Looking at miTLS, it only claims to be an implementation of TLS, not PKIX. Not surprising. EverCrypt is a cryptography library, not a PKIX library.
> That's just an argument for better testing. That's not implementation verification.
No, I'm not talking about testing, I'm talking about really basic formal verification:
forall (x: Certificate), verify_certificate(x, flags = 0) == invalid ==> verify_certificate(x, flags = STRICT) == invalid
This is trivial to specify and almost as trivial to prove to be correct for all inputs. Testing can't do that, no matter how good your testing.
> Only if you thought to write that test to begin with. Writing a test for everything is... not really possible.
I agree, but I'm not talking about testing. I'm talking about formal verification.
> It's not. The reason the strictness flag was added was that OpenSSL was historically less strict than the spec demanded. It turns out that when you're dealing with more than 30 years of history, you get kinks in the works. It wouldn't be different for whatever thing replaces PKIX.
That's totally fine and wouldn't affect the verification of that function at all. This is a very simple property to verify and it would avoid this bug and all similar bugs in that function (even if that function calls other functions, no matter how large or complex they are).
Other functions also have such easy to verify properties, this is not hard at all to come up with (although sure, some more difficult properties might be harder to verify).
> You must be very popular at parties.
I didn't want to be dismissive of OpenSSL devs or other devs in general, I just find it frustrating that there are so many myths surrounding this topic, and a lot less education, interest and investment than I think there should be, nowadays.
You are confusing verification as in "certificate" with verification as in "theorem proving", and you are still assuming a formal description of what to verify (which I'll remind you: doesn't exist). And then you go on to talk about myths and uneducated and uninterested devs. Your approach is like tilting at windmills, and will achieve exactly as much.
If I understood you correctly, then I am not confusing those two things.
Maybe you haven't noticed but I actually wrote a theorem about a hypothetical verify_certificate() function in my previous comment. Maybe you also haven't noticed, but I didn't need a formal description of how certificate validation needs to be done to write that theorem.
And I assure you, if the implementation of verify_certificate() is anything except absolute garbage, it would be very easy to prove the theorem to be correct. I actually have some experience doing this, you know? I'm not just parroting something I read, I've proved code to be 100% correct multiple times using formal verification (i.e. theorem proving) tools. That is, according to certain basic and reasonable assumptions, of course, like e.g. the hardware is not faulty or buggy while running the code, and that the compiler itself is not buggy, which would be a lot more rare than a bug in my code -- and even then, note that compilers and CPUs can be (and have been) formally verified.
Maybe you don't agree with my approach but I think it's the most realistic one for a project such as this and I am absolutely confident it would be practical, would completely eliminate most (i.e. more than 50%, at least) existing bugs with minimal verification effort (which almost anyone would be capable of doing with minimal training), and would steadily become more and more bug-free with additional, incremental verification (i.e. theorem proving) and refactoring effort.
https://project-everest.github.io/ :
> Focusing on the HTTPS ecosystem, including components such as the TLS protocol and its underlying cryptographic algorithms, Project Everest began in 2016 aiming to build and deploy formally verified implementations of several of these components in the F* proof assistant.
> […] Code from HACL*, ValeCrypt and EverCrypt is deployed in several production systems, including Mozilla Firefox, Azure Confidential Consortium Framework, the Wireguard VPN, the upcoming Zinc crypto library for the Linux kernel, the MirageOS unikernel, the ElectionGuard electronic voting SDK, and in the Tezos and Concordium blockchains.
S2n is Amazon's formally verified TLS library. https://en.wikipedia.org/wiki/S2n
IDK about a formally proven PKIX. https://www.google.com/search?q=formally+verified+pkix lists a few things.
A formally verified stack for Certificate Transparency would be a good way to secure key distribution (and revocation); where we currently depend upon a TLS library (typically OpenSSL), GPG + HKP (HTTP Key Protocol).
Fuzzing on actual hardware - with stochastic things that persist bits between points in spacetime - is a different thing.
Funny, the first hit for that search you linked is... my comment above. The "few things" other than that are for alternatives to PKIX, which is fine and good, but PKIX will be with us for a long time yet. As for Everest, it jibes with what I wrote above, that verified implementations of TLS are feasible (Everest also implements QUIC and similar), but -surprise!- not listed is PKIX.
I know, it sounds crazy, really crazy, but PKIX is much bigger than TLS. It's big. It's just big.
The crypto, you can verify. The session and presentation layers, you can verify. Heck, maybe you can verify your app. PKIX implementations of course can be verified in principle, but in fact it would require a serious amount of resources -- it would be really expensive. I hope someone does it, to be sure.
I suppose the first step would be to come up with a small profile of PKIX that's just enough for the WebPKI. Though don't be fooled, that's not really enough because people do use "mTLS" and they do use PKINIT, and they do use IPsec (mostly just for remote access) with user certificates, and DoD has special needs and they're not the only ones. But a small profile would be a start -- a formal specification for that is within the realm of the achievable in reasonably short order, though still, it's not small.
Both a gap and an opportunity; someone like an agency or a FAANG with a budget for something like this might do well to (a) invest in the formal-methods talent pipeline and (b) interface very technically with e.g. Everest about PKIX as a core component in need of formal methods.
"The SSL landscape: a thorough analysis of the X.509 PKI using active and passive measurements" (2011) ... "Analysis of the HTTPS certificate ecosystem" (2013) https://scholar.google.com/scholar?oi=bibs&hl=en&cites=16545...
TIL about "Frankencerts": Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations (2014) https://scholar.google.com/scholar?cites=3525044230307445257... :
> Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations.
> Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc.
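The frankencerts recipe (mutate real inputs, then use disagreement between implementations as the oracle) is easy to sketch in miniature. Both "implementations" below are hypothetical toy stand-ins with a planted bug, not real SSL/TLS stacks:

```python
import random

# Two toy "implementations" of the same validation rule, standing in
# for two SSL/TLS stacks. impl_b has a planted bug: it skips the check.
def impl_a(fields):
    return all(len(v) > 0 for v in fields.values())

def impl_b(fields):
    return True  # buggy: accepts certificates with empty fields

def mutate(fields, rng):
    """Randomly rewrite one field -- a crude 'frankencert'-style mutation."""
    mutated = dict(fields)
    key = rng.choice(sorted(mutated))
    mutated[key] = rng.choice(["", "x", "example.org"])
    return mutated

def differential_test(seed_input, trials=100, seed=0):
    """Collect inputs on which the two implementations disagree."""
    rng = random.Random(seed)
    discrepancies = []
    for _ in range(trials):
        candidate = mutate(seed_input, rng)
        if impl_a(candidate) != impl_b(candidate):
            discrepancies.append(candidate)
    return discrepancies

found = differential_test({"CommonName": "example.org", "SAN": "example.org"})
print(f"{len(found)} discrepancies found")
```

The harness needs no spec at all: any disagreement between the two validators is evidence that at least one of them is wrong, which is exactly the oracle the paper exploits.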
W3C ld-signatures / Linked Data Proofs, and MerkleProof2017: https://w3c-ccg.github.io/lds-merkleproof2017/
"Linked Data Cryptographic Suite Registry" https://w3c-ccg.github.io/ld-cryptosuite-registry/
ld-proofs: https://w3c-ccg.github.io/ld-proofs/
W3C DID: Decentralized Identifiers don't solve for all of PKIX (x.509)?
"W3C DID x.509" https://www.google.com/search?q=w3c+did+x509
Thanks for the link about frankencerts!
How much total throughput can your wi-fi router really provide?
Out of curiosity does some company benchmark other types of commercial networking devices like this? Things like load balancers, DDoS protection, switches, etc.? How do businesses shop for such products without having a standard measure from an independent third party?
netperf and iperf are utilities for measuring network throughput: https://en.wikipedia.org/wiki/Iperf
It's possible to approximate the https://dslreports.com/speedtest using the flent CLI or QT GUI (which calls e.g. fping and netperf) and isolate out ISP variance by running a netperf server on a decent router and/or a workstation with a sufficient NIC (at least 1Gbps). https://flent.org/tests.html
`dslreports_8dn`: https://github.com/tohojo/flent/blob/master/flent/tests/dslr...
From https://flent.org/ :
> RRUL: Create the standard graphic image used by the Bufferbloat project to show the down/upload speeds plus latency in three separate charts:
> `flent rrul -p all_scaled -l 60 -H address-of-netserver -t text-to-be-included-in-plot -o filename.png`
In 2021, most routers - even with OpenWRT and hardware-offloading - cannot actually push 1 Gigabit over wired Ethernet, though the port spec does say 1000 Mbps.
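A back-of-the-envelope version of what netperf measures can be sketched in Python over loopback. This only exercises the host's own TCP stack, not a router, and is an illustration of the throughput = bytes / time arithmetic rather than a replacement for netperf or flent:

```python
import socket
import threading
import time

def sink(server_sock):
    """Accept one connection and drain everything sent to it."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(65536):
            pass

def measure_loopback_mbps(total_bytes=50_000_000):
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # ephemeral port
    server.listen(1)
    threading.Thread(target=sink, args=(server,), daemon=True).start()

    chunk = b"\x00" * 65536
    sent = 0
    start = time.perf_counter()
    with socket.create_connection(server.getsockname()) as c:
        while sent < total_bytes:
            c.sendall(chunk)
            sent += len(chunk)
    elapsed = time.perf_counter() - start
    server.close()
    return sent * 8 / elapsed / 1e6  # megabits per second

mbps = measure_loopback_mbps()
print(f"{mbps:.0f} Mbps over loopback")
```

Loopback numbers typically far exceed 1 Gbps, which is one reason a separate netperf host on a fast NIC is needed to isolate the router's contribution rather than the test machine's.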
The Most Important Scarce Resource Is Legitimacy
It seems that nearly every "public good" in the Ethereum ecosystem got gobbled up by VCs and pivoted to for-profit endeavors. I don't mind VCs and I don't mind people seeking profit, but this moral high ground that Ethereum has been trying to build for itself is delusional. In practice it is just a marketing gimmick.
An astute observer may have noticed that Vitalik has a habit of coming up with definitions whose standards only he and the Ethereum Foundation can meet.
The reality is everyone is just trying to make money. I think we are better off coming to terms with that instead of living in a collective fantasy.
Public goods are by definition non-rivalrous and non-excludable. How can they be gobbled?
https://en.wikipedia.org/wiki/Public_good_(economics)
(I'm not being pedantic here. The concept of a public good, in this technical sense, is a very important concept to have -- and it's specifically the kind of "public goods" the article was talking about, along with ideas for improving their tendency to be under-supplied. Goods that can be gobbled up by someone tend to be produced at closer to the optimum amount, since there's a less diffuse private incentive for their production.)
Public goods ... Welfare economics ... Social choice theory, Arrow's, Indifference curve: https://en.wikipedia.org/wiki/Indifference_curve
People do collectibles; commemorative plates.
A few notes on message passing
[deleted]
The big problem nobody talks about with actors and message passing is non-determinism and state.
If two otherwise independent processes A and B are sending messages to a single process C, it's non-deterministic which arrives at C first. This can create a race condition. C may respond differently depending on which message arrives first - A's or B's. This is also effectively an example of shared mutable state - the state of C, which can be mutated whenever a message is received - is shared with A and B because they're communicating with it and their messages work differently based on the current state.
Non-determinism, race-conditions, shared mutable state: the absolute opposite of what we want when dealing with concurrency and parallelism.
> Luckily, global orders are rarely needed and are easy to impose yourself (outside distributed cases): just let all involved parties synchronize with a common process.
When there are multiple agents/actors in a distributed system, and the timestamp resolution is datetime64, and clock synchronization and network latency are variable, and non-centralized resilience is necessary to eliminate single points of failure, global ordering is impractical to impossible because there is no natural unique key with which to impose a [partial] preorder [1][2]: there are key collisions when you try and merge the streams.
Just don't cross the streams.
[1] https://en.wikipedia.org/wiki/Preorder_(disambiguation)
[2] https://en.wikipedia.org/wiki/Partially_ordered_set
The C in CAP theorem is for Consistency [3][4]. Sequential consistency is elusive because something probably has to block/lock somewhere unless you've optimally distributed the components of the control flow graph (CFG).
[3] https://en.wikipedia.org/wiki/Consistency_model
[4] https://en.wikipedia.org/wiki/CAP_theorem
FWIU, TLA+ can help find such issues. [5]
[5] https://en.wikipedia.org/wiki/TLA%2B
Wouldn't a partial order be possible using logical clocks? Vector clocks probably don't satisfy "practical" but a hybrid logical clock is practical if you can assume clock synchronisation is within some modest delta.
> if you can assume clock synchronisation is within some modest delta.
From experience with clocks, you should really be actively testing the delta, and not just assuming it.
The assumption would be the limit you actively test for.
The Lamport timestamp: https://en.wikipedia.org/wiki/Lamport_timestamp :
> The Lamport timestamp algorithm is a simple logical clock algorithm used to determine the order of events in a distributed computer system. As different nodes or processes will typically not be perfectly synchronized, this algorithm is used to provide a partial ordering of events with minimal overhead, and conceptually provide a starting point for the more advanced vector clock method.
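The algorithm quoted above is tiny; a minimal sketch in Python (the standard rules: increment on local events, and on receive take max(local, received) + 1):

```python
class LamportClock:
    """Logical clock: orders events causally without wall-clock time."""

    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event, including stamping an outgoing message."""
        self.time += 1
        return self.time

    def recv(self, sender_time):
        """On receive, jump past the sender's timestamp."""
        self.time = max(self.time, sender_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_send = a.tick()        # A stamps a message
t_recv = b.recv(t_send)  # B receives it: recv is ordered after send
print(t_send, t_recv)
```

Causally related events always get increasing timestamps, but unrelated events on different nodes can tie or interleave arbitrarily; detecting that two events are actually concurrent is what vector clocks (or a hybrid logical clock) add on top.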
Duolingo's language notes all on one page
Succinct. What a useful reference.
An IPA (International Phonetic Alphabet) reference would be helpful, too. After taking linguistics in college, I found these Sozo videos of US English IPA consonants and vowels that simultaneously show {the IPA symbol, example words, someone visually and auditorily producing the phoneme from 2 angles, and the spectrogram of the waveform}, but a few or a configurable number of [spaced] repetitions would be helpful: https://youtu.be/Sw36F_UcIn8
IDK how cartoonish or 3D an "articulatory phonetics" model would need to be to reach the widest audience. https://en.wikipedia.org/wiki/Articulatory_phonetics
IPA chart: https://en.wikipedia.org/wiki/International_Phonetic_Alphabe...
IPA chart with audio: https://en.wikipedia.org/wiki/IPA_vowel_chart_with_audio
All of the IPA consonant chart played as a video: "International Phonetic Alphabet Consonant sounds (Pulmonic)- From Wikipedia.org" https://youtu.be/yFAITaBr6Tw
I'll have to find the link of the site where they playback youtube videos with multiple languages' subtitles highlighted side-by-side along with the video.
Found it: https://www.captionpop.com/
It looks like there are a few browser extensions for displaying multiple subtitles as well; e.g. "YouTube Dual Subtitles", "Two Captions for YouTube and Netflix"
Ask HN: The easiest programming language for teaching programming to young kids?
Hi,
I want to start a small community pilot project to help young kids, 8 and above, get interested in programming. We will use video games and robotics projects. We want to keep our tech stack as simple as possible. Here are some of the choices:
Godot + Arduino: We can use C in Godot and Arduino. Arduino might be more interesting for kids as opposed to neatly packaged Lego kits.
Apple SpriteKit + Lego Mindstorm: We can use Swift with Legos. But cost will be higher.
Some of the projects we are thinking of are:
Game-ish:
1. Sound visualizer, like the old-school Winamp visualizations. Use speakers. And various other ideas around these concepts.
2. AR project that shows the world around you in cartoonish style. Swap faces etc.
3. Of course, platform games.
Robotics projects:
I see a lot of tutorials for Arduino, such as robots that follow sound or light, or light displays. We will use mostly those.
Some harder project ideas I have are for drones, boats, and other navigational vehicles. This is why I want to use Arduino. But is C going to be too hard for young kids to play with?
What do you recommend? If this works, I would like to expand it and start a company around it.
awesome-python-in-education > "Python suitability for education" lists a few justifications for Python: https://github.com/quobit/awesome-python-in-education#python...
There is a Scratch Jr for Android and iOS. You can view Scratch code as JS. JS does run in a browser, until it needs WASI.
awesome-robotics-libraries: https://github.com/jslee02/awesome-robotics-libraries
FWIU, ROS (Robot Operating System) is now installable with Conda/Mamba. There's a jupyter-ros and a jupyterlab-ros extension: https://github.com/RoboStack/jupyter-ros
I just found this: https://coderdojotc.readthedocs.io/projects/python-minecraft...
> This documentation supports the CoderDojo Twin Cities’ Build worlds in Minecraft with Python code group. This group intends to teach you how to use Python, a general purpose programming language, to mod the popular game called Minecraft. It is targeted at students aged 10 to 17 who have some programming experience in another language. For example, in Scratch.
K12CS Framework has your high-level CS curriculum: https://k12cs.org/ [PDF]: https://k12cs.org/wp-content/uploads/2016/09/K%E2%80%9312-Co...
Educational technology > See also links to e.g. "Evidence-based education" and "Instructional theory": https://en.wikipedia.org/wiki/Educational_technology
Thank you for these resources, reviewing them now.
Yw. Np. So I just searched for "site: readthedocs.io kids python" https://www.google.com/search?q=site%3Areadthedocs.io+kids+p... and found a few new and old things:
SensorCraft (pyglet (Python + OpenGL)) from US AFRL Sensors Directorate has e.g. Gravity, Rocket Launch, and AI tutorials:
> Most people are familiar with Minecraft [...] for this project we are using a Minecraft type environment created in the Python programming language. The Air Force Research Laboratory (AFRL) Sensors Directorate located in Dayton, Ohio created this guide to inspire kids of all ages to learn to program and at the same time get an idea of what it is like to be a Scientist or Engineer for the Air Force. We created this YouTube video about SensorCraft
https://sensorcraft.readthedocs.io/en/latest/intro.html
`conda install -c conda-forge -y pyglet` should probably work. Miniforge on Win/Mac/Lin is an easy way to get Python installed on anything including ARM64 for a RPi or similar; `conda create -n scraft; conda activate scraft; conda install -c conda-forge -y python=3.8 jupyterlab jupytext jupyter-book pyglet` . If you're in a conda env, `pip install` should install things within that conda env. Here's the meta.yaml in the conda-forge pyglet-feedstock: https://github.com/conda-forge/pyglet-feedstock/blob/master/...
"BBC micro:bit MicroPython documentation" https://microbit-micropython.readthedocs.io/en/latest/
$25 for a single board-computer with a battery pack and a case (and curricula) is very reasonable: https://en.wikipedia.org/wiki/Micro_Bit
> The [micro:bit] is described as half the size of a credit card[10] and has an ARM Cortex-M0 processor, accelerometer and magnetometer sensors, Bluetooth and USB connectivity, a display consisting of 25 LEDs, two programmable buttons, and can be powered by either USB or an external battery pack.[2] The device inputs and outputs are through five ring connectors that form part of a larger 25-pin edge connector. (V2 adds a Mic and a Speaker)
Raspberry Pi for Kill Mosquitoes by Laser
This work (I think) detected mosquitoes 30 cm away with a servo-scanned Pi Camera (1080p) and a 1 W laser.
To work in the real world - cover a whole room or terrace - presumably a much higher resolution camera (or much faster scanning system) would be required. Even a 1W laser is dangerous to eyesight, if it was being fired at targets mingling with people.
The system could be mounted on small drones that would patrol larger areas - but the idea of robotic drones armed with lasers roaming around is beginning to sound worse than the mosquitos.
Good luck detecting a mosquito optically from a distance of several meters using a cheap camera and Raspberry Pi. Oh, and you want to do it from a moving drone? That will certainly make it work!
Just look at the images in the article - the guy's best result was detecting a black speck appearing on a nearby white wall with some 60-70% reliability (based on his own numbers). So you would be missing a lot of mosquitoes - but will be happy firing the laser at random shadows and what not. And that was in a completely stationary setup and controlled lab conditions, i.e. not at all something resembling a typical poorly lit room!
This article is BS. Preprints are not peer reviewed (i.e. nobody has checked anything in it, so it could even be a complete hoax), and it is a pretty typical gadgetry-style paper (we do it because we can, not because it makes sense) of the kind you write when you need to fill up your resume with research papers (e.g. to keep or obtain a job).
The "save the world" (mosquito control, diseases, etc.) justification is also par for the course for this type of crappy paper. Anyone who seriously thinks that one could control mosquito problem by shooting them one by one by a laser is delusional.
But neural networks and "AI" are being used, so it has to be cutting edge groundbreaking stuff, right?
BTW, this nonsense idea was floated as a publicity stunt a few years ago (including a slow-motion video of a laser burning the wing off a mosquito in flight), and it seems that some Russian PhD student from a fairly obscure uni either didn't do their research or has reinvented the wheel (or just plain copied the thing without attribution). The list of irrelevant or only very tangentially relevant (it is about mosquitoes, so in scope, right?) references is a dead giveaway there (a paper on mosquitoes spreading Zika? seriously?).
Here, it was even on National Geographic in 2010(!): https://www.youtube.com/watch?v=BKm8FolQ7jw
Oh, and that was supposed to be a handheld device to boot, with the same "save the world from malaria" spiel too. I wonder what the owners of the company that was pushing this concept to investors back then are trying to sell today...
There are actually multiple videos on Youtube showing products from different companies that were attempting to push this as some sort of viable concept.
> using a cheap camera
even if you had a RED Komodo feeding uncompressed 4K DCI 60fps video to a pci-express bus capture card, the sensor resolution and tiny size of mosquitoes means that unless the lighting conditions are just right, and the mosquito is somehow highlighted against a background, it's going to be very hard to pick them out at distances of 2 or more meters.
and that's before you get into the software problem of processing the fire hose of data that is 4096x2160 at 60fps raw. and the hardware cost of a very serious workstation class PC capable of taking the capture at 1:1 realtime.
possibly a lidar based sensor or something might be more suitable to locating the x/y/z position of mosquitoes in a few meter area.
Mosquitos, as we all know, have a highly distinctive auditory signature.
A phased microphone array is the only sensible approach to localized mosquito detection. It would probably work reasonably well.
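As a toy illustration of the core step in acoustic localization (synthetic signals, invented helper names; a real phased array would repeat this across many mic pairs and convert delays to bearings), the time difference of arrival between two microphones can be estimated by cross-correlation:

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Estimate how many samples later sig_b hears the same sound as sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    # The 'full' output index maps to a lag of (index - (len(sig_a) - 1)).
    return int(np.argmax(corr)) - (len(sig_a) - 1)

# Synthetic broadband "wingbeat" burst arriving 5 samples later at mic B:
rng = np.random.default_rng(0)
burst = rng.standard_normal(200)
mic_a = np.concatenate([burst, np.zeros(20)])
mic_b = np.concatenate([np.zeros(5), burst, np.zeros(15)])

delay = estimate_delay(mic_a, mic_b)  # 5; bearing follows from the delay,
                                      # the sample rate, and the mic spacing
```

With a near-periodic tone like a real mosquito wingbeat, the correlation peak is ambiguous across whole periods, which is one reason real systems use multiple mic pairs.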
The problem is that the entire field is patent-encumbered, because Myhrvold's company Intellectual Ventures has done some research on it, and no one in their right mind would go up against those guys.
Yeah, they already did sharks with lasers. IDK what the licensing terms are on that
ONE MILLION DOLLARS
Donate Unrestricted
The article mentioned the Gates Foundation, which has a terrible record in education initiatives:
- The big failure in small schools initiative: https://marginalrevolution.com/marginalrevolution/2010/09/th...
- The big failure in teacher evaluation initiative: https://www.businessinsider.com/bill-melinda-gates-foundatio...
- The big failure in Common Core initiative: https://www.washingtonpost.com/news/answer-sheet/wp/2016/06/...
Now it is funding anti-racist math: https://equitablemath.org/
Unbelievable.
Rather than diminishing the efforts of others, you could start helping by describing your own efforts to improve education (in order to qualify your ability to assess these and other efforts to improve education and learning).
In contrast to seed and series funding in exchange for a seat on the board of a for-profit venture, a non-profit NGO can choose whether to accept restricted donations, and government organizations have elected public-servant leaders who lead and find funding.
Works based on Faust: https://en.wikipedia.org/wiki/Works_based_on_Faust
Bitcoin Is Time
I'm ready to be downvoted to oblivion, especially by all the Bitcoin purists, but I hope my message sparks at least some curiosity.
Bitcoin is getting "weaker" every year for several reasons:
- emerging centralization due to economies of scale. If this trend continues the mining power will be so consolidated that a 51% attack will be likely. The Nakamoto coefficient is already at 4, and tending towards 3.
- energy usage due to Proof of Work is growing astronomically, and the higher the price of bitcoin, the less incentive there is to use renewable energy. Bitcoin uses more energy than the country of Argentina.
- transaction times are SLOW, and expensive. The lightning network is incredibly buggy and won't actually solve the problems it's promising.
There is a better alternative: RaiBlocks, named Nano since 2018. It got a bad rap due to the BitGrail hack, but it's picking up steam again. The developer community is great, the main dev team on the Nano protocol has been consistently chugging along on a shoestring budget regardless of the 3 years of crypto winter, and the Nano community is made up of users, not speculators.
This article sums it up quite nicely: https://senatusspqr.medium.com/why-nano-is-the-ultimate-stor...
I'm happy to receive downvotes, but all I ask in return is that you give it a read with an open mind.
> The lightning network is incredibly buggy and won't actually solve the problems it's promising.
Not an expert, but I found a readable summary of the different layer-2 scaling strategies and of why Ethereum developers prefer Rollups over Channels (like Lightning):
"Bitcoin scalability problem" could link to the Ethereum design docs: https://en.wikipedia.org/wiki/Bitcoin_scalability_problem
The Ethereum design docs could link to direct-listed premined [stable] coins as a solution for Proof of Work and TPS reports: https://github.com/flare-eng/coston#smart-contracts-with-xrp
(edit) re: n-layer solutions: The https://interledger.org/ RFCs and something like Transaction Permission Layer (TPL) will probably be helpful for interchain compliance.
> Interledger is not tied to a single company, blockchain, or currency.
From https://tplprotocol.org/ :
> The challenge: Current blockchain-based protocols lack an effective governance mechanism that ensures token transfers comply with requirements set by the project that issued the token.
> Projects need to set requirements for a variety of reasons. For instance, remaining compliant with securities laws, limiting transfer to beta testers, or limiting transfer to a particular geo-spatial location. Whatever your reason, if a requirement can be verified by a third-party, TPL will be able to help.
In the US, S-Corps can't have international or more than n shareholders, for example; so if firms even wanted to issue securities on a first-layer network, they'd need an extra-chain compliance mechanism to ensure that their issuance is legal pursuant to local, sovereign, necessary policies. Re-issuing stock certificates is something that has to be done sometimes. When is it possible to cancel outstanding tokens?
Foundational Distributed Systems Papers
From "Ask HN: Learning about distributed systems?" https://news.ycombinator.com/item?id=23932271 :
> Papers-we-love > Distributed Systems: https://github.com/papers-we-love/papers-we-love/tree/master...
> awesome-distributed-systems also has many links to theory: https://github.com/theanalyst/awesome-distributed-systems
And links to more lists of distributed systems papers under "Meta Lists": https://github.com/theanalyst/awesome-distributed-systems#me...
In reviewing this awesome list, today I learned about this playlist: "MIT 6.824 Distributed Systems (Spring 2020)" https://youtube.com/playlist?list=PLrw6a1wE39_tb2fErI4-WkMbs...
> awesome-bigdata lists a number of tools: https://github.com/onurakpolat/awesome-bigdata
Low-Cost Multi-touch Whiteboard using the Wiimote (2007) [video]
"Interactive whiteboard" / "smart board" https://en.wikipedia.org/wiki/Interactive_whiteboard
Wii Remote > Features > Sensing: https://en.wikipedia.org/wiki/Wii_Remote#Sensing
.. > Third-Party Development describes a number of applications for IR/optical tracking with an array of nonstationary emitters: https://en.wikipedia.org/wiki/Wii_Remote#Third-party_develop...
Augmented Reality (AR) > Technology > Tracking: https://en.wikipedia.org/wiki/Augmented_reality#Tracking
... links to "VR positional tracking" which does have headings for "Optical" and "Sensor fusion": https://en.wikipedia.org/wiki/VR_positional_tracking
How to Efficiently Choose the Right Database for Your Applications
I kinda disagree with the separate branch for "document database" for Mongo. Mongo is a key-value store with a thin wrapper that converts BSON<->JSON, plus indices on subfields.
You can achieve exactly the same thing with PostgreSQL tables with two columns (key JSONB PRIMARY KEY, value JSONB), including indices on subfields. With way more other functionality and support options.
> You can achieve exactly the same thing with PostgreSQL tables with two columns (key JSONB PRIMARY KEY, value JSONB), including indices on subfields. With way more other functionality and support options.
PostgreSQL docs > "JSON Functions and Operators" https://www.postgresql.org/docs/current/functions-json.html
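A self-contained sketch of that two-column key/value pattern, using Python's stdlib sqlite3 and SQLite's JSON functions as a stand-in for PostgreSQL JSONB (table and field names are made up; the Postgres equivalent would use JSONB columns, `value->>'city'`, and a JSONB expression index):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (key TEXT PRIMARY KEY, value TEXT)")
# Expression index on a subfield, analogous to a JSONB path index:
db.execute("CREATE INDEX idx_city ON docs (json_extract(value, '$.city'))")

db.execute("INSERT INTO docs VALUES (?, ?)",
           ("user:1", json.dumps({"name": "Ada", "city": "London"})))
db.execute("INSERT INTO docs VALUES (?, ?)",
           ("user:2", json.dumps({"name": "Alan", "city": "Wilmslow"})))

# Query by subfield, served by the expression index:
rows = db.execute(
    "SELECT key FROM docs WHERE json_extract(value, '$.city') = ?",
    ("London",)).fetchall()
print(rows)  # [('user:1',)]
```

This is the same shape as the Mongo collection-of-documents model: opaque JSON values, keyed lookups, plus secondary indexes on paths into the document.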
MongoDB can do jsonSchema:
> Document Validator: You can use $jsonSchema in a document validator to enforce the specified schema on insert and update operations:
db.createCollection( <collection>, { validator: { $jsonSchema: <schema> } } )
db.runCommand( { collMod: <collection>, validator:{ $jsonSchema: <schema> } } )
https://docs.mongodb.com/manual/reference/operator/query/jso...
Looks like there are at least 2 ways to handle JSON Schema with Postgres: https://stackoverflow.com/questions/22228525/json-schema-val... ; neither of which is written in e.g. Rust or Go.
Is there a good way to handle JSON-LD (JSON Linked Data) with Postgres yet?
There are probably 10 comparisons of triple stores with rule inference/reasoning on data ingress and/or egress.
A Data Pipeline Is a Materialized View
Sometime when I am old(er) and (somehow?) have more time, I'd like to jot down a "Rosetta Stone" of which buzzwords map to the same concepts. So often we change our vocabulary every decade without changing what we're really talking about.
Things started out in a scholarly vein, but the rush of commerce hasn't allowed much time to think where we're going. — James Thornton, Considerations in computer design (1963)
Like a Linked Data thesaurus with typed, reified edges between nodes/concepts/class_instances?
Here's the WordNet RDF Linked Data for "jargon"; like the "Jargon File": http://wordnet-rdf.princeton.edu/lemma/jargon
A Semantic MediaWiki Thesaurus? https://en.wikipedia.org/wiki/Semantic_MediaWiki :
> Semantic MediaWiki (SMW) is an extension to MediaWiki that allows for annotating semantic data within wiki pages, thus turning a wiki that incorporates the extension into a semantic wiki. Data that has been encoded can be used in semantic searches, used for aggregation of pages, displayed in formats like maps, calendars and graphs, and exported to the outside world via formats like RDF and CSV.
Google Books NGram viewer has "word phrase" term occurrence data by year, from books: https://books.google.com/ngrams
There’s no such thing as “a startup within a big company”
When I was at PowerBI in Microsoft, all the execs hailed it as a startup within Microsoft. Come work here instead of Uber. I worked like a dog, sometimes till 2 am. My manager would routinely ask us to come in on weekends. I was naive; I thought we were growing the customer base, that this is what a startup looks like.
The ultimate realization was that in a startup you have equity, a decent amount in a good startup. At Microsoft it was a base salary and a set amount of stock. What we did moved very little of the top revenue metric. It made little difference whether I worked like a dog or slacked. The promos were very much a “buddy-buddy” system.
In the end I realized you can’t have startups in big companies (especially as an engineer: you don’t have the huge upside if the startup is successful; your upside is capped).
Startups work because you have skin in the game, when you build something people want, you get to reap rewards proportional to it. That correlation and feedback loop is very important.
At big companies you don’t have the same correlation. Some big-shot exec they hired reaps far, far more from the work you did.
Equity is what builds wealth.
>The ultimate realization was in a startup you have equity, a decent amount in a good startup. At Microsoft it was a base salary and set amount of stock.
I joined a startup in 1999. There were 3 founders and I was employee #2 after that. I received a ton of options (this was before RSUs became popular). We had a great product and a great team, but 18 months later ran out of money and unfortunately it was right after the dotcom implosion of early 2001 when funding had completely dried up.
My takeaway was that base salary is actually the most important component of TC. Cash bonus based on some metric that you control comes second. Equity comes third. If you work for a FAANG, maybe equity can move higher up (though it remains to be seen how long this will be true).
Outside of FAANG (and top executives at F500 sized public companies) very few people are getting rich off of the "equity" component of their TC. The vast majority of startups go bust before IPO or acquisition.
Time is the most valuable commodity you have, don't squander it for lottery tickets and empty promises.
>Time is the most valuable commodity you have, don't squander it for lottery tickets and empty promises.
That goes the other way around too, the only way to have massive amounts of free time without going FIRE is to win the lottery. So why not throw the dice once or twice when you're young and then settle into a stable corporate job if it doesn't pay out?
Modern Silicon Valley lets you do both. Why not throw the dice and still make around $200k cash?
As a [good] engineer you can have your cake and eat it too. Sure it won’t make you a billionaire but you can live a comfy life, maybe win the lottery, and retire at 50 if the lottery fails.
Living and working elsewhere with the wages of the region reduces expenses and opportunities; but the wealth of educational resources online [1][2] does make it feasible to even bootstrap a company on the side. Do you need to borrow money to scale quickly enough to pay expenses with sufficient cash flow for the foreseeable future?
Income sources: Passive income, Content, Equity that's potentially worth nothing, a backtested diversified portfolio (Golden Butterfly or All Weather Portfolio and why?) of sustainable investments, Business models [3]; Software implementations of solutions to businesses, organizations, and/or consumers' opportunities
Single-payer / Universal Healthcare is a looming family expense for many entrepreneurs; many of whom do get into entrepreneurship later in life.
Small businesses make up a significant portion of GDP. Small businesses have to accept risk.
There's still opportunity in the world.
[1] Startup School > Curriculum https://www.startupschool.org/curriculum
[2] https://www.ycombinator.com/library
[3] "Business models based on the compiled list at [HN]" https://gist.github.com/ndarville/4295324
From "Why companies lose their best innovators (2019)" https://news.ycombinator.com/item?id=23887903 :
> "Intrapreneurial." What does that even mean? The employee, within their specialized department, spends resources (time, money, equipment) on something that their superior managers have not allocated funding for because they want: (a) recognition; (b) job security; (c) to save resources such as time and money; (d) to work on something else instead of this wasteful process; (e) more money.
If you are going to start something on the side, carefully read your current employment agreement, particularly the part about IP assignment. Most Silicon Valley companies I’ve worked with, including FAANGs, claim ownership of everything you produce, inside or outside of employment, at home or in the office, using their equipment or yours. You don’t want to luck into a unicorn idea and have your former employer’s lawyers send you that letter...
Unusual data point (I'm not sure if South African practice is applicable elsewhere), but most bigger corps I've worked at have provided a mechanism to declare external interests, pending management approval, to work around these global clauses.
On paper any IP would by default belong to the corp, but after some discussion/negotiation with your line manager and their higher-ups, you could do the paperwork to declare it.
Are these actually enforceable?
It probably doesn’t matter in practice. They will have more lawyers and more money to burn on legal process than you. Who will go bankrupt first fighting a legal battle between you and, say, Apple?
In practice, I don't think I've ever seen a SV example of that kind of litigation from garage development of IP. Maybe I've just missed them?
Theft of IP from one company to a new company on the other hand there are multiple public examples of.
The only time they'd actually sue is if you made something worth money. In that case, there's a good chance you can get investors to pay for the lawsuit.
It does matter in practice in California.
If it's "related" to your employer's business, yes - unless you agree a variation of the contract.
Ask HN: Keyrings: per-package/repo; commit, merge, and release keyrings?
Are there existing specs for specifying per-package release keyrings and per-repo commit and merge keyrings?
Keyring: a collection of keys imported into a datastore with review.
DevOpsSec; Software Supply Chain Security
Packages {X, Y, Z} in Indexes {A, B, C} are artifacts output by Builds (on workstations or servers with security policies). A Build runs a build script (often deliberately written in something less than a complete programming language, typically YAML, in order to minimize build complexity), which should be drawn from a stable commit hash in a Repository (which may be a copy of zero or more branches of a Repository hosted centrally alongside Issues, Build logs, and Build artifact Signing Keys).
Maximally, are there potentially more keyrings (or authorization mappings between key and permission) than (1) commit; (2) merge; and (3) release?
Source Projects: Commit, Merge, [Run Build, Login to post-build env], Release (and Sign) package
Downstream Distros: Commit, Merge, [Run Build, Login to post-build env], Release (and Sign) package for the {testing, stable, security} (Signed) Index catalogs
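A hypothetical sketch of one such mapping between keyrings and permitted operations (all key names and operations invented for illustration; a real implementation would verify detached signatures against e.g. GPG keyrings rather than match fingerprints):

```python
# Per-repo/per-package keyrings: each operation has its own set of
# authorized key fingerprints, so a commit key cannot sign a release.
KEYRINGS = {
    "commit":  {"keyA", "keyB", "keyC"},
    "merge":   {"keyB"},
    "release": {"keyC"},
}

def authorized(operation, key_fingerprint):
    """Return True if the key is on the keyring for this operation."""
    return key_fingerprint in KEYRINGS.get(operation, set())

assert authorized("commit", "keyA")       # dev key may commit
assert not authorized("release", "keyA")  # ...but not sign a release
```

Separating the keyrings this way is what lets a downstream distro accept upstream release signatures while keeping its own, distinct signing keys for its {testing, stable, security} index catalogs.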
Threat Actors Now Target Docker via Container Escape Features
It may be quipped that Docker or Kubernetes is remote code execution (RCE) as a service!
The vulnerability pointed out here is server operators who have exposed Docker's API to the public internet, allowing anyone to run a container. The use of a privileged container is just icing on the cake, and probably not necessary for a cryptominer.
A more subtle and interesting attack vector is genuine container images that become compromised, much like any other package manager, particularly ones that search for the Docker API on the host's internal IP address, hoping only public network access is firewalled.
Docker engine docs > "Protect the Docker daemon socket" https://docs.docker.com/engine/security/protect-access/
dev-sec/cis-docker-benchmark /controls: https://github.com/dev-sec/cis-docker-benchmark/tree/master/...
Eh. This advice is less practical than it’s made to seem. It “works”, but it’s not really usable for anything other than connecting two privileged apps over a hostile network.
* Docker doesn’t support CRLs so any compromised cert means reissuing everyone’s cert.
* Docker’s permissions are all or nothing without a plug-in. And if you’re going that route the plug-in probably has better authentication.
* Docker’s check is just “is the cert signed by the CA” so you have to do one CA per machine / group homogeneous machines.
* You either get access to the socket or not with no concept of users so you get zero auditing.
* Using SSH as transport helps, but then you have to also lock down SSH, which isn’t impossible but is more work and surface area to cover than feels necessary. Also, since your access is still via the Unix socket, it’s all-or-nothing permissions again.
django-ca is one way to manage a PKI including ACMEv2, OCSP, and a CRL (Certificate Revocation) list: https://github.com/mathiasertl/django-ca
"How can I verify client certificates against a CRL in Golang?" mentions a bit about crypto/tls and one position on CRLs: https://stackoverflow.com/questions/37058322/how-can-i-verif...
CT (Certificate Transparency) is another approach to validating certs wherein x.509 cert logs are written to a consistent, available blockchain (or in e.g. google/trillian, a centralized db where one party has root and backup responsibilities also with Merkle hashes for verifying data integrity). https://certificate.transparency.dev/ https://github.com/google/trillian
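A toy sketch of the Merkle-root idea behind CT logs and Trillian (this deliberately omits the RFC 6962 leaf/node hash prefixes and audit-path machinery, so it is illustrative only):

```python
import hashlib

def merkle_root(leaves):
    """Toy Merkle root over a list of byte strings."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

certs = [b"cert-a", b"cert-b", b"cert-c"]
root = merkle_root(certs)

# Tampering with any logged entry changes the root, which is what lets
# auditors verify a log's integrity from a single 32-byte value:
assert merkle_root([b"cert-a", b"cert-b", b"cert-x"]) != root
```

The root hash is the piece that gets gossiped or cross-signed; clients can then demand short inclusion proofs for individual certificates against it.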
Does docker ever make the docker socket available over the network, over an un-firewalled port by default? Docker Swarm is one config where the docker socket is configured to be available over TLS.
Docker Swarm docs > "Manage swarm security with public key infrastructure (PKI)" https://docs.docker.com/engine/swarm/how-swarm-mode-works/pk... :
> Run `docker swarm ca --rotate` to generate a new CA certificate and key. If you prefer, you can pass the --ca-cert and --external-ca flags to specify the root certificate and to use a root CA external to the swarm. Alternately, you can pass the --ca-cert and --ca-key flags to specify the exact certificate and key you would like the swarm to use.
Docker ("moby") and podman v3 socket security could be improved:
> From "ENH,SEC: Create additional sockets with limited permissions" https://github.com/moby/moby/issues/38879 ::
> > An example use case: securing the Traefik docker driver:
> > - "Docker integration: Exposing Docker socket to Traefik container is a serious security risk" https://github.com/traefik/traefik/issues/4174#issuecomment-...
> > > It seems it only require (read) operations : ServerVersion, ContainerList, ContainerInspect, ServiceList, NetworkList, TaskList & Events.
> > - https://github.com/liquidat/ansible-role-traefik
> > > This role does exactly that: it launches two containers, a traefik one and another to securely provide limited access to the docker socket. It also provides the necessary configuration.
> > - ["What could docker do to make it easier to do this correctly?"] https://github.com/Tecnativa/docker-socket-proxy/issues/13
> > - [docker-socket-proxy] Creates a HAproxy container that proxies limited access to the [docker] socket
This looks like the attacker is just using a publicly exposed Docker API honeypot to run a new Docker container with `privileged: true`. I don't see why that's particularly interesting given that they could just bind-mount the host / filesystem with `pid: host` and do pretty much whatever they want?
It would be much more interesting to see a remote-code-execution attack against some vulnerable service deploying a container escape payload to escalate privileges from the Docker container to the host.
Exposed on the WAN is obviously bad, but how do you keep your own containers from calling those APIs? Yes, you don't mount the docker socket in the container, but what about the orchestrator APIs?
With kubernetes, you enable RBAC, and only allow each pod the absolute minimum of access required.
In my own setup, I have only five pods with any privileges: kubernetes-dashboard, nginx-ingress, prometheus, cert-manager, and the gitlab-runner. Of those, only kubernetes-dashboard can modify existing pods in all namespaces, cert-manager can only write secrets of its own custom type, and gitlab-runner can spawn non-privileged pods in its own namespace (but cannot mount anything from the host, nor any resources from any other namespace).
And when building docker images in the CI, I use google’s kaniko to build docker images from within docker without any privileges (it unpacks docker images for building and runs them inside the existing container, basically just chroot).
All these APIs are marked very clearly and obviously as DO NOT MAKE AVAILABLE PUBLICLY. If you still make them public, well it’s pretty much your own fault.
Does RBAC not limit these by default? Does cert-manger not already give itself restricted permission on install? Do I need to fix up my cluster right now? If so, do you have any example RBAC yamls? :D
> And when building docker images in the CI, I use google’s kaniko to build docker images from within docker without any privileges (it unpacks docker images for building and runs them inside the existing container, basically just chroot).
You can also use standalone buildkit which comes with the added benefit of being able to use the same builder locally natively.
No, RBAC doesn't automatically do this. And many publicly available Helm charts are missing these basic security configurations. You should use Gatekeeper or similar to enforce these settings throughout your cluster.
Gatekeeper docs: https://open-policy-agent.github.io/gatekeeper/website/docs/
Gatekeeper src: https://github.com/open-policy-agent/gatekeeper
awesome-container-security: https://github.com/kai5263499/awesome-container-security
"container-security" GitHub label: https://github.com/topics/container-security
In somewhat related news, rootless Docker support is no longer an experimental feature. Has anyone used it and liked/disliked it?
I have, and it's great; you forget about it pretty quickly. I've actually gone (rootful) docker -> (rootless) podman -> (rootless) docker, and I've written about it:
- https://vadosware.io/post/rootless-containers-in-2020-on-arc...
- https://vadosware.io/post/bits-and-bobs-with-podman/
- https://vadosware.io/post/back-to-docker-after-issues-with-p...
I still somewhat miss podman's daemonless design, but some of its other issues currently rule it out for me (now that docker has caught up) -- for example the lack of linking support.
Podman works great outside of those issues, though; you just have to be careful when something expects docker semantics (usually the daemon or a unix socket on disk somewhere). You'll also find that some cutting-edge tools like dive[0] have some support for podman[1], but that's not necessarily the usual case.
podman v3 has a docker-compose compatible socket. From https://news.ycombinator.com/item?id=26107022 :
> "Using Podman and Docker Compose" https://podman.io/blogs/2021/01/11/podman-compose.html
Ask HN: What security is in place for bank-to-bank EFT?
When I set up an EFT on a bank's website, all I need to enter is the other bank's routing number and account number, which can be readily found on a paper check. Then you can transfer money from one bank to another... What security and authentication is in place to prevent fraud? In case of fraud, is the victim guaranteed to get the money back?
AFAIU, no existing banking transaction systems require the receiver to confirm in order to receive a funds transfer.
You can create a "multisig" DLT smart contract that requires multiple parties' signatures before the [optionally escrowed] funds are actually transferred.
EFT: Electronic Funds Transfer: https://en.wikipedia.org/wiki/Electronic_funds_transfer
As far as permissions to write to the account ledger: Check signatures are scanned. Cryptoasset keys are very long, high-entropy "passwords". US debit cards are chip+pin; it's not enough to just copy down the card number (and CVV code).
Though credit cards typically are covered by fraud protection, debit card transactions typically aren't: hopefully something will be recovered, but AFAIU debit txs might as well be as irreversible as cryptoasset transactions.
TPL: Transaction Permission Layer is one proposed system for permissions in blockchain; so that e.g. {proof of residence, receiver confirmation, accredited investor status, etc.} can be necessary for a transaction to go through.
ILP: Interledger Protocol > RFC 32 > "Peering, Clearing and Settling" describes how ~EFT with Interledger works: https://interledger.org/rfcs/0032-peering-clearing-settlemen...
Podman: A Daemonless Container Engine
Is the title of this page out of date?
AFAIU, Podman v3 has a docker-compose compatible socket and there's a daemon; so "Daemonless Container Engine" is no longer accurate.
"Using Podman and Docker Compose" https://podman.io/blogs/2021/01/11/podman-compose.html
Podman does not run in a daemon-mode by default. There's a dbus-activated service unit that activates the daemon when you try to use Docker Compose with it.
It is almost 100% compatible with docker. I only had to do minimal changes in my Dockerfiles to use it.
But not Buildkit and docker-compose.
Podman v3 is compatible with docker-compose (but not yet swarm mode, FWIU), has a socket and a daemon that services it.
Buildah (`podman buildx`, `buildah bud --arch arm64`) just gained multiarch build support; so also building arm64 containers from the same Dockerfile is easy now. https://github.com/containers/buildah/issues/1590
IDK what BuildKit features should be added to Buildah, too?
> IDK what BuildKit features should be added to Buildah, too?
Cache mounts are what we rely on for incremental rebuilds.
Cambridge Bitcoin Electricity Consumption Index
Ok, I have to admit that I am one of those people who knew about BTC very, very early but never bought any. Now, of course I should have mined at least 100 back in the days, but anyways.
The reason I have never mined nor bought into it is that I still, to this day struggle to come up with a real reason to do so.
Am I getting too old? Do I not see the "huge potential" what it may become? Don't I want to get freaking rich? I simply can't answer what BTCs and other crypto coins are actually good for. Can someone help me out here?
I'm in your boat. It was first explained to me in a hacker space at college, back when it was first growing and priced at a few cents per 100+ Bitcoin. All I could see was a waste of energy and resources (time, energy, hardware).
It will always be a drain on society, and it scares me that it hasn't failed because we see it draining our GPU industry to the point that market prices have gone insane and availability is practically non-existent / all left up to chance.
It doesn't take rocket science to see the trend: as long as it's tied to needing hardware to compute arbitrary calculations to work, it's going to waste computer resources and energy. Both of which are finite resources... Yes, finite: solar cells aren't free to make, and they don't last forever after you make them. Same with any other green technology. Maybe if fusion were our only power source it would be less of a concern, but we still haven't cracked Q=1, so... Yes, it is a waste of finite resources.
It’s not draining the GPU industry. No one buys GPUs for mining Bitcoin anymore, they use ASICs.
Even as just ASICs there's something of a drain, Bitcoin ASICs consume fab production which means less remaining production capacity for GPUs.
This is an equilibrium that should eventually fix itself, but while Bitcoin and crypto mining is growing they are making a material dent in the global semiconductor economy.
Cryptoasset mining creates demand for custom chip fab (how different are mining rigs from SSL/TLS accelerator expansion cards?), which is definitely not zero-sum: more revenue = more opportunities.
https://en.wikipedia.org/wiki/Price_elasticity_of_supply
With insufficient demand, a market does not develop into a sustainable market. "Rule of three (economics)" says that markets are stable with 3 major competitors and many smaller competitors; nonlinearity and game theory.
https://en.wikipedia.org/wiki/Rule_of_three_(economics)
We've always had custom chip fab, but the prices used to be much higher. Proof of Work (and Proof of Research) incentivize microchip and software energy efficiency; whereas we had observed and been most concerned with doublings in transistor density.
FWIU, it's now more sustainable and profitable to mine rare earth elements from recycled electronics than actually digging real value out of the earth?
Compared to creating real value by digging for gold, how do we value financial services?
Bitcoin's fundamental value is negative given its environmental impact
If the price of energy is calculated with corresponding carbon tax included, shouldn't Bitcoin be neutral?
> If the price of energy is calculated with corresponding carbon tax included, shouldn't Bitcoin be neutral?
Yes, but there are vastly more energy-efficient substitute DLTs with near-zero switching costs. Litecoin and scrypt (instead of SHA-256), for example.
Apply a USD/kWhr threshold across all industries.
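As a back-of-the-envelope sketch of the carbon-tax-inclusive price question (all numbers below are assumptions for illustration, not measurements): a carbon tax folds the external cost into the kWh price, so a miner's effective energy cost scales with the grid's emission intensity.

```python
# Hypothetical numbers for illustration only.
def effective_price_per_kwh(base_price, kg_co2_per_kwh, tax_per_kg_co2):
    """Energy price with the carbon externality priced in."""
    return base_price + kg_co2_per_kwh * tax_per_kg_co2

# Assumed: $0.05/kWh wholesale; 0.9 kg CO2/kWh coal-heavy grid vs. ~0 for
# renewables; a $0.05/kg CO2 tax.
coal_heavy = effective_price_per_kwh(0.05, 0.9, 0.05)  # 0.095
renewable = effective_price_per_kwh(0.05, 0.0, 0.05)   # 0.05
```

Under such a tax, the same USD/kWh threshold applied across all industries would price carbon-intensive mining out first, without singling out any one industry.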
Is this change (and focus on the external costs of energy production) more the result of penalties or incentives?
Pre-mined coins are vastly more energy efficient (with tx costs <1¢ and similarly minimal kWhr/tx costs), but the market doesn't trust undefined escrow terms that are fair game in commodities and retail markets.
We have trouble otherwise storing energy from noon to commute and dinner time; whereas a commodity like grain may keep for quite a while.
Bitcoin serves as a demand subsidy when heavily-subsidized energy prices crash due to oversupply (which we should recognize as temporary: we are moving to electric vehicles, and we need to reach the production volumes at which renewables are more cost-effective than the alternatives, as they now are).
In the US, we have neither carbon taxes nor intraday prices. The EU has carbon taxes and electrical energy markets.
>In the US, we have neither carbon taxes nor intraday prices.
Like, no intraday electricity market?
Ask HN: What are some books where the reader learns by building projects?
2021 Edition. This is a continuation of the previous two threads which can be found here:
https://news.ycombinator.com/item?id=22299180
https://news.ycombinator.com/item?id=13660086
Other resources:
https://github.com/danistefanovic/build-your-own-x
https://github.com/AlgoryL/Projects-from-Scratch
https://github.com/tuvtran/project-based-learning
"Agile Web Development with Rails [6]" (2020) teaches TDD and agile in conjunction with a DRY, CoC, RAD web application framework: https://g.co/kgs/GNqnWV
Is it wrong to demand features in open-source projects?
Yes, it's wrong to demand something for nothing: that's entitlement, not business (which involves some sort of equitable exchange of goods and/or services).
Better questions: How do I file a BUG report issue, create a feature ENHancement request issue, send a pull request with a typo fix, write DOCs and send a PR, write test cases for a bug report?
How can I sponsor development of a feature?
A project may define a `.github/FUNDING.yml`, which GitHub will display on the 'Sponsor' tab of the GitHub project. A project may also or instead include funding information in their /README.md.
How do I ask on IRC or a mailing list or in issues how much it would cost and how long it would take to develop a feature, if somebody had some international stablecoin and a limited-term agreement?
The answer may be something like, "thanks for the detailed use case or user story, that's on our roadmap, there are major issues blocking similar features and that's where the expense would be."
CompilerGym: A toolkit for reinforcement learning for compiler optimization
What a great idea. Would per-instruction/opcode costs make this easier to optimize?
Are MuZero or the OpenAI baselines useful for RL compiler optimization?
Turning desalination waste into a useful resource
Tangentially related: there are millions of abandoned oil wells emitting greenhouse gases like methane across the US. Vice had a great video about this last month, but it didn't get much attention here on HN. [1] [2]
Evcxr: A Rust REPL and Jupyter Kernel
I have been using evcxr for the last 2-3 weeks and I'm loving it. There are two things that bother me: (a) fancy UI components seem to only work with Python and C++ kernel (not theoretically, but with available packages), (b) while you can redefine values and functions when you're iterating, you cannot do so with structs (yes, big structs go to a local package that I depend on, but sometimes I have ad-hoc utility structs).
Here's the xeus-cling (Jupyter C++ Kernel) source: https://github.com/jupyter-xeus/xeus-cling/tree/master/src
Do any of the other non-Python Jupyter kernels have examples of working fancy UI components? https://github.com/jupyter/jupyter/wiki/Jupyter-kernels
Jupyter kernels implement the Jupyter kernel message spec. Introspection, Completion: https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
Debugging (w/ DAP: Debug Adapter Protocol) https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
A `display_data` Jupyter kernel message includes a `data` key with a dict value: "The data dict contains key/value pairs, where the keys are MIME types and the values are the raw data of the representation in that format." https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
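Per the messaging spec quoted above, a kernel in any language (Rust, C++, Python) displays rich output by emitting a MIME bundle; a minimal `display_data` content payload looks like the following (the payload values are illustrative):

```python
# Minimal `display_data` message content: the keys of `data` are MIME types,
# the values are the raw representation in that format.
display_data_content = {
    "data": {
        "text/plain": "<Figure>",          # fallback for text frontends
        "text/html": "<b>Hello</b>",       # rendered by notebook frontends
        "image/png": "iVBORw0KGgo...",     # base64-encoded bytes (truncated)
    },
    "metadata": {},
}

# A frontend picks the richest representation it supports; e.g. JupyterLab
# renders text/html or image/png, jupyter-console falls back to text/plain.
mime_types = sorted(display_data_content["data"])
```

So "fancy UI components" in a non-Python kernel come down to whether packages exist that produce these bundles (and the comm messages that ipywidgets layers on top), not to anything Python-specific in the protocol.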
This looks like it does something with MIME bundles: https://github.com/jupyter-xeus/xeus-cling/blob/00b1fa69d17b...
ipython.display: https://github.com/ipython/ipython/blob/master/IPython/displ...
ipython.core.display: https://github.com/ipython/ipython/blob/master/IPython/core/...
ipython.lib.display: https://github.com/ipython/ipython/blob/master/IPython/lib/d...
You can also run Jupyter kernels in a shell with jupyter/jupyter_console:
pip install jupyter-console jupyter-client
jupyter kernelspec list
jupyter console --kernel python3
Ask HN: What is the cost to launch a SaaS business MVP
I interviewed several entrepreneurs and noticed that many spent a great deal of money on developing their product before launch. What is your experience?
When you're not yet paying yourself, your costs are your living costs and opportunity costs (in addition to the given fixed and variable dev and prod deployment cloud costs).
Early feedback from actual customers on an MVP can save lots of development time. GitLab Service Desk is one way to handle emails as issues from users who don't have GitLab accounts.
A beta invite program / mailing list signup page costs very little to set up; you can start building your funnel while you're developing the product.
Cryptocurrency crime is way ahead of regulators and law enforcement
I used to be surprised when I would read articles about this mythical legislative process which can and will protect lowly plebians like me. I'm not sure why so many people are under the impression legislation will be able to keep up with perversely motivated individuals or groups. Who are these legislators with motives so pure that they will never be bribed to only look out for my interests? Let's never forget there is only one person in jail for the 2008 crash this article is trying to convince us was the result of too little regulation. There was never enough incentive to protect us from the people who created that market crash.
I think the problem with 2008 was that no one was doing anything illegal, just a lot of poor judgement and lack of regulations.
Bitcoin was created by people who were so peeved at the regulator's interventions in the '08 crisis that they decided that bypassing the entire financial system was the best option. Forever immortalising "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" just as a reminder of why they were doing what they did [0].
We don't need better regulation. We need, when financial firms muck up on a global scale, to replace the people who run those firms with different people. There is an easy way to do that, let them go broke.
The biggest problem in the room is that the regulators have bought in to the idea that some companies are "too big to fail". That is a stupid idea, fundamentally unfair, completely throwing out the best part of capitalism which is that recognised idiots aren't allowed to be in charge. That problem will certainly not be solved by giving the regulators more power.
Bitcoin was created in context to "Transparency and Accountability": a campaign motto not coincidentally found in the title of the "Federal Funding Accountability and Transparency Act of 2006".
> The Federal Funding Accountability and Transparency Act of 2006 (S. 2590)[2] is an Act of Congress that requires the full disclosure to the public of all entities or organizations receiving federal funds beginning in fiscal year (FY) 2007. The website USAspending.gov opened in December 2007 as a result of the act
Sen. Obama's office is the origin of this bill; which was fronted by Sens Coburn and McCain, who had the clout.
https://usaspending.gov/ creates a mandatory database with budgetary line item metadata. Where money actually goes is something that is far more transparent and accountable with bitcoin and other public ledgers than any existing ledger covered by bank secrecy laws.
For context, in 2008-09, global financial systems were failing as a result of the American economy:
- the housing bubble burst;
- an HFT "flash crash" that we didn't have CAT or big-data tools to determine the cause of;
- DDoS attacks and cybersecurity losses increasing YoY;
- credit default swaps had been rated as AAA securities (they sold bundled bad debt like it was worth something, and then wrote down losses);
- Enron energy speculation amidst rolling blackouts that left hospitals in the dark, on gas generators;
- government investment in renewables had been paltry since the Carter administration put solar panels on the roof of the White House before the whole oil price shock;
- oil commodity speculation had driven the price of oil to like 2-4x the 2000 price (with resultant price effects on most CPI inflation/PPP basket goods);
- but electricity consumption was down in 2008, and renewables hadn't reached the production volumes necessary for the competitive price point renewables now present: cheaper than nonrenewables.
Who would have thought that the speculative price would continue to exceed the production cost? Incentives or penalties?
Externalities per dollar returned per kWh is one way to assess the total costs of electricity production methods.
"Buy Gold" was the refrain of the day: TV commercials, signs out in front of piano stores (a somewhat-arbitrary commodity, sales of which are observed to be a leading indicator of economic health), signs on the road. And the message was "take your money out the market and put it in gold" which drives up the prices for chips and boards and medical equipment that rely upon that commodity as a material input. Gold is necessary for tropical spec components in high-humidity environments: gold hinges are prized, for example.
But, "look, there's water flowing from the chocolate fountain; so you can go ahead and go" and "you know you want to put it back in there, in that market" we're the appropriate messages given our revenue at the time.
For further technology scene context in 2007-2009,
"Grid computing" links to a number of distributed computing projects: https://en.wikipedia.org/wiki/Grid_computing#History
IIRC, there was a production metric-priced grid system developed around Seattle/Vancouver called "Gold" (?) that was built on Xen and is likely a precursor to metric-priced Cloud services like EC2 and S3 (which now simplify calculations for how much a 51% attack against a Proof of Work txpool with adaptive tx fees costs with n good participants in the game) which incentivize efficiency by penalizing expensive operations.
Code bloat was already a thing: how is everything getting slower when Moore's Law predicts the growth rate in transistor density? Are there sufficient incentives for code efficiency when there seem to be surplus compute resources just idly depreciating?
MySQL primary/secondary replication was considered a viable distributed database system, but securing replication depends upon cert exchange and (optionally), PKI, DNS, and IP tunnels of some sort. And then who has root, write to the journal and tables and indexes on the filesystem, UPDATE, and DELETE access in an inter-organizational distributed systems architecture with XML, Web Services, our very own ESB to scale separately from the database replication and off-site backups that nobody ever checks against the online data, and fragmented and varyingly-implemented industry standards that hopefully specify at least a sufficient key for the record that's unique across ledgers/systems/databases.
BitTorrent DHT magnet: links were extant.
Linden Dollars in Linden Lab's Second Life (there's a price floor on land, which is necessary to sell digital assets/goods/products/services) and accumulated avatar value in e.g. EverQuest and WarCraft (for which there were secondhand markets).
ACH was ACH: GPG-signed files over SFTP on the honor of the audited bank to not allow transfers that deposit money that doesn't exist.
There was no common struct for banking APIs (as apparently only e.g. Plaid, Quicken, and Mint solve for): ledger transactions have a fixed width text field that may contain multiple fields concatenated into one string, and there's no "payee URI" column in the QIF or CSV dumps of an account ledger.
To request more than e.g. the past 90 days of one's own checking account ledger, one was expected to parse tables out of per-month PDFs with e.g. PDFMiner at $20 apiece, and then think up one's own natural key in order to merge and look up records, because (2008-01-01,3.99) and (2008-01-01,3.99,storename) are indistinct as a natural key (and when hashed). If you loan your bank money (for them to now freely invest on the other side, since GLBA in 1999), wouldn't you think that the least they could do is give you `SELECT * WHERE account_id=?` as a free CSV without any datetime limitation regarding what's offline and what's online?
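The natural-key problem above is easy to demonstrate: two same-day, same-amount transactions hash identically once the distinguishing field is dropped. (The rows and field names below are made up for illustration.)

```python
import hashlib

# Two distinct transactions that a (date, amount) key cannot tell apart.
rows = [
    ("2008-01-01", "3.99", "storename"),
    ("2008-01-01", "3.99", "coffeeshop"),
]

def natural_key(row, fields):
    """Hash of the chosen fields; a candidate deduplication key."""
    joined = "|".join(row[i] for i in fields)
    return hashlib.sha256(joined.encode()).hexdigest()

# Keyed on (date, amount) only, the distinct transactions collide:
collides = natural_key(rows[0], (0, 1)) == natural_key(rows[1], (0, 1))

# Adding the payee field disambiguates these two (though two purchases at
# the same store, same day, same amount would still collide):
distinct = natural_key(rows[0], (0, 1, 2)) != natural_key(rows[1], (0, 1, 2))
```

Hence the wish for a "payee URI" column, or any bank-provided surrogate key, in ledger exports.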
"Audit the Fed", "Audit DoD" were being chanted by economically-aware citizens amidst severe correction and what was then the most severe recession since the Great Depression: the "Great Recession" it was called, and payouts to essential cronies (who hadn't saved wheat for the famine) were essential.
Overdraft was an error charged to the customer, who didn't build an inconsistent system (CAP theorem) that allows spending money that doesn't exist (at interest charged to the consumer/taxpayer).
"Catch Me If You Can" (2002) described the controls for bank fraud at the time. Why are fees so high?
"Office Space" (1999) described penny-shabing / salami-slicing attack: "fractions of a penny".
"Beverly Hills Ninja" (1997) detailed the story of the Great White Ninja and Tanley! (fistpalm)
"Swordfish" (1999) described a domestic disaster and bank transfers confirmable in seconds.
Ask HN: Why aren't micropayments a thing?
Amazon AWS and related services can charge you a rate per email, or per unit time of computation, so why can't news sites just charge you $0.01 to read an article, or even half that?
For reference, there's Web Monetization [1] which tries to solve exactly that.
As others have noted it all boils down to user agent support. Otherwise most publishers probably won't consider giving up ad or subscription financed models.
Also, forcing users into subscriptions allows for better demographics data/statistics.
https://webmonetization.org/ lists Coil (flat $5/mo) as the first Web Monetization provider: https://coil.com/
Web Monetization builds upon ILP (Interledger Protocol), which is designed to work with any type of ledger; though it's probably not possible for any traditional ledger to beat the <1¢ transaction fee that only pre-mined coins have been able to achieve.
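The fee arithmetic above is the crux of why micropayments stall (fee levels below are assumptions for illustration; real fees vary by network and load): at a card-network-style fixed fee, a 1-cent article loses money on every view, while a sub-cent ledger fee leaves a thin margin.

```python
# Assumed fee levels for illustration only.
def publisher_margin(article_price, tx_fee):
    """Publisher's net take on one paid page view."""
    return article_price - tx_fee

card_net = publisher_margin(0.01, 0.25)     # ~$0.25 fixed card fee: a loss
ledger_net = publisher_margin(0.01, 0.001)  # sub-cent ledger fee: profitable
```

This is also why flat-rate aggregators like Coil exist: one $5/mo charge amortizes the per-transaction overhead across thousands of micro-events.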
I really like their approach and I paid for Flattr for quite a while when it started. But something big needs to push them to the critical adoption rate. On a number of very techy blog posts, I got... 0 from WebMonetization. And that's on best case audience (coming from HN and tech Reddit).
Elon Musk announces $100M carbon capture prize
https://www.xprize.org/prizes/carbon :
> The $20M NRG COSIA Carbon XPRIZE develops breakthrough technologies to convert CO₂ emissions into usable products.
CCS: https://en.wikipedia.org/wiki/Carbon_capture_and_storage
CCU: https://en.wikipedia.org/wiki/Carbon_capture_and_utilization
Sequestration: https://en.wikipedia.org/wiki/Carbon_sequestration
Tim Berners-Lee wants to put people in control of their personal data
Interesting in a technological sense, but what problem it solves isn't obvious to me. It lets me granularly authorize first-party access to the data I have in my pod, but there can't be any technical guarantees with regards to illegitimate sharing or otherwise copying (many might at least cache, for instance) – nor about what is collected and shared outside this system.
I keep seeing data-hubs and identity-providers touting themselves as solutions to the web's privacy issues, but I don't see how they actually solve anything.
It seems like an attempted technical solution to a social problem to me.
The real problem with data based services (ads, Google search, etc) is really that a bunch of data is collected opaquely, unethically, and in some cases illegally. The whole system including data brokers and real time bidding is out of control.
In my (personal) view, it's the technical part of a solution that definitely also needs to have a social/legislative component. It cannot prevent parties from illegitimate sharing of my data, but it does give them the option to hand over control to me. There are lots of companies that currently hold data on us but for whom that data is not their primary competency, and they only need a small nudge (like GDPR) to make having the customer responsible for that data an attractive proposition.
You might be right, but I think it's disingenuous to market it as though this "solves privacy". Worst case, people are lulled into a false sense of security.
Data-storage + authorization doesn't solve any (new) technical privacy-issues; this is "data protection" rather than "data privacy" in my book.
While I recognize the value of W3C LDP and SOLID, I also fail to see anything in SOLID that prevents B from sharing A's now pod-siloed information.
Does it prevent screenshots and OCR?
So it's in standard record structs and that makes it harder for the bad guys?
Who moderates mean memes with my face on them?
It is my hope that future Linked Data spec tutorials model something benign like shapes or cells instead of people: so that we can still see the value.
Laws still exist against things like perjury, even though the existence of the law is not a technical means in itself able to prevent perjury. Note that one of the comments upthread specifically mentioned legislation. The current notion that many people in the tech world have, which roughly states that what determines whether something is kosher is whether it's technically possible to accomplish, is something that needs to change, instead of things just staying a permanent Wild West forever.
There's also an old phrase that putting locks on your doors doesn't actually stop a determined attacker, but that it's okay because they're not meant to—that they're meant to "keep honest people honest". It's a principle that applies here.
No, there are few to no actual privacy improvements over centralized systems.
Perhaps even functional regression: what, are you going to run a hash blocklist across all nodes? Like Spamhaus? Is there logging or user accounting? Is anything chain-of-custody admissible, or are we actually talking about privacy and liberty here?
Is everything just marked "not for unlimited distribution"? And we depend upon there not being bad actors?
Real costs are very different with just friendly early adopters.
Cryptographically signing posts (with LD-Signatures) may help with integrity, but that can be done with centralized systems and does nothing to help with confidentiality.
What about availability? Is it a trivially-DOS'able system?
Governments spurred the rise of solar power
I can't read the article, but I think evidence like this is important when it comes to the whole idea of government subsidises impacting on free markets. The free market has become a bit of a religion for some (you can see in the comments here), and there's this staunch divide between those that believe the free market will always come up with a humanitarian solution, to those that believe the government need to push things for it to happen.
My FIL was head of a large coal plant in the UK, and is very critical of renewable energies, their cost, the impact on the fossil fuel industry etc... It simply isn't possible to have a conversation with him about global warming - he will simultaneously argue it isn't happening and it's natural and not caused by us. He might be an extreme example, but there is general scepticism of government subsidies here.
I'd love to see a study where researchers take things that are seen as good/important/essential to modern life and measure the amount of public/government sponsorship that helped bring it about.
I always find the free market vs government arguments a false dichotomy. Even the freest markets require a whole host of government services: regulation, standards setting, contract enforcement, antifraud measures, and infrastructure provision, to name but a few. People only cry free market when they don't like a particular government activity.
That coal plant relies on infrastructure to get its coal. It needs fair utility prices to deliver its power to customers. It needs reliable (abuse free) markets to sell the power on. It needs voltage and frequency/time standards to access the network. It needs contract enforcement to actually get paid by its customers. It needs insurance and educated workers and a CEO who won't embezzle.
All provided by government, all welcomed by industry. But suddenly government has no place in free markets when it comes to CO2.
Should we prefer penalties or incentives in order to use predictable markets for the change we need?
That depends on whether you want power to be cheaper or more expensive. Cheaper is nice, but you'll pay for it in taxes etc. Expensive is painful at the point of consumption but encourages efficiency. It's a strategic and political decision and depends on a nation's needs and objectives...
> I'd love to see a study where researchers take things that are seen as good/important/essential to modern life and measure the amount of public/government sponsorship that helped bring it about.
Essential technology investment of US tax dollars?
NASA spinoff technologies: https://en.wikipedia.org/wiki/NASA_spinoff_technologies
NSF, DARPA, IARPA, In-Q-Tel, ARPA-e (2009)
List of emerging technologies: https://en.wikipedia.org/wiki/List_of_emerging_technologies
Termux no longer updated on Google Play
The GitHub discussion is significantly more informative and carries a lot of thinking behind the changes: https://github.com/termux/termux-app/issues/1072
IMO a better link than a short paragraph on Wiki.
Note that there are 297 hidden items in that issue so you have to click "Load more..." ceil(297/60) times to read all of the comments about how APK packaging is soon necessary for latest Android devices so the termux package manager can't just dump executable binaries wherever.
FWIU:
- Android Q+ disallows exec() on anything in $HOME, which is where termux installed binaries that may have been writeable by the executing user.
- Binaries installed from APKs can be exec()'d, so termux must keep APK repacks rebuilt and uploaded to a play store.
- Termux shouldn't be installed from Google Play anymore: you should install termux from the F-Droid APK package repos, and it will install APKs instead of what it was doing.
- Compiling to WASM with e.g. emscripten or WASI was one considered alternative. "Emscripten & WASI & POSIX" https://github.com/emscripten-core/emscripten/issues/9479
What about development on-the-device?
- It seems C compiled with clang on the device wouldn't be executable? (If it was, that would be a way around the restriction: distribute packages as source, like the good old days)
> offer users the option of generating an apk wrapping their native code in a usable way. https://github.com/termux/termux-app/issues/1072#issuecommen...
This seems a promising solution: compile from source, create an apk, install - your custom distribution! For popular collections of packages, a pre-built apk.
- Java might be explicitly blocked, being a system language for Android, even though its bytecode is interpreted and not exec()'d.
- Other interpreted languages should be OK e.g. python
> > offer users the option of generating an apk wrapping their native code in a usable way.
> This seems a promising solution: compile from source, create an apk, install - your custom distribution! For popular collections of packages, a pre-built apk.
FPM could probably generate APKs in addition to the source archive and package types that it already supports.
The conda-forge package CI flow is excellent. There's a bot that sends a Pull Request to update the version number in the conda package feedstock meta.yaml when it detects that e.g. there's a new version of the source package on e.g. PyPI. When a PR is merged, conda-forge builds on Win/Mac/Lin and publishes the package to the conda-forge package channel (`conda install -y -c conda-forge jupyterlab pandas`)
The Fedora GitOps package workflow is great too, but bugzilla isn't Markdown by default.
Keeping those APKs updated and rebuilt is work.
Ask HN: What should go in an Excel-to-Python equivalent of a couch-to-5k?
Yesterday, my co-founder published a blog post about her experiences Ditching Excel for Python in her job as a Reinsurance Analyst [0].
One of the responses on reddit [1] asked what they should do, "Step 1 day 1," if having read Amy's post they were convinced to try and begin the long journey from tangled Excel/Access spaghetti.
My (flippant) reaction to a friend that brought the comment to my attention was unhelpful; "Step 1 day 1, quit." So he has challenged me to write eight helpful blog posts during the remainder of my Garden Leave.
What should go in them?
[0] https://amypeniston.com/ditching-excel-for-python/
How to write functions in JS / VB script and call them from a cell expression.
How to name variables something other than AB3.
How to use physical units and datatypes. (How to specify XSD datatype URIs that map to native primitives in an additional frozen header row. e.g. py-moneyed and Pint & NumPy ufuncs)
How topological sort works (is there a tsort to determine what to calculate first, and whether there are cycles, on every modification event?)
Which Jupyter platforms do and don't support interactive charts with e.g. ipywidgets?
pandas.df.plot(kind=) (matplotlib), seaborn (what are the calculated parameters of this chart?), holoviews, plotly, altair
Reproducibility w/ repo2docker / BinderHub:
pip freeze > constraints.txt
cp constraints.txt requirements.txt
conda env export --from-history
Also,
When is it better to have code in a notebook instead of in a module?
How to export notebook cells to a module with nbdev
How to write tests to assert the quality of the code and the model: @pytest.mark.parametrize, pytest-notebook, jupyter-pytest-2, pytest-jupyter
When is it appropriate to parametrize a notebook with e.g. papermill?
How to handle concurrency: dask.distributed + dask-labextension, ipyparallel
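On the tsort point: Python's stdlib graphlib gives you exactly the "what to calculate first, and are there cycles" answer. A minimal sketch (the cell names are invented for illustration):

```python
from graphlib import TopologicalSorter, CycleError

# Spreadsheet-style dependencies: each cell maps to the cells it reads.
deps = {
    "profit": {"revenue", "costs"},
    "revenue": {"price", "units"},
    "costs": {"units"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # inputs first, e.g. ['price', 'units', 'revenue', 'costs', 'profit']

# A circular reference is detected instead of recalculating forever.
try:
    list(TopologicalSorter({"a": {"b"}, "b": {"a"}}).static_order())
except CycleError:
    print("cycle detected")
```

static_order() yields cells with all of their inputs satisfied first, which is the recalculation order a spreadsheet engine needs after each modification event.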
Interesting, your suggestion is that the first thing to do is to learn how to make functions and variable names within excel?
Yeah, if you port it to functions and verify that you haven't broken anything, you could then easily port those to Python functions callable from Excel via an add-in; but everyone who opens a sheet that calls Python must have that same add-in (and all Python package dependencies) installed, too
>My (flippant) reaction to a friend that brought the comment to my attention was unhelpful; "Step 1 day 1, quit." So he has challenged me to write eight helpful blog posts during the remainder of my Garden Leave.
One way to do it would be to do impedance matching: maximizing usefulness by bringing Python to the universe the person who asked the question lives in: Excel, Word, PDF, email.
This means "Automate the Boring Stuff"[0] to ease into programming with something specific to do that is relevant and useful right off the bat for manipulating files, spreadsheets, PDF documents, etc.
Openpyxl[1] for playing more with Excel files, and basically searching "Excel Python" on the internet for more use cases.
Python-docx[2] for playing with Word documents, extracting content from tables, etc.
PdfPlumber[3] for playing with PDFs. Sometimes you have data in .pptx files, and you can use LibreOffice to convert all of them to PDFs because you hate the pptx format:
libreoffice --headless --invisible --convert-to pdf *.pptx
Also, if you both are using notebooks, take a look at what we're building at https://iko.ai. I posted about it a bit earlier[4]. It has no-setup real-time Jupyter notebooks, and all the jazz. It focuses on solving machine learning problems, since we built it for ourselves to slash projects' time.
- [0]: https://nostarch.com/automatestuff2
- [1]: https://openpyxl.readthedocs.io/en/stable/
- [2]: https://python-docx.readthedocs.io/en/latest/
Using openpyxl to extract data from existing Excel workbooks and other documents would seem like a good place to start.
I suspect many companies use excel workbooks as "forms" with lots of data at the same cell in multiple workbooks.
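That "same cell in multiple workbooks" pattern is a short openpyxl loop. A self-contained sketch (file names, the cell address B3, and the figures are all invented; it builds its own toy workbooks so it runs anywhere):

```python
from pathlib import Path
from openpyxl import Workbook, load_workbook

# Build two toy "form" workbooks (stand-ins for real business files).
for name, total in [("form_a.xlsx", 1200), ("form_b.xlsx", 3400)]:
    wb = Workbook()
    wb.active["B3"] = total  # the same cell in every workbook holds the figure
    wb.save(name)

# Harvest cell B3 from every workbook in the directory.
totals = {p.name: load_workbook(p, data_only=True).active["B3"].value
          for p in sorted(Path(".").glob("form_*.xlsx"))}
print(totals)  # {'form_a.xlsx': 1200, 'form_b.xlsx': 3400}
```

data_only=True asks openpyxl for the last-saved values of formula cells rather than the formula strings, which is usually what you want when harvesting.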
> I suspect many companies use excel workbooks as "forms" with lots of data at the same cell in multiple workbooks.
Downstream data quality costs can be minimized with a normalized data schema and data-collection process controls like forms-based data validation.
There are established UI/UX design patterns for data validation of user-supplied data: accessible [web] forms with tab-ordered input fields and specific per-input feedback with accessible HTML5 and ARIA. IIUC, Firefox now supports PDF forms, too?
Why would we move from a spreadsheet to an actual database?
Data integrity:
Referential integrity (making sure that record keys actually point to something when creating, updating, or deleting),
Columnar datatypes (float, decimal, USD, complex fraction),
Access controls (auth(z): authentication and authorization),
Auditing (what was the value before and after that) and Disaster Recovery,
Organizationally-unified schema development and corresponding validation.
Repeatability / Reproducibility: can you replay the steps needed to build the whole sheet? What parameters were entered, and how do we script that part so that we can easily assess the relations between the terms of the argument presented?
Scientists turn CO2 into jet fuel
Link to the article in question: https://www.nature.com/articles/s41467-020-20214-z
They call it 'renewable' yet use Iron as a catalyst. I'm not a chemist - how is that renewable?
Yao, B., Xiao, T., Makgae, O.A. et al. "Transforming carbon dioxide into jet fuel using an organic combustion-synthesized Fe-Mn-K catalyst." Nat Commun 11, 6395 (2020). https://doi.org/10.1038/s41467-020-20214-z
Synthetic biology can do it better. Fix CO2 by growing sugar cane, turn sugar into jet fuel with genetically engineered yeast.
https://www.total.com/media/news/press-releases/total-and-am...
Still not commercially viable, but much closer to it than the linked process.
That approach requires lots of land, water, and time. Nothing wrong with that, but much of the funding for CO2->jet fuel research comes from the military, which wants to be able to create liquid fuel quickly from CO2 in the atmosphere and they don't care how inefficient it is. Military customers are typically not very interested in processes that require running a farm.
The military also spends a lot of time turning jet fuel into electricity, because the former is energy-dense, easy to store, easy to transport. And they need electricity in lots of places without a grid.
Where would the reverse be useful? Somehow you have unlimited electricity but no fuel, and you have time and space to run a chemical refinery? (And isn't carbon capture from the atmosphere likely to require farm-scale infrastructure anyway?)
You cannot run aircraft on electricity; liquid fuel is the only option. The military is keenly interested in being able to refuel aircraft in situations where liquid fuel shipments might be denied or unavailable for some reason.
You can run aircraft on electricity.
Locomotives run on electrical energy produced by diesel generators (because electric motors are more energy efficient), for example.
The limits are the cost and weight of the batteries and the charge time.
You can't in any practical sense. Locomotives don't have a problem carrying the weight of a diesel engine plus a generator; in fact the extra weight works in their favor.
If you tried that trick with an airplane it would never get off the ground. Locomotives use this system not for efficiency but because they need the low-end torque electric motors provide. (If they only needed efficiency they'd just drive the wheels with the diesel engine directly.) Airplanes don't need torque; they need power.
You can run an airplane on batteries but only for a few minutes with modern battery technology. Practical battery-powered aircraft for military applications are not going to happen any time soon. We might see battery-powered air taxis soon but those are not even close to meeting military requirements for fighters, transports, etc.
From https://en.wikipedia.org/wiki/Electric_aircraft :
> [For] large passenger aircraft, an improvement of the energy density by a factor 20 compared to li-ion batteries would be required
The time it takes to surpass this energy density threshold is affected by battery tech investments, which have been comparatively paltry in terms of defense spending. Trillions on batteries would've been a much better investment; with ROI.
Sadly, some folks in defense still can't understand why non-oil investments in battery tech are best for all.
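That factor-20 figure roughly checks out on the back of an envelope once you account for drivetrain efficiencies (all numbers below are rough public estimates, not from the article):

```python
# Rough sanity check of the "factor ~20" claim for battery aircraft.
jet_fuel_kwh_per_kg = 43 / 3.6      # ~43 MJ/kg specific energy of jet fuel
liion_kwh_per_kg = 0.25             # ~250 Wh/kg for a li-ion pack
turbofan_eff = 0.35                 # rough fraction of fuel energy usefully converted
electric_eff = 0.90                 # rough battery-to-propulsor efficiency

ratio = (jet_fuel_kwh_per_kg * turbofan_eff) / (liion_kwh_per_kg * electric_eff)
print(round(ratio, 1))  # ~18.6: in the ballpark of the quoted factor of 20
```

The electric motor's efficiency advantage claws back some of the gap, but the raw specific-energy deficit dominates.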
There are multiple electric trainer aircraft with flight times over an hour and quite a few more in development.
Jet engines are terribly inefficient (30-50% efficient) compared to electric motors.
Show HN: Stork: A customizable, WASM-powered full-text search plugin for the web
OP - congratulations on shipping! WASM-powered search caught my eye.
Looks like the search index is downloaded and used locally in the browser, so this is as fast as search can get. One trade-off though is that you're limited to relatively small datasets. While this shouldn't be an issue for small-medium static sites, an index that needs to be downloaded over the wire will affect your page performance for larger sites / datasets.
In those cases, you'd want to use a dedicated search server like Algolia or its open source alternative Typesense [1]. Both of them store the index server-side and only return the relevant records that are searched for.
For example: you'd probably not want to download 2M records over the wire to search through them locally [2]. You'd be better off storing the index server-side.
[1] https://github.com/typesense/typesense (shameless plug)
Just a quick non-thought-through idea but would it be possible to build an index in a way that allows clients to download only parts of it based on what they search? I.e. the search client normalizes the query in some way and then requests only the relevant index pages. The index would probably be a lot larger on the server but if disk space is not an issue...?
(Though at some point you have to ask yourself what the benefits of such an approach are compared to a search server.)
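A hypothetical sketch of that shard-by-hash idea (all names invented; the in-memory dict stands in for per-shard files the client would fetch over HTTP):

```python
import hashlib

def shard_of(term, n_shards=16):
    # Build step and client agree on the same term -> shard mapping.
    return int(hashlib.sha1(term.encode()).hexdigest(), 16) % n_shards

# Build step: split a term -> [doc ids] index into shard buckets.
index = {"wasm": [1, 7], "search": [2, 7], "rust": [3]}
shards = {}
for term, docs in index.items():
    shards.setdefault(shard_of(term), {})[term] = docs
# (each shards[i] would be written out as /index/shard-i.json on the server)

# Client: normalize the query, fetch only the relevant shard, look up.
term = "WASM".lower()
shard = shards[shard_of(term)]      # stand-in for one small HTTP GET
print(shard.get(term))  # [1, 7]
```

The client downloads one shard per query term instead of the whole index; the trade-off is exactly the one noted above, since at that point you're halfway to a search server.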
Merkle Search Trees: Efficient State-Based CRDTs in Open Networks https://hal.inria.fr/hal-02303490/document
Peer-to-Peer Ordered Search Indexes https://0fps.net/2020/12/19/peer-to-peer-ordered-search-inde... (which adds useful context about the above)
Sorry for just dropping these links, I should already be asleep :)
> Merkle Search Trees: Efficient State-Based CRDTs in Open Networks https://hal.inria.fr/hal-02303490/document
https://scholar.google.com/scholar?cites=7160577141569533185... ... "Merkle Hash Grids Instead of Merkle Trees" (2020) https://scholar.google.com/scholar?cluster=13503894708682701...
Browser-side "Blockchain Certificate Transparency" applications need to support at least exact key lookup by domain/SAN and then also by cert fingerprint value; but the whole CT chain with every cert issue and revocation event is impractically large in terms of disk space.
https://github.com/amark/gun#history may also be practically useful.
Upptime – GitHub-powered open-source uptime monitor and status page
Maker here, I made Upptime as a way to scratch my own itch... I wanted a nice status page for Koj (https://status.koj.co) and was previously using Uptime Robot which didn't allow much customization to the website.
It started with the idea of using GitHub issues for incident reports and using the GitHub API to populate the status page. The obvious next step was opening the issues automatically, so the uptime monitor was born. I love using GitHub Actions in interesting ways (https://github.blog/2020-08-13-github-actions-karuna/) and have made it a large part of my toolkit!
https://news.ycombinator.com/item?id=25557032 mentions "~3000 minutes per month". GitLab's new pricing structure: [(runner_minutes, usd_per_month), (400, $0), (2_000, $4), (10_000, $19), (50_000, $99)]
You can run a self-hosted GitHub or GitLab Runner with your own resources: https://docs.github.com/en/free-pro-team@latest/actions/host...
GitLab [Runner] also runs tasks on cron schedules.
The process-invocation overhead for CI is greater than for a typical metrics-collection process like a Nagios check, or a memory-resident daemon like collectd with the curl plugin and the Write HTTP plugin (if you're not into using a space- and time-efficient time-series database for metrics storage)
An open source project with a $5/mo VPS could run collectd in a container with a config file far far more energy efficiently than this approach.
Collectd curl statistics: https://collectd.org/documentation/manpages/collectd.conf.5....
Collectd list of plugins: https://collectd.org/wiki/index.php/Table_of_Plugins
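For reference, a minimal collectd curl plugin stanza looks roughly like this (URL and page name are illustrative):

```
LoadPlugin curl
<Plugin curl>
  <Page "status">
    URL "https://status.example.com/"
    MeasureResponseTime true
  </Page>
</Plugin>
```

With a Write HTTP or network output plugin configured, that's the whole uptime check, running in one resident daemon instead of a CI job spawn per poll.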
Is there a good way to do {DNS, curl HTTP, curl JSON} stats with Prometheus (instead of e.g. collectd as a minimal approach)?
Show HN: Simple-graph – a graph database in SQLite
I wonder if there are ways, in SQLite, to build indices for s,p,o/s,p/p,o/ and maybe more subtle ones... That would be uber nice, given the fact that most graph databases have their own indexing strategies, and you cannot craft your own.
rdflib-sqlalchemy is a SQLAlchemy rdflib graph store backend: https://github.com/RDFLib/rdflib-sqlalchemy
It also persists namespace mappings so that e.g. schema:Thing expands to http://schema.org/Thing
The table schema and indices are defined in rdflib_sqlalchemy/tables.py: https://github.com/RDFLib/rdflib-sqlalchemy/blob/develop/rdf...
You can execute SPARQL queries against SQL, but most native triplestores will have a better query plan and/or better performance.
Apache Rya, for example:
> indexes SPO, POS, and OSP.
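A sketch of that SPO/POS/OSP indexing in stock SQLite, answering the question above about crafting your own indexing strategy (the schema is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE triples (s TEXT, p TEXT, o TEXT);
-- The three covering indices Rya uses: SPO, POS, OSP.
CREATE INDEX idx_spo ON triples (s, p, o);
CREATE INDEX idx_pos ON triples (p, o, s);
CREATE INDEX idx_osp ON triples (o, s, p);
""")
con.execute("INSERT INTO triples VALUES (?, ?, ?)",
            ("http://schema.org/Thing", "rdfs:label", "Thing"))

# A (?, p, o) triple pattern can be answered entirely from idx_pos:
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT s FROM triples WHERE p = ? AND o = ?",
    ("rdfs:label", "Thing")).fetchall()
print(plan)
```

EXPLAIN QUERY PLAN should report a covering-index search on idx_pos, i.e. the pattern never touches the base table. Any bound subset of {s, p, o} maps to a prefix of one of the three indices.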
Thanks for your comment. I use rdflib frequently but have never tried the SQLAlchemy back end. Now I will. That said, Jena or Fuseki, or the commercial RDF stores like GraphDB, Stardog, and Allegrograph are so much more efficient.
In CPython, types implemented in C are part of the type tree
The docs should have coverage on this:
Python/C API Reference Manual: https://docs.python.org/3/c-api/index.html
Python/C API Reference Manual » Object Implementation Support > Type Objects: https://docs.python.org/3/c-api/typeobj.html
CPython Devguide > Exploring Python Internals > Additional References: https://devguide.python.org/exploring/
Experiments on a $50 DIY air purifier that takes 30s to assemble
From "Better Box Fan Air Purifier" https://tombuildsstuff.blogspot.com/2013/06/better-box-fan-a... :
> Air purifiers can be expensive and you've probably seen articles recommending to just put a 20" x 20" x 1" furnace filter on a cheap 20" box fan and POOF! instant cleaner air for not a lot of money. It really does clean the air pretty cheap.
> There's a problem with this though. These fans weren't designed to be run with a filter. The filter will restrict air flow which will put a higher strain on the motor causing it to use more electricity and in worse cases could be a fire hazard. The higher the MERV rating (cleaning efficiency) of the filter the more stress it will put on the fan.
> Don't worry! You can still have your cheap air purifier as long as the filter area is increased to decrease the effect of air resistance. Instead of using one 20x20x1 filter we'll use two 20x25x1 filters which increases the filter surface area over 250%. It's a little more expensive because you're using two filters instead of one but the increased filter surface area also helps the filter last longer before it gets clogged up and we're saving on energy use compared to a single filter.
> There's a problem with this though. These fans weren't designed to be run with a filter. The filter will restrict air flow which will put a higher strain on the motor causing it to use more electricity and in worse cases could be a fire hazard. The higher the MERV rating (cleaning efficiency) of the filter the more stress it will put on the fan.
I'm fairly certain the opposite is true. In fact, most rowing machines work on this principle. The easiest setting is the one where the fan is as closed off as possible because it's pulling a vacuum. Less air, less resistance, easier to row / less power required to spin the fan. If anything, fans should draw less current with air flow on the inlet side restricted.
A. Putting two [larger] filters in a 'V' with cardboard to fill the top and bottom pulls the same amount of air through a larger area of filters
B. pulling the same volume of air through greater surface area results in greater pressure between the filter and fan than one filter directly affixed to the fan
C. The lower air pressure / "suction" due to an obstructed intake causes an electric fan motor to fail more quickly.
D. Increasing the air pressure that the motor is in reduces the failure rate?
Goodreads plans to retire API access, disables existing API keys
This makes absolutely no sense and has no relation to any economic variables. Goodreads isn’t some struggling self-funded startup — it’s owned by Amazon.com. The acquisition was a deal that should have never been approved, if the Obama administration had been anything beyond completely impotent at protecting us from monopoly games:
https://www.theguardian.com/books/2013/apr/02/amazon-purchas...
I would like to understand the true strategic interest behind this. Is Amazon simply penny-pinching now that they’ve successfully obliterated the market for both new and used books online? There’s way more to this story than appears on the surface.
It's not mentioned in the article, but Amazon had disabled their affiliate-link program for Goodreads ahead of this announcement, which cut off a major source of revenue for them and forced them to sell. They had no choice.
The strategic reason for Amazon is obvious. As someone else mentioned, Amazon doesn't want Goodreads data to be used to add value to their competitors' offerings.
Speaking as a developer who tried to build on top of Goodreads API, I also want to add that this was a long time coming. The API had been neglected for some time. And some of the most interesting datasets weren't even made available through the API.
Apparently Amazon’s Kindle lost the ability to share progress on Goodreads in the last week as well.
I guess the writing's on the wall...
Turing Tumble Simulator
(Turing Tumble is a (fun) marble-powered computer game.)
Python Pip 20.3 Released with new resolver
If anyone hasn't seen it, now is a good time to look at https://python-poetry.org/ It is rapidly becoming _the_ package manager to use. I've used it in a bunch of personal and professional projects with zero issues. It's been rock solid so far, and I'm definitely a massive fan.
I occasionally ask our principal / sr. Python engineers about this, and their response is always, "These things come and go, virtualenv/wrappers + pip + requirements.txt works fine - no need to look at anything else."
We've got about 15 repos, with the largest repo containing about 1575 files and 34MBytes of .py source, 14 current developers (with about 40 over the last 10 years) - and they really are quite proficient, but haven't demonstrated any interest at looking at anything outside pip/virtualenv.
Is there a reason to look at poetry if you've got the pip/virtualenv combination working fine?
People who use poetry seem to love it - so I'm interested in whether it provides any new abilities / flexibility that pip doesn't.
Beware of zealot geeks bearing gifts. If your environment is currently working fine and you are only interested in running one version of Python (and perhaps experimenting with a later one), then venv + pip is all you need, with some wrapper scripts as you say to make it ergonomic (to set the PYTHON* environment variables for your project, for example)
The main reason I use Poetry is for its lockfile support. I've written about why lockfiles are good here: https://myers.io/2019/01/13/what-is-the-purpose-of-a-lock-fi...
Pip supports constraints files. https://pip.pypa.io/en/stable/user_guide/#constraints-files :
> Constraints files are requirements files that only control which version of a requirement is installed, not whether it is installed or not. Their syntax and contents is nearly identical to Requirements Files. There is one key difference: Including a package in a constraints file does not trigger installation of the package.
> Use a constraints file like so:
python -m pip install -c constraints.txt
What does that command do? Does it install dependencies, or simply verify the versions of the installed dependencies?
It just gives you an error:
ERROR: You must give at least one requirement to install (see "pip help install")
So it seems like a strange choice of usage example. You have to provide both requirements and constraints for it to do anything useful (applying the version constraints to the requirements and their dependencies).

Your package manager should be boring, extremely backward- and forward-compatible, and never broken. Experience has shown this not to be true for Python. Several times over the years I've found myself pinning, upgrading, downgrading, or otherwise juggling versions of setuptools and pip in order to work around some bug. Historically, I have had far more problems with the machinery to install Python packages than I have had with all of the other Python packages being installed combined, and that is absurd.
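For reference, a constraints file is meant to be used alongside, not instead of, a requirements file; a minimal sketch (file contents and the pinned version are illustrative):

```shell
# requirements.txt names WHAT to install; constraints.txt only pins versions.
echo "requests" > requirements.txt
echo "requests==2.25.0" > constraints.txt
python -m pip install -r requirements.txt -c constraints.txt
```

With no `-r` (and no requirement arguments), `-c` has nothing to constrain, hence the error above.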
Convolution Is Fancy Multiplication
FWIW, (bounded) Conway's Game of Life can be efficiently implemented as a convolution of the board state: https://gist.github.com/mikelane/89c580b7764f04cf73b32bf4e94...
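For instance, one Life generation as a 2-D convolution, a minimal sketch along the lines of the linked gist (scipy's convolve2d counts live neighbors in a single pass):

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 neighbor-counting kernel: every neighbor weighs 1, the cell itself 0.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def step(board):
    """One Game of Life generation on a bounded (zero-padded) board."""
    neighbors = convolve2d(board, KERNEL, mode="same", boundary="fill")
    # Birth on 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((board == 1) & (neighbors == 2))).astype(int)

# A blinker oscillates with period 2.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
assert np.array_equal(step(step(blinker)), blinker)
```

The rule lookup after the convolution is where the "fancy multiplication" framing pays off: the expensive part (neighbor counting) is exactly a convolution.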
How to better ventilate your home
I recently purchased a home and spent some time improving ventilation in it this year. This is a home in Minneapolis, so just "opening a window" is not an option, as it would be too cold in the winter and dramatically reduce energy efficiency. Its effectiveness also changes based on factors like the wind, and the temperature difference between inside and out.
The general strategy for doing this right now with existing homes is to insulate your home as well as possible for energy efficiency, then install a continuous ventilation fan. This is essentially a bathroom fan, except that it runs all the time at a constant low speed, helping to circulate the air through and out your house by "pulling" a designed amount of air through the cracks.
I didn't want to punch a new hole in the house just to do this, and already had a bathroom fan installed, so as a hack I just turned it into a whole house ventilation fan with this: https://www.aircycler.com/pages/smartexhaust
Basically you calculate how many CFM you need based on the square footage of your house, and then you set it on the fan control. It still acts like a bathroom fan, except every hour it also runs for a set period of time (in my case, about 12 minutes).
The standard for this is ASHRAE 62.2. Use this to calculate the CFM for code: https://homes.lbl.gov/ventilate-right/step-3-whole-building-...
Then the formula for calculating the fan run time is on this sheet: https://cdn.shopify.com/s/files/1/0221/7316/files/AC_DOC_7_0...
And presto, you have a ventilation system without having to do a lot of work. Do _not_ try this with a crappy, rusty old bathroom fan - clean or replace the motor first, and for extra credit, use an arc fault circuit interrupter on the breaker so if the motor fails it will blow the fuse instead of potentially causing a fire.
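The arithmetic can be sketched like this (the coefficients are my reading of the ASHRAE 62.2-2013 whole-building rate, and the proportional runtime is a simplification; verify against the linked LBL calculator and the AirCycler sheet, and the example house and fan figures are invented):

```python
floor_area_ft2 = 1500
bedrooms = 3
fan_cfm = 110                       # rated flow of the bathroom fan being reused

# ASHRAE 62.2-2013 whole-building ventilation rate (continuous CFM).
required_cfm = 0.03 * floor_area_ft2 + 7.5 * (bedrooms + 1)

# Naive proportional runtime for an intermittently run fan.
minutes_per_hour = 60 * required_cfm / fan_cfm
print(required_cfm, round(minutes_per_hour))  # 75.0 CFM -> ~41 min/hr
```

The real AirCycler formula adds correction factors for intermittent operation, which is why a properly sized setup can get away with much shorter run times than this naive estimate.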
Note: This is really just to manage general air quality and VOCs. If you want to specifically make ventilation for COVID-19, that's a different problem. They focus on Air Changes per Hour (ACH), and a cubic feet calculation is used rather than square feet. There's no "recommended amount" of ACH for managing COVID-19. You're likely improving the situation by increasing it, but I wouldn't start inviting people over after you did it. ACH is very high in ICUs but staff are still getting sick there.
RE Humidity - ideal range varies based on region and outside temperature, but this chart roughly shows it: https://lh3.googleusercontent.com/proxy/mPz-jGdLpnPGgvWR3Egf...
I have an on-furnace humidifier controlled by an ecobee for the winter, it's a huge quality of life improvement if you live in cold climates, but make sure to set it to "frost control" otherwise it won't lower the humidity based on the outside air and you can get mold in your walls. For summer, a standard house A/C combined with continuous ventilation should be sufficient to bring down humidity levels.
Finally, the "correct" air ventilation is a moving target with trade-offs and concerns of the moment. It was higher in 1925 (30cfm/person) to try to prevent tuberculosis and infectious diseases, then was lowered to 5cfm/person in the 70s during the energy crisis, and is currently at 15cfm/person. I imagine COVID-19 could make us re-consider the current recommendations. https://homes.lbl.gov/ventilate-right/ashrae-standard-622
I'm in an older home that leaks like a sieve but has central air. Would turning on that house fan to circulate without running the AC accomplish the same thing? Also, do any smart thermostats allow this sort of thing to be automated?
"Fan control with a Nest thermostat" https://support.google.com/googlenest/answer/9296419?hl=en
Looks like there could be: (1) an every hour for n minutes schedule; (2) an option to run the fan with the thermostat off; (3) an option to shut off the fan when everyone is gone
Quantum-computing pioneer Peter Shor warns of complacency over Internet security
If an organization has a 5 year refresh cycle (~time to implement a new IT system), and there exists a quantum computer with a sufficient number of error-corrected qubits by 2027 [1], an organization/industry has 5 years from 2022 to go quantum-resistant: replace their existing solution with quantum-resistant algos (and, in some cases, a DLT with a coherent pan-industry API) and/or double their RSA and ECDSA key sizes.
[1] "Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)" https://news.ycombinator.com/item?id=15907523
Which DLT/blockchains without PKI (or DNS) will implement the algorithms selected from the NIST Post-Quantum Cryptography (PQC) round 3 candidate algorithms? https://csrc.nist.gov/projects/post-quantum-cryptography
CERN Online introductory lectures on quantum computing from 6 November
This is probably a dumb question but are there any data sciencey or tensorflowy things that can be done faster on a quantum computer?
There are some basic linear algebra subroutines (Matrix inversion, finding eigenvalues & eigenvectors) that can be performed with an exponential speedup on a quantum computer in theory, that's why there is so much interest in Quantum Machine Learning. If you are asking about the current hardware level, then no, current quantum computers can not solve any practical problem faster than a classical computer.
In theory, classical computers are also not precluded from having better solutions to problems where quantum computers claim superiority, like prime factorisation; see e.g. Tang's quantum-inspired classical algorithm that beats HHL for low-rank matrices.
> There are some basic linear algebra subroutines (Matrix inversion, finding eigenvalues & eigenvectors) that can be performed with an exponential speedup on a quantum computer in theory
Eh... those particular subroutines have polynomial time algorithms already on a classical computer.
You can't exponentially speed up something that's polynomial time already.
https://quantumalgorithmzoo.org/ lists algorithms, speedups, and descriptions.
That's a great list, thanks!
(Though for the benefit of readers here, the list doesn't include any "basic linear algebra subroutines (Matrix inversion, finding eigenvalues & eigenvectors)").
A Manim Code Template
The demo video looks cool. It's maybe not obvious that there's a link to the code-video-generator (which is built on manim by 3blue1brown) demo video in the README: https://youtu.be/Jn7ZJ-OAM1g
Startup Financial Modeling: What is a Financial Model? (2016)
https://www.causal.app/ has free business model templates: SaaS (Foresight), eCommerce (https://foresight.is/), Startup Runway, Buy/Rent, Ads Calculator
The article explicitly, repeatedly says templates are a bad idea. If you disagree, it'd be interesting to see your reasoning, rather than links to free templates.
We'd do better to find a list of business modeling books and tools.
And then take a look at integrating actual data sources; hopefully some quantitative with APIs.
Uncertainties supports mean±"error" w/ "error propagation": https://pypi.org/project/uncertainties/
Sliders etc can be done in Jupyter notebooks with e.g. ipywidgets: https://ipywidgets.readthedocs.io/en/latest/
At what grade level do presidential candidates debate?
Intelligence does not imply superior moral, ethical, or rational judgement.
Simplicity of speech does not imply lack of intelligence.
Here's the section on Simple English in Simple English Wikipedia: https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_Eng...
Imagine being reprimanded for use of complex words and statistical terms in an evidence-based policy discussion in a boardroom. Imagine someone applying to be CEO, President, or Chairman of the Board and showing up without a laptop, any charts, or any data.
Topicality!
Perhaps there is a better game for assessing competency to practice evidence-based policy.
This commenter effectively refutes the claim that Fleisch-Kincaid is a useful metric for assessing the grade-level of interpretively-punctuated spoken language: https://news.ycombinator.com/item?id=24807610
Like I said, from "Ask HN: Recommendations for online essay grading systems?" https://news.ycombinator.com/item?id=22921064 :
> Who else remembers using the Flesch-Kincaid Grade Level metric in Word to evaluate school essays? https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readabi...
> Imagine my surprise when I learned that this metric is not one that was created for authors to maximize: reading ease for the widest audience is not an objective in some departments, but a requirement.
> What metrics do and should online essay grading systems present? As continuous feedback to authors, or as final judgement?
That being said, disrespectful little b will not be tolerated or venerated by the other half of the curve.
ElectricityMap – Live CO₂ emissions of electricity production and consumption
What's going on in Queensland, Australia? "793g Carbon intensity" seems insanely high, especially as compared with Norway/France/Sweden (at 24-41g carbon intensity).
Does anyone have insight into this? I've never pictured Australia as a big contributor to pollution.
Good observation. I was curious, and it looks like the parser code to retrieve the data is open source; so here's the relevant module for Australia:
https://github.com/tmrowco/electricitymap-contrib/blob/24ea2...
The production mix (energy generation by source) source URL is listed on L326.
Opening the dataset in LibreOffice Calc, filtering by Region = QLD1, filtering for Current Output > 0, and sorting by Current Output descending does seem to show that the vast bulk of production reported has fuel source descriptor 'Black Coal'.
If that's a bug in the reported data or source, hopefully the ElectricityMap team can jump in to take a look. The data initially seems legit, from searching for some of the power stations listed.
Edit / addendum: worth searching for Australia in the project's GitHub issues too; there's some existing (although not directly related) investigation & validation work ongoing for that module
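The filter-and-sort described above, sketched with pandas on a toy frame (the column names are from memory of that dataset and may differ; the rows are invented):

```python
import pandas as pd

# Toy stand-in for the downloaded generation-unit dataset.
df = pd.DataFrame({
    "Region": ["QLD1", "QLD1", "NSW1", "QLD1"],
    "Fuel Source - Descriptor": ["Black Coal", "Solar", "Black Coal", "Black Coal"],
    "Current Output (MW)": [700.0, 0.0, 500.0, 350.0],
})

# Filter Region == QLD1, Current Output > 0, sort descending.
qld = (df[(df["Region"] == "QLD1") & (df["Current Output (MW)"] > 0)]
       .sort_values("Current Output (MW)", ascending=False))
share = qld.groupby("Fuel Source - Descriptor")["Current Output (MW)"].sum()
print(share)  # Black Coal dominating would match the ~793 g/kWh intensity
```

Summing output by fuel source is essentially what the carbon-intensity calculation weights by each fuel's emission factor.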
Also - home solar penetration in QLD is the highest in Australia. Is that not factored in to the equation? It doesn't look that way given 0% renewables shown.
This website is pulling information from public sources, so it can only show public generation & consumption information.
If someone is running a grid-tied solar system, outbound energy might be accounted for in these statistics, but it won't show any immediate consumption of that solar generation by the property, nor will it show any battery storage, it's only capable of showing what goes through the energy meter.
+1. California has substantial rooftop solar (11 GW peak) behind the meter, which isn’t shown in these graphs.
https://pv-magazine-usa.com/2019/04/15/californias-solar-pow...
How would behind the meter electricity consumption change the reported amount of CO2 emitted by other electricity production sources?
The map is intended to show the emissions intensity of electricity so if you miss a bunch of electricity from distributed generation you will over-estimate the intensity.
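A toy calculation of that over-estimate (the numbers are made up, not real QLD data): carbon intensity is the emissions-weighted average across all generation, so generation the data source cannot see biases the estimate upward.

```python
# Visible, metered generation vs. invisible behind-the-meter rooftop solar.
coal_mw, coal_g_per_kwh = 7000, 900
rooftop_mw, rooftop_g_per_kwh = 2000, 0

reported = coal_g_per_kwh  # only the metered coal generation is visible: 900 g/kWh
actual = (coal_mw * coal_g_per_kwh + rooftop_mw * rooftop_g_per_kwh) / (coal_mw + rooftop_mw)
# actual == 700.0 g/kWh, so the visible-only estimate overstates intensity
```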
Man that was a dumb question. Thanks for clarifying.
So we'd need electric utility companies to share live data on how many kWh of solar and wind people are selling back to the grid in order to get an accurate regional comparison of real-time carbon intensity?
FWIU, they're already parsing the EIA data; but it's significantly more delayed than the max 2 hour delay specified by ElectricityMap.
Here's the parser for the current data from EIA: https://github.com/tmrowco/electricitymap-contrib/blob/maste...
Should the EIA (1) source, aggregate, cache, and make more real-time data available; and (2) create a new data item for behind-the-meter kWh from e.g. residential wind and solar?
(Edit) "Does EIA publish data on peak or hourly electricity generation, demand, and prices?" https://www.eia.gov/tools/faqs/faq.php?id=100&t=3
> Hourly Electric Grid Monitor is a redesigned and enhanced version of the U.S. Electric System Operating Data tool. It incorporates two new data elements: hourly electricity generation by types of energy/fuel source and hourly sub-regional demand for certain balancing authorities in the Lower 48 states.
> [...]
> EIA does not publish hourly electricity price data, but it does publish wholesale electricity market information including daily volumes, high and low prices, and weighted-average prices on a biweekly basis.
AFAIU, retail intraday rates aren't yet really a thing in the US; but some countries in Europe do have intraday rates (which create incentives for the grid scale energy storage necessary for wide-scale rollout of renewables).
(Edit) "Introduction to the World of Electricity Trading" https://www.investopedia.com/articles/investing/042115/under... :
> Energy prices are influenced by a variety of factors that affect the supply and demand equilibrium. On the demand side, commonly referred to as a load, the main factors are economic activity, weather, and general efficiency of consumption. On the supply side, commonly referred to as generation, fuel prices and availability, construction costs and the fixed costs are the main drivers of the price of energy. There's a number of physical factors between supply and demand that affect the actual clearing price of electricity. Most of these factors are related to the transmission grid, the network of high voltage power lines and substations that ensure the safe and reliable transport of electricity from its generation to its consumption.
Which customers (e.g. data centers, mining firms) would take advantage of retail intraday rates?
How does cost and availability of storage affect the equilibrium price of electricity?
>So we'd need electric utility companies to share the live data of how many kwh of solar and wind people are selling back to the grid in order to get an accurate regional comparison of real-time carbon intensity?
You actually need production numbers for what's used "behind the meter" which by its nature is not directly available live or otherwise and has to be estimated.
>Which customers (e.g. data centers, mining firms) would take advantage of retail intraday rates?
Loads! In the UK you can get half-hourly priced electricity even at a domestic level (with a smart meter), and if you have loads that can be re-scheduled (mainly EV charging, but an electric storage heater would work as well) you can save quite a lot of money.
Heavy industrial users definitely move usage around both to avoid expensive wholesale charges but also to reduce their transmission connection charges which (in the UK) are based on their usage during the most congested periods of the year. Water companies will vary pump operations for this reason and water pumping and treatment alone is 2% of electricity use.
Data centers definitely pay on a half-hourly settled basis but tend not to shift their workloads around to take advantage, some data centers will run their cooling systems in such a way as to reduce usage during the most expensive few half hours though. I have heard that larger users like Amazon, FB, Google, will automatically load balance between global centers to reduce electricity bills and carbon footprint.
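The load-shifting idea above can be sketched in a few lines; the half-hourly prices here are made up for illustration:

```python
# Pick the cheapest N half-hour slots for a shiftable load (e.g. EV charging).
prices = [25, 24, 18, 9, 7, 6, 8, 14, 22, 30]  # one made-up price (p/kWh) per slot
slots_needed = 3

cheapest = sorted(range(len(prices)), key=prices.__getitem__)[:slots_needed]
cost = sum(prices[i] for i in cheapest)
# Charging in slots 4, 5, and 6 costs 21, vs. 79 for the three peak slots.
```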
> how many kwh of solar and wind people are selling back
Not just selling back. Producing - and then either using or selling.
This USA data seems kind of strange to me. For instance, in my home state, Kansas, we are fourth in the nation for installed wind capacity, supplying 41% of our energy demand. Our sole nuclear power plant provides an 18% base load; 33% is coal and the rest random crap. We're very population-sparse and low-population in general. Yet other areas of the country that are strictly coal- and natural-gas-fired, with greater population density and higher total population, are less turd colored. I'm guessing this is a strange interaction with renewable availability, or it just might be a lack of available data.
From https://github.com/tmrowco/electricitymap-contrib#data-sourc... :
> Here are some of the ways you can contribute:
> Building a new parser, Fixing a broken parser, Changes to the frontend, Find data sources, Verify data sources, Translating electricitymap.org, Updating region capacities
I sent a few tweets and emails about the data in this region but nothing happened here either
[deleted]
Bash Error Handling
From https://twitter.com/b0rk/status/1312413117436104705 :
> TIL that you can use the "DEBUG" trap to step through a bash script line by line
trap '(read -p "[$BASH_SOURCE:$LINENO] $BASH_COMMAND?")' DEBUG
> [...] it does something very different than sh -x — sh -x will just print out lines, this stops *before* every single line and lets you confirm that you want to run that line

> you can also customize the prompt with set -x
export PS4='+(${BASH_SOURCE}:${LINENO}) '
set -x
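A hedged sketch of what a markdown_escape helper for such traced output might look like; the helper name comes from the comment, and the fence-growing rule (use more backticks than any run inside the text, per CommonMark) is my assumption:

```python
# Wrap script text in a fenced bash code block, escaping embedded backticks
# by growing the fence past any backtick runs in the text.
def markdown_escape(text, lang="bash"):
    fence = "`" * 3
    while fence in text:  # a fence must be longer than any run it encloses
        fence += "`"
    return f"{fence}{lang}\n{text}\n{fence}"
```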
With a markdown_escape function, could this make for something like a notebook with ```bash fenced code blocks with syntax highlighting?

A Customer Acquisition Playbook for Consumer Startups
> For consumer companies, there are only three growth “lanes” that comprise the majority of new customer acquisition:
> 1. Performance marketing (e.g. Facebook and Google ads)
> 2. Virality (e.g. word-of-mouth, referrals, invites)
> 3. Content (e.g. SEO, YouTube)
> There are two additional lanes (sales and partnerships) which we won't cover in this post because they are rarely effective in consumer businesses. And there are other tactics to boost customer acquisition (e.g PR, brand marketing), but the lanes outlined above are the only reliable paths for long-term and sustainable business growth.
Marketing calls those "channels". I don't think they're exclusive categories: a startup's YouTube videos could be supporting a viral marketing campaign, for example; ads aren't the only strategy for (targeted) social media marketing; if the "ask" / desired behavior upon receiving the message is to share the brand, is that "viral"?
What about Super Bowl commercials?
Traditional marketing: press releases, (linked-citation-free) news wires, quasi-paid interviews, "news program" appearances, product placement.
"Growth hacking": https://en.wikipedia.org/wiki/Growth_hacking
Gathering all open and sustainable technology projects
Great list! https://github.com/protontypes/awesome-sustainable-technolog...
"Earth" topic label: https://github.com/topics/earth
Though there are unfortunately zero (0) results for "sustainability-report[s,ing]", it may be useful to have such a heading: https://github.com/topics/sustainability-reporting ... GRI Reporting Standards & XBRL: https://en.wikipedia.org/wiki/Global_Reporting_Initiative ; Sustainability Cloud; https://en.wikipedia.org/wiki/Sustainability_reporting
TIL about the "SWEET" (Semantic Web for Earth and Environmental Terminology) Ontologies: https://github.com/ESIPFed/sweet
SunPy: https://docs.sunpy.org/en/stable/generated/gallery/index.htm...
Jupyter Notebooks Gallery
This is not a valid Show HN, so we've taken that out of the title. Please see the rules: https://news.ycombinator.com/showhn.html. Note the word 'lists'.
Lists tend not to make good HN submissions because the only thing one can really discuss about them is the lowest common denominator (or greatest common factor?) of the list elements. It would be better to pick one of the most interesting elements and submit that instead.
HN is itself a list, and a pointer to a pointer to a pointer is too much indirection.
https://hn.algolia.com/?query=denominator%20list%20by:dang&d...
Jupyter/Jupyter > Wiki > "A gallery of interesting Jupyter Notebooks" lists hundreds of notebooks: https://github.com/jupyter/jupyter/wiki/A-gallery-of-interes...
The mybinder.org Grafana dashboard lists the most popular notebook repos in the last hour: https://grafana.mybinder.org/
Jupyter/Nbviewer > FAQ > "How do you choose the notebooks featured on the nbviewer.jupyter.org homepage?" :
> We originally selected notebooks that we found and liked. We are currently soliciting links to refresh the home page using a Google Form. You may also open an issue with your suggestion.
https://nbviewer.jupyter.org/faq#how-do-you-choose-the-noteb...
Google Form: https://docs.google.com/forms/d/e/1FAIpQLSd6AlVvC7KagENypGTc...
Here's the Nbviewer source code. AMP (And https://schema.org/ScholarlyArticle / Book / CreativeWork metadata) could be useful. https://github.com/jupyter/nbviewer
Jupyter notebooks are almost a huge step forward for programming. The instant feedback, saving of partial program state, and visualizations mixed in are amazing and fun to work with. But because it takes place in a browser, you don't have modal editing with Vim, you don't get plugin support (like MyPy), and you don't have IntelliSense: a huge step backwards.
I'm hoping that someone addresses these issues, I know VSCode is trying, and Scala "Metals" worksheets are another similar concept (though different, it puts variable values in comments next to code as you update the code).
I feel like all of these are a step closer than what the Lighttable/Eve teams were shooting for, but not quite the next generation way of coding that it could be.
I'm a huge fan of rmarkdown since it addresses many of these shortcomings by way of being much simpler than jupyter notebooks. The files are simply text files, and you can write python too if you like, they aren't just for R.
NestedText, a nice alternative to JSON, YAML, TOML
So, this is basically YAML, "but better". I can repeat once more that "easily understood and used by both programmers and non-programmers" is an unapologetically stupid concept that can never succeed. So I see how all of this will sound all too familiar to anybody with a little experience, which makes them automatically dismiss this YAYAML.
But YAML is really quite complicated, and JSON (which shouldn't be used for config files at all) and TOML (which I love and wish it would gain more popularity) aren't exactly alternatives to YAML. So, I would be actually totally ok with "YAML, but better", as a way to deprecate YAML.
Now, it is clear from the start that this cannot deprecate YAML, because it doesn't even have booleans and numbers. But, surprisingly, I can accept this as well: ok, let's just assume that being good at dealing with strings may be enough.
The problem is, it isn't clear at all from the docs, if this is better than YAML at anything. It raises dozens of questions. I'll start with the most basic ones (using [] as a wrapper/delimiter): how do I represent values [ a], [a ], ["a"] and [""] in this file format?
What kind of values are [<whitespace>a] and [a<whitespace>] supposed to be? They look like typical YAML syntax traps to me.
Why should JSON never be used for configuration? It is sufficient for declaratively expressing anything I have encountered. Do we really need references or other stuff from YAML? For configuration this seems unnecessary, provided that the program which interprets the result of parsing the JSON is well written.
JSON lacks comments and will fail for a missing or extra comma, so it's not great for configuration written by humans.
You can use HJSON which is the json with comments. It's fully compatible with json so easy to introduce into anything that does json. https://hjson.github.io/
JSON5 also supports comments and multiline strings with `\`-escaped newlines: https://json5.org/
Triple-quoted multiline strings like HJSON would be great, too.
From "The description of YAML in the README is inaccurate" https://github.com/KenKundert/nestedtext/issues/10 :
> I will mention something else. The section about the "Norway problem" is not quite accurate. Some YAML loaders do in fact load no as false. These are usually YAML 1.1 loaders. YAML 1.2's default schema is the same as JSON's (only true, false, null, and numbers are non-strings).
> Any YAML loader is free to use any schema it wants. That is, no loader is required to load no as false. Good loaders should support multiple schemas and custom schemas. The Norway problem isn't technically a YAML problem but a schema problem.
> imho, YAML's biggest failing to date is not making things like this clear enough to the community.
> Note: PyYAML has a BaseLoader schema that loads all scalar values as strings.
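The schema point can be demonstrated in pure Python; the resolver tables below are simplified stand-ins for real YAML 1.1 / 1.2 scalar resolution, not PyYAML's actual code:

```python
# YAML 1.1 resolves yes/no/on/off as booleans; YAML 1.2's core schema does not.
YAML11_BOOLS = {"yes": True, "no": False, "on": True, "off": False,
                "true": True, "false": False}
YAML12_BOOLS = {"true": True, "false": False}

def resolve(scalar, schema):
    """Resolve a bare scalar per a (simplified) schema: bool, int, float, or str."""
    low = scalar.lower()
    if low in schema:
        return schema[low]
    try:
        return int(scalar)
    except ValueError:
        pass
    try:
        return float(scalar)
    except ValueError:
        return scalar  # fall back to a plain string

resolve("NO", YAML11_BOOLS)  # False  -- the "Norway problem"
resolve("NO", YAML12_BOOLS)  # 'NO'   -- stays a string under 1.2-style rules
```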
Algorithm discovers how six molecules could evolve into life’s building blocks
It'd be nice if they could do the same for protein synthesis, but that's obviously a much harder problem.
Folding@home https://en.wikipedia.org/wiki/Folding@home :
> Folding@home (FAH or F@h) is a distributed computing project aimed to help scientists develop new therapeutics to a variety of diseases by the means of simulating protein dynamics. This includes the process of protein folding and the movements of proteins, and is reliant on the simulations run on the volunteers' personal computers.
"AlphaFold: Using AI for scientific discovery" (2020) https://deepmind.com/blog/article/AlphaFold-Using-AI-for-sci...
https://www.kdnuggets.com/2019/07/deepmind-protein-folding-u... :
> At last year’s Critical Assessment of protein Structure Prediction competition (CASP13), researchers from DeepMind made headlines by taking the top position in the free modeling category by a considerable margin, essentially doubling the rate of progress in CASP predictions of recent competitions. This is impressive, and a surprising result in the same vein as if a molecular biology lab with no previous involvement in deep learning were to solidly trounce experienced practitioners at modern machine learning benchmarks.
Citations of "Resource-efficient quantum algorithm for protein folding" (2019) https://scholar.google.com/scholar?cites=1037213034434902738...
Protein folding: https://en.wikipedia.org/wiki/Protein_folding
I always wonder how many open research problems could benefit from a working hand from computing. Sometimes I wish research would be as available as an open-source project.
As a computer scientist who worked in a physics lab: a lot. Other sciences could desperately use the help of talented computer scientists. Unfortunately those other sciences are mostly mired in academia nonsense and don't pay well.
"Applied CS"
Computational science: https://en.wikipedia.org/wiki/Computational_science
Computational biology: https://en.wikipedia.org/wiki/Computational_biology
Computational thinking: https://en.wikipedia.org/wiki/Computational_thinking :
> The characteristics that define computational thinking are decomposition, pattern recognition / data representation, generalization/abstraction, and algorithms.
Additional skills useful for STEM fields: system administration / DevOps / DevSecOps, HPC: High Performance Computing (distributed systems, distributed algorithms, performance optimization; rewriting code that is designed to test unknown things with tests and for performance), research a graph of linked resources and reproducibly publish in LaTeX and/or computational notebooks such as Jupyter notebooks, dask-labextension, open source tool development (& sustainable funding) that lasts beyond one grant
Physicists build circuit that generates clean, limitless power from graphene
"Fluctuation-induced current from freestanding graphene" (2020) https://journals.aps.org/pre/abstract/10.1103/PhysRevE.102.0... https://doi.org/10.1103/PhysRevE.102.042101
> In the 1950s, physicist Léon Brillouin published a landmark paper refuting the idea that adding a single diode, a one-way electrical gate, to a circuit is the solution to harvesting energy from Brownian motion. Knowing this, Thibado's group built their circuit with two diodes for converting AC into a direct current (DC). With the diodes in opposition allowing the current to flow both ways, they provide separate paths through the circuit, producing a pulsing DC current that performs work on a load resistor.
> Additionally, they discovered that their design increased the amount of power delivered. "We also found that the on-off, switch-like behavior of the diodes actually amplifies the power delivered, rather than reducing it, as previously thought," said Thibado. "The rate of change in resistance provided by the diodes adds an extra factor to the power."
> The team used a relatively new field of physics to prove the diodes increased the circuit's power. "In proving this power enhancement, we drew from the emergent field of stochastic thermodynamics and extended the nearly century-old, celebrated theory of Nyquist," said Pradeep Kumar, associate professor of physics and coauthor.
> According to Kumar, the graphene and circuit share a symbiotic relationship. Though the thermal environment is performing work on the load resistor, the graphene and circuit are at the same temperature and heat does not flow between the two.
> That's an important distinction, said Thibado, because a temperature difference between the graphene and circuit, in a circuit producing power, would contradict the second law of thermodynamics. "This means that the second law of thermodynamics is not violated, nor is there any need to argue that 'Maxwell's Demon' is separating hot and cold electrons," Thibado said.
I don't understand. It sounds like they're drawing useful work (I think?) from a system that's already at thermodynamic equilibrium. Doesn't that violate the 2nd law of thermodynamics?
I'm not sure that I understand either. From the abstract (which phys.org failed to link to):
> The system reaches thermal equilibrium and the rates of heat, work, and entropy production tend quickly to zero. However, there is power generated by graphene which is equal to the power dissipated by the load resistor.
Looks like the article is also on ArXiV: https://arxiv.org/abs/2002.09947
https://scholar.google.com/scholar?oi=bibs&hl=en&cluster=103...
Is it really a closed system at equilibrium?
Mozilla shuts project Iodide: Datascience documents in browsers
I did this! I killed it and I didn't mean to.
Ten (10) days ago, I filed an issue in the iodide project: "Compatibility with 'percent' notebook format" https://github.com/iodide-project/iodide/issues/2942
And then six (6) days ago, I added this comment to that issue: https://github.com/iodide-project/iodide/issues/2942#issueco...
And now it's almost dead, and I didn't mean to kill it.
But I also suggested that it would be great if conda-forge had a WASM build target:
- "Consider moving CPython patches upstream" https://github.com/iodide-project/pyodide/issues/635#issueco...
For students, being able to go to a URL and have a notebook interface with the SciPy stack preinstalled without needing to have an organization manage shell accounts and/or e.g. JupyterHub for every student should be worth the necessary budget allocation. Their local machines have plenty of CPU, storage, and memory for all but big data workloads.
Iodide is/was really cool. Pyodide (much of the SciPy stack compiled to WASM) is also a great idea.
Jyve with latest JupyterLab, nbgrader, and configurable cloud storage could also solve.
Google Colab is an alternative to consider if you need to share Jupyter notebooks: https://colab.research.google.com/notebooks/intro.ipynb
There are many ways to share reproducible Jupyter notebooks.
Google Colab now supports ipywidgets (js) in notebooks. While you can install additional packages in Colab, additional packages must be installed by each user (e.g. with `! pip install sympy` in an initial input cell) for each new kernel.
repo2docker builds a docker image from software dependency versions specified in e.g. requirements.txt, environment.yml, and/or a postInstall script and then installs a current version of JupyterLab in the container. Zero-to-BinderHub describes how to get BinderHub (which builds and launches containers) running on a hosting provider w/ k8s. awesome-python-in-education/blob/master/README.md#jupyter
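For example, a repo containing only an environment.yml is buildable by repo2docker; a minimal sketch (the package pins here are illustrative):

```yaml
# environment.yml (contents illustrative)
name: example
channels:
  - conda-forge
dependencies:
  - python=3.8
  - numpy
  - matplotlib
```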
Google AI Platform Notebooks is hosted JupyterLab.
awesome-jupyter > Hosted Notebook Solutions lists a number of services: https://github.com/markusschanta/awesome-jupyter#hosted-note...
awesome-python-in-education > Jupyter links to many Jupyter resources like nbgrader and BinderHub but not yet Jyve: https://github.com/quobit/awesome-python-in-education#jupyte...
Ask HN: What are good life skills for people to learn?
My initial thoughts; learn to drive, first aid, a sport, play an instrument, a language, how to manage finances, to speak in front of people.
- "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894550
In no particular order:
- Food science; Nutrition
- Family planning: https://en.wikipedia.org/wiki/Family_planning
- Personal finance (see the link above for resources)
- How to learn
- How to teach [reading and writing, STEM, respect, compassion]
- Compassion for others' suffering
- How to considerately escape from unhealthy situations
- Coping strategies: https://en.wikipedia.org/wiki/Coping
- Defense mechanisms: https://en.wikipedia.org/wiki/Defence_mechanism
- Prioritization; productivity
- Goal setting; n-year planning; strategic alignment
Life skills: https://en.wikipedia.org/wiki/Life_skills
Khan Academy > Life Skills: https://www.khanacademy.org/college-careers-more
Four Keys Project metrics for DevOps team performance
> […] four key metrics that indicate the performance of a software development team:
> Deployment Frequency - How often an organization successfully releases to production
> Lead Time for Changes - The amount of time it takes a commit to get into production
> Change Failure Rate - The percentage of deployments causing a failure in production
> Time to Restore Service - How long it takes an organization to recover from a failure in production
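A hedged sketch of computing the four metrics from a list of deployment records; the record fields ('committed', 'deployed', 'failed', 'restored') are made up for illustration, not the Four Keys project's actual schema:

```python
from datetime import datetime, timedelta

# Two illustrative deployment records.
deploys = [
    {"committed": datetime(2020, 1, 1, 9), "deployed": datetime(2020, 1, 1, 17),
     "failed": False},
    {"committed": datetime(2020, 1, 2, 9), "deployed": datetime(2020, 1, 3, 12),
     "failed": True, "restored": datetime(2020, 1, 3, 14)},
]

days_observed = 7
deployment_frequency = len(deploys) / days_observed  # deploys per day

lead_times = [d["deployed"] - d["committed"] for d in deploys]
lead_time_for_changes = sum(lead_times, timedelta()) / len(lead_times)

change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

restore_times = [d["restored"] - d["deployed"] for d in deploys if d["failed"]]
time_to_restore = sum(restore_times, timedelta()) / len(restore_times)
```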
Ask HN: Resources to encourage teen on becoming computer engineer?
Howdy HN
A teenager I am close with would like to become a computer engineer. What resources, books, podcasts, camps, or experiences do you recommend to support this teen's endeavor?
"Ask HN: Something like Khan Academy but full curriculum for grade schoolers?" [through undergrads] https://news.ycombinator.com/item?id=23794001
"Ask HN: How to introduce someone to programming concepts during 12-hour drive?" https://news.ycombinator.com/item?id=15454071
"Ask HN: Any detailed explanation of computer science" https://news.ycombinator.com/item?id=15270458 : topologically-sorted? Information Theory and Constructor Theory are probably at the top:
> A bottom-up (topologically sorted) computer science curriculum (a depth-first traversal of a Thing graph) ontology would be a great teaching resource.
> One could start with e.g. "Outline of Computer Science", add concept dependency edges, and then topologically (and alphabetically or chronologically) sort.
> https://en.wikipedia.org/wiki/Outline_of_computer_science
> There are many potential starting points and traversals toward specialization for such a curriculum graph of schema:Things/skos:Concepts with URIs.
> How to handle classical computation as a "collapsed" subset of quantum computation? Maybe Constructor Theory?
> https://en.wikipedia.org/wiki/Constructor_theory
https://westurner.github.io/hnlog/ ... Ctrl-F "interview", "curriculum"
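The topological sort itself is a one-liner with Python 3.9's graphlib; a toy sketch of the curriculum-graph idea (the concept names and prerequisite edges are illustrative, not a real curriculum):

```python
from graphlib import TopologicalSorter

# Map each concept to its prerequisites (the dependency edges).
prerequisites = {
    "programming basics": set(),
    "discrete math": set(),
    "data structures": {"programming basics"},
    "algorithms": {"data structures"},
    "complexity theory": {"algorithms", "discrete math"},
}

order = list(TopologicalSorter(prerequisites).static_order())
# Every concept appears after all of its prerequisites.
```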
CadQuery: A Python parametric CAD scripting framework based on OCCT
I wish more people would try this over OpenSCAD. I like OpenSCAD, but not the UI; with CadQuery you use Python! And now I notice that "CadQuery supports Jupyter notebook out of the box".
The jupyter-cadquery extension renders models with three.js via pythreejs in a sidebar with jupyterlab-sidecar: https://github.com/bernhard-42/jupyter-cadquery#b-using-a-do...
https://github.com/bernhard-42/jupyter-cadquery/blob/master/...
Array Programming with NumPy
Looks like there's a new citation for NumPy in town.
"Citing packages in the SciPy ecosystem" lists the existing citations for SciPy, NumPy, scikits, and other -Py things: https://www.scipy.org/citing.html ( source: https://github.com/scipy/scipy.org/blob/master/www/citing.rs... )
A better way to cite requisite software might involve referencing a https://schema.org/SoftwareApplication record in JSON-LD, RDFa, or Microdata; for example: https://news.ycombinator.com/item?id=24489651
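A minimal JSON-LD sketch of such a record (the property choices here are illustrative; see https://schema.org/SoftwareApplication for the full vocabulary):

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "NumPy",
  "softwareVersion": "1.19.2",
  "url": "https://numpy.org/",
  "applicationCategory": "Scientific/Engineering"
}
```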
But there's as of yet no way to publish JSON-LD, RDFa, or Microdata Linked Data from LaTeX with Computer Modern.
For some reason this struck me as inappropriate for the outlet. It's a nice piece as an introduction to array programming with numpy, but seemed out of place to me.
If, going forward, 5% of all papers that use NumPy to get their results actually cite this paper, it will be one of Nature's most cited papers every year.
There's an interesting trend of what content gets published in peer-reviewed journals vs. blogs/github/etc. I suspect there is an audience segment that strongly values peer reviewed pieces that are equivalent content wise to introductory material in a variety of formats.
I wonder if github should add a "Review" feature to provide a similar content authoring experience.
It would be nice if citing repositories were easier-- either for generating a reference for my own code or acknowledging when I've used someone else's code in my research.
There's tons of math and physics blogs that contain useful results that the author wanted to make available but didn't manage to incorporate into a paper. I wonder if there'd be any interest in a sort of GitHub for proofs? It could even use git, since (assuming consistency) isn't math just a DAG anyways (and therefore isomorphic to a neural net, as are all things).
You can get a free DOI for and archive a tag of a Git repo with FigShare or Zenodo.
If you have repo2docker REES dependency scripts (requirements.txt, environment.yml, postInstall,) in your repo, a BinderHub like https://mybinder.org can build and cache a container image and launch a (free) instance in a k8s cloud.
Journals haven't yet integrated with BinderHub.
Putting the suggested citation and DOI URI/URL in your README and cataloging citations in an e.g. wiki page may increase the crucial frequency of citation.
A Linked Data format for presenting well-formed arguments with #StructuredPremises would help to realize the potential of the web as a graph of resources which may satisfy formal inclusion criteria for #LinkedMetaAnalyses.
The issue is that none of the citation count engines (Google scholar, scopus, Web of Science...) count citations on those DOIs. So for a researcher who needs to somehow demonstrate impact through citation counts, it does not really help unfortunately.
We could reason about sites that index https://schema.org/ScholarlyArticle according to our own and others' observations. Google Scholar, Semantic Scholar, and Meta all index Scholarly Articles: they copy the bibliographic metadata and the abstract for archival and scholarly purposes.
AFAIU, e.g. Zotero and Mendeley do not crawl and index articles or attempt to parse bibliographic citations from the astounding plethora of citation styles [citationstyles, citationstyles_stylerepo] into a citation graph suitable for representative metrics [zenodo_newmetrics].
bitcoin.org/bitcoin.pdf does not have a DOI, does not have an ORCID [orcid], and is not published in any journal but is indexed by e.g. Google Scholar; though there are apparently multiple records referring to a ScholarlyArticle with the same name and author. Something like "Hell's Angels" (1930)? No DOI, no ORCID, no parseable PDF structure: not indexed.
AFAIU, Google Scholar does not yet index ScholarlyArticle (or SoftwareApplication < CreativeWork) bibliographic metadata. GScholar indexes an older set of bibliographic metadata from HTML <meta> tags and also attempts to parse PDFs. [gscholar_inclusion]
Google Scholar is also not (yet?) integrated with Google Dataset Search (which indexes https://schema.org/Dataset metadata).
FigShare DOIs and Zenodo DOIs are DataCite DOIs [figshare_howtocite, zenodo_principles]; which apparently aren't (yet?) all indexed by Google Scholar [rescience_gscholar].
IIUC, all papers uploaded to https://arxiv.org are indexed by Google Scholar. In order for arxiv-vanity.com [arxiv_vanity] to render a mobile-ready, font-resizeable HTML5 version of a paper uploaded to arXiv, the LaTeX source must be uploaded. arXiv hosts certain categories of ScholarlyArticles.
JOSS (Journal of Open Source Software) has managed to get articles indexed by Google Scholar [rescience_gscholar]. They publish their costs [joss_costs]: $275 Crossref membership, DOIs: $1/paper:
> Assuming a publication rate of 200 papers per year this works out at ~$4.75 per paper
[citationstyles]: https://citationstyles.org
[citationstyles_stylerepo]: https://github.com/citation-style-language/styles
[gscholar_inclusion]: https://scholar.google.com/intl/en/scholar/inclusion.html#in...
[figshare_howtocite]: https://knowledge.figshare.com/articles/item/how-to-share-ci...
[zenodo_principles]: https://about.zenodo.org/principles/
[zenodo_newmetrics]: https://www.frontiersin.org/articles/10.3389/frma.2017.00013...
[rescience_gscholar]: https://github.com/ReScience/ReScience/issues/38
[arxiv_vanity]: https://www.arxiv-vanity.com/
[joss_costs]: https://joss.theoj.org/about#costs
[orcid]: https://en.wikipedia.org/wiki/ORCID
Do you like the browser bookmark manager?
How do you think it compares to services like webcull.com, raindrop.io, or getpocket.com? Have they advanced the field to the point that it's worth switching?
Things I'd add to browser bookmark managers someday:
- Support for (persisting) bookmarks tags. From the post re: the re-launch of del.icio.us: https://news.ycombinator.com/item?id=23985623
> "Allow reading and writing bookmark tags" https://bugzilla.mozilla.org/show_bug.cgi?id=1225916
> Notes re: how this could be standardized with JSON-LD: https://bugzilla.mozilla.org/show_bug.cgi?id=1225916#c116
> The existing Web Experiment for persisting bookmark tags: https://github.com/azappella/webextension-experiment-tags/bl...
- Standard search features like operators: ((term) AND (term2)) OR term3
- Regex search
- (Chrome) show the createdDate and allow (non-destructive) sort by date
- Native sync API for syncing to zero or more bookmarks / personal data storage providers
- Support for integration with extensions that support actual resource metadata like Zotero
- Linked Data support: extract and store bibliographic metadata like Zotero and OpenLink Structured Data Sniffer
What are the current limitations of the WebExtensions Bookmarks API (now supported by Firefox, Chrome, Edge, and hopefully eventually Safari)?: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
NIST Samate – Source Code Security Analyzers
Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools:
OWASP > Source Code Analysis Tools: https://owasp.org/www-community/Source_Code_Analysis_Tools
https://analysis-tools.dev/ (supports upvotes and downvotes)
analysis-tools-dev/static-analysis: https://github.com/analysis-tools-dev/static-analysis
analysis-tools-dev/dynamic-analysis: https://github.com/analysis-tools-dev/dynamic-analysis
devsecops/awesome-devsecops: https://github.com/devsecops/awesome-devsecops , https://github.com/TaptuIT/awesome-devsecops
kai5263499/awesome-container-security: https://github.com/kai5263499/awesome-container-security
https://en.wikipedia.org/wiki/DevOps#DevSecOps,_Shifting_Sec... :
> DevSecOps is an augmentation of DevOps to allow for security practices to be integrated into the DevOps approach. The traditional centralised security team model must adopt a federated model allowing each delivery team the ability to factor in the correct security controls into their DevOps practices.
awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
A Handwritten Math Parser in 100 lines of Python
Slightly more amusingly, but non-equivalently:
    import sys
    from functools import reduce
    from operator import add, sub, mul, truediv as div

    def calc(s, ops=[(add, '+'), (sub, '-'), (mul, '*'), (div, '/')]):
        if not ops:
            return float(s)
        (func, token), *remaining_ops = ops
        return reduce(func, (calc(part, remaining_ops) for part in s.split(token)))

    if __name__ == '__main__':
        print(calc(''.join(sys.argv[1:])))

Does support order of operations, in spite of appearances. Extending to support parentheses is left as an exercise for the reader :)

This is pretty clever. There's a video by Jonathan Blow, creator of Braid and currently making a programming language called Jai, that talks about parsing expressions with order of operations. He gives a simple approach, apparently discovered fairly recently (though he didn't remember the paper name), that does this in a single pass and is hardly any more complicated than what you wrote. You can see it here [1]. He also trashes using yacc/bison before that, and most academic parsing theory.
Reverse Polish notation (RPN) > Converting from infix notation: https://en.wikipedia.org/wiki/Reverse_Polish_notation#Conver...
Shunting-yard algorithm: https://en.wikipedia.org/wiki/Shunting-yard_algorithm
Infix notation supports parentheses.
Infix notation: 3 + 4 × (2 − 1)
RPN: 3 4 2 1 − × +
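A minimal sketch of the shunting-yard conversion and a stack-based RPN evaluator for the example above (function names here are mine, not from the linked articles; all operators treated as left-associative):

```python
from operator import add, sub, mul, truediv

PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2}

def to_rpn(tokens):
    """Shunting-yard: convert infix tokens to RPN."""
    output, stack = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # Pop operators of greater or equal precedence first
            while stack and stack[-1] in PRECEDENCE and PRECEDENCE[stack[-1]] >= PRECEDENCE[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                output.append(stack.pop())
            stack.pop()  # discard the '('
        else:
            output.append(tok)  # operand
    while stack:
        output.append(stack.pop())
    return output

def eval_rpn(tokens):
    """Evaluate an RPN token list with a stack."""
    ops = {'+': add, '-': sub, '*': mul, '/': truediv}
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

# 3 + 4 * (2 - 1)  ->  3 4 2 1 - * +
rpn = to_rpn(['3', '+', '4', '*', '(', '2', '-', '1', ')'])
```

Running the example yields the same RPN as above, and eval_rpn(rpn) gives 7.0.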
PEP – An open source PDF editor for Mac
I have a dream... that one day people will name their projects with names that don't exist on google yet.
If you search for PEP now you'll find Python Enhancement Proposals, the "Philippine Entertainment Portal", and the stock ticker for PepsiCo.
I wish that once people do name their project, they would assign it a 128-bit random number in lower case hex, and include that number on any web page that they would like people searching for their project to find.
That way once I know that say PEP the PDF editor exists and find its 128-bit number (let's say that is 379dd864b16eaca3ce94c15a6bdfcc73), at least I can subsequently toss a +379dd864b16eaca3ce94c15a6bdfcc73 on my searches to effectively let the search engine know I want PEP the PDF editor results rather than PEP the python enhancement results or PEP the entertainment portal results or PEP that refreshing beverage company stock symbol.
"xxd -l 16 -p /dev/urandom" is a handy way to get a 128-bit random hex number. A UUID generator works, too, although they usually include some punctuation you will need to delete and you might have to lower case their output.
> I wish that once people do name their project, they would assign it a 128-bit random number in lower case hex
We already have something similar: URLs.
tzs is proposing URNs rather than textual program names. A URL is unnecessarily specific (though I suppose you could anycast URL resolution).
> RFC 4122 defines a Uniform Resource Name (URN) namespace for UUIDs. A UUID presented as a URN appears as follows:[1]
> > urn:uuid:123e4567-e89b-12d3-a456-426655440000
https://en.wikipedia.org/wiki/Universally_unique_identifier#...
Version 4 UUIDs have 122 random bits (out of 128 bits total).
In Python:

    >>> import uuid
    >>> _id = uuid.uuid4()
    >>> _id.urn
    'urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee'
    >>> _id.hex
    '4c466878a81b4f22a112c704655fa4ee'

Whether search engines will consider a URL, a URN, or a random string without dashes to be one searchable-for token is pretty ironic in terms of extracting relations between resources in a Linked Data hypergraph.
The relation between a resource and a Thing with a URI/URN/URL can be expressed with https://schema.org/about . In JSON-LD:

    {"@context": "https://schema.org",
     "@type": "WebPage",
     "about": {
       "@type": "SoftwareApplication",
       "identifier": "urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee",
       "url": ["", ""],
       "name": [
         "a schema.org/SoftwareApplication < CreativeWork < Thing",
         {"@value": "a rose by any other name",
          "@language": "en"}]}}
Or with RDFa:

    <body vocab="https://schema.org/">
target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" 
rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" 
target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" 
rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" 
target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" 
rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" 
target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" rel="nofollow noopener" target="_blank">https://schema.org/" typeof="WebPage">
<div property="about" typeof="SoftwareApplication">
<meta property="identifier" content="urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee"/>
<span property="name">a schema.org/SoftwareApplication < CreativeWork < Thing</span>
<span property="name" lang="en">a rose by any other name</span>
</div>
</body>
Or with Microdata: <div itemscope itemtype="https://schema.org/WebPage">
noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow 
noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow 
noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow 
noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow 
noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow 
noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow 
noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" target="_blank" rel="nofollow noopener">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" 
target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" 
target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" 
target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" 
target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" 
target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" 
target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" target="_blank">https://schema.org/WebPage" rel="nofollow noopener" 
noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" target="_blank" rel="nofollow noopener">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" href="https://schema.org/" />
<div itemprop="about" itemtype="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a 
href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a 
href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="https://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">https://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" itemscope>
<meta itemprop="identifier" content="urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee" />
<meta itemprop="name" content="a <a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a 
href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a 
href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" 
target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" 
rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow 
noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow noopener">http://schema.org/SoftwareApplication" target="_blank" rel="nofollow 
target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" 
target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">schema.org/SoftwareApplication < CreativeWork < Thing"/>
<meta itemprop="name" content="a rose by any other name" lang="en"/>
</div>
</div>
The Unix timestamp will begin with 16 this Sunday
Redox: Unix-Like Operating System in Rust
I'm Jeremy Soller, the creator of Redox OS. Let me know if you have questions!
Are there tools to support static analysis and formal methods in Rust yet?
From https://news.ycombinator.com/item?id=21839514 re: awesome-safety-critical https://awesome-safety-critical.readthedocs.io/en/latest/ :
> > Does Rust have a chance in mission-critical software? (currently Ada and proven C niches) https://www.reddit.com/r/rust/comments/5iv5j7/does_rust_have...
FWIU, Sealed Rust is in progress.
And there's also RustPython for the userspace.
Ask HN: How are online communities established?
HN, Reddit, Stack Overflow, etc. are all established communities with users. How do you start a community when you don't have any users?
You should read the book People Powered by Jono Bacon. Has some really good insights and is a 101 course on exactly this.
Seconded. "People Powered: How Communities Can Supercharge Your Business, Brand, and Teams" (2019) https://g.co/kgs/CF5TEk
"The Art of Community: Building the New Age of Participation" (2012) https://g.co/kgs/P2V1kn
"Tribes: We need you to lead us" (2011) https://g.co/kgs/T8jaFS
The 1% 'rule' https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture) :
> In Internet culture, the 1% rule is a rule of thumb pertaining to participation in an internet community, stating that only 1% of the users of a website add content, while the other 99% of the participants only lurk. Variants include the 1–9–90 rule (sometimes 90–9–1 principle or the 89:10:1 ratio),[1] which states that in a collaborative website such as a wiki, 90% of the participants of a community only consume content, 9% of the participants change or update content, and 1% of the participants add content.
... Relevant metrics:
- Marginal cost of service https://en.wikipedia.org/wiki/Marginal_cost
- Customer acquisition cost: https://en.wikipedia.org/wiki/Customer_acquisition_cost
- [Quantifiable and non-quantifiable] Customer Lifetime Value: https://en.wikipedia.org/wiki/Customer_lifetime_value
Last words of the almost-clichéd community organizer surrounded by dormant accounts: "Network effects will result in sufficient (grant) funding"
Business model examples that may be useful for building and supporting sustainable communities with clear Missions, Objectives, and Criteria for Success: https://gist.github.com/ndarville/4295324
Python Documentation Using Sphinx
I usually generate new Python projects with a cookiecutter, such as cookiecutter-pypackage. I like the way cookiecutter-pypackage includes a Makefile with a `docs` task, so I can call `make docs` to build the Sphinx docs in the docs/ directory, which include:
- a /docs/readme.rst that includes the /README.rst as the first document in the toctree
- a sensible set of default documents: readme (.. include:: /README.rst), installation, usage, modules (sphinx-autodoc output), contributing, authors, history (.. include:: /HISTORY.rst)
- a sphinx conf.py that sets the docs' version and release attributes to pkgname.__version__; so that the version number only needs to be changed in one place (as long as setup.py or setup.cfg also read the version string from pkgname.__version__)
- a default set of extensions: ['sphinx.ext.autodoc', 'sphinx.ext.viewcode'] that generates API docs and includes '[source]' hyperlinks from the generated API docs to the transcluded syntax-highlighted source code and links back to the API docs from the source code
https://github.com/audreyfeldroy/cookiecutter-pypackage/tree...
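To make the single-source-version pattern above concrete, here is a sketch of the relevant part of a conf.py. The package is faked with a stand-in object so the snippet is self-contained; in a real project you would `import pkgname` (your actual package) instead, and the names here are illustrative rather than cookiecutter-pypackage's exact output:

```python
# docs/conf.py (sketch): read the version from the package itself so the
# number lives in exactly one place (pkgname.__version__).
# For illustration we fake the package; really you would `import pkgname`.
import types

pkgname = types.SimpleNamespace(__version__="1.2.3")  # stand-in package

# X.Y "short" version and full release string, as Sphinx expects
version = ".".join(pkgname.__version__.split(".")[:2])
release = pkgname.__version__

extensions = [
    "sphinx.ext.autodoc",   # generate API docs from docstrings
    "sphinx.ext.viewcode",  # '[source]' links to highlighted source
]
```

As long as setup.py or setup.cfg reads the same `pkgname.__version__`, bumping the version is a one-line change.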
There are a few styles of docstrings that Sphinx can parse and include in docs with e.g. sphinx-autodoc:
`:param`, `:type`, `:returns`, `:rtype` field-list docstrings (which OP uses, and from which pycontracts can read runtime parameter and return type contracts: https://andreacensi.github.io/contracts/ ; note that Python 3 annotations are now the preferred style for compile- or edit-time typechecks)
Numpydoc docstrings: https://numpydoc.readthedocs.io/en/latest/format.html
Googledoc docstrings: https://sphinxcontrib-napoleon.readthedocs.io/en/latest/
You can use Markdown with Sphinx in at least three ways:
MyST Markdown supports Sphinx and Docutils roles and directives. Jupyter Book builds upon MyST Markdown. With Jupyter Book, you can include Jupyter notebooks (which can include MyST Markdown) in your Sphinx docs. Executable notebooks are a much easier way to include up-to-date code outputs in docs. https://myst-parser.readthedocs.io/en/latest/
Sphinx (& ReadTheDocs) w/ recommonmark: https://docs.readthedocs.io/en/stable/intro/getting-started-...
Nbsphinx predates Jupyter Book and doesn't yet support MyST Markdown, but does support Markdown cells in Jupyter notebooks. Nbsphinx includes a parser for including .ipynb Jupyter notebooks in Sphinx docs. nbsphinx supports raw RST (ReST) cells in Jupyter notebooks and has great docs: https://nbsphinx.readthedocs.io/en/latest/
Nbdev is another approach; though it's not Sphinx:
> nbdev is a library that allows you to fully develop a library in Jupyter Notebooks, putting all your code, tests and documentation in one place.
> [...] Add %nbdev_export flags to the cells that define the functions you want to include in your python modules
https://github.com/fastai/nbdev
A few additional sources of docs for Sphinx and ReStructuredText:
Read The Docs docs > Getting Started with Sphinx > External Resources https://docs.readthedocs.io/en/stable/intro/getting-started-...
CPython Devguide > "Documenting Python" https://devguide.python.org/documenting/
"How to write [Linux] kernel documentation" https://www.kernel.org/doc/html/latest/doc-guide/index.html
awesome-sphinxdoc: https://github.com/yoloseem/awesome-sphinxdoc
... "Ask HN: Recommendations for Books on Writing [for engineers]?" https://news.ycombinator.com/item?id=23945580
Traits of good remote leaders
And the evidence for these considerations is what?
They reference a study: https://link.springer.com/article/10.1007%2Fs10869-020-09698..., titled: "Who Emerges into Virtual Team Leadership Roles? The Role of Achievement and Ascription Antecedents for Leadership Emergence Across the Virtuality Spectrum".
Fortunately the references are free to view.
"Table 4 – Correlation of Development Phases, Coping Stages and Comfort Zone transitions and the Performance Model" in "From Comfort Zone to Performance Management" White (2008) tabularly correlates the Tuckman group development phases (Forming, Storming, Norming, Performing, Adjourning) with the Carnall coping cycle (Denial, Defense, Discarding, Adaptation, Internalization) and Comfort Zone Theory (First Performance Level, Transition Zone, Second Performance Level), and the White-Fairhurst TPR model (Transforming, Performing, Reforming). The ScholarlyArticle also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages.
https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%E2...
IDK what's different about online teams in regards to performance management?
Show HN: Eiten – open-source tool for portfolio optimization
Is it possible to factor (e.g. GRI) sustainability criteria into the portfolio fitness function? https://news.ycombinator.com/item?id=21922558
My concern is that - like any other portfolio optimization algorithm - blindly optimizing on fundamentals and short term returns will lead to investing in firms who just dump external costs onto people in the present and future; so, screening with sustainability criteria is important to me.
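A screen of the kind described could run before any optimization: filter the ticker universe against a sustainability score and only hand the survivors to the optimizer. This is a toy sketch with invented tickers, scores, and function names, not Eiten's actual API:

```python
# Toy pre-optimization screen: drop tickers whose sustainability score
# is below a threshold before the universe reaches any optimizer.
# Tickers and scores here are made up for illustration.
def screen_universe(tickers, esg_scores, min_score):
    """Return only the tickers whose ESG/GRI-style score meets the bar."""
    return [t for t in tickers if esg_scores.get(t, 0) >= min_score]

universe = ["AAA", "BBB", "CCC", "DDD"]
scores = {"AAA": 82, "BBB": 41, "CCC": 67, "DDD": 90}

screened = screen_universe(universe, scores, min_score=60)
# screened -> ["AAA", "CCC", "DDD"]
```

The hard part is of course the scores themselves (GRI-aligned data is not free or uniform), but the plumbing is cheap to add to a stocks.txt-style universe file.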
From https://news.ycombinator.com/item?id=19111911 :
> awesome-quant lists a bunch of other tools for algos and superalgos: https://github.com/wilsonfreitas/awesome-quant
This is perfect. Thank you for sharing this.
I might start implementing some of these but would love for someone else to add a few PRs as well. The code is pretty modular especially if we want to add new strategies.
(Sustainable) Index ETFs in the stocks.txt universe would likely be less sensitive to single performers' effects in unbalanced portfolios.
> pyfolio.tears.create_interesting_times_tear_sheet measures algorithmic trading algorithm performance during "stress events" https://github.com/quantopian/pyfolio/blob/03568e0f328783a6a...
Ask HN: Any well funded tech companies tackling big, meaningful problems?
Are there any well funded tech startups / companies tackling major societal problems? Any of these fair game: https://en.wikipedia.org/wiki/List_of_global_issues
----
I don't see or hear of any and want to know if this is just my bias or if there really is a shortage of resources in tech being allocated to solving the world's most important problems. I'm sure I'm not the only engineer that's looking out for companies like this.
Ran into this previous Ask HN (https://news.ycombinator.com/item?id=24168902) that asked a similar question. However, here I wanna focus on the better funded efforts (not side projects, philanthropy etc).
One example I've heard so far is Tesla. Any others?
You can make an impact by solving important local and global problems by investing your time, career, and savings; by listing and comparing solutions.
As a labor market participant, you can choose to work for places that have an organizational mission that strategically aligns with local, domestic, and international objectives.
https://en.wikipedia.org/wiki/Strategic_alignment ... "Schema.org: Mission, Project, Goal, Objective, Task" https://news.ycombinator.com/item?id=12525141
As an investor, you can choose to invest in organizations that are making the sort of impact you're looking for: you can impact invest.
https://en.wikipedia.org/wiki/Impact_investing
You mentioned "List of global issues"; which didn't yet have a link to the UN Sustainable Development Goals (the #GlobalGoals). I just added this to the linked article:
> As part of the 2030 Agenda for Sustainable Development, the UN Millennium Development Goals (2000-2015) were superseded by the UN Sustainable Development Goals (2016-2030), which are also known as The Global Goals. There are associated Targets and Indicators for each Global Goal.
There are 17 Global Goals.
Sustainability reporting standards can align with the Sustainable Development Goals. For example, the GRI standards are now aligned with the UN Sustainable Development Goals.
https://en.wikipedia.org/wiki/Sustainable_Development_Goals
Investors, fund managers, and potential employees can identify companies which are making an impact by reviewing corporate sustainability and ESG reports.
From https://www.undp.org/content/undp/en/home/sustainable-develo... :
> SDG Target 12.6: "Encourage companies, especially large and transnational companies, to adopt sustainable practices and to integrate sustainability information into their reporting cycle"
From https://news.ycombinator.com/item?id=21302926 :
> > What are some of the corporate sustainability reporting standards?
> > From https://en.wikipedia.org/wiki/Sustainability_reporting#Initi... :
> >> Organizations can improve their sustainability performance by measuring (EthicalQuote (CEQ)), monitoring and reporting on it, helping them have a positive impact on society, the economy, and a sustainable future. The key drivers for the quality of sustainability reports are the guidelines of the Global Reporting Initiative (GRI),[3] (ACCA) award schemes or rankings. The GRI Sustainability Reporting Guidelines enable all organizations worldwide to assess their sustainability performance and disclose the results in a similar way to financial reporting.[4] The largest database of corporate sustainability reports can be found on the website of the United Nations Global Compact initiative.
> >The GRI (Global Reporting Initiative) Standards are now aligned with the UN Sustainable Development Goals (#GlobalGoals). https://en.wikipedia.org/wiki/Global_Reporting_Initiative
> >> In 2017, 63 percent of the largest 100 companies (N100), and 75 percent of the Global Fortune 250 (G250) reported applying the GRI reporting framework.[3]
What are some good ways to search for companies who (1) do sustainability reports, (2) engage in strategic alignment in corporate planning sessions, (3) make sustainability a front-and-center issue in their company's internal and external communications?
What are some examples of companies who have a focus on sustainability and/or who have developed a nonprofit organization for philanthropic missions which are sometimes best accounted for as a distinct organization or a business unit (which can accept and offer receipts for donations as a non-profit)?
How can an employee drive change in a small or a large company? Identify opportunities to deliver value and goodwill. Read through the Global Goals, Targets, and Indicators; and get into the habit of writing down problems and solutions.
3 pillars of [Corporate] Sustainability: (Environment (Society (Economy))). https://en.wikipedia.org/wiki/Sustainability#Three_dimension...
"Launch HN: Charityvest (YC S20) – Employee charitable funds and gift matching" https://news.ycombinator.com/item?id=23907902 :
> We created a modern, simple, and affordable way for companies to include charitable giving in their suite of employee benefits.
> We give employees their own tax-deductible charitable giving fund, like an “HSA for Charity.” They can make contributions into their fund and, from their fund, support any of the 1.4M charities in the US, all on one tax receipt.
> Using the funds, we enable companies to operate gift matching programs that run on autopilot. Each donation to a charity from an employee is matched automatically by the company in our system.
> A company can set up a matching gift program and launch giving funds to employees in about 10 minutes of work.
"Salesforce Sustainability Cloud Becomes Generally Available" https://news.ycombinator.com/item?id=22068522 :
> Are there similar services for Sustainability Reporting and accountability?
Column Names as Contracts
I really like the idea of being more thoughtful about naming columns and being more explicit about the “type” of data contained in them.
Is this idea already known among data modelers or data engineers?
I’d love to read any other references, if available.
I am a data architect in my day job. Within the realm of data management, I'd say "metadata management" [1] is the general category this fits within.
I would say, yes this idea is known/very common, as data architecture is as much about the descriptive language we use as anything. I mean, "business glossaries", taxonomy, even just naming conventions [2] in coding, these are all related.
If you build enough databases/tables or even code yourself, you inevitably come across the "how to name things" problem [3]. If all you have to sort on for the known meaning of a thing (column, table, file, etc.) is a single string value, then encoding meaning into it is quite common. This way, a sort creates a kind of "grouping". Many database vendors follow standard naming conventions - such as Oracle, for example [4]. It is considered a best practice when designing/building the metadata for a large system, to establish a naming convention. Among other things, it makes finding things easier, as well as all the potential for automation.
You get all kinds of variations on this, such as, should the "ID_" come as a prefix or a suffix (i.e. "_ID"). One's initial thought is to use it as a prefix so all the related types group together, but then that becomes much more difficult if you want to sort items by their functional area (e.g. DRIVER_ID, DRIVER_IND, etc.).
One other place you see something similar is in "smart numbers" which is an eternal argument - should I use a "dumb identifier" (GUID, integer) or a "smart one" (one encoding additional meaning) [5].
I mean, basically, any time you can encode information in the meta-data of data, I think you can then operate on it by following "convention over configuration" (as mentioned elsewhere in the discussion comments).
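A sketch of what "operating on the metadata by convention" can look like in practice; the suffix map and parser here are invented for illustration, not any vendor's standard:

```python
# Sketch: treat suffix conventions like "_ID", "_IND", "_DT" as a
# contract and dispatch on them. The suffix-to-type map is illustrative.
SUFFIX_TYPES = {
    "_ID": int,    # surrogate identifiers
    "_IND": bool,  # Y/N indicator flags
    "_DT": str,    # ISO-8601 date strings (left unparsed here)
}

def column_type(column_name):
    """Infer a Python type from a column-name suffix convention."""
    for suffix, typ in SUFFIX_TYPES.items():
        if column_name.upper().endswith(suffix):
            return typ
    return str  # default: plain text

# column_type("DRIVER_ID") -> int; column_type("DRIVER_IND") -> bool
```

Once such a convention holds across a schema, validation, test generation, and documentation can all key off the names alone.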
The only problem I see is that such conventions can at times be limiting, depending on the length of your metadata columns and the variability you are trying to capture. That is why I believe metadata is generally better separated from, and linked to, the data it describes: this decoupling allows for much more descriptive metadata than one could encode in a single string value. Certainly, you can get a long way with an approach like this, but I suspect you would run into 80/20-rule limitations.
Using naming in this way is a form of tight coupling, which could be seen as an anti-pattern in terms of meta-data flexibility, in some cases.
[1] https://en.wikipedia.org/wiki/Metadata_management
[2] https://en.wikipedia.org/wiki/Naming_convention_(programming...
[3] https://martinfowler.com/bliki/TwoHardThings.html
[4] https://oracle-base.com/articles/misc/naming-conventions
In terms of database normalization, delimiting multiple fields within a column-name field violates the "atomic columns" requirement of the first through sixth normal forms (1NF - 6NF).
https://en.wikipedia.org/wiki/Database_normalization
Are there standards for storing columnar metadata (that is, metadata about the columns; or column-level metadata)?
In terms of columns, SQL has (implicit ordinal, name, type) and then primary key, index, and [foreign key] constraints.
RDFS (RDF Schema) is an open W3C linked data standard. An rdf:Property may have a rdfs:domain and a rdfs:range; where the possible datatypes are listed as instances of rdfs:range. Primitive datatypes are often drawn from XSD (XML Schema Definition), or https://schema.org/ . An rdfs:Class instance may be within the rdfs:domain and/or the rdfs:range of an rdf:Property.
RDFS is generally not sufficient for data validation; there are a number of standards which build upon RDFS: W3C SHACL (Shapes Constraint Language) and W3C CSVW (CSV on the Web).
There is some existing work on merging JSON Schema and SHACL.
CSVW builds upon the W3C "Model for Tabular Data and Metadata on the Web", which supports arbitrary "annotations" on columns. CSVW can be represented in any RDF serialization: Turtle/TriG/N3, RDF/XML, JSON-LD.
https://www.w3.org/TR/tabular-data-primer/
https://www.w3.org/TR/tabular-data-model/ :
> an annotated tabular data model: a model for tables that are annotated with metadata. Annotations provide information about the cells, rows, columns, tables, and groups of tables […]
...
From https://twitter.com/westurner/status/901992073846456321 :
> "7 metadata header rows (column label, property URI path, DataType, unit, accuracy, precision, significant figures)" https://wrdrd.github.io/docs/consulting/linkedreproducibilit...
...
From https://twitter.com/westurner/status/1295774405923147778 :
> Relevant: https://discuss.ossdata.org/ topics: "Linked Data formats, tools, challenges, opportunities; CSVW, https://schema.org/Dataset , https://schema.org/ScholarlyArticle " https://discuss.ossdata.org/t/linked-data-formats-tools-chal...
> "A dataframe protocol for the PyData ecosystem" https://discuss.ossdata.org/t/a-dataframe-protocol-for-the-p...
> A .meta protocol should implement the W3C Tabular Data Model: [...]
...
The various methods of doing CSV2RDF and R2RML (SQL / RDB to RDF Mapping) each have a way to specify additional metadata annotations. None stuff data into a column name (which I'm also guilty of doing with e.g. "columnspecs" in a small line-parsing utility called pyline that can cast columns to Python types and output JSON lines).
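A minimal version of the "columnspec" idea mentioned (cast delimited columns to Python types, emit JSON lines) might look like this; the spec format here is invented for illustration and is not pyline's actual columnspec syntax:

```python
import json

def cast_line(line, colspec, sep="\t"):
    """Split a line, cast each field per colspec, return a JSON string.

    colspec is a list of (name, type) pairs; this format is
    illustrative, not pyline's real syntax.
    """
    fields = line.rstrip("\n").split(sep)
    record = {name: typ(value) for (name, typ), value in zip(colspec, fields)}
    return json.dumps(record)

spec = [("name", str), ("count", int), ("ratio", float)]
out = cast_line("widget\t3\t0.5", spec)
# out -> '{"name": "widget", "count": 3, "ratio": 0.5}'
```

The point stands, though: the (name, type) pairs are exactly the metadata that CSVW or an R2RML mapping would keep alongside the data rather than inside a column name.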
...
Even JSON5 is insufficient when it comes to representing e.g. complex fractions: there must be a tbox (schema) in order to read the data out of the abox (assertions; e.g. JSON). JSON-LD is sufficient for representation; and there are also specs like RDFS, SHACL, and CSVW.
Abox: https://en.wikipedia.org/wiki/Abox
I see the line of thinking you're going down. There are ISO standards for data types, in a sense I could see why one would seek a standard language for defining the metadata/specification of a type as data. Have to really think about that some more.. in a way a regex could be seen as a compact form of expressing the capability of a column in terms of value ranges or domains, but to define the meaning of the data, not so much.
Your interpretation of the atomic-columns requirement is a little different from my understanding. That requirement of normalization only applies to the "cells" of columnar data; it says nothing about encoding meaning into column names, which are themselves simply descriptive metadata.
I mean, for sure you wouldn't want to encode many values/meanings into a column name (some systems have length restrictions that would make that impossible, I'm not sure it makes sense anyway), but just pointing out that technically the spec does not make that illegal. Certainly, adding minor annotations within the name of a column separated by a supported delimiter does not, in my opinion, violate normalization rules at all. I mean things like "ID_" or similar.
Have you looked at INFORMATION_SCHEMA in SQL databases? [1] You mentioned SQL metadata and constraints, that is as close to a standard feature for querying that information there is, some databases do it using similar but non-standard ways (Oracle for example).
Also, not standard but, many relational databases support extended properties or Metadata for objects (tables, views, columns, etc.) - you can often come up with your own scheme although rarely do I see people utilize these features. [2] [3]
At some point it feels like we are more talking about type definitions and annotations, applied to data columns.
Maybe like, BNF [4] for purely data table columns (which are essentially types)?
[1] https://en.wikipedia.org/wiki/Information_schema
[2] http://www.postgresql.org/docs/current/static/sql-comment.ht...
[3] https://docs.microsoft.com/en-us/sql/relational-databases/sy...
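To make the INFORMATION_SCHEMA point concrete: standard SQL exposes column metadata as ordinary rows you can query. SQLite doesn't implement INFORMATION_SCHEMA, so this sketch uses its `PRAGMA table_info` equivalent; on PostgreSQL or SQL Server you would query `information_schema.columns` instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE driver (driver_id INTEGER PRIMARY KEY, driver_ind TEXT)"
)

# SQLite's stand-in for information_schema.columns:
# each row is (cid, name, type, notnull, dflt_value, pk)
columns = conn.execute("PRAGMA table_info(driver)").fetchall()
names = [row[1] for row in columns]
```

The same query-the-catalog approach is what makes naming conventions automatable: tooling can read the names and types back out and enforce the convention.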
Graph Representations for Higher-Order Logic and Theorem Proving (2019)
ONNX (and maybe RIF) are worth mentioning.
ONNX: https://onnx.ai/ :
> ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers
RIF (~FOL): https://en.wikipedia.org/wiki/Rule_Interchange_Format
Datalog (not Turing-complete): https://en.wikipedia.org/wiki/Datalog
HOList Benchmark: https://sites.google.com/view/holist/home
"HOList: An Environment for Machine Learning of Higher-Order Theorem Proving" (2019) https://arxiv.org/abs/1904.03241
> Abstract: We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary mathematical theories and thereby present an interesting, open-ended challenge for deep learning. We provide an open-source framework based on the HOL Light theorem prover that can be used as a reinforcement learning environment. HOL Light comes with a broad coverage of basic mathematical theorems on calculus and the formal proof of the Kepler conjecture, from which we derive a challenging benchmark for automated reasoning. We also present a deep reinforcement learning driven automated theorem prover, DeepHOL, with strong initial results on this benchmark.
I wonder why they don't mention any work based on transformer architectures? The recent work on solving differential equations based on expressions in reverse polish notation seemed like a reasonable idea to apply to theorem proving as well.
Really cool work though!
A transformer is unable to really represent logic, let alone higher order logic and theorem proving.
A transformer is a universal function approximator. The question is whether it can do so reasonably efficiently. Trained on natural language linear sequences, I’m with you. Trained on abstract logical graph representations? I don’t think that question’s answered yet, unless I’m missing something.
How do transformers handle truth tables, logical connectives, propositional logic / rules of inference, and first-order logic?
Truth table: https://en.wikipedia.org/wiki/Truth_table
Logical connective: https://en.wikipedia.org/wiki/Logical_connective
Propositional logic: https://en.wikipedia.org/wiki/Propositional_calculus
Rules of inference: https://en.wikipedia.org/wiki/Rule_of_inference
DL: Description logic: https://en.wikipedia.org/wiki/Description_logic (... The OWL 2 profiles (EL, QL, RL; DL, Full) have established decidability and complexity: https://www.w3.org/TR/owl2-profiles/ )
FOL: First-order logic: https://en.wikipedia.org/wiki/First-order_logic
HOL: Higher-order logic: https://en.wikipedia.org/wiki/Higher-order_logic
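For concreteness, the simplest object in that list is easy to generate exhaustively; a truth table is just the enumeration of a connective over all assignments (this sketch is mine, not from the paper):

```python
from itertools import product

def truth_table(connective, arity=2):
    """Enumerate a Boolean connective over all 2**arity assignments."""
    return [
        (args, connective(*args))
        for args in product([False, True], repeat=arity)
    ]

implies = lambda p, q: (not p) or q
table = truth_table(implies)
# [((False, False), True), ((False, True), True),
#  ((True, False), False), ((True, True), True)]
```

Propositional logic is decidable precisely because this enumeration is finite; the interesting question for transformers starts at FOL and above, where no such exhaustive table exists.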
In terms of regurgitating without critical reasoning?
Critical reasoning: https://en.wikipedia.org/wiki/Critical_thinking
Think about a program that can write code but not execute it. It’s not hard to get a transformer to learn to write python code like merge sort or simple arithmetic code, even though a transformer can’t reasonably learn to sort, nor can it learn simple arithmetic. It’s an important disambiguation. In one view it appears it can’t (learn to sort) and in another it demonstrably can (learn to code up a sort function). It can learn to usefully manipulate the language without needing the capacity to execute the language. They probably can’t learn to “execute” what you’re talking about (execute being a loose analogy), but I’d say the jury’s out on whether they can learn to usefully manipulate it.
A transformer is a function from a sequence of symbols to a sequence of symbols. A truth table, for example, is exactly that kind of function from variables to the [True, False] alphabet.
A transformer can represent some very complex logical operations, and per this article is Turing complete: https://arxiv.org/abs/1901.03429, meaning any computable function, including a theorem prover, can be represented as a transformer.
Another question is whether it is feasible/viable/rational to build a transformer for this. My intuition says: no.
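To make the seq-of-symbols framing concrete, here is a minimal Python sketch (not from the thread; function and variable names are illustrative) that treats a propositional formula as exactly that kind of function: a map from variable assignments to {True, False}.

```python
from itertools import product

def truth_table(variables, formula):
    """Enumerate a truth table: map each assignment of the
    variables to the formula's truth value (True/False)."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((values, formula(env)))
    return rows

# Implication (p -> q), written as a Python predicate over an assignment dict:
table = truth_table(["p", "q"], lambda env: (not env["p"]) or env["q"])
for values, result in table:
    print(values, result)
```

A theorem prover would need to do far more than enumerate assignments, but the input/output shape is the same: symbols in, symbols out.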
Show HN: Linux sysadmin course, eight years on
Almost eight years ago I launched an online “Linux sysadmin course for newbies” here at HN.
It was a side-project that went well, but never generated enough money to allow me to fully commit to leaving the Day Job. After surviving the Big C, and getting made redundant I thought I might improve and relaunch it commercially – but my doctors are a pessimistic bunch, so it looked like I didn’t have the time.
Instead, I rejigged/relaunched it via a Reddit forum this February as free and open - and have now gathered a team of helpers to ensure that it keeps going each month even after I can’t be involved any longer.
It’s a month-long course which restarts each month, so “Day 1” of September is this coming Monday.
It would be great if you could pass the word on to anyone you know who may be the target market of those who: “...aspire to get Linux-related jobs in industry - junior Linux sysadmin, devops-related work and similar”.
[0] http://www.linuxupskillchallenge.org/
[1] https://www.reddit.com/r/linuxupskillchallenge/
[2] http://snori74.blogspot.com/2020/04/health-status.html
There are a number of resources that may be useful for your curriculum for this project listed in "Is there a program like codeacademy but for learning sysadmin?" https://news.ycombinator.com/item?id=19469266 :
> [ http://www.opsschool.org/ , https://github.com/kahun/awesome-sysadmin/blob/master/README... , https://github.com/stack72/ops-books , https://landing.google.com/sre/books/ , https://response.pagerduty.com/ (Incident Response training)]
To that I'd add that K3D (based on K3S, which is now a CNCF project) runs Kubernetes (k8s) in Docker containers. https://github.com/rancher/k3d
For zero-downtime (HA: High availability) deployments, "Zero-Downtime Deployments To a Docker Swarm Cluster" describes Rolling Updates and Blue-Green Deployments; with illustrations: https://github.com/vfarcic/vfarcic.github.io/blob/master/doc...
For git-push style deployment with more of a least privileges approach (which also has more moving parts) you could take a look at: https://github.com/dokku/dokku-scheduler-kubernetes#function...
And also reference ansible molecule and testinfra for writing sysadmin tests and the molecule vagrant driver for testing docker configurations. https://www.jeffgeerling.com/blog/2018/testing-your-ansible-...
https://molecule.readthedocs.io/en/latest/
https://testinfra.readthedocs.io/en/latest/ :
> With Testinfra you can write unit tests in Python to test actual state of your servers configured by management tools like Salt, Ansible, Puppet, Chef and so on.
> Testinfra aims to be a Serverspec equivalent in python and is written as a plugin to the powerful Pytest test engine.
I wasn't able to find a syllabus or a list of all of the daily posts? Are you focusing on DevOps and/or DevSecOps skills?
EDIT: The lessons are Markdown files in a Git repo: https://github.com/snori74/linuxupskillchallenge
Links to each lesson, the title and/or subjects of the lesson, and the associated reddit posts might be useful in a Table of Contents in the README.md.
Thanks, but most of that would be way over the top for my "newbies".
However, you must be the third or fourth person today to suggest that I add a TOC - so that is something I think I'll need to look at!
Maybe most useful as resources for further study.
Looks like Day 20 covers shell scripting. A few things worth mentioning:
You can write tests for shell scripts and write TAP (Test Anything Protocol) -formatted output: https://testanything.org/producers.html#shell
Quoting in shell scripts is something to be really careful about:
> This and this do different things:
# prints a newline
echo $(echo "-e a\nb")
# prints "-e a\nb"
echo "$(echo "-e a\nb")"
Shellcheck can identify some of those types of (security) bugs/errors/vulns in shell scripts: https://www.shellcheck.net/
LearnXinYminutes has a good bash reference: https://learnxinyminutes.com/docs/bash/
And an okay Ansible reference, which (like Ops School) we should contribute to: https://learnxinyminutes.com/docs/ansible/
Why do so many pros avoid maintaining shell scripts, instead writing one-off commands that they'll never remember to run again later?
...
It may be helpful to format these as Jupyter notebooks with input and output cells.
- Ctrl-Shift-Minus splits a cell at the cursor
- M and Y toggle a cell between Markdown and code
If you don't want to prefix every code cell line with a '!' so that the ipykernel Jupyter python kernel (the default kernel) executes the line with $SHELL, you can instead install and select bash_kernel; though users attempting to run the notebooks interactively would then need to also have bash_kernel installed: https://github.com/takluyver/bash_kernel
You can save a notebook .ipynb to any of a number of Markdown and non-Markdown formats https://jupytext.readthedocs.io/en/latest/formats.html#markd... ; unfortunately jupytext only auto-saves to md without output cell content for now: https://github.com/mwouts/jupytext/issues/220
You can make reveal.js slides (that do include outputs) from a notebook: https://gist.github.com/mwouts/04a6dfa571bda5cc59fa1429d1309...
With nbconvert, you can manually save an .ipynb Jupyter notebook as Markdown which includes the cell outputs w/ File > "Download as / Export Notebook as" > "Export notebook to Markdown" or with the CLI: https://nbconvert.readthedocs.io/en/latest/usage.html#conver...
jupyter nbconvert --to markdown notebook.ipynb
jupyter nbconvert --help
With Jupyter Book, you can build an [interactive] book as HTML and/or PDF from multiple Jupyter notebooks (stored e.g. as Markdown documents): https://jupyterbook.org/intro.html : jupyter-book build mybook/
...From https://westurner.github.io/tools/#bash :
type bash
bash --help
help help
help type
apropos bash
info bash
man bash
man man
info info
From https://news.ycombinator.com/item?id=22980353 ; this is how dotfiles work: info bash -n "Bash Startup Files"
> https://www.gnu.org/software/bash/manual/html_node/Bash-Star... ...
Re: dotfiles, losing commands that should've been logged to HISTFILE when running multiple bash sessions and why I wrote usrlog.sh: https://westurner.github.io/hnlog/#comment-20671184 (Ctrl-F for: "dotfiles", "usrlog.sh", "inputrc")
https://github.com/webpro/awesome-dotfiles
...
awesome-sysadmin > resources: https://github.com/kahun/awesome-sysadmin#resources
Software supply chain security
Estimates of prevalence do assume detection. How would we detect that a dependency that was installed a few deployments and reboots ago was compromised?
How does the classic infosec triad (Confidentiality, Integrity, Availability) apply to software supply chain security?
Confidentiality: Presumably we're talking about open source projects; which aren't confidential. Projects may request responsible disclosure in an e.g. security.txt; and vuln reports may be confidential for at least a little while.
Integrity: Secure transport protocols, checksums, and cryptographic code signing are ways to mitigate data integrity risks. GitHub supports SSH, 2FA, and GPG keys. Can all keys in the package signature keyring be used to sign any package? Can we verify a public key over a different channel? When we specify exact versions of software dependencies, can we also record package hashes which the package installer(s) will verify?
Availability: What are the internal and external data, network, and service dependencies for the development and deployment DevSecOps workflows? Can we deploy from local package mirrors? Who is responsible for securing and updating local package mirrors? Are these service dependencies all HA? Does everything in this system also depend upon the load balancer? Does our container registry support e.g. Docker Notary (TUF)? How should we mirror TUF package repos?
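On the integrity point, recorded hashes can be verified mechanically at install time. A minimal sketch (the file path and digest below are placeholders) of verifying a downloaded artifact against a recorded SHA-256 digest:

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Return True iff the file at `path` hashes to the recorded digest.
    Reads in chunks so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

pip supports the same idea natively: `pip install --require-hashes -r requirements.txt` refuses to install any requirement whose line doesn't carry a matching `--hash=sha256:...` pin.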
See also: "Guidance for [[transparent] proxy cache] partial mirrors?" https://github.com/theupdateframework/specification/issues/1...
A toolset that answers some of your questions is Grafeas, a metadata store at https://github.com/grafeas/grafeas, and Kritis, a policy engine at https://github.com/grafeas/kritis.
Cheers.
Thanks for the links. Do you know how this toolset helps to mitigate/prevent what the GitHub blog post calls "supply chain compromises"? I quickly checked around and couldn't find anything that applies to the dependencies of applications/binaries before they land in the target runtime (i.e. k8s).
Have you seen these preso slides?
https://www.slideshare.net/mobile/aysylu/q-con-sp-software-s....
They walk through one of the workflows (end state is deploying to k8s).
Grafeas is a metadata store; Kritis is a policy engine that plugs into k8s as an admission controller, blessing the "admission" (running) of an image in a namespace.
There are existing tools for each language/runtime that produce known vuln lists for individual artifacts in the language ecosystem. These you feed into Grafeas. And you have your CI pipeline providing manifests for each of your built images that contain all upstream dependencies (these produced from each app's build tool). Then at deploy time, Kritis checks the manifest on the image, and for each artifact in the image, checks for vulns and determines whether the vuln should keep the image from being deployed.
Hope that helps. There are many other workflows but that one is the most direct.
Cheers.
OUTSTANDING comment; excellent questions. Bookmarked. Thanks for this concise high-level infosec punchlist.
This Sir is senior.
Mind Emulation Foundation
"While we are far from understanding how the mind works, most philosophers and scientists agree that your mind is an emergent property of your body. In particular, your body’s connectome. Your connectome is the comprehensive network of neural connections in your brain and nervous system. Today your connectome is biological. "
This is a pretty speculative thesis. It's not at all clear that everything relevant to the mind is found in the connections rather than the particular biochemical processes of the brain. It's a very reductionist view that drastically underestimates the biological complexity of even individual cells. There's a good book, Wetware: A Computer in Every Living Cell, by Dennis Bray, going into detail on how much functionality and how many physical processes are at work even in the simplest cells, all of which is routinely ignored by these analogies of the brain to a digital computer.
There is this extreme, and I would argue unscientific, bias towards treating the mind as something that's recreatable in a digital system, probably because it enables this science-fiction speculation and dreams of immortality of people living in the cloud.
Indeed. We humans largely create devices that function either through calculation or through physical reaction, relying on the underlying rules of the universe to "do the math" of, say, launching a cannonball and having it follow a consistent arc. The brain combines both at almost every level. It may be fundamentally impossible to emulate a human personality equal to a real one without a physics simulation of a human brain and its chemistry.
A dragonfly brain takes the input from thirty thousand visual receptor cells and uses it to track prey movement using only sixteen neurons. Could we do the same using an equal volume of transistors?
No one is saying a neuron is a one to one equivalent with a transistor. That behavior does seem like it's possible to emulate with many transistors, however.
Was just talking about quantum cognition and memristors (in context to GIT) a few days ago: https://news.ycombinator.com/item?id=24317768
Quantum cognition: https://en.wikipedia.org/wiki/Quantum_cognition
Memristor: https://en.wikipedia.org/wiki/Memristor
It may yet be possible to sufficiently functionally emulate the mind with (orders of magnitude more) transistors. Though, is it necessary to emulate e.g. autonomic functions? Do we consider the immune system to be part of the mind (and gut)?
Perhaps there's something like an amplituhedron - or some happenstance correspondence - that will enable more efficient simulation of quantum systems on classical silicon pending orders of magnitude increases in coherence and also error rate in whichever computation medium.
For abstract formalisms (which do incorporate transistors as a computation medium sufficient for certain tasks), is there a more comprehensive set than Constructor Theory?
Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory
Amplituhedron: https://en.wikipedia.org/wiki/Amplituhedron
What is the universe using our brains to compute? Is abstract reasoning even necessary for this job?
Something worth emulating: Critical reasoning. https://en.wikipedia.org/wiki/Critical_reasoning
13 Beautiful Tools to Enhance Online Teaching and Learning Skills
"Options for giving math talks and lectures online" https://news.ycombinator.com/item?id=22541754 also lists a number of resources for teaching math online.
How close are computers to automating mathematical reasoning?
It's irresponsible to not mention that a series of results from the 1930s through the 1950s proved that generalized automated proof search is impossible, in the sense that we cannot just ask a computer to find a proof or refutation of Goldbach's Conjecture or other "Goldbach-type" statement and have it do any better than a person. In that sense, no, computers will never fully automate mathematical reasoning, because mathematical reasoning cannot be fully automated.
What we can automate, and have successfully automated many times over, is proof checking. In that sense, yes, computers have already fully automated mathematical reasoning, because checking proofs is fully formalized and mechanized.
I think that what people really ought to be asking for is the ease with which we find new results and communicate them to others. In that sense, computer-aided proofs can be hard to read and so there is much work to be done in making them easier to communicate to humans. Similarly, there is interesting work being done on how to make computer-generated proofs which use extremely-high-level axiom schemata to generate human-like handwaving abstractions.
I'm still not sure how I feel about the ATP/ITP terminology used here. ATPs are either of the weak sort that crunch through SAT, graphs, and other complete-but-hard problems, or the strong sort which are impossible. Meanwhile, folks have drifted from "interactive" to "assistant", and talk of "proof assistants" as tools which, like a human, can write down and look at sections of a proof in isolation, but cannot summon complete arbitrary proofs from its own mind.
Edit: One edit will be quicker than two replies and I tend to be "posting too fast". The main point is captured well by [0], but they immediately link to [1], the central result, which links to [2], an important corollary. At this point, I'm just going to quote WP:
> The first [Gödel] incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
> The second [Gödel] incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
> Informally, [Tarski's Undefinability] theorem states that arithmetical truth cannot be defined in arithmetic.
I recognize that these statements may seem surprising, but they are provable and you should convince yourself of them. I recently reviewed [3] and found it to be a very precise and complete introduction to all of the relevant ideas; there's also GEB if you want something more fun.
[0] https://en.wikipedia.org/wiki/Automated_theorem_proving#Deci...
[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
[2] https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theo...
[3] https://www.logicmatters.net/resources/pdfs/godelbook/GodelB...
As someone unfamiliar with the results you describe, why can’t a machine do better than a human? A smart human is better at writing proofs than a dumb one. If we had a highly advanced general artificial intelligence, why couldn’t it generate better results than the smart humans?
Or is automated proof search impossible for humans as well?
Arguably, humans require more energy per operation. So, presumably such an argument hinges upon what types of operations are performed in conducting automated proof search?
What would "automated proof search" even mean in the context of being performed by a human being?
The context here is that certain problems cannot be solved by an algorithm, which doesn't necessarily translate into "cannot be performed by a machine".
It only means that there cannot be any Turing machine capable of solving them, no matter what "operations" it represents.
The task (in terms of constructor theory) is: Find the functions that sufficiently approximate the observations and record their reproducible derivations.
Either the (unreferenced) study was actually arguing that "automated proof search" can't be done at all, or that human neural computation is categorically non-algorithmic.
Grid search of all combinations of bits that correspond to [symbolic] classical or quantum models.
Or better: evolutionary algorithms and/or neural nets.
Roger Penrose argues that human neural computation is indeed non-algorithmic in nature (see his book "The Emperor's New Mind"; 1989) and speculates that quantum processes are involved.
I don't quite understand the role of neural nets in that context, though. Those are just classical computations in the end and should be bound by the same limits that every other algorithm is, shouldn't they?
That human cognition is quantum in nature - that e.g. entanglement is necessary - may be unfalsifiable.
Neuromorphic engineering has expanded since the 1980s. https://en.wikipedia.org/wiki/Neuromorphic_engineering
Quantum computing is the best known method for simulating chemical reactions, and thereby possibly also neurochemical reactions. But is quantum computing necessary to functionally emulate human cognition?
It may be that a different computation medium can accomplish the same tasks without emulating all of the complexity of the brain.
If the brain is only classical and some people are using their brains to perform quantum computations, there may be something there.
Quantum cognition: https://en.wikipedia.org/wiki/Quantum_cognition
Quantum memristors are still elusive.
From "Quantum Memristors in Frequency-Entangled Optical Fields" (2020) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7079656/ :
> Apart from the advantages of using these devices for computation [12] (such as energy efficiency [13], compared to transistor-based computers), memristors can be also used in machine learning schemes [14,15]. The relevance of the memristor lies in its ubiquitous presence in models which describe natural processes, especially those involving biological systems. For example, memristors inherently describe voltage-dependent ion-channel conductances in the axon membrane in neurons, present in the Hodgkin–Huxley model [16,17].
> Due to the inherent linearity of quantum mechanics, it is not straightforward to describe a dissipative non-linear memory element, such as the memristor, in the quantum realm, since nonlinearities usually lead to the violation of fundamental quantum principles, such as no-cloning theorem. Nonetheless, the challenge was already constructively addressed in Ref. [18]. This consists of a harmonic oscillator coupled to a dissipative environment, where the coupling is changed based on the results of a weak measurement scheme with classical feedback. As a result of the development of quantum platforms in recent years, and their improvement in controllability and scalability, different constructions of a quantum memristor in such platforms have been presented. There is a proposal for implementing it in superconducting circuits [7], exploiting memory effects that naturally arise in Josephson junctions. The second proposal is based on integrated photonics [19]: a Mach–Zehnder interferometer can behave as a beam splitter with a tunable reflectivity by introducing a phase in one of the beams, which can be manipulated to study the system as a quantum memristor subject to different quantum state inputs.
Quantum harmonic oscillators have also found application in modeling financial markets. Quantum harmonic oscillator: https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
New framework for natural capital approach to transform policy decisions
Natural capital: https://en.wikipedia.org/wiki/Natural_capital
> Natural capital is the world's stock of natural resources, which includes geology, soils, air, water and all living organisms. Some natural capital assets provide people with free goods and services, often called ecosystem services. Two of these (clean water and fertile soil) underpin our economy and society, and thus make human life possible.
Natural capital accounting: https://en.wikipedia.org/wiki/Natural_capital_accounting
> Natural capital accounting is the process of calculating the total stocks and flows of natural resources and services in a given ecosystem or region.[1] Accounting for such goods may occur in physical or monetary terms. This process can subsequently inform government, corporate and consumer decision making as each relates to the use or consumption of natural resources and land, and sustainable behaviour.
Opportunity cost: https://en.wikipedia.org/wiki/Opportunity_cost
> When an option is chosen from alternatives, the opportunity cost is the "cost" incurred by not enjoying the benefit associated with the best alternative choice.[1] The New Oxford American Dictionary defines it as "the loss of potential gain from other alternatives when one alternative is chosen."[2] In simple terms, opportunity cost is the benefit not received as a result of not selecting the next best option. Opportunity cost is a key concept in economics, and has been described as expressing "the basic relationship between scarcity and choice". [3] The notion of opportunity cost plays a crucial part in attempts to ensure that scarce resources are used efficiently.[4] Opportunity costs are not restricted to monetary or financial costs: the real cost of output forgone, lost time, pleasure or any other benefit that provides utility should also be considered an opportunity cost. The opportunity cost of a product or service is the revenue that could be earned by its alternative use.
How do we value essential dependencies in terms of future opportunity costs?
In terms of just mental health?
"National parks a boost to mental health worth trillions: study" https://phys.org/news/2019-11-national-boost-mental-health-w...
> Visits to national parks around the world may result in improved mental health valued at about $US6 trillion (5.4 trillion euros), according to a team of ecologists, psychologists and economists.
> Professor Bateman's decision-making framework focuses on the links between the environment and economy and has three components: efficiency, assessing which option generates the greatest benefit; sustainability, the effects of each option on natural capital stocks; and equity, regarding who receives the benefits of a decision and when.
Ian J. Bateman et al. "The natural capital framework for sustainably efficient and equitable decision making", Nature Sustainability (2020). DOI: 10.1038/s41893-020-0552-3 https://www.nature.com/articles/s41893-020-0552-3
Challenge to scientists: does your ten-year-old code still run?
"Ten Simple Rules for Reproducible Computational Research" http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fj... :
> Rule 1: For Every Result, Keep Track of How It Was Produced
> Rule 2: Avoid Manual Data Manipulation Steps
> Rule 3: Archive the Exact Versions of All External Programs Used
> Rule 4: Version Control All Custom Scripts
> Rule 5: Record All Intermediate Results, When Possible in Standardized Formats
> Rule 6: For Analyses That Include Randomness, Note Underlying Random Seeds
> Rule 7: Always Store Raw Data behind Plots
> Rule 8: Generate Hierarchical Analysis Output, Allowing Layers of Increasing Detail to Be Inspected
> Rule 9: Connect Textual Statements to Underlying Results
> Rule 10: Provide Public Access to Scripts, Runs, and Results
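Rule 6 in particular is cheap to follow. A minimal Python sketch (an illustration, not from the article) of fixing and recording the random seed so a run can be replayed exactly:

```python
import json
import random

SEED = 20200901  # record this value alongside the published results

def run_analysis(seed=SEED):
    """Seed the source of randomness up front and log the seed
    with the output, so the run can be replayed exactly."""
    random.seed(seed)
    # For NumPy-based analyses, seed e.g. numpy.random.default_rng(seed) too.
    sample = [random.random() for _ in range(3)]
    return {"seed": seed, "sample": sample}

result = run_analysis()
print(json.dumps(result))

# Replaying with the same recorded seed reproduces the same numbers:
assert run_analysis()["sample"] == result["sample"]
```

The key habit is storing the seed in the output itself (here, the JSON record), not just in the code, so that later runs against a modified script can still be compared.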
... You can get a free DOI for and archive a tag of a Git repo with FigShare or Zenodo.
... re: [Conda and] Docker container images https://news.ycombinator.com/item?id=24226604 :
> - repo2docker (and thus BinderHub) can build an up-to-date container from requirements.txt, environment.yml, install.R, postBuild and any of the other dependency specification formats supported by REES: Reproducible Execution Environment Standard; which may be helpful as Docker Hub images will soon be deleted if they're not retrieved at least once every 6 months (possibly with a GitHub Actions cron task)
BinderHub builds a container with the specified versions of software and installs a current version of Jupyter Notebook with repo2docker, and then launches an instance of that container in a cloud.
“Ten Simple Rules for Creating a Good Data Management Plan” http://journals.plos.org/ploscompbiol/article?id=10.1371/jou... :
> Rule 6: Present a Sound Data Storage and Preservation Strategy
> Rule 8: Describe How the Data Will Be Disseminated
... DVC: https://github.com/iterative/dvc
> Data Version Control or DVC is an open-source tool for data science and machine learning projects. Key features:
> - Simple command line Git-like experience. Does not require installing and maintaining any databases. Does not depend on any proprietary online services.
> - Management and versioning of datasets and machine learning models. Data is saved in S3, Google cloud, Azure, Alibaba cloud, SSH server, HDFS, or even local HDD RAID.
> - Makes projects reproducible and shareable; helping to answer questions about how a model was built.
There are a number of great solutions for storing and sharing datasets.
... "#LinkedReproducibility"
Open textual formats for data and open source application and system software (more precisely, FLOSS), are just as important.
Imagine that x86 - and with it, the PC platform - gets replaced by ARM within a decade. For binary software, this would be a kind of geological extinction event.
The likelihood of there being a [security] bug discovered in a given software project over any significant period of time is near 100%.
It's definitely a good idea to archive source and binaries and later confirm that the output hasn't changed with and without upgrading the kernel, build userspace, execution userspace, and PUT/SUT (Package/Software Under Test).
- Specify which versions of which constituent software libraries are utilized. (And hope that a package repository continues to serve those versions of those packages indefinitely). Examples: Software dependency specification formats like requirements.txt, environment.yml, install.R
- Mirror and archive all dependencies and sign the collection. Examples: {z3c.pypimirror, eggbasket, bandersnatch, devpi as a transparent proxy cache}, apt-cacher-ng, pulp, squid as a transparent proxy cache
- Produce a signed archive which includes all requisite software. (And host that download on a server such that data integrity can be verified with cryptographic checksums and/or signatures.) Examples: Docker image, statically-linked binaries, GPG-signed tarball of a virtualenv (which can be made into a proper package with e.g. fpm), ZIP + GPG signature of a directory which includes all dependencies
- Archive (1) the data, (2) the source code of all libraries, and (3) the compiled binary packages, and (4) the compiler and build userspace, and (5) the execution userspace, and (6) the kernel. Examples: Docker can solve for 1-5, but not 6. A VM (virtual machine) can solve for 1-5. OVF (Open Virtualization Format) is an open spec for virtual machine images, which can be built with a tool like Vagrant or Packer (optionally in conjunction with a configuration management tool like Puppet, Salt, Ansible).
When the application requires (7) a multi-node distributed system configuration, something like docker-compose/vagrant/terraform and/or a configuration management tool are pretty much necessary to ensure that it will be possible to reproducibly confirm the experiment output at a different point in spacetime.
A deep dive into the official Docker image for Python
One thing I've learned is that rather than use a docker-entrypoint.sh, most Linux software can be run just using the `--user 1000:1000` or whatever UID/GID you want to use, as long as you map a volume that can use those permissions. It is a lot cleaner this way.
> Why Tini?
> Using Tini has several benefits:
> - It protects you from software that accidentally creates zombie processes, which can (over time!) starve your entire system for PIDs (and make it unusable).
> - It ensures that the default signal handlers work for the software you run in your Docker image. For example, with Tini, SIGTERM properly terminates your process even if you didn't explicitly install a signal handler for it.
> - It does so completely transparently! Docker images that work without Tini will work with Tini without any changes.
[...]
> NOTE: If you are using Docker 1.13 or greater, Tini is included in Docker itself. This includes all versions of Docker CE. To enable Tini, just pass the `--init` flag to docker run.
https://github.com/krallin/tini#why-tini
I didn't know it was included by default. I'll check it out, thanks!
There are Alpine [1] and Debian [2] miniconda images (within which you can `conda install python==3.8` and 2.7 and 3.4 in different conda envs)
[1] https://github.com/ContinuumIO/docker-images/blob/master/min...
[2] https://github.com/ContinuumIO/docker-images/blob/master/min...
If you build manylinux wheels with auditwheel [3], they should install without needing compilation for {CentOS, Debian, Ubuntu, and Alpine}; though standard Alpine images have MUSL instead of glibc by default, this [4] may work:
echo "manylinux1_compatible = True" > $PYTHON_PATH/_manylinux.py
[3] https://github.com/pypa/auditwheel
[4] https://github.com/docker-library/docs/issues/904#issuecomme...
The miniforge docker images aren't yet [5][6] multi-arch, which means it's not as easy to take advantage of all of the ARM64 / aarch64 packages that conda-forge builds now.
[5] https://github.com/conda-forge/docker-images/issues/102#issu...
[6] https://github.com/conda-forge/miniforge/issues/20
There are i686 and x86-64 docker containers for building manylinux wheels that work with many distros: https://github.com/pypa/manylinux/tree/master/docker
A multi-stage Dockerfile build can produce a wheel in the first stage and install that wheel (with `COPY --from=0`) in a later stage; leaving build dependencies out of the production environment for security and performance: https://docs.docker.com/develop/develop-images/multistage-bu...
Interesting! I use miniconda extensively for local development to manage virtual environments for different python versions and love it. I hardly ever actually use the conda packages though.
I assume the main benefit of using these images would be if you are installing from conda repos instead of pip? Otherwise just using the official python images would be as good if not better
Edit: I guess if you needed multiple python versions in a single container this would be a good solution for that as well
Use cases for conda or conda+pip:
- Already-compiled packages (where there may not be binary wheels) instead of requiring reinstallation and subsequent removal of e.g. build-essentials for every install
- Support for R, Julia, NodeJS, Qt, ROS, CUDA, MKL, etc.
- Here's what the Kaggle docker-python Dockerfile installs with conda and with pip: https://github.com/Kaggle/docker-python/blob/master/Dockerfi...
- Build matrix in one container with conda envs
Disadvantages of the official python images as compared with conda+pip:
- Necessary to (re)install build dependencies and a compiler for every build (if there's not a bdist or a wheel for the given architecture) and then uninstall all unnecessary transitive dependencies. This is where a [multi-stage] build of a manylinux wheel may be the best approach.
- No LSM (AppArmor, SELinux, etc.) confinement for one or more processes in the container (which may have read access to /etc or environment variables, and/or --privileged)
- Necessary to build basically everything on non x86[-64] architectures for every container build
Disadvantages of conda / conda+pip:
- Different package repo infrastructure to mirror
- Users complaining that they don't need conda who then proceed to re-download and re-build wheels locally multiple times a day
Additional attributes for comparison:
- The new pip solver (which is slower than the traditional iterative non-solver), conda, and mamba
- repo2docker (and thus BinderHub) can build an up-to-date container from requirements.txt, environment.yml, install.R, postBuild, and any of the other dependency specification formats supported by REES, the Reproducible Execution Environment Specification; which may be helpful as Docker Hub images will soon be deleted if they're not retrieved at least once every 6 months (possibly with a GitHub Actions cron task)
Quite a few conda packages have patches added by the conda team to help fix problems in packages relying on native code or binaries, particularly on Windows. If something is available on the primary conda repos, it will almost assuredly work with few if any problems cross-platform, whereas pip is hit or miss.
If you’re always on Linux you may never appreciate it but some pip packages are a nightmare to get working properly on Windows.
If you look through the source of the conda repos, you’ll see all kinds of small patches to fix weird and breaking edge cases, particularly in libs with significant C back ends.
Here's the meta.yaml for the conda-forge/python-feedstock: https://github.com/conda-forge/python-feedstock/blob/master/...
It includes patches just like distro packages often do.
The Consortium for Python Data API Standards
This actually strikes me as a huuuge waste of effort. I work with every one of the different technologies they mention every day.
The differences and idiosyncrasies are truly, truly not a big deal and really what you want is to allow different library maintainers to do it all differently and just build your own adapters over top of them.
This allows library developers to worry about narrow use cases and have their own separate processes to introduce breaking changes and features that deal with extreme specifics like GPU or TPU interop, heterogeneous distributed backed arrays, jagged arrays, etc. etc.
Let a thousand flowers blossom and a thousand different Python array and record APIs contend.
End users can write their own adapters to mollify irregularities between them, possibly writing different adapters on a case by case basis.
If any “standard” adapters gain popularity as open source projects, great - but don’t try to bake that in from an RFC point of view into the array & data structure libraries themselves. Let them be free / whatever they want to be. That diversity of API approach is super valuable and easily mollified by your own custom adapter logic.
Personally, I like how the Julia ecosystem coalesced around a Tables.jl package that gives some consistency to tabular data packages that choose to implement it. Row-oriented or column-oriented, doesn't matter! Using Tables.jl they can be interchangeable.
There’s a huge difference between “I personally like it when some libraries choose to adhere to a convention” vs “let’s bake this into every tool with a wide, bureaucratic shared RFC process.”
But I don't think people are forced into following right? There's still choice. But if I was developing something in Python, I would rather join than not join.
I’m saying as a consumer of these libraries, I don’t want joiners, I want lots of different APIs that make different trade-offs for each use case, and I will write (or use) adapters if I need to (that’s my responsibility as a consumer).
Just because I personally either do or do not get some value out of a shared API standard doesn’t mean you do or do not, so personal feelings of what one likes need to be set aside, so the service boundary is at the level of consumers choosing their own adapters and making their own trade-offs.
If lots of libraries sign up, we all lose. Much slower compliance-constrained development cycles, less freedom for libraries to break the shared API contract in ways many users would be super happy to abide for the sake of a faster / better feature. Instead of making these trade-offs library by library, case by case, it’s made for you with standardization compliance as the highest constraint, regardless of the user base for which that does / doesn’t work well.
This strikes me as a bit pessimistic. I think you are mostly against _bad_ standards than standards per se.
If the consortium does its job well it will produce a good standard that all the major libraries like: from a UX as well as a performance viewpoint.
If it produces a bad standard, all that will happen is that nobody will sign up. No libraries are going to be _forced_ to do anything.
That’s fair except I’d say even good standards can cause a big slowdown in feature releases, and not everyone values the adherence to the standard as much as the features. It still seems better to just spin out the standard into a set of optional adapters and let library maintainers and users pick their own trade-offs in terms of when to adopt breaking changes, how / whether to smooth over idiosyncrasies across multiple libraries.
Even a good standard has costs and this particular case does not seem like it has good arguments in favor of a wide standard, but many arguments against it.
No, it's easy for library maintainers to offer a compat API in addition to however else they feel they need to differentiate and optimize the interfaces for array operations. People can contribute such APIs directly to libraries once instead of creating many conditionals in every library-utilizing project or requiring yet another dependency on an adapter / facade package that's not kept in sync with the libraries it abstracts.
If a library chooses to implement a spec compatibility API, they do that once (optimally, as compared with somebody's hackish adapter facade which has very little comprehension of each library's internals) and everyone else's code doesn't need to have conditionals.
Each of L libraries implements a compat API: O(L)
Each of U library utilizers implements conditionals at each of N places arrays are utilized: O(U x N)
Each of U library utilizers uses the common-denominator compat API: O(U)
L < U < (L + U) < (U x N)
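A toy sketch of that argument (hypothetical class and method names, not the actual consortium API): if each library ships a small compat layer once, consumer code is written against it once, with no per-library conditionals at any call site.

```python
# Toy illustration of the O(L) vs O(U x N) argument: each of L libraries
# implements a small common API once, so consumers need one code path
# instead of isinstance() conditionals at every call site.
# All names here are hypothetical.

class LibAArray:
    def __init__(self, data):
        self.data = list(data)

    def compat_sum(self):  # compat API implemented once by library A
        return sum(self.data)

class LibBArray:
    def __init__(self, data):
        self.data = tuple(data)

    def compat_sum(self):  # the same API implemented once by library B
        return sum(self.data)

def total(arr):
    # One consumer code path for every compliant library: O(U) effort.
    return arr.compat_sum()

print(total(LibAArray([1, 2, 3])), total(LibBArray([4, 5])))  # 6 9
```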
Tech giants let the Web's metadata schemas and infrastructure languish
It's "languishing" and they should do it for us? It's flourishing, and they're doing it for us, and they have lots of open issues; and I want more for free without any work.
Wow! Nobody else does anything to collaboratively, inclusively develop schema and the problem is that search engines aren't just doing it for us?
1) Search engines do not owe us anything. They are not obligated to dominate us or the schema that we may voluntarily decide to include on our pages.
We've paid them nothing. They have no contract for service or agreement with us which compels them to please us or contribute greater resources to an open standard that hundreds of people are contributing to.
2) You people don't know anything about linked data and structured data.
Here's a list of schema: https://lov.linkeddata.es/dataset/lov/ .
Here's the Linked Open Data Cloud: https://lod-cloud.net/
Does your or this publisher's domain include any linked data?
Does this article include any linked data?
Do data quality issues pervade promising, comparatively-expensive, redundant approaches to natural-language comprehension, reasoning, and summarization?
Here, in contributing this example PR adding RDFa to the codeforantarctica web page, I probably made a mistake. https://github.com/CodeForAntarctica/codeforantarctica.githu... . Can you spot the mistake?
There should have been review.
https://schema.org/ClaimReview, W3C Verifiable Claims / Credentials, ld-signatures, and lds-merkleproof2017.
Which brings us to reification, truth values, property graphs, and the new RDF* and SPARQL* and JSON-LD* (which don't yet have repos with ongoing issues to tend to).
3) Get to work. This article does nothing to teach people how to contribute to slow, collaborative schema standards work.
Here's the link to the GitHub Issues so that you can contribute to schema.org: https://github.com/schemaorg/schemaorg
...
"Standards should be better and they should pay for it"
Who are the major contributors to the (W3C) open standard in question?
Is telling them to put up more money or step down going to result in getting what we want? Why or why not?
Who would merge PRs and close issues?
Have you misunderstood the scope of the project? What do the editors of the schema feel with regard to more specific domain vocabularies? Is it feasible or even advisable to attempt to out-schema domain experts who know how to develop and revise an ontology, or even just a vocabulary, with Protégé?
To give you a sense of how much work goes into creating a few classes and properties defined with RDFS in RDFa in HTML: here's the https://schema.org/Course , https://schema.org/CourseInstance , and https://schema.org/EducationEvent issue: https://github.com/schemaorg/schemaorg/issues/195
Can you find the link to the Use Cases wiki (which was the real work)? What strategy did you use to find it?
...
"Well, Google just does what's good for Google."
Are you arguing that Google.org should make charitable contributions to this project? Is that an advisable or effective way to influence a W3C open standard (where conflicts of interest by people just donating time are disclosed)?
Anyone can use something like extruct or OSDS to extract RDFa, Microdata, and/or JSON-LD from a page.
Everyone can include structured data and linked data in their pages.
There are surveys quantifying how many people have included which types in their pages. Some of that data is included on schema.org types pages.
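For the JSON-LD case, what extractors like extruct do can be sketched with only the standard library (a simplified illustration, not extruct's actual implementation; real tools also handle RDFa and Microdata):

```python
# Minimal stdlib-only sketch of JSON-LD extraction: pull
# <script type="application/ld+json"> blocks out of a page and parse them.
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

    def handle_data(self, data):
        if self.in_jsonld and data.strip():
            self.items.append(json.loads(data))

html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Person", "name": "Ada"}
</script></head></html>"""

p = JsonLdExtractor()
p.feed(html)
print(p.items[0]["@type"])  # Person
```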
...
Some written interview questions:
> Which issues have you contributed to? Which issues have you seen all the way to closed? Have you contributed a pull request to the project? Have you published linked data? What is the URL to the docs which explain how to contribute resources? How would you improve them?
https://twitter.com/westurner/status/1291903926007209984
...
After all that's happened here, I think Dan (who built FOAF, which all profitable companies could use instead of https://schema.org/Person ) deserves a week off to add more linked data to the internet now please.
I think that might be fair, but when schema.org came out it pitched itself as "trust us, we will take care of things". So yeah, they don't owe us anything, but track record matters for trust in future ventures by these search engine orgs.
schemaorg/schemaorg/CONTRIBUTING.md https://github.com/schemaorg/schemaorg/blob/main/CONTRIBUTIN... explains how you and your organization can contribute resources to the Schema.org W3C project.
If you or your organization can justify contributing one or more people at full or part time due to ROI or goodwill, by all means start sending Pull Requests and/or commenting on Issues.
"Give us more for free or step down". Wow. What PRs have you contributed to justify such demands?
https://schema.org/docs/documents.html links to the releases.
Time-reversal of an unknown quantum state
T-symmetry https://en.wikipedia.org/wiki/T-symmetry > See also links to "reversible computing" but not the "time reversal" disambiguation page?
The "teeter-totter" thought experiment on that page is interesting to me in that it seems to illustrate how the future has more possibilities than the past. But it also occurs to me that the future possibilities are fractal with the past; i.e., the toy could fall onto one of many other lower pedestals, teetering as it was before. In this way the "many arbitrary" possibilities could be perceived as anything but.
Electric cooker an easy, efficient way to sanitize N95 masks, study finds
An autoclave is what the electrical cooker is being used to emulate. A pressure cooker is a better option if you'll be on the road, camping, or otherwise without electricity. Researchers in Canada already proved several months ago that N95 masks could be sanitized this way:
Unfortunately the referenced NewsArticle does not link to the ScholarlyArticle https://schema.org/ScholarlyArticle :
"N95 Mask Decontamination using Standard Hospital Sterilization Technologies" (2020-04) https://www.medrxiv.org/content/10.1101/2020.04.05.20049346v... :
> We sought to test the ability of 4 different decontamination methods including autoclave treatment, ethylene oxide gassing, ionized hydrogen peroxide fogging and vaporized hydrogen peroxide exposure to decontaminate 4 different N95 masks of experimental contamination with SARS-CoV-2 or vesicular stomatitis virus as a surrogate. In addition, we sought to determine whether masks would tolerate repeated cycles of decontamination while maintaining structural and functional integrity. We found that one cycle of treatment with all modalities was effective in decontamination and was associated with no structural or functional deterioration. Vaporized hydrogen peroxide treatment was tolerated to at least 5 cycles by masks. Most notably, standard autoclave treatment was associated with no loss of structural or functional integrity to a minimum of 10 cycles for the 3 pleated mask models. The molded N95 mask however tolerated only 1 cycle. This last finding may be of particular use to institutions globally due to the virtually universal accessibility of autoclaves in health care settings.
The ScholarlyArticle referenced by and linked to by the OP NewsArticle is "Dry Heat as a Decontamination Method for N95 Respirator Reuse" (2020-07) https://pubs.acs.org/doi/full/10.1021/acs.estlett.0c00534 . Said article does not reference "N95 Mask Decontamination using Standard Hospital Sterilization Technologies" DOI: 10.1101/2020.04.05.20049346v2 . We would do well to record that (article A, seemsToConfirm, Article B) as third-party linked data (only if both articles do specifically test the efficacy of the given sterilization method with the COVID-19 coronavirus)
None of this is necessary for non-healthcare workers, since the virus will die off on a fabric mask in a few hours.
1) Not a single study has demonstrated that viable Sars-Cov-2 virus survives on porous materials in the real world
2) Even before Covid, it was known for decades (common medical knowledge) that human coronaviruses and flu viruses do not remain viable on porous materials for more than several hours.
That this common medical knowledge is not common knowledge among the public is a failure of public health communication. The one or two alarmist studies showing that the virus "survives" X number of days don't reflect the real world because 1) the researchers literally douse or soak the surface with a huge viral load, and 2) the researchers usually only look for viral genetic material, not whether the virus can infect cells. Viability != detecting viral genetic material (same story for people: we shed viral genetic material long after we stop being contagious).
There is a reason the CDC has always said surface transmission is rare: https://www.nytimes.com/2020/05/22/health/cdc-coronavirus-to...
There have been no documented cases of surface transmission: https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-si...
Fortunately, some in the media are catching on to the hygiene (security) theater: https://www.theatlantic.com/ideas/archive/2020/07/scourge-hy...
Citations for 1) and 2):
Most studies use unrealistic starting doses: https://www.thelancet.com/pdfs/journals/laninf/PIIS1473-3099...
Other human coronaviruses don't survive long on porous surfaces (gone by 6 hours): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7134510/. A hospital randomly collected patient and real surface swabs (including non-porous surfaces) for original SARS, a minority were PCR positive, none of the swabs were infectious (viral culture): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7134510/. Same story for flu (virus can be detected for days, but inactive and not viable after a few hours): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3222642/.
I could only find this study on Sars-Cov-2 that cultured the virus (still used a huge viral dose in a lab setting, not the real world). Even though they were able to culture the virus, only 1% of virus remains after 6 hours on a surgical mask, several orders of magnitude less for cotton clothing: https://www.medrxiv.org/content/10.1101/2020.05.07.20094805v...
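As a rough sanity check on that last figure, assuming simple exponential decay (a modeling assumption on my part, not something the study states): 1% remaining at 6 hours implies a half-life of under an hour.

```python
# Back-of-envelope: if 1% of virus remains after 6 hours and decay is
# exponential, solve 0.01 = 0.5 ** (t / half_life) for the half-life.
import math

t, remaining = 6.0, 0.01
half_life = t * math.log(2) / math.log(1 / remaining)
print(round(half_life * 60))  # half-life in minutes, ~54
```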
This long post was originally meant to be a reply to someone asking for a citation; it's yet another example of how much effort is required to combat misinformation.
"Interim Recommendations for U.S. Households with Suspected or Confirmed Coronavirus Disease 2019 (COVID-19)" https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-si... :
> On the other hand, transmission of novel coronavirus to persons from surfaces contaminated with the virus has not been documented. Recent studies indicate that people who are infected but do not have symptoms likely also play a role in the spread of COVID-19. Transmission of coronavirus occurs much more commonly through respiratory droplets than through objects and surfaces, like doorknobs, countertops, keyboards, toys, etc. Current evidence suggests that SARS-CoV-2 may remain viable for hours to days on surfaces made from a variety of materials. Cleaning of visibly dirty surfaces followed by disinfection is a best practice measure for prevention of COVID-19 and other viral respiratory illnesses in households and community settings
Fed announces details of new interbank service to support instant payments
If anyone from the Fed is reading this, I have this advice: don't use ISO 20022! (I've been working on the Brazilian instant payments project, known as PIX.) It's an overly-complicated, try-to-solve-everything standard that promises interoperability but delivers only pain and misery. Since each country has its own standards for person and account identification, the ISO 20022 messages pick up quite a lot of complexity in dealing with these local realities. But all this cost does not result in interoperability: each payment scheme ends up with a localized version of the standard, incompatible with the others. What could have been a clean API ends up a mess.
What alternative do you suggest? Keeping in mind all the parties that will be involved, besides your project.
Create a fit-for-purpose, simpler protocol using a good interface specification language (OpenAPI or Protocol Buffers). Maybe borrow concepts and patterns from the ISO standard, but don't adopt it: it will not deliver its promise of interoperability, and it might actually make it harder to interoperate with other payment schemes.
Interledger Protocol (ILP, ILPv4).
Interledger Architecture:
https://interledger.org/rfcs/0001-interledger-architecture/#... :
> For purposes of Interledger, we call all settlement systems ledgers. These can include banks, blockchains, peer-to-peer payment schemes, automated clearing house (ACH), mobile money institutions, central-bank operated real-time gross settlement (RTGS) systems, and even more.
[...]
> Interledger provides for secure payments across multiple assets on different ledgers. The architecture consists of a conceptual model for interledger payments, a mechanism for securing payments, and a suite of protocols that implement this design.
> The Interledger Protocol (ILP) is the core of the Interledger protocol suite. Colloquially, the whole Interledger stack is sometimes referred to as "ILP". Technically, however, the Interledger Protocol is only one layer in the stack.
> Interledger is not a blockchain, a token, nor a central service. Interledger is a standard way of bridging financial systems. The Interledger architecture is heavily inspired by the Internet architecture described in RFC 1122, RFC 1123 and RFC 1009.
[...]
> You can envision the Interledger as a graph where the points are individual nodes and the edges are accounts between two parties. Parties with only one account can send or receive through the party on the other side of that account. Parties with two or more accounts are connectors, who can facilitate payments to or from anyone they're connected to.
> Connectors [AKA routers] provide a service of forwarding packets and relaying money, and they take on some risk when they do so. In exchange, connectors can charge fees and derive a profit from these services. In the open network of the Interledger, connectors are expected to compete among one another to offer the best balance of speed, reliability, coverage, and cost.
ILP > Peering, Clearing and Settling: https://interledger.org/rfcs/0032-peering-clearing-settlemen...
ILP > Simple Payment Setup Protocol (SPSP): https://interledger.org/rfcs/0009-simple-payment-setup-proto...
> This document describes the Simple Payment Setup Protocol (SPSP), a basic protocol for exchanging payment information between payee and payer to facilitate payment over Interledger. SPSP uses the STREAM transport protocol for condition generation and data encoding.
> (Introduction > Motivation) STREAM does not specify how payment details, such as the ILP address or shared secret, should be exchanged between the counterparties. SPSP is a minimal protocol that uses HTTPS for communicating these details.
[...]
GET /.well-known/pay HTTP/1.1
Host: example.com
Accept: application/spsp4+json, application/spsp+json
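The query above can be assembled with the Python standard library (constructed but deliberately not sent here; a real SPSP client would then parse the ILP address and shared secret out of the JSON response):

```python
# Building the SPSP query from the quoted snippet with urllib.
# example.com is the placeholder host from the spec excerpt above.
import urllib.request

req = urllib.request.Request(
    "https://example.com/.well-known/pay",
    headers={"Accept": "application/spsp4+json, application/spsp+json"},
)

print(req.get_method(), req.full_url)
print(req.get_header("Accept"))
```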
Shrinking deep learning’s carbon footprint
"Unlearning" is one algorithmic approach that may yield substantial energy consumption gains.
With many deep learning models, it's not possible to determine when or from what source something was learned: it's not possible to "back out" a change to the network and so the whole model has to be re-trained from scratch; which is O(n) instead of O(1.x).
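A toy model makes the asymmetry concrete (a deliberately trivial stand-in, not a neural network): when per-example contributions are recoverable from the model's statistics, one source can be backed out in O(1) instead of retraining over all n examples.

```python
# Toy "unlearning" sketch: a model (here, just a running mean) whose
# sufficient statistics let a single training example be subtracted out
# in O(1). Deep networks lack such decomposable statistics, which is why
# they must be retrained from scratch, as described above.
class MeanModel:
    def __init__(self):
        self.total = 0.0
        self.n = 0

    def learn(self, x):
        self.total += x
        self.n += 1

    def unlearn(self, x):  # O(1): subtract the example's contribution
        self.total -= x
        self.n -= 1

    @property
    def prediction(self):
        return self.total / self.n

m = MeanModel()
for x in [2.0, 4.0, 6.0]:
    m.learn(x)
m.unlearn(4.0)  # back out one source without retraining
print(m.prediction)  # 4.0
```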
The article covers software approaches (more energy-efficient algorithms) and mentions GPUs but not TPUs or ASICs.
Specialized chips (built with dynamic fabrication capacities) are far more energy efficient for specific types of workloads. We see this with mining ASICs, SSL accelerators, and also with Tensor Processing Units (for deep learning).
The externalities of energy production are the ultimate concern. If you're using cheap, clean energy with minimized external costs ("sustainable energy"), the energy-efficiency of the algorithm and the chips is of much less concern.
Could we recognize products, services, and data centers that are produced with and/or run on directly-sourced clean energy as "200% Green", with a logo on the box and/or in the footer? 100% offset by PPAs is certainly progress.
Show HN: Starboard – Fully in-browser literate notebooks like Jupyter Notebook
Hi HN, I developed Starboard over the past months.
Cell-by-cell notebooks like Jupyter are great for prototyping, explaining and exploration, but their dependence on a Python server (with often undocumented dependencies) limits their ability to be shared and remixed. Now that browsers support dynamic imports, it has become possible to create a similar workflow entirely in the browser.
That motivated me to build Starboard Notebook, a tool I wished existed. It's:
* Run entirely in the browser, there is no server or setup, it's all static files.
* Web-native, so no widget system is necessary. There is nearly no magic, it's all web tech (HTML, CSS, JS).
* Stores as a plaintext file, it will play nicely with version control systems.
* Hackable: the sandbox that your code runs in contains the editor itself, so you can metaprogram the editor (e.g. adding support for other languages such as Python through WASM).
* Open source (https://github.com/gzuidhof/starboard-notebook).
You can import any code that targets the browser directly (e.g. puts stuff on the window object), or that has exports in ES module format.
I'm happy to answer any questions!
Neat! There's a project called Jyve that compiles Jupyter Lab to WASM (using iodide). https://github.com/deathbeds/jyve There are kernels for JS, CoffeeScript, Brython, TypeScript, and P5. FWIU, the kernels are marked as unsafe because, unfortunately, there seems to be no good way to sandbox user-supplied notebook code from the application instance. The README describes some of the vulnerabilities that this entails.
The jyve project issues discuss various ideas for repacking Python packages beyond the set already included with Pyodide and supporting loading modules from remote sources.
https://developer.mozilla.org/en-US/docs/Web/Security/Subres... : "Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match."
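An SRI value is just a named hash of the exact bytes served, base64-encoded; computing one by hand (the resource bytes here are illustrative):

```python
# Computing a Subresource Integrity value: the integrity attribute is
# "<alg>-<base64 digest>" of the exact bytes the browser will fetch.
import base64
import hashlib

resource = b"console.log('hello');\n"  # the file a CDN would serve
digest = hashlib.sha384(resource).digest()
integrity = "sha384-" + base64.b64encode(digest).decode()
print(integrity)
# Used as: <script src="..." integrity="sha384-..." crossorigin="anonymous">
```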
There's a new Native Filesystem API: "The new Native File System API allows web apps to read or save changes directly to files and folders on the user's device." https://web.dev/native-file-system/
We'll need a way to grant specific URLs specific, limited amounts of storage.
https://github.com/iodide-project/pyodide :
> The Python scientific stack, compiled to WebAssembly
> [...] Pyodide brings the Python 3.8 runtime to the browser via WebAssembly, along with the Python scientific stack including NumPy, Pandas, Matplotlib, parts of SciPy, and NetworkX. The packages directory lists over 35 packages which are currently available.
> Pyodide provides transparent conversion of objects between Javascript and Python. When used inside a browser, Python has full access to the Web APIs.
https://github.com/deathbeds/jyve/issues/46 :
> Would miniforge and conda-forge build a WASM architecture target?
> Emscripten or WASI?
Ask HN: Learning about distributed systems?
I used to love Operating Systems during my undergrads, Modern Operating Systems by Tanenbaum is till date the only academic book I've read entirely. I recently read an article about how Amazon built Aurora by Werner Vogels and I was captivated by it. I want to start reading about Distributed Systems. What would be a good start/Road Map?
The book “Designing Data-Intensive Applications” by Martin Kleppmann is a fantastic read with a concise train of thought. It builds up from the basics, adds another thing, and another thing.
I kept asking myself, what would happen if I were to extend on the feature currently presented in the chapter I was reading, only to find out my answers in the next chapter.
Brilliant book
> "Designing Data-Intensive Applications" by Martin Kleppmann: https://dataintensive.net/ https://g.co/kgs/xJ73FS
From a previous question re: "Ask HN: CS papers for software architecture and design?" (https://news.ycombinator.com/item?id=15778396) and the distributed systems we eventually realize were needed in the first place:
> Bulk Synchronous Parallel: https://en.wikipedia.org/wiki/Bulk_synchronous_parallel .
Many/most (?) distributed systems can be described in terms of BSP primitives.
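A minimal sketch of BSP supersteps, using threads and a barrier (toy in-process message passing; real BSP systems distribute the workers across machines):

```python
# BSP sketch: each worker does local computation, sends messages, then
# all workers synchronize at a barrier before the next superstep reads
# the delivered messages.
import threading

N = 4
barrier = threading.Barrier(N)
inbox = [[] for _ in range(N)]  # per-worker message slots
results = [0] * N

def worker(i):
    # Superstep 1: local computation + communication
    inbox[(i + 1) % N].append(i * i)  # send to right neighbor
    barrier.wait()                    # barrier synchronization
    # Superstep 2: consume messages delivered at the barrier
    results[i] = sum(inbox[i])
    barrier.wait()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [9, 0, 1, 4]
```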
> Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science) .
> Raft: https://en.wikipedia.org/wiki/Raft_(computer_science)#Safety
> CAP theorem: https://en.wikipedia.org/wiki/CAP_theorem .
Papers-we-love > Distributed Systems: https://github.com/papers-we-love/papers-we-love/tree/master...
awesome-distributed-systems also has many links to theory: https://github.com/theanalyst/awesome-distributed-systems
- Byzantine fault: https://en.wikipedia.org/wiki/Byzantine_fault :
> A [Byzantine fault] is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine Generals Problem",[2] developed to describe a situation in which, in order to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable.
awesome-bigdata lists a number of tools: https://github.com/onurakpolat/awesome-bigdata
Practically, dask.distributed (joblib -> SLURM, etc.), Dask-ML, dask-labextension (a JupyterLab extension for Dask), and the RAPIDS tools (e.g. cuDF) scale from one to many nodes.
Not without a sense of irony, as the lists above include many papers that could be readings with quizzes:
Distributed systems -> Distributed computing: https://en.wikipedia.org/wiki/Distributed_computing
Category: Distributed computing: https://en.wikipedia.org/wiki/Category:Distributed_computing
Category:Distributed_computing_architecture : https://en.wikipedia.org/wiki/Category:Distributed_computing...
DLT: Distributed Ledger Technology: https://en.wikipedia.org/wiki/Distributed_ledger
Consensus (computer science) https://en.wikipedia.org/wiki/Consensus_(computer_science)
Ask HN: How can I “work-out” critical thinking skills as I age?
As I get older, I realized I’m not as sharp as I used to be. Maybe it’s from the fatigue of juggling 2 kids, but I’m very ill prepared for interviews because I simply can’t answer “product questions” and brain teasers. It’s a skill I need, and truthfully I was never good at consultant type questions to begin with but I’m seeing a lot of these questions in Data Science interviews.
Any help or resources will be tremendously appreciated.
Problem solving: https://en.wikipedia.org/wiki/Problem_solving
Critical thinking: https://en.wikipedia.org/wiki/Critical_thinking
Computational Thinking: https://en.wikipedia.org/wiki/Computational_thinking
> 1. Problem formulation (abstraction);
> 2. Solution expression (automation);
> 3. Solution execution and evaluation (analyses).
Interviewers may be more interested in demonstrated problem-solving methods and thinking aloud than in an actual solution produced in an anxiety-inducing scenario.
https://en.wikipedia.org/wiki/Brilliant_(website) :
> Brilliant offers guided problem-solving based courses in math, science, and engineering, based on National Science Foundation research supporting active learning.[14]
Coding Interview University: https://github.com/jwasham/coding-interview-university
Programmer Competency Matrix: https://github.com/hltbra/programmer-competency-checklist
Inference > See also: https://en.wikipedia.org/wiki/Inference
- Deductive reasoning: https://en.wikipedia.org/wiki/Deductive_reasoning
- Inductive reasoning: https://en.wikipedia.org/wiki/Inductive_reasoning
> This is the [open] textbook for the Foundations of Data Science class at UC Berkeley: "Computational and Inferential Thinking: The Foundations of Data Science" http://inferentialthinking.com/
The tragedy of FireWire: Collaborative tech torpedoed by corporations
Due to DMA (Direct Memory Access) in most implementations, IEEE 1394 ("FireWire") can be used to directly read from and write to RAM.
See: IEEE 1394 > Security issues https://en.wikipedia.org/wiki/IEEE_1394#Security_issues
FWIU, USB 3 is faster than FireWire; there are standard, interchangeable USB connectors and adapters; and USB implementations do not use DMA. https://en.wikipedia.org/wiki/USB_3.0
> FWIU, USB 3 is faster than FireWire
You've missed the point. The title of the article reads "The tragedy of FireWire": how a good standard was victimized by corporate politics. FireWire was 10 years ahead of USB. In 2002, FireWire 800 provided a data rate of 800 Mbps, full-duplex; USB 2 only had 480 Mbps, half-duplex. USB didn't catch up until 2010, after USB 3.0 was released. FireWire uses 30 V, 1.5 A power and supports power delivery up to 45 watts. USB didn't offer anything similar, not even in USB 3, until the standardization of Power Delivery and Type-C in 2014, and real applications only started to ramp up by ~2017. Both limitations of USB 2 were detrimental to some applications; e.g., using hard drives on a PC was painful due to speed and power constraints. Had FireWire become more common, that would not have been an issue.
FireWire can also do everything Ethernet can (in fact, 1394b's signaling is better than 100Base-TX Ethernet, with lower radiated emissions). Networking is natively supported: you can network two computers together directly with a 1394 cable! On USB, that's only possible using an external controller. One application was NAS: a hard drive could be shared directly over 1394 and mounted as a local drive, which is not possible with USB. Finally, 1394b does not have USB's 5-meter limitation due to protocol timing, and it natively supports signaling over CAT-5 and fiber-optic cable for distances of ~50-100 meters; on USB, that requires a media converter running vendor-specific, non-interoperable protocols. All of these features were standardized in 2004 in IEEE 1394b.
Of course, these limitations are not inherently USB 2's failures; the two had different applications. FireWire was more expensive, and USB's goal was low cost. But the tragedy was that bad business decisions prevented FireWire from achieving its full potential; it slowly faded out and eventually became irrelevant after USB 3, despite its initial good engineering.
FireWire was a tragedy.
> Due to DMA (Direct Memory Access) in most implementations, IEEE 1394 ("FireWire") can be used to directly read from and write to RAM.
This is why we need IOMMU.
You should not blame IEEE 1394 for DMA attacks just because it supports DMA. I agree that the tragedy of FireWire probably prevented a common PC exploit vector as a lucky, unintentional consequence, but it's just a symptom, not the actual issue - and the symptom has come back in another form.
Most sufficiently fast hardware interfaces support DMA, and they're equally vulnerable - this includes PCI-E, ExpressCard (which exposes PCI-E), USB 4 (which exposes a PCI-E port), and Thunderbolt (which exposes PCI-E). Practical exploits are already widespread, not limited to arbitrary memory reads/writes but also including Option ROM arbitrary code execution if a Thunderbolt device is plugged in at boot time.
And this is not only a problem for external ports like 1394, USB 4, or Thunderbolt. Threats also exist from internal devices, like an Ethernet or Wi-Fi controller on the motherboard. While you cannot initiate a DMA transaction via Ethernet (without RDMA) or Wi-Fi, since they don't expose PCI-E, the fact that those controllers are connected directly to PCI-E implies that any vulnerability in the controller firmware potentially allows an attacker to launch a DMA attack. Exploits already exist for Wi-Fi controllers.
This is the real problem. While the mechanism of IOMMU exists, CPU manufacturers and operating systems previously did little to systematically protect the system from DMA. In the past, Intel intentionally removed IOMMU functionality from low-end CPUs and motherboard chipsets in order to sell more Xeon CPUs to enterprise users. Also, while IOMMU is supported by operating systems, kernel code and drivers were not systematically audited for security - serious IOMMU bypass vulnerabilities have previously been discovered (and patched) in macOS's and Linux's IOMMU implementations.
With the continued proliferation of USB 4 and Thunderbolt, the lack of IOMMU protection and buggy driver implementations will become a bigger problem.
So far, QubesOS is the operating system best protected from DMA attacks. Untrusted hardware is assigned to a dedicated VM, and IOMMU is enforced by the hypervisor, not by the individual device drivers in the operating system. Compromising a device driver in an untrusted domain does not directly expose the rest of QubesOS to attackers.
> and USB implementations do not use DMA.
USB 4 does now.
So your argument is that not security but cost is the reason that USB "won" the external device interface competition with FireWire?
Good to know that USB4 implementations are making the same mistake as FireWire implementors did in choosing performance over security. Unfortunately it looks like there will be no alternative except maybe to use a USB3 hub (or an OS with fuzzed IOMMU and controller firmwares)?
Could an NX bit for data coming from buses with and without DMA help at all?
Hot gluing external ports now seems a bit more rational and justified for systems where physical access is less controlled.
> So your argument is that not security, but cost is the reason that USB "won" the external device interface competition with FireWire?
Sorry, but I heavily suspect that you didn't read the original article, and I suggest you do so.
Security has nothing to do with 1394's market adoption. In the 2010s, some desktops still had 1394 and some laptops still had ExpressCard, and you didn't see anyone complaining about security, only some security researchers.
The reason was partially cost, but also a bad business decision by Steve Jobs, which made Intel and Microsoft stop their efforts to push 1394, largely damaging it before it even got a chance at wider adoption.
> FireWire's principal creator, Apple, nearly killed it before it could appear in a single device. And eventually the Cupertino company effectively did kill FireWire, just as it seemed poised to dominate the industry.
> Intel sent its CTO to talk to Jobs about the change, but the meeting went badly. Intel decided to withdraw its support for FireWire—to pull the plug on efforts to build FireWire into its chipsets—and instead throw its weight behind USB 2.0.
> Sirkin believes that Microsoft could have reversed the new licensing policy by citing the prior signed agreement. "Microsoft must have thrown it away," he speculated, because it would have "stopped Apple in its tracks."*
--
> Good to know that USB4 implementations are making the same mistake as FireWire implementors did in choosing performance over security.
DMA is optional and can be disabled.
Speaking of usability, DMA is limited to the Thunderbolt part (which is ultimately the PCI-E part) of the USB 4 standard. You don't have to use USB 4's PCI-E. Unless the USB device really needs PCI-E, like a RAID storage box or a GPU, disabling DMA is not a problem. Requiring explicit user consent for PCI-E is also a solution; operating systems already do this for Thunderbolt devices, and the same treatment can be applied to USB 4.
And again, calling out DMA support is only shooting the messenger while ignoring the underlying problem. DMA is essential for any high-performance system bus, and it can be a problem regardless of which bus supports it or whether it's connected to an external port. A Wi-Fi controller handles untrusted data, and despite being an internal device on the motherboard, it's still potentially an exploit vector for DMA attackers.
> (or an OS with fuzzed IOMMU and also controller firmwares)
Only IOMMU is needed (and possibly a good driver, but not strictly required, see QubesOS). If your IOMMU is good, you don't have to trust firmware or hardware, since arbitrary DMAs are blocked by IOMMU, just like how memory protection (MMU) works but for I/O. Memory spaces are isolated.
The IOMMU vulnerabilities I previously mentioned are already patched, and a complete bypass of IOMMU is unlikely to occur in the future. Driver bugs are still an issue: drivers can expose more RAM regions than are really needed, and an attacker could potentially pretend to be any piece of hardware and feed bad data to trick the most vulnerable driver into exposing even more RAM. I expect to see more bugs.
Turn on IOMMU, and apply OS patches often, and you'll probably be fine.
I read much of the article (which assumed that "FireWire" failed because of suppliers failing to work together, rather than because of waning demand (due in part to corporate customers' knowledge of the security risks of most implementations)).
Thanks for the info on USB-4, DMA, IOMMU.
IOMMU: https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_ma...
Looks like there are a number of iommu Linux kernel parameters: https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...
Wonder what the defaults are and what the comparable parameters are for common consumer OSes.
Looks like NX bit support is optional in IOMMUs.
Can I configure the amount of RAM allocated to this?
> Wonder what the defaults
See the Thunderclap FAQ, most of your questions are explained. Thunderclap is the DMA attack on Thunderbolt, and everything should be applicable to USB 4 as well, since USB 4 incorporated Thunderbolt directly.
From the FAQ,
> In macOS 10.12.4 and later, Apple addressed the specific network card vulnerability we used to achieve a root shell. However the general scope of our work still applies; in particular that Thunderbolt devices have access to all network traffic and sometimes keystrokes and framebuffer data.
> Microsoft have enabled support for the IOMMU for Thunderbolt devices in Windows 10 version 1803, which shipped in 2018. Earlier hardware upgraded to 1803 requires a firmware update from the vendor. This brings them into line with the baseline for our work, however the more complex vulnerabilities we describe remain relevant.
> Recently, Intel have contributed patches to version 5.0 of the Linux kernel (shortly to be released) that enable the IOMMU for Thunderbolt and prevent the protection-bypass vulnerability that uses the ATS feature of PCI Express.
But note that Microsoft says on its website,
> This feature does not protect against DMA attacks via 1394/FireWire, PCMCIA, CardBus, ExpressCard, and so on.
Basically, IOMMU is enabled by default in most systems now - however, only for Thunderbolt devices (and USB 4 devices, since Thunderbolt is now a subset of USB 4), not for other PCI-E ports, such as internal PCI-E ports or external ones like 1394 or ExpressCard. I think that's due to fears of performance and compatibility issues; there's no reason why it couldn't be implemented if one insisted. Also, from the macOS example, you can see that bad drivers can be a problem even with IOMMU: if a driver is bad, attackers can exploit it and trick it into giving more RAM access. (If my memory is correct, Linux's implementation was better, but since it's an early FAQ, I don't know if macOS has improved.)
> what the comparable parameters are for common consumer OSes.
On the Linux kernel, with an Intel CPU, the kernel option "intel_iommu=on" will enable IOMMU completely for everything, internal and external, which is the most ideal option. You should see
DMAR: IOMMU enabled
in dmesg. Some people report compatibility issues with Intel's integrated GPU that cause glitches or system hangs; "intel_iommu=on,igfx_off" should fix that (which is not really a security problem, since the GPU is already in the CPU). I've been running my PC with the IOMMU enabled for years now and haven't noticed any problems.
The IOMMU needs support from the CPU and motherboard chipset. Intel's IOMMU is called VT-d and is marketed as I/O virtualization for virtual machines. Most i3, i5, and i7 CPUs are supported; you may need to turn it on in the BIOS if there's an option. It should work on most laptops - even an old Ivy Bridge laptop should work - which is good news, since laptops are the most vulnerable platform. Unfortunately, on desktops, Intel intentionally removed the IOMMU from the high-end CPUs preferred by PC enthusiasts in the 1st (Nehalem), 2nd (Sandy Bridge), 3rd (Ivy Bridge), and 4th (Haswell) generations, possibly as a strategy to sell more Xeon chips to enterprise virtualization users (likely in line with their crippled-ECC strategy). Most high-end overclockable K-series CPUs had their IOMMU removed while the non-K counterparts were supported; also, in those generations, some motherboard chipsets were crippled and don't have an IOMMU. Fortunately, Intel no longer does this as of the 5th/6th gen (Broadwell/Skylake) CPUs.
No experience with AMD yet, but it should be similar - possibly better; my impression is that AMD doesn't intentionally cripple the IOMMU or ECC like Intel does (a nasty strategy...). Check the docs.
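On Linux, one quick way to check whether the IOMMU is active is to look for populated groups under /sys/kernel/iommu_groups. A minimal sketch (the sysfs path is the standard kernel layout; an empty result just means no IOMMU was detected on this system):

```python
from pathlib import Path

def iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> list of PCI device addresses in that group.

    An empty result means the IOMMU is disabled, unsupported, or this isn't
    Linux. A populated result means DMA remapping is active; note that
    devices sharing a group cannot be isolated from each other.
    """
    base = Path(root)
    if not base.is_dir():
        return {}
    return {
        group.name: sorted(dev.name for dev in (group / "devices").iterdir())
        for group in base.iterdir()
        if (group / "devices").is_dir()
    }

groups = iommu_groups()
print(f"IOMMU {'enabled' if groups else 'not detected'}: {len(groups)} groups")
```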
The Developer’s Guide to Audit Logs / SIEM
This article suggests that there should be separate data collection systems for: analytics, SIEM logs, and performance metrics.
The article mentions the CEF (Common Event Format) standard but not syslog or GELF or other JSON formats.
[ArcSight] Common Event Format [PDF]: https://kc.mcafee.com/resources/sites/MCAFEE/content/live/CO...
GELF: Graylog Extended Log Format: https://docs.graylog.org/en/latest/pages/gelf.html
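For reference, a GELF payload is just JSON with a handful of required fields. A minimal sketch of building one (field names follow the GELF 1.1 spec; the host name and the custom `_`-prefixed field here are illustrative):

```python
import json
import time

def gelf_message(host, short_message, level=6, **custom):
    """Build a minimal GELF 1.1 payload.

    Required fields: version, host, short_message. Custom fields must be
    prefixed with an underscore per the spec. `level` is a syslog severity
    (6 == informational).
    """
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),  # seconds since epoch
        "level": level,
    }
    msg.update({f"_{key}": value for key, value in custom.items()})
    return json.dumps(msg)

payload = gelf_message("app01.example.com", "user login", user_id=42)
```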
Wikipedia > Syslog lists a few limitations of Syslog (no message delivery confirmation, though there is a reliable delivery RFC; and insufficient payload standardization) and also links to the existing Syslog RFCs. https://en.wikipedia.org/wiki/Syslog
Are push-style systems ideal for security log-shipping? What sort of message broker is ideal? AMQP has reliable delivery, while, for example, ZeroMQ does not and will drop messages under resource exhaustion.
Developers simply need an API for their particular framework to non-blockingly queue and then log structs to a remote server. This typically means moving beyond a single-threaded application architecture so that the singular main [green] thread is not blocked when the remote log server is not responding.
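With Python's stdlib, this pattern can be sketched with logging.handlers.QueueHandler and QueueListener: the application thread only enqueues records, and a background thread drains the queue to the (possibly slow or unresponsive) remote handler. The StreamHandler here is a stand-in for whatever remote handler your log shipper expects (e.g. a SysLogHandler):

```python
import logging
import logging.handlers
import queue

# The application logger only enqueues; it never blocks on delivery.
log_queue = queue.SimpleQueue()
app_logger = logging.getLogger("app")
app_logger.setLevel(logging.INFO)
app_logger.addHandler(logging.handlers.QueueHandler(log_queue))

# The listener owns the potentially blocking handler and drains the
# queue in a background thread.
remote = logging.StreamHandler()  # stand-in for a remote/syslog handler
listener = logging.handlers.QueueListener(log_queue, remote)
listener.start()

app_logger.info("user login")  # returns immediately; delivery is async
listener.stop()                # flushes the queue and joins the thread
```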
SIEM: Security information and event management: https://en.wikipedia.org/wiki/Security_information_and_event...
Del.icio.us
The Firefox (and Chromium) bookmarks storage and sync systems still don't persist tags!
"Allow reading and writing bookmark tags" https://bugzilla.mozilla.org/show_bug.cgi?id=1225916
Notes re: how this could be standardized with JSON-LD: https://bugzilla.mozilla.org/show_bug.cgi?id=1225916#c116
The existing Web Experiment for persisting bookmark tags: https://github.com/azappella/webextension-experiment-tags/bl...
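As a hypothetical sketch of the JSON-LD idea: a tagged bookmark could serialize with schema.org's generic `keywords` property on a `WebPage` (the exact vocabulary is an open question in the bug thread; this mapping is only one possibility):

```python
import json

def bookmark_jsonld(url, title, tags):
    """One possible (hypothetical) JSON-LD mapping for a tagged bookmark."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "WebPage",
        "url": url,
        "name": title,
        "keywords": list(tags),  # schema.org CreativeWork.keywords
    }, indent=2)

print(bookmark_jsonld("https://example.com", "Example", ["reading", "web"]))
```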
Ask HN: Recommendations for Books on Writing?
I want to propose a book club for writing as an engineer. Writing is fundamentally and critically important, but it seems that we don't emphasize it as much as we should for engineers (outside Amazon, where apparently it is a prominent member of the leadership pantheon).
I'm interested in any suggestions that HN has for great books on writing as an engineer! Accessibility and ease are important factors for a book club as well.
Technical Writing: https://en.wikipedia.org/wiki/Technical_writing
Google Technical Writing courses (1 & 2) and resources: https://developers.google.com/tech-writing :
- Google developer documentation style guide: https://developers.google.com/style
- Microsoft Writing Style Guide: https://docs.microsoft.com/en-us/style-guide/welcome/
Season of Docs is a program where applicants write documentation for open source projects: https://developers.google.com/season-of-docs/
Many open source projects are happy to accept necessary contributions of docs and editing; but do keep in mind that maintaining narrative documentation can be far more burdensome than maintaining API documentation that's kept next to the actual code. Systems like doxygen, epydoc, javadoc, and sphinx-apidoc enable developers to generate API documentation for a particular version of the software project as one or more HTML pages.
ReadTheDocs builds documentation from ReStructuredText and now also Markdown sources using Sphinx and the ReadTheDocs Docker image. ReadTheDocs organizes docs with URLs of the form <projectname>.rtfd.io/<language>/<version|latest>: https://docs.readthedocs.io/en/latest/ . The ReadTheDocs URL scheme reduces the prevalence of broken external links to documentation; though authors are indeed free to delete and rename docs pages and change which VCS tags are archived with RTD.
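For a sense of how little setup Sphinx needs, here is a minimal sketch of a `conf.py` (the project name and author are placeholders; `sphinx_rtd_theme` is the separately packaged ReadTheDocs theme):

```python
# conf.py -- minimal Sphinx configuration (hypothetical project names)
project = "exampleproject"
author = "Example Author"
release = "0.1.0"

extensions = [
    "sphinx.ext.autodoc",   # generate API docs from docstrings
    "sphinx.ext.napoleon",  # support Google/NumPy docstring styles
]

html_theme = "sphinx_rtd_theme"
exclude_patterns = ["_build"]
```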
Write the Docs is a conference for technical documentation authors which is supported in part by ReadTheDocs: https://www.writethedocs.org/
Write the Docs > Learning Resources > All our videos and articles: https://www.writethedocs.org/topics/ :
> This page links to the topics that have been covered by conference talks or in the newsletter.
You might say that UX (User Experience) includes UI design and marketing: the objective is to imagine yourself as a customer experiencing the product or service afresh.
Writing dialogue is an activity we often associate more with creative writing exercises, where the objective is to meditate upon compassion for others.
One must imagine oneself as the people who interact with the team.
Cognitive walkthrough: https://en.wikipedia.org/wiki/Cognitive_walkthrough
The William Golding, Jung, and Joseph Campbell books on screenwriting, archetypes, and the hero's journey monomyth are excellent; if you're looking for creative writing resources.
Ask HN: How did you learn x86-64 assembly?
I'm an experienced C/C++ programmer and I occasionally look at the generated assembly to check for optimizations, loop unrolling, vectorization, etc. I understand what's going on at the surface level, but I have a hard time understanding what's going on in detail, especially at high optimization levels, where the compiler does all kinds of clever tricks. I experiment with code in godbolt.org and look up the various opcodes, but I would like to take a more structured approach to learning x86-64 assembly, especially when it comes to common patterns, tips and tricks, etc.
Are there any good books or tutorials you can recommend which go beyond the very beginner level?
High Level Assembly (HLA) https://en.wikipedia.org/wiki/High_Level_Assembly
> HLA was originally conceived as a tool to teach assembly language programming at the college-university level. The goal is to leverage students' existing programming knowledge when learning assembly language to get them up to speed as fast as possible. Most students taking an assembly language programming course have already been introduced to high-level control flow structures, such as IF, WHILE, FOR, etc. HLA allows students to immediately apply that programming knowledge to assembly language coding early in their course, allowing them to master other prerequisite subjects in assembly before learning how to code low-level forms of these control structures. The book The Art of Assembly Language Programming by Randall Hyde uses HLA for this purpose
Web: https://plantation-productions.com/Webster/
Book: "The Art of Assembly Language Programming" https://plantation-productions.com/Webster/www.artofasm.com/
Portable, Opensource, IA-32, Standard Library: https://sourceforge.net/projects/hla-stdlib/
"12.4 Programming in C/C++ and HLA" in the Linux 32 bit edition: https://plantation-productions.com/Webster/www.artofasm.com/...
... A chapter or two about wider registers, WASM, LLVM bitcode, etc. might be useful?
... Many awesome lists link to OllyDbg and other great resources for ASM, such as Ghidra: https://www.google.com/search?q=ollydbg+site%3Agithub.com+in...
Brain connectivity levels are equal in all mammals, including humans: study
This is outrageously misleading. To make a claim about the number of synapses between neurons based on MRI data is completely unwarranted. Voxel size (single volumetric pixel) in MRI is approximately 1mm, while synapse size is way less than a micron. You need resolution per pixel on the order of 10s of nanometers to identify synapses.
I wouldn’t be surprised if a dead salmon also has “equal” connectivity: https://www.wired.com/2009/09/fmrisalmon/
Surely there's something to learn here though. I haven't read the original paper but a quantity that's preserved across brain scales is either an artifact or a neat insight.
Your criticism reads like someone accusing economists of being outrageously misleading when they don't sample individual households but measure macro indicators. It's like saying Ramón y Cajal was ridiculous because he couldn't image the neuropil effectively. Or like saying early optogenetics experiments were ridiculous because who knows if you're stimulating a neuron in a realistic manner?
And in any case, it's true that synapses are comically small relative to voxel size, but we also have some reasonable information about projection patterns and synapse number from various tracer or rabies studies with which you are no doubt familiar.
I haven't read the nature paper the press release is about and I'm not a huge fan of many d/fMRI practices or derived claims. And I've worked with enough mammalian dwi data to be skeptical of specific connection claims. But this strikes me as a rather interesting result even if you can't measure all the synapses at the right resolution: either the tractography method has connectivity conservation artifacts baked in, or there's something interesting going on.
"fNIRS Compared with other neuroimaging techniques" https://en.wikipedia.org/wiki/Functional_near-infrared_spect...
> When comparing and contrasting these devices it is important to look at the temporal resolution, spatial resolution, and the degree of immobility.
I think I missed your overall point?
OP suggests that the spatial resolution of existing MRI neuroimaging capabilities is insufficient to observe, characterize, or generalize about neuronal activity in mammalian species. fNIRS (functional near-infrared spectroscopy) is one alternative neuroimaging capability that could be compared with fMRI according to the criteria suggested in the cited Wikipedia article: "temporal resolution, spatial resolution, and the degree of immobility".
Ask HN: Resources to start learning about quantum computing?
Hi there,
I'm an experienced software engineer (15+ years dev experience, MSc in Computer Science), and quantum computing is the first thing in my experience that has been hard to grasp/understand. I'd love to fix that ;)
What resources would you recommend to start learning about quantum computing?
Ideally resources that touch both the theoretical base and evolve to more practical usages.
"What are some good resources to learn about Quantum Computing?" https://news.ycombinator.com/item?id=16052193 https://westurner.github.io/hnlog/#comment-16052193
Launch HN: Charityvest (YC S20) – Employee charitable funds and gift matching
Stephen, Jon, and Ashby here, the co-founders of Charityvest (https://charityvest.org). We created a modern, simple, and affordable way for companies to include charitable giving in their suite of employee benefits.
We give employees their own tax-deductible charitable giving fund, like an “HSA for Charity.” They can make contributions into their fund and, from their fund, support any of the 1.4M charities in the US, all on one tax receipt.
Using the funds, we enable companies to operate gift matching programs that run on autopilot. Each donation to a charity from an employee is matched automatically by the company in our system.
A company can set up a matching gift program and launch giving funds to employees in about 10 minutes of work.
Historically, corporate charitable giving matching programs have been administratively painful to operate. Making payments to charities, maintaining tax records, and doing due diligence on charitable compliance is taxing on HR / finance teams. The necessary software to help has historically been quite expensive and not very useful for employees beyond the matching features.
This is one example of an observation Stephen made after working for years as a philanthropic consultant. Consumer fintech products aren’t built to make great giving experiences for donors. Instead, they are built for buyers — e.g., nonprofits (fundraising) or corporations (gift matching) — without a ton of consideration for the everyday user experience.
A few years back, my wife and I made a commitment to give a portion of our income away every year, and we found it administratively painful to give regularly. The tech that nonprofits typically use hardly inspires generosity — e.g., high fees, poor user flows, and questionable information flow (like tax receipts). Giving platforms try to compensate for poor functionality with bright pictures of happy kids in developing countries, but when the technology is not a good financial experience it puts a damper on things.
Charityvest started when I noticed a particular opportunity with donor-advised funds, which are tax-deductible giving funds recognized by the IRS. They are growing quickly (20% CAGR), but mainly among the high-net worth demographic. We believe they are powerful tools. They enable donors to have a giving portfolio all from one place (on one tax receipt) and have full control over their payment information/frequency, etc. Most of all, they enable a donor to split the decisions of committing to give and supporting a specific organization. Excitement about each of these decisions often strikes at different times for donors—particularly those who desire to give on a budget.
We believe everyone should have their own charitable giving fund no matter their net worth. We’ve created technology that has democratized donor-advised funds.
We also believe good technology should be available for every company, big and small. Employers can offer Charityvest for $2.49 / employee / month subscription, and we charge no fees on any of the giving — charities receive 100% of the money given.
Lastly, we send the program administrator a fun report every month to let them know all the awesome giving their company and its employees did in one dashboard. This info can be leveraged for internal culture or external brand building.
We’re just launching our workplace giving product, but we’ve already built a good portfolio of trusted customers, including Eric Ries’ (author of The Lean Startup) company, LTSE. We’ve particularly seen a number of companies use us as a meaningful part of their corporate decision to join the fight for racial justice in substantive ways.
Our endgame is that the world becomes more generous, starting with the culture of every company. We believe giving is fundamentally good and we want to build technology that encourages more of it by making it more simple and accessible.
You can check out our workplace giving product at (https://charityvest.org/workplace-giving). If you’re interested, we can get your company up and running in 10 minutes. Or, please feel free to forward us on to your HR leadership at your company.
Our giving funds are also available for free for any individual on https://charityvest.org — without gift matching and reporting. We’d invite you to check out the experience. For individuals, we make gifts of cash and stock to any charity fee-free.
Happy to share this with you all, and we’d love to know what you think.
What a great idea!
Are there two separate donations or does it add the company's name after the donor's name? Some way to notify recipients about the low cost of managing a charitable donation match program with your service would be great.
Have you encountered any charitable foundations which prefer to receive cryptoassets? Red Cross and UNICEF accept cryptocurrency donations for the children, for example.
Do you have integration with other onboarding and HR/benefits tools on your roadmap? As a potential employee, I would like to work for a place that matches charitable donations, so mentioning as much in job descriptions would be helpful.
Thanks! Our matching system issues an identical grant from the fund of the matching company. It goes out in the same grant cycle as the employee grant so they go together.
We haven't yet encountered any charity that prefers to receive cryptoassets.
We have lots of dreams about thoughtful integrations with HR software, but we want the experience to be excellent, and we want to keep the experience of our existing app excellent. We'll be balancing those priorities as we grow.
> Our matching system issues an identical grant from the fund of the matching company. It goes out in the same grant cycle as the employee grant so they go together.
So the system creates a separate transaction for the original and the matched donation with each donor's name on the respective gift?
How do users sync elements of their HR information with your service? IDK what the monthly admin cost is there.
There are a few HR, benefits, contracts, and payroll YC companies with privacy regulation compliance and APIs https://www.ycombinator.com/companies/?query=Payroll
https://founderkit.com/people-and-recruiting/health-insuranc...
Separate data records, yes, separate payment, no. The charity receives one consolidated check in the mail each month across all donors (one payment), and the original donor's grant will be on the data sheet as well as the matching grant from the company's corporate fund (separate records).
Today, users create their accounts directly with our app, and they are affiliated with their corporation in our app via their email address. So we don't integrate with any HR information.
Administrators of the program can add and remove employees via copying and pasting email addresses (can add/remove many at a time). We aim to integrate with HR systems of record in the future to make this seamless.
Thanks for clarifying.
Do you offer a CSV containing donor information to the charity?
Do you support anonymous matched donations?
Can donors specify that a donation is strongly recommended for a specific effort?
...
3% * $1000/yr == $2.50/mo * 12mo
CSV: yes if they request we’ll happily provide.
Anonymous: yes it’s an option on our grant screen
Specific efforts: yes on the grant screen we enable donors to add a “specific need” they’d like their grant to fund.
:-)
It may be helpful to integrate with charity evaluation services to help donors assess various opportunities to give.
Charity Navigator > Evaluation method https://en.wikipedia.org/wiki/Charity_Navigator#Evaluation_m...
We love the idea of layering in rich information to help donors make informed decisions! We just want to be really thoughtful about the experience. So we plan to get there, it just may take us some time.
We Need a Yelp for Doctoral Programs
How are the data needs for such a doctoral and post-doctoral evaluation program different from the data needs for https://collegescorecard.ed.gov ?
Data: https://collegescorecard.ed.gov/data/
Data documentation: https://collegescorecard.ed.gov/data/documentation/
Reviews seem like the biggest need.
“The culture in the X department is terrible, everyone works 80 hours a week and is miserable”
“Night life is great at least, the grad students usually go out for student nights at blah blah after seminar”
“Underrated X program because at university Y, don’t be fooled we literally have 4 of the 5 top researchers in field Z”
Etc.
But it is so hard to get people to give reviews. People seem to give reviews when they are either very pissed off or very happy. The latter seems to be a very rare case.
I have always thought that bad reviews are sufficient for all purposes, and the only reason to even have good reviews is so the reviewees don't feel overly persecuted.
Any product or service will have occasional customers who are either unreasonable or have bad luck. If you have a lot of bad reviews, then as a prospective customer you want to see if all of them fall under the two categories or whether there is a pattern that is relevant to you.
It seems to me that a review database works fine even when all the reviews are bad, and the only regulation it really needs is an effort to prevent one single individual with a vendetta from overwhelming it.
All of the World’s Money and Markets in One Visualization
> Derivatives top the list, estimated at $1 quadrillion or more in notional value according to a variety of unofficial sources.
1 Quadrillion: 1,000,000,000,000,000 (10^15)
Derivative (finance) https://en.wikipedia.org/wiki/Derivative_(finance)
Derivatives market https://en.wikipedia.org/wiki/Derivatives_market :
> The market can be divided into two, that for exchange-traded derivatives and that for over-the-counter derivatives.
Why companies lose their best innovators (2019)
https://news.ycombinator.com/item?id=23886158
> Three reasons companies lose their best innovators.
> 1. They fail to recognize and support the innovators
> 2. Innovation becomes a herculean task
> 3. Corporations don’t match rewards with outcomes
While the paragraphs under point 2 do discuss risk and the paragraphs under point 3 do discuss rewards, I'm not sure this article belongs here.
Risk and Reward.
Large corporations are able to pay people by doing things at scale; with sufficient margin at sufficient volume to justify continued investment. Risk is minimized by focusing on ROI.
Startups assume lots of risk and lots of debt and most don't make it. Liquidation preference applies as the startup team adjourns (and maybe open-sources what remains). In a large corporation, that burnt capital is reported to the board (which represents the shareholders) who aren't "gambling" per se. "You win some and you lose some" is across the street; and they don't have foosball and snacks.
How can large organizations (nonprofit, for-profit, governmental) foster intrapreneurial mindsets without just continuing to say "innovation" more times and expecting things to happen? Drink. "Innovators welcome!". Drink water.
"Intrapreneurial." What does that even mean? The employee, within their specialized department, spends resources (time, money, equipment) on something that their superior managers have not allocated funding for because they want: (a) recognition; (b) job security; (c) to save resources such as time and money; (d) to work on something else instead of this wasteful process; (e) more money.
Very few organizations have anything like "20% time". Why was 20% time thrown off the island to a new island where they had room to run? Do they have foosball? Or is the work so fun that they don't even need foosball? Or is it worth long days and nights because the potential return is enough money to retire tomorrow and then work on what?
Step 1. Steal innovators to work on our one thing
Step 2.
Step 3. Profit.
20% Project: https://en.wikipedia.org/wiki/20%25_Project
Intrapreneurship: https://en.wikipedia.org/wiki/Intrapreneurship
Internal entrepreneur: https://en.wikipedia.org/wiki/Internal_entrepreneur
CINO: Chief Innovation Officer / CTIO: Chief Technology Innovation Officer https://en.wikipedia.org/wiki/Chief_innovation_officer
... Is acquiring innovation and bringing it to scale a top-down process? How do we capture creative solutions and then allocate willing and available resources to making that happen?
awesome-ideation-tools: https://github.com/zazaalaza/awesome-ideation-tools
Powerful AI Can Now Be Trained on a Single Computer
Lots of people are focusing on this being done on a particularly powerful workstation, but the computer described seems to have power at a similar order of magnitude to the many servers which would be clustered together in a more traditional large ML computation. Either those industrial research departments could massively cut costs/increase output by just “magically keeping things in ram,” or these researchers have actually found a way to reduce the computational power that is necessary.
I find the efforts of modern academics to do ML research on relatively underpowered hardware by being more clever about it to be reminiscent of soviet researchers who, lacking anything like the access to computation of their American counterparts, were forced to be much more thorough and clever in their analysis of problems in the hope of making them tractable.
If anything it seems to me that doing the most work under constraint of resources is precisely what intelligence is about. I've always wondered why the consumption of compute resources is itself not treated as a significant part of the 'reward' in ML tasks.
At least if you're taking inspiration from biological systems, it clearly is part of the equation, a really important one even.
Ask HN: Something like Khan Academy but full curriculum for grade schoolers?
Khan Academy continually gets held up as a great resource for online courses across the age spectrum for math-related subjects. With the pandemic continuing to grow in the US and schools not really sure how to handle things, the GF and I are looking into other options.
Is there a recommended resource that gives unbiased (as possible) reviews for middle school (7-8th grade) curriculum? Searching these days really doesn't bring up quality, just options one has to comb through.
K12 CS Framework (ACM, Code.org, [...]) https://k12cs.org/
Computing Curricula 2020 (ACM, IEEE) http://www.cc2020.net/
Official SAT Practice (College Board, Khan Academy) https://www.khanacademy.org/sat
http://wrdrd.github.io/docs/consulting/software-development#... (TODO: add link to cc2020 draft)
Programmer Competency Matrix: http://sijinjoseph.com/programmer-competency-matrix/ , https://competency-checklist.appspot.com/ , https://github.com/hltbra/programmer-competency-checklist
Re: Computational thinking https://westurner.github.io/hnlog/#comment-15454421
Coding Interview University: https://github.com/jwasham/coding-interview-university
AutoML-Zero: Evolving Code That Learns
They use evolutionary search [0] to create programs for image classification, specifically 'binary classification tasks extracted from CIFAR-10'. They do it from scratch, though they use a pytorch-ish programming language with differentiation baked in.
Looks like a nice summer intern project.
"AutoML-Zero: Evolving Machine Learning Algorithms From Scratch" (2020) https://arxiv.org/abs/2003.03384 https://scholar.google.com/scholar?cluster=11748751662887361...
How does this compare to MOSES (OpenCog/asmoses) or PLN? https://github.com/opencog/asmoses https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22... (2007)
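A toy sketch of the regularized-evolution idea behind this kind of search (tournament selection plus age-based removal of the oldest member), applied to a trivial curve-fitting task rather than the paper's actual program representation; all names and constants here are made up:

```python
import random

random.seed(0)

# Toy "task": fit y = 2x + 1 on a handful of sample points.
SAMPLES = [(x, 2 * x + 1) for x in range(-5, 6)]

def loss(genome):
    a, b = genome
    return sum((a * x + b - y) ** 2 for x, y in SAMPLES)

def mutate(genome):
    # Perturb one randomly chosen "instruction" (here: a coefficient).
    child = list(genome)
    child[random.randrange(len(child))] += random.gauss(0, 0.5)
    return child

# Regularized evolution: pick the best of a random tournament, append its
# mutated child, and remove the OLDEST member (not the worst).
population = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(20)]
for _ in range(2000):
    parent = min(random.sample(population, 5), key=loss)
    population.append(mutate(parent))
    population.pop(0)  # age-based removal

best = min(population, key=loss)
print(best, round(loss(best), 3))
```

The age-based removal (rather than worst-based) is the "regularized" part: it keeps the population turning over, which helps the search escape stale local optima.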
SymPy - a Python library for symbolic mathematics
SymPy is the best!
In addition to the already-excellent official tutorial https://docs.sympy.org/latest/tutorial/index.html , I have written a short printable tutorial summary of the functions most useful for students https://minireference.com/static/tutorials/sympy_tutorial.pd...
For anyone interested in trying SymPy without installing, there is also the online shell https://live.sympy.org/ which works great and allows you to send entire computations as a URL, see for example this comment that links to an important linear algebra calculation: https://news.ycombinator.com/item?id=23158095
How does it compare to Matlab's symbolic toolbox, aside from not being affordable outside of academia?
NumPy for Matlab users: https://numpy.org/doc/stable/user/numpy-for-matlab-users.htm...
SymPy vs Matlab: https://github.com/sympy/sympy/wiki/SymPy-vs.-Matlab
If you then or later need to do distributed ML, it is advantageous to be working in Python. Dask Distributed, Dask-ML, RAPIDS.ai (CuDF), PyArrow, xeus-cling
Good to know: sympy does not use numpy so you can use pypy instead of python.
In my case I got a 7x speedup.
You can however use sympy to convert symbolic expressions to numeric numpy functions, using the "lambdify" feature. It's awesome because it lets you symbolically generate and manipulate a numerical program.
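A minimal example of that lambdify workflow (assumes SymPy is installed; forcing the stdlib "math" backend avoids a NumPy dependency):

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x) ** 2 + sp.cos(x) ** 2

# Symbolic manipulation first...
assert sp.simplify(expr) == 1

# ...then lambdify turns the symbolic expression into a plain numeric function.
f = sp.lambdify(x, expr, modules='math')
print(f(0.7))  # ~1.0 for any input
```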
SymEngine https://github.com/symengine/symengine
> SymEngine is a standalone fast C++ symbolic manipulation library. Optional thin wrappers allow usage of the library from other languages, e.g.:
> [...] Python wrappers allow easy usage from Python and integration with SymPy and Sage (the symengine.py repository)
https://en.wikipedia.org/wiki/SymPy > Related Projects:
> SymEngine: a rewriting of SymPy's core in C++, in order to increase its performance. Work is currently in progress to make SymEngine the underlying engine of Sage too
How does it compare to sage math? I used sage math today and was pleasantly surprised. Even has a latex option. If you name your variables correctly, it even outputs Greek symbols and subscript correctly!
User 29athrowaway mentions in another comment that they use Sympy through Sagemath, so they probably could answer your question best. It sounds like Sympy is a subset of what Sage offers, but I'm not familiar enough with either tool to know.
Ask HN: Are there any messaging apps supporting Markdown?
I'd like to easily send formatted code, and bullet points, etc. through a messaging app without having to resort to a heavy app like Slack.
Mattermost supports CommonMark Markdown: https://docs.mattermost.com/help/messaging/formatting-text.h...
Zulip supports ~CommonMark Markdown: https://zulip.readthedocs.io/en/latest/subsystems/markdown.h...
Reddit supports Markdown. https://www.reddit.com/wiki/markdown
Discourse now supports CommonMark Markdown.
GitHub, BitBucket, GitLab and Gogs/Gitea support Markdown.
I wouldn't consider Discourse/Reddit/Github/etc. to be a messaging app per se, even if some of those have messaging functionality between users...
I digress on the category definition. Public messaging (without PM or DM features) is still messaging; and often far more useful than trying to forward 1:1 messages in order to bring additional participants onboard.
It's worth noting that GH/BB/GL have all forgone PM features; probably for the better in terms of productivity: messaging @all is likely more productive.
What vertical farming and ag startups don't understand about agriculture
My father is an ag soil chemist of 50+ years.
I'm an industrial systems eng. w/ a specialty in polymer-textile-fiber engineering. (Mostly useless skillsets in the US now)
Gonna share a few lessons here about agriculture that I try to convey to EECS, econ, Neuroscience, and the web developer crowd.
- You can only grow non-calorically dense foods in vertical farms
- It takes 10-14 kwh/1000 gallons of water to desalinate. More if it gets periodically polluted at an increasing rate.
- Majority-agrarian populations exist because those countries are stuck in a purgatory of <1 MWh/capita annum, whereby the country doesn't have scalable nitrogen and steel manufacturing.
- Potatoes and sweet potatoes are some of the highest-satiety, lowest input-to-output ratio produce. High efficiency.
- In civilizations where you are at < 1 MWh/capita annum, there is not enough electricity to produce tools for farming, steel for roads, and concrete for building things. The end result is that the optimal decision is to have more children to harvest more calories per acre.
- Property, bankruptcy, and inheritance law have an immense influence on the farmer population of a country.
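The desalination figure above is easier to compare against published reverse-osmosis numbers when converted to kWh/m³; a quick conversion sketch (assuming US gallons):

```python
# Convert 10-14 kWh per 1000 US gallons into kWh per cubic meter.
GALLONS_PER_M3 = 1000 / 3.78541  # ~264.2 US gallons in a cubic meter

low_kwh_per_m3 = 10 / 1000 * GALLONS_PER_M3
high_kwh_per_m3 = 14 / 1000 * GALLONS_PER_M3
print(f"{low_kwh_per_m3:.1f}-{high_kwh_per_m3:.1f} kWh/m^3")  # ~2.6-3.7
```

That 2.6-3.7 kWh/m³ range is in the same ballpark as commonly cited seawater reverse-osmosis energy figures.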
I remember telling some "ag tech" VCs my insights and offering to introduce my father who has an immense amount of insight on the topic from having grown things for as long as he has....My thoughts were tossed aside.
So you can't grow potatoes vertically? Can you elaborate? Is it a function of physiology, i.e. calorie dense vegetables need far more leaves and supporting stems than can be practically stacked vertically?
I imagine space is a factor, but energy will be a big one as well. Calorie dense foods will likely need more space and energy (light) inputs. Vertical farms are very water efficient, so I don't think that matters much.
Vertical farms make a lot more sense with fresh vegetables like leafy greens that grow quickly, command high prices if grown organically, and benefit from being closer to market.
Potatoes are the exact opposite. If it ever becomes more cost effective to grow corn, wheat, and potatoes in vertical farms then outdoor agriculture is dead. While I don't agree with the article that it will never happen, it might require energy advances like fusion power or drastically higher _rural_ land values and water prices.
Greenhouses make sense long before vertical farming, just look at agriculture in the Netherlands, it's mind boggling how much they produce for such a tiny country.
Can you expand on this?
I get that to store a calorie in a potato I need to supply a calorie of energy from somewhere else.
But why is fusion power required instead of better UV lamps in my vertical farm? (Assuming I had enough electricity to run them)
The total amount of electricity to power those UV lamps should be on par with what the Sun sends to the potato fields. Maybe that's the reason for fusion. I didn't do the math.
Actually no not really. Plants only absorb two wavelengths of light. It's currently more efficient to convert sun into solar power via panels and then to light LEDs supplying only the wavelengths that plants use. Despite the seeming inefficiency here, the fact is that plants are even more inefficient at absorbing light not at the right wavelengths than solar panels.
Could one imagine a material that would absorb solar spectrum and emit the preferred frequencies? Something like a polymer one could stretch over fields to get more from the suns rays.
>> "Actually no not really. Plants only absorb two wavelengths of light. It's currently more efficient to convert sun into solar power via panels and then to light LEDs supplying only the wavelengths that plants use. Despite the seeming inefficiency here, the fact is that plants are even more inefficient at absorbing light not at the right wavelengths than solar panels."
> Could one imagine a material that would absorb solar spectrum and emit the preferred frequencies? Something like a polymer one could stretch over fields to get more from the suns rays.
Would you call that a "solar transmitter"?
https://en.wikipedia.org/wiki/Transmitter :
> Generators of radio waves for heating or industrial purposes, such as microwave ovens or diathermy equipment, are not usually called transmitters, even though they often have similar circuits.
Would "absorption spectroscopy" specialists have insight into whether this is possible without solar cells, energy storage, and UV LEDs? https://en.wikipedia.org/wiki/Absorption_spectroscopy
(edit) The thermal energy from sunlight (from the FREE radiation from the nuclear reaction at the center of our solar system) is also useful to and necessary for plants. There's probably a passive heat pipe / solar panel cooling solution that could harvest such heat for colder seasons and climates.
Also, UV-C is useful for sanitizing (UVGI) but not really for plant growth. https://en.wikipedia.org/wiki/Ultraviolet_germicidal_irradia... :
> UVGI can be coupled with a filtration system to sanitize air and water.
Is that necessary or desirable for plants?
https://www.lumigrow.com/learning-center/blogs/the-definitiv... :
> The light that plants predominately use for photosynthesis ranges from 400–700 nm. This range is referred to as Photosynthetically Active Radiation (PAR) and includes red, blue and green wavebands. Photomorphogenesis occurs in a wider range from approximately 260–780 nm and includes UV and far-red radiation.
Photomorphogenesis: https://en.wikipedia.org/wiki/Photomorphogenesis
PAR: Photosynthetically active radiation: https://en.wikipedia.org/wiki/Photosynthetically_active_radi...
Grow light: https://en.wikipedia.org/wiki/Grow_light
Are there bioluminescent e.g. algae which emit PAR and/or UV? Algae can feed off of waste industrial gases.
Bioluminescence > Light production: https://en.wikipedia.org/wiki/Bioluminescence#Light_producti...
Biophoton: https://en.wikipedia.org/wiki/Biophoton
Chemiluminescence: https://en.wikipedia.org/wiki/Chemiluminescence
Electrochemiluminescence: https://en.wikipedia.org/wiki/Electrochemiluminescence
Quantum dot display / "QLED": https://en.wikipedia.org/wiki/Quantum_dot_display
Could it be possible? Analyzing the inputs and outputs is useful in natural systems, as well.
Ask HN: What are your go to SaaS products for startups/MVPs?
Looking for some inspiration. I've done a lot of MVPs/early-stage apps over the years and I tend to lean on the same SaaS portfolio for mails, text gateways, payments, etc., but I'm sure I've missed a few valuable additions.
Here's a few I use:
Mails: Mailchimp / Mandrill
Payment: Paylike
Search: Algolia
https://StackShare.io and https://FounderKit.com are great places to find reviews of SaaS services:
> mails,
https://founderkit.com/growth-marketing/email-marketing/revi...
https://stackshare.io/email-marketing
https://zapier.com/learn/email-marketing/
> text gateways,
https://founderkit.com/apis/sms/reviews
https://stackshare.io/voice-and-sms
> payments
https://founderkit.com/apis/credit-card-processing/reviews
https://stackshare.io/payment-services
Both have categories:
https://stackshare.io/categories
https://founderkit.com/reviews
If you're looking for market/competition research, I'd recommend https://coscout.com as well, for info about companies, funding, competitors, tech stack etc
It's in alpha right now, so you'll have to get in the waitlist, though they're only a few weeks away from launching AFAIK.
Full disclosure: A friend of mine is building this :)
Long term viability of SaaS solutions is definitely worth researching.
Is this something that's going to get acquired and be extinguished?
What are our switching costs?
How do we get our data in a format that can be: read into our data warehouse/lake and imported into an alternate service if necessary in the future?
How does Coscout compare to e.g. Crunchbase, PitchBook (Morningstar), YCharts, AngelList?
Ask HN: Do you read aloud or silently in your minds?
Most times while reading a new topic that I am not familiar with, I tend to read aloud in my mind. Yet that changes based on the content and the way it is written.
When I'm focused, I notice that reading silently helps increase my reading speed and cognition, like everything is flowing in.
Other times I don't seem to understand anything if I'm not reading it aloud in my mind.
Has anyone noticed such a thing, and if so, can you share any tips or information you've learned about this behavior?
Subvocalization https://en.wikipedia.org/wiki/Subvocalization
Speed reading https://en.wikipedia.org/wiki/Speed_reading
Ask HN: How do you deploy a Django app in 2020?
Hi. I'm a mid-level software engineer trying to deploy a small (2000 max users) django app to production.
If I Google "How to deploy a Django app", I get 10+ different answers.
Can anyone on HN help me, please?
You'll probably get 10 different answers here too.
I use uwsgi in emperor mode. I have a Makefile that runs a few SSH commands on production to clone my git repo at a specific commit, build a virtual environment from requirements.txt, and atomically swap a symlink to a uwsgi socket. Nginx in front of it all. It's pretty much the Python equivalent of FTPing HTML files to a web server.
Uwsgi is terrifying because it has so many options (and I wonder how secure it is), but it has never failed me. Packaging and containerizing things sounds cool, but I just can't justify spending time on it when my setup works fine (I'm a solo dev).
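For reference, a minimal emperor-mode vassal config might look like the following sketch; the paths, module name, and process count are hypothetical, not the parent commenter's actual setup:

```ini
; /etc/uwsgi/vassals/myapp.ini -- hypothetical paths; the emperor watches
; the vassals directory and (re)starts a worker per .ini file it finds.
[uwsgi]
chdir = /srv/myapp/current
module = myapp.wsgi:application
virtualenv = /srv/myapp/venv
master = true
processes = 4
socket = /run/uwsgi/myapp.sock
chmod-socket = 660
vacuum = true
```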
If you have only one production server, dokku is "A docker-powered PaaS that helps you build and manage the lifecycle of applications." Dokku supports Heroku buildpack deployment (buildstep), Procfiles, Dockerfile deployment, Docker image deployment, git deployment (gitreceive), or tarfile deployments. https://github.com/dokku/dokku
There are a number of plugins for Dokku. Dokku ships with the nginx plugin as the HTTP frontend proxy. Dokku supports SSL certs with the certs plugin.
When you need to move to more than one server, what do you do? There's now a dokku-scheduler-kubernetes plugin which can do HA (high availability) which is worth reading about before you develop and document your own deployment workflow. https://github.com/dokku/dokku-scheduler-kubernetes
I also always put build, test, and deployment commands in a Makefile.
Package it; as a container or as containers that install a RPM/DEB/APK/Condapkg/Pythonpkg (possibly containing a zipapp). Zipapps are fast.
If you have any non-python dependencies, a Pythonpkg only solves for part of the packaging needs.
Producing a packaged artifact should be easy and part of your CI build script.
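As a quick illustration of the zipapp option, the stdlib `zipapp` module packages a directory into a single runnable archive; `hello_app` is a made-up example name:

```shell
# Build and run a Python zipapp with only the stdlib.
mkdir -p hello_app
cat > hello_app/__main__.py <<'EOF'
print("hello from a zipapp")
EOF
python3 -m zipapp hello_app -o hello.pyz
python3 hello.pyz
```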
Here's the cookiecutter-django production docker-compose.yml with containers for django, celery, postgres, redis, and traefik as a load balancer: https://github.com/pydanny/cookiecutter-django/blob/master/%...
Cookiecutter-django also includes a Procfile.
With k8s, you have an ingress (~load balancer + SSL termination proxy) other than traefik.
You can generate k8s YML from docker-compose.yml with Kompose.
I just found this which describes using GitLab CI with Helm: https://davidmburke.com/2020/01/24/deploy-django-with-helm-t...
What is the command to scale up or down? Do you need a geodistributed setup (on multiple providers' clouds)? Who has those credentials and experience?
How do you do red/green or rolling deployments?
Can you run tests in a copy of production?
Can you deploy when the tests that run on git commit pass?
What runs the database migrations in production; while users are using the site?
If something deletes the whole production setup or the bus factor is 1, how long does it take to redeploy from zero; and how much manual work does it take?
CI + Ansible + Terraform + Kubernetes.
Whatever tools you settle on, django-environ for a 12 Factor App may be advisable. https://github.com/joke2k/django-environ
The Twelve-Factor App: https://12factor.net/
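As an illustration of the 12-factor idea (all config from the environment), here is a stdlib-only sketch of roughly what django-environ does with a `DATABASE_URL`; the URL and engine string are hypothetical:

```python
import os
from urllib.parse import urlparse

# Hypothetical 12-factor config: the deploy environment supplies the URL.
os.environ.setdefault("DATABASE_URL", "postgres://user:pw@db.example:5432/app")

url = urlparse(os.environ["DATABASE_URL"])
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": url.path.lstrip("/"),
        "USER": url.username,
        "PASSWORD": url.password,
        "HOST": url.hostname,
        "PORT": url.port,
    }
}
print(DATABASES["default"]["HOST"])  # db.example
```

The point is that the same artifact deploys unchanged to staging and production; only the environment differs.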
Containers from first principles
I recently discovered systemd-nspawn and was amazed at how lightweight a basic container can be.
"Docker Without Docker" (2015) explains /sbin/init and systemd-nspawn. Systemd did not exist when docker was first created. https://chimeracoder.github.io/docker-without-docker/
That's not what I remember. Wikipedia backs up that recollection as well [0], marking systemd's initial release in 2010. Docker's initial release is listed as 2013 [1]. Maybe that was true of the dotCloud internal releases, and certainly not all of the EL and other Linux distros had adopted systemd during 2010-2013. Certainly after 2014 or 2015, systemd had spread to the major Linux distros, so Docker could have chosen to take a systemd-based approach at that point.
Are there other systemd + containers solutions?
"Chapter 4. Running containers as Systemd services with Podmam" https://access.redhat.com/documentation/en-us/red_hat_enterp...
AFAIU, when running containers with systemd:
- logs go to journald by default
- there's no docker-compose for just the [name-prefixed] containers in the docker-compose.yml,
- you can use systemd unit template parametrization
- it's not as easy to collect metrics on every container on the system without a read-only docker socket: how many containers are running, how much RAM quota are they assigned and utilizing? What are the filesystem and port mappings?
- you can run containers as non-root
- you can run containers in systemd timer units
- you use runC to handle seccomp
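The points above can be illustrated with a sketch of a systemd template unit wrapping podman (`%i` is the instance parameter, so `systemctl start container@nginx` runs one container per instance name); the unit path, image naming, and flags here are assumptions, not a canonical recipe:

```ini
# /etc/systemd/system/container@.service -- hypothetical template unit;
# logs go to journald, and the unit can be started by timer units too.
[Unit]
Description=Podman container: %i
After=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman rm -f %i
ExecStart=/usr/bin/podman run --name %i --rm docker.io/library/%i:latest
ExecStop=/usr/bin/podman stop %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```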
... You can do cgroups and namespaces with just systemd; but keeping chroots/images upgraded is outside the scope of systemd: where is the ideal boundary between systemd and containers?
See this comment regarding per-container MAC MCS labels: https://news.ycombinator.com/item?id=23430959
There's much additional complexity that justifies k8s / OpenShift: when would I want to manage containers with just systemd units?
> Many people might think the word “container” has a specific meaning within the Linux kernel; however the kernel has no notion of a “container”. The word has been synonymous with a variety of Linux tooling which when applied give the resemblance of what we expect a container to be.
Before LXC ( https://LinuxContainers.org ) and CNCF ( https://landscape.cncf.io/ ) and OCI ( https://opencontainers.org/ ), for shared-kernel VPS hosting ("virtual private server"; root on a shared box), there was OpenVZ (which requires a patched kernel and AFAIU still has features, like bursting, not present in cgroups).
Docker no longer has an LXC driver: libcontainer (opencontainers/runc) is the story now. The LXC docs have a great list of utilized kernel features that's also still true for docker-engine = runC + moby. The LXC docs: https://linuxcontainers.org/lxc/introduction/ :
> Current LXC uses the following kernel features to contain processes:
> ## Kernel namespaces (ipc, uts, mount, pid, network and user)
>> Namespaces are a feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources while another set of processes sees a different set of resources. https://en.wikipedia.org/wiki/Linux_namespaces
> ## Apparmor and SELinux profiles https://en.wikipedia.org/wiki/AppArmor / https://en.wikipedia.org/wiki/Security-Enhanced_Linux
udica is an interesting tool for creating SELinux policies for containers.
Is it possible for each container to run confined with a different SELinux label?
> ## Seccomp policies https://en.wikipedia.org/wiki/Seccomp
See below re: Seccomp.
> ## Chroots (using pivot_root) https://en.wikipedia.org/wiki/Chroot
Chroots and symlinks, Chroots and bind mounts, Chroots and overlay filesystems, Chroots and SELinux context labels.
FWIU, Chroots are a native feature of filesystem syscalls in Fuchsia.
> ## Kernel capabilities
https://wiki.archlinux.org/index.php/Capabilities :
>> "Capabilities (POSIX 1003.1e, capabilities(7)) provide fine-grained control over superuser permissions, allowing use of the root user to be avoided. Software developers are encouraged to replace uses of the powerful setuid attribute in a system binary with a more minimal set of capabilities. Many packages make use of capabilities, such as CAP_NET_RAW being used for the ping binary provided by iputils. This enables e.g. ping to be run by a normal user (as with the setuid method), while at the same time limiting the security consequences of a potential vulnerability in ping."
> ## CGroups (control groups) https://en.wikipedia.org/wiki/Cgroups
Control groups enable per-process (and thus per-container) resource quotas. Other than limiting the impact of resource exhaustion, cgroups are not a security feature of the Linux kernel.
Here's a helpful explainer of the differences between some of these kernel features; which, combined, have become somewhat ubiquitous:
From "Formally add support for SELinux" (k3s #1372) https://github.com/rancher/k3s/issues/1372#issuecomment-5817... :
> https://blog.openshift.com/securing-kubernetes/
>> The main thing to understand about SELinux integration with OpenShift is that, by default, OpenShift runs each container as a random uid and is isolated with SELinux MCS labels. The easiest way of thinking about MCS labels is they are a dynamic way of getting SELinux separation without having to create policy files and run restorecon.
>> If you are wondering why we need SELinux and namespaces at the same time, the way I view it is namespaces provide the nice abstraction but are not designed from a security first perspective. SELinux is the brick wall that’s going to stop you if you manage to break out of (accidentally or on purpose) from the namespace abstraction.
>> CGroups is the remaining piece of the puzzle. Its primary purpose isn’t security, but I list it because it regulates that different containers stay within their allotted space for compute resources (cpu, memory, I/O). So without cgroups, you can’t be confident your application won’t be stomped on by another application on the same node.
From Wikipedia: https://en.wikipedia.org/wiki/Seccomp :
> seccomp (short for secure computing mode) is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a "secure" state where it cannot make any system calls except exit(), sigreturn(), read() and write() to already-open file descriptors. Should it attempt any other system calls, the kernel will terminate the process with SIGKILL or SIGSYS.[1][2] In this sense, it does not virtualize the system's resources but isolates the process from them entirely.
... SELinux is one implementation of MAC (Mandatory Access Controls) that is built upon the LSM (Linux Security Modules) support in the Linux kernel. Some distros include policy sets for Docker hosts and lots of other packages that could be installed; see: "Formally add support for SELinux" (k3s #1372) https://github.com/rancher/k3s/issues/1372#issuecomment-5817...
How many people did it take to build the Great Pyramid?
> The potential energy of the pyramid—the energy needed to lift the mass above ground level—is simply the product of acceleration due to gravity, mass, and the center of mass, which in a pyramid is one-quarter of its height. The mass cannot be pinpointed because it depends on the specific densities of the Tura limestone and mortar that were used to build the structure; I am assuming a mean of 2.6 metric tons per cubic meter, hence a total mass of about 6.75 million metric tons. That means the pyramid’s potential energy is about 2.4 trillion joules.
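The quoted estimate checks out; a quick sketch, assuming the Great Pyramid's original height of roughly 146.6 m:

```python
# Reproduce the quoted estimate: PE = m * g * (height / 4),
# since a pyramid's center of mass sits at one-quarter of its height.
mass_kg = 6.75e6 * 1000   # 6.75 million metric tons
g = 9.81                  # m/s^2
height_m = 146.6          # assumed original height of the Great Pyramid

potential_energy_j = mass_kg * g * (height_m / 4)
print(f"{potential_energy_j:.2e} J")  # ~2.4e12 J, i.e. ~2.4 trillion joules
```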
In "Lost Technologies of the Great Pyramid" (2010) and "The Great Pyramid Prosperity Machine: Why the Great Pyramid was Built!" (2011), Steven Myers contends that the people who built the pyramids were master hydrologists who built a series of locks from the Nile all the way up the sides of the pyramids and pumped water up to a pool of water on the topmost level; where they used buoyancy and mechanical leverage by way of a floating barge crane in order to place blocks. This would explain how and why the pyramids are water tight, why explosive residue has been found in specific chambers, and why boats have been found buried at the bases of the pyramids.
https://www.amazon.com/dp/B0045Y26CC/
There are videos: http://www.thepump.org/video-series-2
https://www.youtube.com/playlist?list=PLt_DvKGJ_QLYvJ3IdVKXU...
I'm not aware of other explanations for how friction could have been overcome in setting the blocks such that they are watertight (in the later Egyptian pyramids).
AFAIU, the pyramids of South America appear to be of different - possibly older - construction methods.
Solar’s Future is Insanely Cheap
Power markets are more complicated than most people realize. One thing to note is that solar power can be cheaper than gas, but still not be economic. The fact of the matter is that an intermittent kWh is not as valuable as an on-demand reliable kWh to a utility whose number 1 priority is reliability. Even as solar is acquired at lower and lower prices, if evening power is generated by expensive and inefficient gas turbines, the customer might not see costs go down (and emissions might not go down either!). Solar is clearly economic in many markets, but we'll never get grid emissions in a place like California down much more without storage.
As mentioned in the article, solar power is insanely cheap. So while you are right that a kWh of solar is not, on average, as valuable as a kWh of semidispatchable coal, the insane cheapness means that at some point, it is economical to massively overbuild renewable and decommission fossil fuel plants. You provide no evidence for the implausible claim that solar causes emissions to go up: your argument is only that the emissions reducing effect is muted, which may be true.
Also, while storage would be helpful, it is not the only way to enable a renewable transition. Additional transmission is enormously helpful: as the sun goes down on California, the wind is picking up in the Midwest. And don’t forget demand response: if smart thermostats received price signals (maybe we should precool this house...) that would alleviate the evening ramp-up issue.
So I claim we’ll need less storage than “a whole day’s usage”. But the learning curve applies to batteries as well! This storage won’t cost as much anyway.
The whole issue of intermittency is overrated. While a single solar panel might generate intermittently, the solar fleet across a whole state generates more predictably.
I am predicting that grid emissions will come down a lot over the coming decades. Partially I’m predicting the past: they’ve already come down, a lot!
> if smart thermostats received price signals (maybe we should precool this house...) that would alleviate the evening ramp-up issue.
Is there an existing model for retail intraday rates? Would intraday rates be desirable for all market participants?
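As a toy illustration of the precooling idea (the function name and prices here are made up; a real scheme would also weigh thermal losses and comfort constraints), shifting load to the cheapest hours given day-ahead prices is a small optimization:

```python
def precool_hours(hourly_prices, hours_needed):
    """Pick the cheapest hours to run (pre)cooling, given day-ahead prices."""
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return sorted(ranked[:hours_needed])

# afternoon solar glut is cheap, the evening ramp is expensive (made-up $/MWh)
prices = [30, 28, 25, 12, 8, 7, 9, 45, 60, 55]
print(precool_hours(prices, 3))  # -> [4, 5, 6]: cool ahead of the evening peak
```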
"Add area for curtailment data?" https://github.com/tmrowco/electricitymap-contrib/issues/236...
Demo of an OpenAI language model applied to code generation [video]
So that's basically program synthesis from natural language (ish) specifications (i.e. the comments).
I can see this being a useful tool [1]. However, I don't expect any ability for innovation. At best this is like having an exceptionally smart autocomplete function that can look up code snippets on SO for you (provided those code snippets are no longer than one line).
That's not to say that it can't write new code, that nobody has quite written before in the same way. But in order for a tool like this to be useful it must stick as close as possible to what is expected- or it will slow development down rather than helping it. Which means it can only do what has already been done before.
For instance- don't expect this to come up with a new sorting algorithm, out of the blue, or to be able to write good code to solve a certain problem when the majority of code solving that problem on github happens to be pretty bad.
In other words: everyone can relax. This will not take your job. Or mine.
____________
[1] I apologise to the people who know me and who will now be falling off their chairs. OK down there?
I think you are underselling the potential of a model which deeply understands programming. Imagine combining such a model with something like AutoML-Zero: https://arxiv.org/abs/2003.03384 It may not be 'creative', but used as tab-completion it's not being rewarded, incentivized, or used in any way that would expose its ability to create a new sorting algorithm.
I agree on the tab-completion part. Something like Gmail's smart-compose could have potentially huge benefits here.
But I'm not sure about the "deeply understand programming" part. Language modelling and "AI", in its current form, uncovers only statistical correlations and barely scratches the surface of what "understanding" is. This has restricted deployment of majority of academic research into the real-world and this, I believe, is no different and will work only in constrained settings.
Edit: typo
It would be nice to have an AI that could write unit tests, or look over your code and understand and explain where you might have bugs.
> look over your code and understand and explain where you might have bugs.
This would certainly be interesting. I'm not aware of active research going on in this area (any pointers would be helpful!).
This would require an agent to have thorough understanding of the logic you're trying to implement, and locate the piece of code where it silently fails. For this you'd again need a training dataset where the input is a piece of code and the supervision signal (the output) is location of the bug. I could imagine some sort of self-supervision to tackle this initially where you'd intentionally introduce bugs in your code to generate training data. But not sure how far this can go!
1. Generate test cases from function/class/method definitions.
2. Generate test cases from fuzz results.
3. Run tests and walk outward from symbols around relevant stack-trace frames (line numbers).
4. Mutate and run the test again.
...
Model-based Testing (MBT) https://en.wikipedia.org/wiki/Model-based_testing
> Models can also be constructed from completed systems
> At best this is like having an exceptionally smart autocomplete function that can look up code snippets on SO for you (provided those code snippets are no longer than one line).
Yeah, all it could do for you is autocomplete around what it thinks the specification might be at that point in time.
> But what if Andy gets another dinosaur, a mean one? -- Toy Story (1995)
Future of the human climate niche
How many degrees Celsius hotter would that be for billions of people in 50 years?
> The Paris Agreement's long-term temperature goal is to keep the increase in global average temperature to well below 2 °C above pre-industrial levels; and to pursue efforts to limit the increase to 1.5 °C, recognizing that this would substantially reduce the risks and impacts of climate change. This should be done by reducing emissions as soon as possible, in order to "achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases" in the second half of the 21st century. It also aims to increase the ability of parties to adapt to the adverse impacts of climate change, and make "finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development."
> Under the Paris Agreement, each country must determine, plan, and regularly report on the contribution that it undertakes to mitigate global warming. [6] No mechanism forces [7] a country to set a specific emissions target by a specific date, [8] but each target should go beyond previously set targets.
And then this is what was decided:
> In June 2017, U.S. President Donald Trump announced his intention to withdraw the United States from the agreement. Under the agreement, the earliest effective date of withdrawal for the U.S. is November 2020, shortly before the end of President Trump's 2016 term. In practice, changes in United States policy that are contrary to the Paris Agreement have already been put in place.[9]
https://en.wikipedia.org/wiki/Paris_Agreement
Ask HN: Best resources for non-technical founders to understand hacker mindset?
Background: technical founder wondering what reading to recommend to a business focused founder for them to grok the hacker mindset. I've thought perhaps Mythical Man Month and How To Become A Hacker (Eric Raymond essay) but not sure they're quite right.
Any suggestions?
(In case it helps an analogue in the mathematical world might be A Mathematician's Apology or Gödel, Escher, Bach.)
Let me sum up all possible books about understanding the "hacker" (terrible word by the way, because of multiple meanings, which meaning are we talking about?) mindset, to a management perspective:
1) True "hackers" value knowledge over money.
2) True "hackers" value doing things once and doing them right, no matter how long that takes. (Compare to the business mindset of "we need it now", or "we needed it yesterday")
3) True "hackers" value taking ownership in their work, that is, whatever they work on becomes an extension of themselves, much like an artist working on a work of art.
4) True "hackers" are not about work-arounds. If/when work-arounds are used, it's because there's an artificial timeframe (as might be found in the corporate world), and a lack of understanding of the infrastructure that created the need for that work-around.
But, all of these virtues run counter to the demands of business, which constantly wants more things done faster, cheaper, with more features, more complexity, less testing, and doesn't want to worry about problems that may be caused by all of those things in the future (less accountability) -- as long as customer revenue can be collected today.
You see, a true "hacker's" values -- are completely different than those of big business...
And business people wonder why there's stress and burnout among tech people...
> 3) True "hackers" value taking ownership in their work, that is, whatever they work on becomes an extension of themselves, much like an artist working on a work of art
There's something to be said about owning your work, but I have to disagree that unhealthy attachment to work products is a universal attribute of technical founder hackers. It's not a kid, it's a thing that was supposed to be the best use of the resources and information available at the time.
I must have confused this point with vanity and retention in projecting my own counterproductive anti-patterns.
Being prolific is not the objective for a true hacker; but a guy I know (not me) mentioned something about starting projects and seeing the next 5 years of potentially happily working on that project, too.
Dissecting the code responsible for the Bitcoin halving
> The difficulty of the calculations are determined by how many zeroes need to be at the front. [...]
The difficulty is actually not determined by the number of zeroes (as was initially the case).
https://en.bitcoinwiki.org/wiki/Difficulty_in_Mining :
> The Bitcoin network has a global block difficulty. Valid blocks must have a hash below this target. Mining pools also have a pool-specific share difficulty setting a lower limit for shares.
"Less than the target" instead of "count leading zeroes" makes it possible for the difficulty to be adjusted in much finer increments at each difficulty retargeting.
Difficulty retargeting occurs every 2016 blocks (~2 weeks at ~10 minutes per block, assuming the mining power doesn't suddenly disappear; longer block times could make it take months to reach 2016 blocks, according to "What would happen if 90% of the Bitcoin miners suddenly stopped mining?" https://bitcoin.stackexchange.com/questions/22308/what-would... )
Difficulty is adjusted up or down at each 2016-block retargeting in order to keep the block time near ~10 minutes.
The block reward halving occurs every ~4 years (210,000 blocks).
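The halving logic itself is tiny; a Python transliteration of the logic in Bitcoin Core's GetBlockSubsidy (50 BTC initial reward, halved every 210,000 blocks):

```python
def block_subsidy(height, interval=210_000):
    """Block reward in satoshis, halved every `interval` blocks
    (mirrors Bitcoin Core's GetBlockSubsidy)."""
    halvings = height // interval
    if halvings >= 64:  # a >=64-bit right shift would be undefined in C
        return 0
    subsidy = 50 * 100_000_000  # 50 BTC, in satoshis
    return subsidy >> halvings  # integer halving: no fractional satoshis

print(block_subsidy(630_000))  # -> 625000000 satoshis (6.25 BTC, third era)
```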
Relatedly, Moore's law observes/predicts that processing power (as measured by transistor count per unit) will double every 2 years while price stays the same. Is energy efficiency independent of transistor count? https://en.wikipedia.org/wiki/Moore%27s_law
Ask HN: Does mounting servers parallel with the temperature gradient trap heat?
Heat rises. Is heat trapped in the rack? Would mounting servers sideways (vertically) allow heat to transfer out of the rack?
Many systems have taken the vertical-mount approach over the years: blade servers, routers, modems, and various gaming systems.
Horizontally-mounted: parallel with the floor
Vertically-mounted: perpendicular to the floor
[deleted]
Thermodynamics https://en.wikipedia.org/wiki/Thermodynamics
Are engine cylinders ever mounted horizontally? Why or why not?
> Heat rises.
Warmer air is less dense / more buoyant; so it floats.
"Does hot air really rise?" https://physics.stackexchange.com/questions/6329/does-hot-ai...
- Water ice floats because – somewhat uniquely – solid water is less dense than liquid water.
> Is heat trapped in the rack?
Probably.
> Would mounting servers sideways (vertically) allow heat to transfer out of the rack?
How could we find studies that have already tested this hypothesis?
What does the 'rc' in `.bashrc`, etc. mean?
For me, the hard part is remembering if .bashrc is supposed to source .bash_profile or vice versa.
info bash -n "Bash Startup Files"
https://www.gnu.org/software/bash/manual/html_node/Bash-Star...
Google ditched tipping feature for donating money to sites
> When asked, Google confirmed that the designs were an internal idea it explored last year but decided not to pursue as part of [Google Contributor] and Google Funding Choices, which lets sites ask visitors to disable ad blockers, or instead buy a subscription or pay a per page fee to remove ads.
Could this be built on Web Monetization API (ILP (Interledger Protocol)) and e.g. Google Pay as one of many possible payment/card/cryptocurrency processing backends; just like Coil is built on Web Monetization API?
Innovating on Web Monetization: Coil and Firefox Reality
Coil: $5/mo; content creators get a proportional cut of that amount according to what is browsed with the browser extension enabled or with the Puma browser; and it's private: no tracking.
> Coil sends payments via the Interledger Protocol, which allows any currency to be used for sending and receiving.
It looks like the Web Monetization API is not yet listed on the Website Monetization Wikipedia page: https://en.wikipedia.org/wiki/Website_monetization
Quoting from earlier this week:
> Web Monetization API (ILP: Interledger Protocol)
>> A JavaScript browser API which allows the creation of a payment stream from the user agent to the website.
>> Web Monetization is being proposed as a #W3C standard at the Web Platform Incubator Community Group.
> https://webmonetization.org/
> Interledger: Web Monetization API https://interledger.org/rfcs/0028-web-monetization/
Ask HN: Recommendations for online essay grading systems?
Which automated essay grading systems would you recommend? Are they open source?
How can we identify biases in these objective systems?
What are your experiences with these systems as authors and graders?
Who else remembers using the Flesch-Kincaid Grade Level metric in Word to evaluate school essays? https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readabi...
Imagine my surprise when I learned that this metric is not one that was created for authors to maximize: reading ease for the widest audience is not an objective in some departments, but a requirement.
What metrics do and should online essay grading systems present? As continuous feedback to authors, or as final judgement?
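For reference, the Flesch-Kincaid grade level is just a linear formula over three counts; given word, sentence, and syllable tallies (counting syllables reliably is the hard part, elided here):

```python
def fk_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw counts (the standard published formula)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# e.g. 100 words, 10 sentences, 130 syllables:
print(round(fk_grade(100, 10, 130), 2))  # -> 3.65: roughly a 4th-grade level
```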
I'm reminded of a time in high school when an essay that I wrote was flagged by an automated essay verification engine as plagiarism. I certainly hadn't plagiarized, and it was up to me to show that each flagged, keyword-similar internet resource was not an uncited source of my paper. I disengaged. I later wrote an essay about how keyword search tools could be helpful to students doing original research. True story.
Decades later, I would guess that human review is still advisable.
This need of mine to have others validate my unpaid work has nothing to do with that traumatic experience.
I still harbor this belief in myself: that what I have to say is worth money to others, and that - someday - I'll pay a journal to consider my ScholarlyArticle for publishing in their prestigious publication with maybe even threaded peer review (and #StructuredPremises linking to Datasets and CreativeWorks that my #LinkedMetaAnalyses are predicated upon). Someday, I'll develop an online persona as a scholar, as a teacher, maybe someday as a TA or an associate professor and connect my CV to any or all of the social networks for academics. I'll work to minimize the costs of interviewing and searching public records. My research will be valued and funded.
Or maybe, like 20% time, I'll find time and money on the side for such worthwhile investigations; and what I produce will be of value to others: more than just an exercise in hearing myself speak.
In my years of internet communications, I've encountered quite a few patrons; lurkers; participants; and ne'er-do-wells who'll order 5 free waters, plaster their posters to the walls, harass paying customers, and just walk out like nothing's going to happen. Moderation costs time and money; and it's a dirty job that sometimes pays okay. There are various systems for grading these comments, these essays, these NewsArticles, these ScholarlyArticles. Human review is still advisable.
> How can we identify biases in these objective systems?
Modern "journalism" recognizes that it's not a one-way monologue but a dialogue: people want to comment. Ignorantly, helpfully, relevantly, insightfully, experiencedly. What separates the "article part" from the "comments part" of the dialogue? Typesetting, CSS, citations, quality of argumentation?
Ask HN: Systems for supporting Evidence-Based Policy?
What tools and services would you recommend for evidence-based policy tasks like meta-analysis, solution criteria development, and planned evaluations according to the given criteria?
Are they open source? Do they work with linked open data?
> Ask HN: Systems for supporting Evidence-Based Policy?
> What tools and services would you recommend for evidence-based policy tasks like meta-analysis, solution criteria development, and planned evaluations according to the given criteria?
> Are they open source? Do they work with linked open data?
I suppose I should clarify that citizens, consumers, voters, and journalists are not acceptable answers.
Facebook, Google to be forced to share ad revenue with Australian media
What if Google and Facebook just said “No, thank you.”? I think Google and Facebook have more power in this relationship, which might say something about the tech giants. The only possible thing the Aussies could do is block the websites, and that sounds like political suicide to me. I’m guessing the companies will decide to play ball, but if other countries start to follow suit, it might be interesting to see what happens if they start pushing back.
There would be no political suicide there. It's so easy to make Google/Facebook look evil in this case if they don't comply. Just say they are not paying taxes and getting our money abroad etc.
But there is 0% chance of Google/Facebook not complying.
> I think Google and Facebook have more power in this relationship
Not even close.
Don't the French and Spanish examples indicate otherwise? Spain's link tax in 2014 led to the shutting-down of Google News Spain, and France's attempt to charge for Google News snippets led to the removal of those snippets for French sites (followed by a dip in traffic that hurt the publishers far more than it hurt Google). What makes you think that FB/G don't have the power here?
If you don't want them to index your content and send you free traffic, you can already specify that in your robots.txt; for free. https://en.wikipedia.org/wiki/Robots_exclusion_standard
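For example, a publisher that wanted out of Google News specifically (while staying in web search) could serve something like this robots.txt; `Googlebot-News` is Google's documented News crawler:

```
User-agent: Googlebot-News
Disallow: /
```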
There are no ads on Google News.
There is an apparent glut of online news: supply exceeds demand and so the price has fallen.
> There are no ads on Google News.
This is the ironic part of the policy. You're right about the glut, and there was a glut in print 20 years ago, too. Google's definitely hurt businesses, but it's usually through disintermediating them (think Yelp). Google News and search don't really do that for news: snippets and headlines aren't the same as an article. I find it hard to believe that ABC's brand isn't strong enough for them to pull their content from Google and expect people to go to abc.net.au; I just don't think they can sell enough ads to give the content away.
By "hurt", do you mean competed with, by effectively utilizing technology to help people find information about the world from multiple sources?
There are very many news aggregators and most do serve ads next to the headlines they index. I assume that people typically link out from news aggregation sites more than into vertically-integrated services.
Perhaps the content producers / information service providers could develop additional revenue streams in order to subsidize a news aggregation public service. Micropayments (BAT, Web Monetization (ILP)), ads, paywalls, and public and private grants are sources of revenue for content producers.
I think it's disingenuous to blame news aggregation sites for the unprofitability of extremely redundant journalism. What happened to journalism? Internet. Excessive ads. Aren't we all writers these days.
Unfortunately they killed the "most cited" and (was it?) "most in-depth" source-analysis functions of Google News; and now we're stuck with regurgitated news wires and press releases and all of these eyewitness mobile phone videos with two-bit banal commentary and also punditry. How the world has changed.
So, as far as scientific experiments are concerned, it might be interesting to see what the impact of de-listing from free time sites X, Y, and Z is.
Do the papers in Australia and France now intend to compensate journal ScholarlyArticle authors whose work they summarize and hopefully at least cite the titles and URLs of, or the journals themselves?
France rules Google must pay news firms for content
Website monetization https://en.wikipedia.org/wiki/Website_monetization
Web Monetization API (ILP: Interledger Protocol)
> A JavaScript browser API which allows the creation of a payment stream from the user agent to the website.
> Web Monetization is being proposed as a #W3C standard at the Web Platform Incubator Community Group.
Interledger: Web Monetization API https://interledger.org/rfcs/0028-web-monetization/
Khan Academy, for example, accepts BAT (Basic Attention Token) micropayments/microdonations that e.g. Brave browser users can opt to share with the content producers and indexers. https://en.wikipedia.org/wiki/Brave_(web_browser)#Basic_Atte...
Web Monetization w/ Interledger should enable any payments system with low enough transaction costs ("ledger-agnostic, currency agnostic") to be used to pay/tip/donate to content producers who are producing unsensational, unbiased content that people want to pay for.
Paywalls/subscriptions and ads are two other approaches to funding quality journalism.
Should journalists pay ScholarlyArticle authors whose studies they publish summaries of without even citing the DOI/URL and Title; or the journals said ScholarlyArticles are published in? https://schema.org/ScholarlyArticle
Adafruit Thermal Camera Imager for Fever Screening
> Thermal Camera Imager for Fever Screening with USB Video Output - UTi165K. PRODUCT ID: 4579 https://www.adafruit.com/product/4579
> This video camera takes photos of temperatures! This camera is specifically tuned to work in the 30˚C~45˚C / 86˚F~113˚ F range with 0.5˚C / 1˚ F accuracy, so it's excellent for human temperature & fever detection. In fact, this thermal camera is often used by companies/airports/hotels/malls to do a first-pass fever check: If any person has a temperature of over 99˚F an alarm goes off so you can do a secondary check with an accurate handheld temperature meter.
> You may have seen thermal 'FLIR' cameras used to find air leaks in homes, but those cameras have a very wide temperature range, so they're not as accurate in the narrow range used for fever-scanning. This camera is designed specifically for that purpose!
... USB Type-C, SD Card; no price listed yet?
The end of an Era – changing every single instance of a 32-bit time_t in Linux
Thanks!
Year 2038 Problem > Solutions is already updated re: 5.6. https://en.wikipedia.org/wiki/Year_2038_problem
Ask HN: What's the ROI of Y Combinator investments?
To calculate the ROI of YC investments, we could find the terms of the YC investments (x for y%, preference) and find the exit rate (what % of companies exit).
We could search for 'ROI of ycombinator investments' and find valuation numbers from a number of years ago.
From the first page of search results, we'd then learn about "return on capital" and how the standard YC seed terms have changed over the years.
Return on capital: https://en.wikipedia.org/wiki/Return_on_capital
From the See also section of this Wikipedia page, we might discover "Cash flow return on investment" and "Rate of return on a portfolio"
From the "rate of return" Wikipedia page, we might learn that "The return on investment (ROI) is return per dollar invested. It is a measure of investment performance, as opposed to size (c.f. return on equity, return on assets, return on capital employed)." and that "The annualized return of an investment depends on whether or not the return, including interest and dividends, from one period is reinvested in the next period. " https://en.wikipedia.org/wiki/Rate_of_return
From the YCombinator Wikipedia page, we might read that "The combined valuation of the top YC companies was over $155 billion as of October, 2019. [4]" and that "As of late 2019, Y Combinator had invested in >2,000 companies [37], most of which are for-profit. Non-profit organizations can also participate in the main YC program. [38]" and then read about "seed accelerators" and then "business incubators" in search of appropriate metrics for comparing VC performance. https://en.wikipedia.org/wiki/Y_Combinator
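To make those two definitions concrete, here is a minimal sketch of simple vs. annualized return (the numbers are made up for illustration, not actual YC terms):

```python
def roi(gain, cost):
    """Simple return on investment: net return per dollar invested."""
    return (gain - cost) / cost

def annualized(total_return, years):
    """Geometric annualization of a cumulative return."""
    return (1 + total_return) ** (1 / years) - 1

# hypothetical: a $125k seed check becomes $1M at exit after 5 years
r = roi(1_000_000, 125_000)        # 7.0, i.e. an 8x gross multiple
print(round(annualized(r, 5), 3))  # -> 0.516: ~51.6%/yr compounded
```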
ROI is such a frou frou statistic anyway. What does that even mean, ROI? In any case, YC itself is not a public company, per se, AFAICT, so, it's not so easy as going to https://YCharts.com, entering the equity symbol, clicking on "Key Stats", and scrolling down to "Profitability" to review [Gross | EBITDA | Operating] [Profit] Margin.
The LTSE (Long-Term Stock Exchange) is where people who are in this for real are really doing it now.
Microsoft announces Money in Excel powered by Plaid
This looks really useful.
At first glance, I found a number of ways to push transaction data into Google Sheets from the Plaid API:
build-your-own-mint (NodeJS, CircleCI) https://github.com/yyx990803/build-your-own-mint
go-plaid: https://github.com/ebcrowder/go-plaid
Presumably, like the GOOGLEFINANCE function, there's some way to pull data from an API with just Apps Script (~JS), without an auxiliary serverless function, to get the txs from Plaid and post them to the gsheets API?
Lora-based device-to-device smartphone communication for crisis scenarios [pdf]
Sudomesh has been working on one of these devices, disaster radio: https://disaster.radio
I like the solar integration. My ideal product would be a programmable LoRa transceiver integrated into a waterproof battery bank with solar, GPS, and a small touch screen. It should be about the size of a large handheld, which could be mounted on a house, tree, or mast, or carried in a pocket/bag. Apps could include SOS messages, chat, and weather information. If it's extensible (e.g. something like an app store or package manager), I'm sure folks will dream up additional uses.
Sort of like a more hackable/accessible phone and long range radio.
Unfortunately, the Earl tablet never made it to market: https://blog.the-ebook-reader.com/2015/01/26/video-update-ab...
Earl specs: Waterproof; Solar charging; eInk; ANT+; NFC; VHF/UHF transceiver (GMRS, PMR446, UHFCB); GPS; Sensors: Accelerometer, Gyroscope, Magnetometer, Temperature, Barometer, Humidity; AM/FM/SW/LW/WB
LTE, LoRa, 5G, and Hostapd would be great
Being able to plug it into a powerbank and antennas for use as a fixed or portable e.g. BATMAN mesh relay would be great
"LoRa+WiFi ClusterDuck Protocol by Project OWL for Disaster Relief" https://news.ycombinator.com/item?id=22707267
> An opkg (for e.g. OpenWRT) with this mesh software would make it possible to use WiFi/LTE routers with a LoRa transmitter/receiver connected over e.g. USB or Mini-PCIe.
LoRa+WiFi ClusterDuck Protocol by Project OWL for Disaster Relief
> Project OWL (Organization, Whereabouts, and Logistics) creates a mesh network of Internet of Things (IoT) devices called DuckLinks. These Wi-Fi-enabled devices can be deployed or activated in disaster areas to quickly re-establish connectivity and improve communication between first responders and civilians in need.
> In OWL, a central portal connects to solar- and battery-powered, water-resistant DuckLinks. These create a Local Area Network (LAN). In turn, these power up a Wi-Fi captive portal using low-frequency Long-range Radio (LoRa) for Internet connectivity. LoRA has a greater range, about 10km, than cellular networks.
...
> You don't actually need a DuckLink device. The open-source OWL firmware can quickly turn a cheap wireless device into a DuckLink using the -- I swear I'm not making this up -- ClusterDuck Protocol. This is a mesh network node, which can hook up to any other near-by Ducks.
> OWL is more than just hardware and firmware. It's also a cloud-based analytic program. The OWL Data Management Software can be used to facilitate organization, whereabouts, and logistics for disaster response.
Homepage: http://clusterduckprotocol.org/
GitHub: https://github.com/Code-and-Response/ClusterDuck-Protocol
The Linux Foundation > Code and Response https://www.linuxfoundation.org/projects/code-and-response/
An opkg (for e.g. OpenWRT) with this mesh software would make it possible to use WiFi/LTE routers with a LoRa transmitter/receiver connected over e.g. USB or Mini-PCIe.
... cc'ing from https://twitter.com/westurner/status/1238859774567026688 :
OpenWRT is a Make-based embedded Linux distro w/ a LuCI (Lua + JSON + UCI) web interface.
#OpenWRT runs on RaspberryPis, ARM, x86, and MIPS; there's a Docker image. OpenWRT Supported Devices: https://openwrt.org/supported_devices
OpenWRT uses opkg packages: https://openwrt.org/docs/guide-user/additional-software/opkg
I searched for "Lora" in OpenWRT/packages: lora-gateway-hal opkg package: https://github.com/openwrt/packages/blob/master/net/lora-gat...
lora-packet-forwarder opkg package (w/ UCI integration): https://github.com/openwrt/packages/pull/8320
https://github.com/xueliu/lora-feed :
> Semtech packages and ChirpStack [(LoRaserver)] Network Server stack for OpenWRT
> > [In addition to providing node2node/2net connectivity, #batman-adv can bridge VLANs over a mesh (or link), such as for “trusted” client, guest, IoT, and mgmt networks. It provides an easy-to-configure alternative to other approaches to “backhaul”, […]] https://openwrt.org/docs/guide-user/network/wifi/mesh/batman
> I have a few different [quad-core, MIMO] ARM devices without 4G. TIL that the @GLiNetWifi devices ship with OpenWRT firmware (and a mobile config app) and some have 1-2 (Mini-PCIe) 4G w/ SIM slots. Also, @turris_cz has OpenWRT w/ LXC in the kernel build. https://t.co/Rz0Uu5uHJQ
A Visual Debugger for Jupyter
Source: jupyterlab/debugger https://github.com/jupyterlab/debugger
When will jupyter have "highlight and execute" functionality? The cell concept is fine, but I'm constantly copy pasting snippets of code into new cells to get that "incremental" coding approach...
So, I went looking for the answer to this because in the past I've installed the scratchpad extension by installing jupyter_contrib_nbextensions, but those don't work with JupyterLab because there's a new extension model for JupyterLab that requires node and npm.
Turns out that with JupyterLab, all you have to do is right-click and select "New Console for Notebook" and it opens a console pane below the notebook, already attached to the notebook kernel. You can also instead do File > New > Console and select a kernel listed under "Use Kernel From Other Session".
The "New action runInConsole to allow line by line execution of cell content" PR adds a notebook command, `notebook:run-in-console`, but you have to add the associated keyboard shortcut to your config yourself; e.g. `Ctrl Shift Enter` or `Ctrl-G` bound to `notebook:run-in-console`. https://github.com/jupyterlab/jupyterlab/pull/4330
"In Jupyter Lab, execute editor code in Python console" describes how to add the associated keyboard shortcut to your config: https://stackoverflow.com/questions/38648286/in-jupyter-lab-...
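The binding goes in JupyterLab's Settings editor under Keyboard Shortcuts; something along these lines (the exact `selector` value may differ between JupyterLab versions):

```json
{
  "shortcuts": [
    {
      "command": "notebook:run-in-console",
      "keys": ["Ctrl Shift Enter"],
      "selector": ".jp-Notebook.jp-mod-editMode"
    }
  ]
}
```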
Ask HN: What's the Equivalent of 'Hello, World' for a Quantum Computer?
The 'Hello, World' program is one of the simplest programs to demonstrate how to go about writing a program in a new programming language.
What is an equivalent simple program which demonstrates how to write a very simple program for a quantum computer?
I have tried (and failed) to imagine such a program. Can somebody who has actually used a quantum computer show us an actual quantum computer program?
"Getting Started with Qiskit" https://qiskit.org/documentation/getting_started.html
Qiskit / qiskit-community-tutorials > "1 Hello, Quantum World with Qiskit" https://github.com/Qiskit/qiskit-community-tutorials#1-hello...
"Quantum basics with Q#" https://docs.microsoft.com/en-us/quantum/quickstart?tabs=tab...
Qutip notebooks: https://github.com/qutip/qutip#run-notebooks-online
Jupyter Notebooks labeled quantum-computing: https://github.com/topics/quantum-computing?l=jupyter+notebo...
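Absent real hardware, the canonical "hello, quantum world" (the two-gate Bell-state circuit that the Qiskit and Q# quickstarts both build) can be simulated with plain state-vector arithmetic; a sketch with no SDK dependency:

```python
# Basis ordering: amplitudes for |00>, |01>, |10>, |11>
# (qubit 0 is the left/most-significant bit). Start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def hadamard_q0(s):
    # Hadamard on qubit 0 mixes basis states that differ in the first bit.
    r = 1 / 2 ** 0.5
    return [r * (s[0] + s[2]), r * (s[1] + s[3]),
            r * (s[0] - s[2]), r * (s[1] - s[3])]

def cnot_q0_q1(s):
    # CNOT (control qubit 0, target qubit 1) swaps |10> and |11>.
    return [s[0], s[1], s[3], s[2]]

bell = cnot_q0_q1(hadamard_q0(state))
probs = [round(abs(a) ** 2, 6) for a in bell]
# Measurement yields 00 or 11 with probability 1/2 each -- the
# qubits are entangled, which is the whole point of the demo.
```

The same circuit in Qiskit is just `qc.h(0); qc.cx(0, 1)` on a two-qubit `QuantumCircuit`.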
Ask HN: Communication platforms for intermittent disaster relief?
Are there good platforms for disaster relief that work well with intermittent connectivity (e.g. spotty 3G/4G/WiFi/LoRa)?
How can major networks improve in terms of e.g. indicating message delivery status, most recent sync time, sync throttling status due to load, optionally downloading images/audio/video, referring people to local places and/or forms for help with basic needs, etc?
What are some tools that app developers can use to simulate intermittent connectivity when running tests?
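One lightweight pattern for that last question is a flaky test double for the transport plus retry-with-backoff in the client; a minimal sketch (the names are hypothetical, not any particular framework's API):

```python
import random
import time

class FlakyTransport:
    """Test double simulating an intermittent link:
    each send fails with probability `loss_rate`."""
    def __init__(self, loss_rate=0.5, seed=42):
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)  # seeded for reproducible tests
        self.delivered = []

    def send(self, message):
        if self.rng.random() < self.loss_rate:
            raise ConnectionError("simulated link drop")
        self.delivered.append(message)

def send_with_retry(transport, message, retries=10, base_delay=0.0):
    """Retry with exponential backoff until delivery or give up."""
    for attempt in range(retries):
        try:
            transport.send(message)
            return True
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)
    return False

t = FlakyTransport(loss_rate=0.5)
ok = send_with_retry(t, "status: OK")
```

Outside of unit tests, Linux's `tc netem` can impose delay/loss on a real interface, and proxies like Toxiproxy simulate degraded networks between services.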
DroneAid: A Symbol Language and ML model for indicating needs to drones, planes
From the README https://github.com/Code-and-Response/DroneAid :
> The DroneAid Symbol Language provides a way for those affected by natural disasters to express their needs and make them visible to drones, planes, and satellites when traditional communications are not available.
> Victims can use a pre-packaged symbol kit that has been manufactured and distributed to them, or recreate the symbols manually with whatever materials they have available.
> These symbols include those below, which represent a subset of the icons provided by The United Nations Office for the Coordination of Humanitarian Affairs (OCHA). These can be complemented with numbers to quantify need, such as the number of people who need water.
Each of the symbols is drawn within a triangle pointing up:
- Immediate Help Needed (orange; downward triangle \n SOS),
- Shelter Needed (cyan; like a guy standing in a tall pentagon without a floor),
- OK: No Help Needed (green; upward triangle \n OK),
- First Aid Kit Needed (yellow; briefcase with a first aid cross),
- Water Needed (blue; rain droplet), Area with Children in Need (lilac; baby looking thing with a diaper on),
- Food Needed (red; pan with wheat drawn above it),
- Area with Elderly in Need (purple; person with a cane)
So, we're going to need some artists; something to write large things with; some orange, cyan, green, yellow, blue, lilac, red, and purple things; some people who can tell me the difference between lilac (light purple: babies) and purple (darker purple: old people); and some drones that can capture location and imagery.
Note that DroneAid is also a project of The Linux Foundation Code and Response organization.
Ask HN: Computer Science/History Books?
Hi guys, can you recommend interesting books on Computer Science or computer history (similar to Dealers of Lightning) to read during these quarantine times? I really like the subject and am looking for something to keep myself away from the TV at night.
Thank you.
The Information: a History, a Theory, a Flood by James Gleick is great. Starts with Ada Lovelace/Charles Babbage and goes on from there, I found it fascinating.
"The Information: A History, a Theory, a Flood" starts with "1 | Drums That Talk" re: African drum messaging; a complex coding scheme:
> Here was a messaging system that outpaced the best couriers, the fastest horses on good roads with way stations and relays.
https://en.wikipedia.org/wiki/The_Information:_A_History,_a_...
https://www.goodreads.com/book/show/8701960-the-information
From "Polynesian People Used Binary Numbers 600 Years Ago" https://www.scientificamerican.com/article/polynesian-people... :
> Binary arithmetic, the basis of virtually all digital computation today, is usually said to have been invented at the start of the eighteenth century by the German mathematician Gottfried Leibniz. But a study now shows that a kind of binary system was already in use 300 years earlier among the people of the tiny Pacific island of Mangareva in French Polynesia.
Open-source security tools for cloud and container applications
It's odd they don't list OPA Gatekeeper, which is probably the best tool for enforcing security and other best practices in Kubernetes clusters.
List of CNCF open source security projects without the blog post: https://landscape.cncf.io/category=security-compliance&forma...
> List of CNCF open source security projects without the blog post: https://landscape.cncf.io/category=security-compliance&forma...
Thanks for this.
YC Companies Responding to Covid-19
Are life sciences and healthcare familiar verticals for YC?
Good to see money and talent going to such good use.
(Edit) Here's the YC Companies list; which doesn't yet list these new investments:
Biomedical vertical: https://www.ycombinator.com/companies/?vertical=Biomedical
Healthcare vertical: https://www.ycombinator.com/companies/?vertical=Healthcare
"The Y Combinator Database" https://www.ycdb.co/
Show HN: Neh – Execute any script or program from Nginx location directives
And we've come back round full circle from CGI scripts, although separating out headers and body on different fds sounds neat
Many things we’ve dispensed with long ago seem to be forgotten and the younger coders are learning for themselves the hard way. Aka the same way we did.
One thing I'm wondering, however: how popular can CGI be if I don't find it with the query "run script on nginx location directive"?
Nginx probably somewhat-deliberately has FastCGI but not regular CGI for a number of reasons.
CGI has process-per-request overhead.
CGI typically runs processes as the user the webserver is running as; said processes can generally read and write to the unsandboxed address space of the calling process (such as x.509 private certs).
Just about any app can be (D)DoSed; with the process-per-request overhead of CGI, it takes fewer resources to do so.
In order to prevent resource exhaustion due to, e.g., someone benignly hitting reload a bunch of times and thus creating multiple GET requests, applications should enqueue task messages which a limited number of workers retrieve from a (durable) FIFO or priority queue and update the status of.
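A minimal sketch of that enqueue-and-dedupe pattern with the standard library (an in-memory queue stands in for a durable broker):

```python
import queue
import threading

# Collapse bursty duplicate requests into a FIFO worked by a fixed-size
# pool, so N identical reloads trigger one unit of work.
tasks = queue.Queue()
seen = set()
seen_lock = threading.Lock()
results = []

def enqueue(request_id):
    with seen_lock:
        if request_id in seen:      # duplicate request: already queued
            return
        seen.add(request_id)
    tasks.put(request_id)

def worker():
    while True:
        request_id = tasks.get()
        if request_id is None:      # shutdown sentinel
            break
        results.append(f"done:{request_id}")
        tasks.task_done()

pool = [threading.Thread(target=worker) for _ in range(2)]
for t in pool:
    t.start()

for _ in range(5):                  # user mashes reload: 5 identical GETs
    enqueue("report-42")
enqueue("report-43")

tasks.join()                        # wait until queued work is processed
for _ in pool:
    tasks.put(None)
for t in pool:
    t.join()
```

The worker count caps concurrency regardless of how fast requests arrive, which is the resource-exhaustion defense the comment describes.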
Websockets may or may not scale better than long-polling for streaming stdout to a client.
Interesting and really informative! Thanks for sharing!
Ask HN: How can an intermediate-beginner learn Unix/Linux and programming?
For a long time, I’ve been in an awkward position with my knowledge of computers. I know basic JavaScript (syntax and booleans and nothing more). I’ve learned the bare basics of Linux from setting up a Pi-hole. I understand the concept of secure hashes. I even know some regex.
The problem is, I know so little that I can’t actually do anything with this knowledge. I suppose I’m looking for a tutorial that will teach me to be comfortable with the command line and a Unix environment, while also teaching me to code a language. Where should I start?
A really good book is Richard Stevens' Advanced Programming in the UNIX Environment [1]. (It sounds daunting, but it's not too bad when you combine it with another, introductory text on C.) If you stick with C you'll eventually know UNIX/Linux a lot deeper than most. It takes time though, so go easy on yourself.
Also check GitHub for a bunch of repos that contain boilerplate code that is used in most daemons, illustrating signal handling, forking, etc.[2]
I also suggest taking some open source projects from Daniel J. Bernstein (DJB) as examples of how to write secure code: qmail, djbdns, daemontools ... there are lots of great ideas there.[3]
Write a couple of simple programs that utilize your code and expand from there, learn the plumbing, e.g. write a Makefile, learn autotools/autoconf, how to write tests, how to use gdb, how to create shared libs and link a program LD_LIBRARY_PATH/ldconfig, etc ...
Most important: think about something that you would like to write, and go from there. Back in the late '90s I studied RFCs (MIME, SMTP, etc.) and then implemented things that I could actually use myself. Here is a recent project I did some weeks ago (an extreme edge case for security maximalists which will never be used by anyone other than myself ... but I really enjoyed writing it and learned a bunch of new stuff while doing so).[4]
if you need ideas or help with looking at your code, don't hesitate and send me a mail.
[1] https://en.wikipedia.org/wiki/Advanced_Programming_in_the_Un...
> Also check GitHub for a bunch of repos that contain boilerplate code that is used in most daemons, illustrating signal handling, forking, etc.[2]
docker-systemctl-replacement is a (partial) reimplementation of systemd as one python script that can be run as the init process of a container that's helpful for understanding how systemd handles processes: https://github.com/gdraheim/docker-systemctl-replacement/blo...
systemd is written in C: https://github.com/systemd/systemd
> examples of how to write secure code
awesome-safety-critical > Coding Guidelines https://github.com/stanislaw/awesome-safety-critical/blob/ma...
Math Symbols Explained with Python
Average of a finite series: there's a statistics module in Python 3.4+ (statistics.fmean, a faster float variant, was added in 3.8):

from statistics import mean

X = [1, 2, 3]
mean(X)
# may or may not be preferable to
sum(X) / len(X)
https://docs.python.org/3/library/statistics.html#statistics...

Product of a terminating iterable:
import operator
from functools import reduce
# from itertools import accumulate
reduce(operator.mul, X)
Vector norm:

from numpy import linalg as LA
LA.norm(X)
https://docs.scipy.org/doc/numpy/reference/generated/numpy.l...

Function domains and ranges can be specified and checked statically with type annotations (using a checker such as mypy), at runtime with type()/isinstance(), or with something like PyContracts or icontract for checking preconditions and postconditions.
Dot product:
import numpy as np

Y = [4, 5, 6]
np.dot(X, Y)
https://docs.scipy.org/doc/numpy/reference/generated/numpy.d...

Unit vector:
X / np.linalg.norm(X)
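The fragments above, gathered into one self-contained script (standard library only, so the NumPy calls are re-expressed in plain Python):

```python
import math
import operator
from functools import reduce
from statistics import mean

X = [1, 2, 3]
Y = [4, 5, 6]

avg = mean(X)                               # average of a finite series
prod = reduce(operator.mul, X)              # product of a terminating iterable
norm = math.sqrt(sum(x * x for x in X))     # Euclidean (L2) vector norm
dot = sum(x * y for x, y in zip(X, Y))      # dot product
unit = [x / norm for x in X]                # unit vector (norm 1)
```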
Ask HN: Is there a way you can convert a smartphone to a no-contact thermometer?
Wondering: is there an infrared dongle that can convert your phone into a no-contact thermometer to read body temperature?
Infrared thermometer: https://en.wikipedia.org/wiki/Infrared_thermometer
Thermography: https://en.wikipedia.org/wiki/Thermography
IDK what the standard error is for medical temperature estimation with an e.g. FLIR ONE thermal imaging camera for an Android/iOS device. https://www.flir.com/applications/home-outdoor/
I'd imagine that sanitization would be crucial for any clinical setting.
(Edit) "Prediction of brain tissue temperature using near-infrared spectroscopy" (2017) Neurophotonics https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5469395/
"Nirs body temperature" https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=nir...
"Infrared body temperature" https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=inf...
"Infrared thermometer iOS" https://m.alibaba.com/trade/search?SearchText=infrared%20the...
"Infrared thermometer Android" https://alibaba.com/trade/search?SearchText=infrared%20therm...
Employee Scheduling
From "Ask HN: What algorithms should I research to code a conference scheduling app" https://news.ycombinator.com/item?id=15267804 :
> Resource scheduling, CSP (Constraint Satisfaction programming)
CSP: https://en.wikipedia.org/wiki/Constraint_satisfaction_proble...
Scheduling (production processes):
https://en.wikipedia.org/wiki/Scheduling_(production_process...
Scheduling (computing):
https://en.wikipedia.org/wiki/Scheduling_(computing)
... To an OS, a process thread has a priority and sometimes a CPU affinity.
From http://markmail.org/search/?q=list%3Aorg.python.omaha+pysche... :
Pyschedule:
- Src: https://github.com/timnon/pyschedule
From https://github.com/timnon/pyschedule :
> pyschedule is a python package to compute resource-constrained task schedules. Some features are:
- precedence relations: e.g. task A should be done before task B
- resource requirements: e.g. task A can be done by resource X or Y
- resource capacities: e.g. resource X can only process a few tasks
Previous use-cases include:
- school timetables: assign teachers to classes
- beer brewing: assign equipment to brewing stages
- sport schedules: assign stadiums to games
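To make the CSP idea concrete, here's a toy brute-force timetable solver (the data is hypothetical, and this is not pyschedule's API, which instead builds a model for an external solver):

```python
from itertools import product

# Assign each class one (teacher, slot) such that the teacher knows
# the subject and no (teacher, slot) pair is double-booked.
classes = ["math", "physics"]
teachers = {"alice": {"math"}, "bob": {"math", "physics"}}
slots = [9, 10]

def valid(assignment):
    # assignment: one (teacher, slot) pair per class, in order
    for cls, (teacher, _slot) in zip(classes, assignment):
        if cls not in teachers[teacher]:
            return False                            # unqualified teacher
    return len(set(assignment)) == len(assignment)  # no double-booking

domain = list(product(teachers, slots))             # all (teacher, slot) pairs
solutions = [a for a in product(domain, repeat=len(classes)) if valid(a)]
```

Brute force only works for tiny instances; real CSP/scheduling libraries prune the search with constraint propagation or hand the model to a MIP/CP solver.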
... https://en.wikipedia.org/wiki/Slurm_Workload_Manager :
> Slurm is the workload manager on about 60% of the TOP500 supercomputers.[1]
Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[2]
... https://en.wikipedia.org/wiki/Hilbert_curve_scheduling :
> [...] the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space filling problem using Hilbert curves, assigning related tasks to locations with higher levels of proximity.[1] Other space filling curves may also be used in various computing applications for similar purposes.[2]
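The curve-to-grid mapping the article describes is small enough to show; a Python port of the standard d2xy routine from the Wikipedia article's C version (not Slurm's actual code):

```python
def d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid
    (n a power of two)."""
    x = y = 0
    s = 1
    t = d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Consecutive distances land on adjacent cells, which is why packing
# tasks in curve order keeps related tasks physically close.
cells = [d2xy(8, d) for d in range(64)]
```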
Show HN: Simulation-based high school physics course notes
I love the train demo for general & special relativity; it intuitively explains the theories from frames of reference (ground/train).
This helped me instantly understand the 2 theories, great work!
WebAssembly brings extensibility to network proxies
FWIW, Ethereum WASM (ewasm) has a cost (in "particles" ("gas")) for each WebAssembly opcode. [1]
Opcode costs help to incentivize efficient code.
ewasm/design /README.md [2] links to the complete WebAssembly instruction set. [3]
[1] https://ewasm.readthedocs.io/en/mkdocs/determining_wasm_gas_...
[2] https://github.com/ewasm/design/blob/master/README.md
[3] https://webassembly.github.io/spec/core/appendix/index-instr...
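The costing idea can be sketched as a toy interpreter; the opcode names and prices here are invented, but the deduct-before-execute loop mirrors how ewasm charges particles per WebAssembly instruction:

```python
# Per-opcode metering for a toy stack machine: each instruction is
# charged before it runs, and execution halts once the budget is spent.
COSTS = {"push": 1, "add": 2, "mul": 4}

def run(program, gas):
    stack = []
    for op, *args in program:
        gas -= COSTS[op]                # charge before executing
        if gas < 0:
            raise RuntimeError("out of gas")
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1], gas

result, gas_left = run([("push", 2), ("push", 3), ("mul",)], gas=10)
# result == 6 with 4 gas left (10 - 1 - 1 - 4)
```

Because every operation has a price, a cheaper program is literally cheaper to run, which is the efficiency incentive mentioned above.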
Pandemic Ventilator Project
> https://www.projectopenair.org/
From https://app.jogl.io/project/121#about
>> Current Status of the project
>> The main bottleneck currently (2020-03-13) is organization / management.
>> […] This is an organization of experts and hobbyists from around the globe.
The link https://app.jogl.io/project/121#about doesn't go anywhere anymore. And there are no results for "ventilator" in the app.jogl.io search.
I see a "Ventilator Project" heading?
(edit) here's the link to their 'Ventilator' document: https://docs.google.com/document/d/1RDihfZIOEYs60kPEIVDe7gms...
Low-cost ventilator wins Sloan health care prize (2019)
Relating to coronavirus (COVID-19), if you require a ventilator, your survival chances are already slim.
> The median time from illness onset (ie, before admission) to discharge was 22·0 days (IQR 18·0–25·0), whereas the median time to death was 18·5 days (15·0–22·0; table 2). 32 patients required invasive mechanical ventilation, of whom 31 (97%) died. The median time from illness onset to invasive mechanical ventilation was 14·5 days (12·0–19·0). Extracorporeal membrane oxygenation was used in three patients, none of whom survived. Sepsis was the most frequently observed complication, followed by respiratory failure, ARDS, heart failure, and septic shock (table 2). Half of non-survivors experienced a secondary infection, and ventilator-associated pneumonia occurred in ten (31%) of 32 patients requiring invasive mechanical ventilation. The frequency of complications were higher in non-survivors than survivors (table 2)
https://www.thelancet.com/action/showPdf?pii=S0140-6736%2820...
Ventilator availability is limiting our ability to get care to the most people we can.
True, but at least Italy claims that the main bottleneck in ventilator availability isn't in the machines themselves but in trained doctors and nurses to support patients on ventilators.
AI can detect coronavirus from CT scans in twenty seconds
Is it possible to detect coronavirus with NIRS (Near-Infrared Spectroscopy)? https://en.wikipedia.org/wiki/Near-infrared_spectroscopy
FWIU, the equipment costs and scan times are lower with NIRS than with CT or MRI? And infrared is zero rads?
(Edit) I think it was this or the TED video that had the sweet demo: "The Science of Visible Thought & Our Translucent Selves | Mary Lou Jepsen | SU Global Summit" https://youtu.be/IRCXNBzfeC4
Are these devices in production?
AutoML-Zero: Evolving machine learning algorithms from scratch
Next:
- Autosuggest database tables to use
- Automatically reserve parallel computing resources
- Autodetect data health issues and auto fix them
- Autodetect concept drift and auto fix it
- Auto engineer features and interactions
- Autodetect leakage and fix it
- Autodetect unfairness and auto fix it
- Autocreate more weakly-labelled training data
- Autocreate descriptive statistics and model eval stats
- Autocreate monitoring
- Autocreate regulations reports
- Autocreate a data infra pipeline
- Autocreate a prediction serving endpoint
- Auto setup a meeting with relevant stakeholders on Google Calendar
- Auto deploy on Google Cloud
- Automatically buy carbon offset
- Auto fire your in-house data scientists
Would be funny but most of those things are already on AutoML Tables, including the carbon offset
> Would be funny but most of those things are already on AutoML Tables, including the carbon offset
GCP datacenters are 100% offset with PPAs. Are you referring to different functionality for costing AutoML instructions in terms of carbon?
...
I'd add:
- Setup a Jupyter Notebook environment
> Jupyter Notebooks are one of the most popular development tools for data scientists. They enable you to create interactive, shareable notebooks with code snippets and markdown for explanations. Without leaving Google Cloud's hosted notebook environment, AI Platform Notebooks, you can leverage the power of AutoML technology.
> There are several benefits of using AutoML technology from a notebook. Each step and setting can be codified so that it runs the same every time by everyone. Also, it's common, even with AutoML, to need to manipulate the source data before training the model with it. By using a notebook, you can use common tools like pandas and numpy to preprocess the data in the same workflow. Finally, you have the option of creating a model with another framework, and ensemble that together with the AutoML model, for potentially better results.
https://cloud.google.com/blog/products/ai-machine-learning/u...
This sounds like the sort of thing that would be useful outside of data science. Which leads to the question of whether it needs to be generalized, or redone differently for different specializations. Which in turn seems like the sort of question that it's tricky to answer with AI.
> This sounds like the sort of thing that would be useful outside of data science.
The instruction/operation costing or the computational essay/notebook environment setup?
Ethereum ("gas") and EOS have per-instruction costing. SingularityNET is a marketplace for AI solutions hosted on a blockchain, where you pay for AI/ML services with the SingularityNET AGI token. E.g. GridCoin and CureCoin compensate compute resource donations with their own tokens; which also have a floating exchange rate.
TLJH: "The Littlest JupyterHub" describes how to setup multi-user JupyterHub with e.g. Docker spawners that isolate workloads running with shared resources like GPUs and TPUs: http://tljh.jupyter.org/en/latest/
"Zero to BinderHub" describes how to setup BinderHub on a k8s cluster: https://binderhub.readthedocs.io/en/latest/zero-to-binderhub...
The notebook/procedure thing. Like, doesn't everybody everywhere operate on a basis of mixed manual/automated procedures, where it needs to fluidly transition from one to another, yet be controlled and recorded and verified and structured?
REES is one solution to reproducibility of the computational environment.
> BinderHub ( https://mybinder.org/ ) creates docker containers from {git repos, Zenodo, FigShare,} and launches them in free cloud instances also running JupyterLab by building containers with repo2docker (with REES (Reproducible Execution Environment Specification)). This means that all I have to do is add an environment.yml to my git repo in order to get Binder support so that people can just click on the badge in the README to launch JupyterLab with all of the dependencies installed.
> REES supports a number of dependency specifications: requirements.txt, Pipfile.lock, environment.yml, aptSources, postBuild. With an environment.yml, I can install the necessary CPython/PyPy version and everything else.
REES: https://repo2docker.readthedocs.io/en/latest/specification.h...
REES configuration files: https://repo2docker.readthedocs.io/en/latest/config_files.ht...
Storing a container built with repo2docker in a container registry is one way to increase the likelihood that it'll be possible to run the same analysis pipeline with the same data and get the same results years later.
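As a concrete (hypothetical) example, a repo containing just an environment.yml like this is Binder-ready; the package names and versions are illustrative, not from the thread:

```yaml
name: analysis
channels:
  - conda-forge
dependencies:
  - python=3.8
  - numpy
  - pandas
  - matplotlib
  - pip:
      - nbdime
```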
...
Pachyderm ( https://pachyderm.io/platform/ ) does Data Versioning, Data Pipelines (with commands that each run in a container), and Data Lineage (~ "data provenance"). What other platforms are there for versioning data and recording data provenance?
...
Recording manual procedures is an area where we've somewhat departed from the "write in a lab notebook with a pen" practice. CoCalc records all (collaborative) inputs to the notebook with a timeslider for review.
In practice, people use notebooks for displaying generated charts, for manual exploratory analyses (which do introduce bias), for demonstrating APIs, and for teaching.
Is JupyterLab an ideal IDE? Nope, not by a longshot. nbdev makes it easier to write a function in a notebook, sync it to a module, edit it with a more complete data-science IDE (like RStudio, VSCode, Spyder, etc), and then copy it back into the notebook. https://github.com/fastai/nbdev
> What other platforms are there for versioning data and recording data provenance?
Quilt also versions data and data pipelines: https://medium.com/pytorch/how-to-iterate-faster-in-machine-...
https://github.com/quiltdata/quilt (Python)
Poor data scientists, now whose heads get cut when things go wrong and companies lose billions?
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky.
“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.
“Why is the net wired randomly?”, asked Minsky.
“I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes.
“Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.
Is this an argument in favor of unjustified magic constant arbitrary priors?
A sufficiently large amount of random data contains all the magic constants you could want.
AutoML-Zero aims to automatically discover computer programs that can solve machine learning tasks, starting from empty or random programs and using only basic math operations.
If this system is not using human bias, how is it choosing what a good program is? Surely, humans labeling data involves them adding their bias to the data?
It seems like AlphaGoZero was able to do just end-to-end ML because it was able to use a very clear and "objective" standard, whether a program wins or loses at the game of Go.
Would this approach only deal with similarly unambiguous problems?
Edit: also, AlphaGoZero was one of the most compute-intensive ML systems ever created (at least at the time of its creation). How much computing resources would this require for more fully general learning? Will there be a limit to such an approach?
Options for giving math talks and lectures online
One option: screencast development of a Jupyter notebook.
Jupyter Notebook supports LaTeX (MathTeX) and inline charts. You can create graded notebooks with nbgrader and/or with CoCalc (which records all (optionally multi-user) input such that you can replay it with a time slider).
Jupyter notebooks can be saved to HTML slides with reveal.js, but if you want to execute code cells within a slide, you'll need to install RISE: https://rise.readthedocs.io/en/stable/
Here are the docs for CoCalc Course Management; Handouts, Assignments, nbgrader: https://doc.cocalc.com/teaching-course-management.html
Here are the docs for nbgrader: https://nbgrader.readthedocs.io/en/stable/
You can also grade Jupyter notebooks in Open edX:
> Auto-grade a student assignment created as a Jupyter notebook, using the nbgrader Jupyter extension, and write the score in the Open edX gradebook
https://github.com/ibleducation/jupyter-edx-grader-xblock
Or just show the Jupyter notebook within an edX course: https://github.com/ibleducation/jupyter-edx-viewer-xblock
There are also ways to integrate Jupyter notebooks with various LMS / LRS systems (like Canvas, Blackboard, etc) "nbgrader and LMS / LRS; LTI, xAPI" on the "Teaching with Jupyter Notebooks" mailing list: https://groups.google.com/forum/#!topic/jupyter-education/_U...
"Teaching and Learning with Jupyter" ("An open book about Jupyter and its use in teaching and learning.") https://jupyter4edu.github.io/jupyter-edu-book/
> TLJH: "The Littlest JupyterHub" describes how to setup multi-user JupyterHub with e.g. Docker spawners that isolate workloads running with shared resources like GPUs and TPUs: http://tljh.jupyter.org/en/latest/
> "Zero to BinderHub" describes how to setup BinderHub on a k8s cluster: https://binderhub.readthedocs.io/en/latest/zero-to-binderhub...
If you create a git repository with REES-compatible dependency specification file(s), students can generate a container with all of the same software at home with repo2docker or with BinderHub.
> REES is one solution to reproducibility of the computational environment.
>> BinderHub ( https://mybinder.org/ ) creates docker containers from {git repos, Zenodo, FigShare,} and launches them in free cloud instances also running JupyterLab by building containers with repo2docker (with REES (Reproducible Execution Environment Specification)). This means that all I have to do is add an environment.yml to my git repo in order to get Binder support so that people can just click on the badge in the README to launch JupyterLab with all of the dependencies installed.
>> REES supports a number of dependency specifications: requirements.txt, Pipfile.lock, environment.yml, aptSources, postBuild. With an environment.yml, I can install the necessary CPython/PyPy version and everything else.
> REES: https://repo2docker.readthedocs.io/en/latest/specification.h...
> REES configuration files: https://repo2docker.readthedocs.io/en/latest/config_files.ht...
> Storing a container built with repo2docker in a container registry is one way to increase the likelihood that it'll be possible to run the same analysis pipeline with the same data and get the same results years later.
Aerogel from fruit biowaste produces ultracapacitors
> "Aerogel from fruit biowaste produces ultracapacitors with high energy density and stability" (2020) https://www.sciencedirect.com/science/article/pii/S2352152X1...
Years ago, I remember reading about supercapacitor electrodes made from what would be waste hemp bast fiber. They used graphene as a control. And IIRC, the natural branching structure in hemp (the strongest natural fiber) was considered ideal for an electrode.
"Hemp Carbon Makes Supercapacitors Superfast" https://www.asme.org/topics-resources/content/hemp-carbon-ma...
How do the costs and performance compare? Graphene, hemp, durian, jackfruit
While graphene production costs have fallen due to lots of recent research, IIUC all graphene production is hazardous due to graphene's ability to cross the lungs and the blood-brain barrier?
All my life, I've heard people go on and on about the magical properties of hemp.
It's hilarious.
They have these material science level arguments about hemp, yet the same people would be hard pressed to find an alternative use for cotton aside from clothes, or wood pulp aside from ikea furniture.
It's a dessert topping, it's a driveway sealant. And the government won't let us have it!
Hemp textiles are rough, but antimicrobial/antibacterial: hemp textiles resist growth of pneumonia and staph.
AFAIU, when they blend hemp with e.g. rayon it's good enough for underwear, sheets, scrubs.
The government is getting the heck out of the way of hemp, a great rotation crop that can be used for soil remediation.
damn if hemp isn't making a comeback.. from cbd to ultracaps..
Hopefully but it’s only been legal to grow hemp in the US since 2018. https://www.vox.com/the-goods/2018/12/13/18139678/cbd-indust...
Technically, the 2013 farm bill (signed into law in 2014) authorized growing hemp for state-registered research purposes. https://www.votehemp.com/laws-and-legislation/federal-legisl...
Turns out UC Berkeley's got an approach for brewing cannabinoids (and I think terpenes) from yeast, and a company in Germany has a provisional patent application to brew cannabinoids from bacteria. We could be absorbing carbon ("sequestering" carbon) and coal ash acid rain with mostly fields of industrial hemp for which there are indeed thousands of uses.
Ask HN: How to Take Good Notes?
I want to improve my note-taking skill. I've started writing a text file with notes from class, however, I don't have a systematic way of writing. This means at this point I just wrote down, arbitrarily, things the professor said, things the professor wrote, how I understood the information, and everything else, mostly all over the place.
I'm wondering if anyone developed a system like this I could adapt to myself, and how did they do it.
I think you ought to consider whether you should take notes at all. Notetaking is great for remembering actions that you have committed to doing, or if you need to spread information to people who didn't participate in a meeting. Managers need to do a lot of notetaking.
However, taking notes seriously hinders your ability to engage with the material and build true understanding as you listen, which would have helped you remember the material right away. If you are in school or are an individual contributor in a company, I think you ought to stop taking notes altogether.
If you need notes for future practice, I would advise you to write them after the meeting/lecture. Actively recalling things from memory is the best form of practice.
> In 2009, psychologist Jackie Andrade asked 40 people to monitor a 2-½ minute dull and rambling voice mail message. Half of the group doodled while they did this (they shaded in a shape), and the other half did not. They were not aware that their memories would be tested after the call. Surprisingly, when both groups were asked to recall details from the call, those that doodled were better at paying attention to the message and recalling the details. They recalled 29% more information! https://www.health.harvard.edu/blog/the-thinking-benefits-of...
https://en.wikipedia.org/wiki/Doodle#Effects_on_memory references the same study.
Related articles on GScholar: https://scholar.google.com/scholar?q=related:YVG_-PKhNH4J:sc...
> dull and rambling voice mail message
This is in no way a realistic study. A dull and rambling voice speaking about some random thing not related to you or your work will make people disengage. Doodling presumably keeps people from totally spacing out.
Many lectures and meetings may be experienced as similarly dull, irrelevant, and a waste of time (though you also can't expect people to just read the necessary information ahead of time, as flipped classrooms expect of committed learners).
What would be a better experimental design for measuring effect on memory retention of passively-absorbed lectures?
Ask HN: STEM toy for a 3 years old?
Hello! Can the HN community recommend me a STEM toy (or similar that would educate and entertain him) for my 3 yo boy? He's highly curious but I can't find many things to play with him :( The things that I like bore him and the things that he likes bore me (or are way too messy and dangerous to let him do them)...
"12 Awesome (& Educational) STEM Subscription Boxes for Kids" https://stemeducationguide.com/subscription-boxes-for-kids/
Tape measure with big numbers, ruler(s)
Measuring cup, water, ice.
"Melissa & Doug Sunny Patch Dilly Dally Tootle Turtle Target Game (Active Play & Outdoor, Two Color Self-Sticking Bean Bags, Great Gift for Girls and Boys - Best for 3, 4, 5, and 6 Year Olds)"
Set of wooden blocks in a wood box; such as "Melissa & Doug Standard Unit Blocks"
...
https://sugarlabs.org/ , GCompris mouse and keyboard games with a trackpad and a mouse, ABCMouse, Khan Academy Kids, Code.org, ScratchJr (5-7), K12 Computer Science Framework https://k12cs.org/
OpenAPI v3.1 and JSON Schema 2019-09
In all honesty, aren't JSON Schema and OpenAPI more or less reinventing WSDL/SOAP/RPC? It feels like we've come full circle and replaced <> with {} (and lost a lot of mature XML tooling along the way).
Defusedxml lists a number of XML "Attack vectors: billion laughs / exponential entity expansion, quadratic blowup entity expansion, external entity expansion (remote), external entity expansion (local file), DTD retrieval" https://pypi.org/project/defusedxml/
Are there similar vulnerabilities in JSON parsers (that would then also need to be monkeypatched)?
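JSON has no entities, so the billion-laughs trick doesn't translate directly; the closest analog is nesting depth as a resource-exhaustion vector. A minimal sketch of how CPython's `json` module defends against it (by raising `RecursionError` rather than crashing):

```python
import json

# A deeply nested document: 100,000 open brackets, then 100,000 closes.
# JSON has no entity expansion, but parser recursion is still a finite resource.
depth = 100_000
evil = "[" * depth + "]" * depth

try:
    json.loads(evil)
    overflowed = False
except RecursionError:
    # CPython's parser enforces the interpreter recursion limit,
    # so the attack degrades into a catchable exception.
    overflowed = True

assert overflowed
```

This is why JSON parsers generally need no defusedxml-style monkeypatching: the dangerous XML features (entities, DTD retrieval) simply don't exist in the format.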
Sure, but the page itself says that it's mostly based on some uncommon features. Since we are converging on the feature list of XML/WSDL with JSON, what makes you think that some JSON parsing libraries (I'm using JSON as a placeholder for the next big thing in the JSON space) won't have bugs? Isn't that the danger of the added complexity?
So, most likely some of those uncommon features won't be added to JSON, but why didn't "we" as a community "just" invent an "XML light", or disable those dangerous but occasionally used features behind better defaults?
So, over the years all I've seen is some mumbo jumbo over how XML is too complicated and too noisy and big, at least I used to say that, but what if that complexity came from decades of usage, from real requirements? Why wouldn't we arrive at the same situation 10 or 20 years from now? And if size matters so much, why do we ship megabytes of code for simple websites? And why didn't we choose Protocol Buffers over JSON?
And what makes you think that the next generation won't think that JSON is too complicated (because naturally, complexity will increase with new requirements and features). So, i suppose we'll see a JSON replacement with different brackets ("TOML Schema"?) in the future. Or maybe it's time for a binary representation again, just to be replaced with the next "editor friendly and humanly readable" format.
It all feels like, we as an industry took a step backward for years (instead of improving the given tools and specs) just to head to the same level of complexity all the while throwing away lots and lots of mature tooling of XML processing.
P.S.: The same probably goes for ASN.1 vs. XML Schemas... So now we are in the 3rd generation of schema-based data representation: ASN.1 -> XML -> JSON.
P.P.S.: I'm not arguing that we all should use XML now; I'm reflecting on the historical course and what might be ahead of us. Clearly, XML is declining steadily and has "lost".
Yeah, expressing complex types with only primitives in a portable way is still unfortunately a challenge. For example, how do we encode a datetime without ISO8601 and a schema definition; or, how do we encode complex numbers with units like "1j+1"; or "1.01 USD/meter"?
Fortunately, we can use XSD with RDFS and RDFS with JSON-LD.
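For instance, a datetime can travel as a plain JSON string while the JSON-LD @context carries its XSD datatype. A minimal sketch (the `created` key is an example name; the URIs are the real schema.org and XSD identifiers):

```python
import json

# JSON-LD typed literal: the value stays a portable string, and the
# @context declares both the property's identity and its datatype.
doc = {
    "@context": {
        "created": {
            "@id": "http://schema.org/dateCreated",
            "@type": "http://www.w3.org/2001/XMLSchema#dateTime",
        }
    },
    "created": "2020-01-15T10:00:00Z",
}

# Any plain JSON toolchain round-trips it untouched; only JSON-LD-aware
# consumers interpret the datatype.
roundtrip = json.loads(json.dumps(doc))
assert roundtrip["created"] == "2020-01-15T10:00:00Z"
```

The same pattern extends to quantities with units: keep the lexical form in the value, and put the unit/datatype URI in the @context.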
LDP: Linked Data Platform and Solid: social linked data (and JSON-LD) are the newer W3C specs for HTTP APIs.
For one, pagination is a native feature of LDP.
JSON-LD is just the same old RDF data model that the W3C has promoted unsuccessfully for the last 2 decades, repackaged with trendy JSON syntax. It's almost as unreadable as the XML serialization and still puts very little value on the table. It shines, however, with people who like to do over-engineering instead of delivering a product.
It's astounding how often people make claims like this.
There is a whole lot of RDF Linked Data; and it links together without needing ad-hoc implementations of schema-specific relations.
I'll just link to the Linked Open Data Cloud again, for yet another hater that's probably never done anything for data interoperability: https://lod-cloud.net/
That's a success.
the only thing I miss is comments
JSON5 supports comments: https://json5.org/
But who supports json5?
You can use json5 for comments and compile it to json.
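A minimal sketch of that "compile to JSON" step, handling only `//` line comments outside of strings (real JSON5 also allows `/* */` comments, unquoted keys, trailing commas, etc.; use an actual json5 library for those):

```python
import json

def strip_line_comments(text):
    # Tiny state machine: track whether we're inside a double-quoted string
    # (with backslash escapes) so that "http://..." is never mangled.
    out, in_string, escaped = [], False, False
    i = 0
    while i < len(text):
        c = text[i]
        if in_string:
            out.append(c)
            if escaped:
                escaped = False
            elif c == "\\":
                escaped = True
            elif c == '"':
                in_string = False
        elif c == '"':
            in_string = True
            out.append(c)
        elif c == "/" and text[i:i + 2] == "//":
            # Skip to end of line; keep the newline itself.
            while i < len(text) and text[i] != "\n":
                i += 1
            continue
        else:
            out.append(c)
        i += 1
    return "".join(out)

src = '{"a": 1, // a comment\n "b": "http://x//y"}'
assert json.loads(strip_line_comments(src)) == {"a": 1, "b": "http://x//y"}
```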
You can use (yaml, hjson, jsonnet) for the same purpose. What sets json 5 apart?
YAML, hjson and jsonnet do not validate existing json blobs. You can "use" ROT13 for the same purpose, but it's incidental to what you are trying to do.
I would plead that everyone stop using YAML. It's terrible at everything.
jsonnet is a template language for a serialization format. Who would choose that nightmare?
Ecmascript isn't big on extending the language orthogonally, so hjson is eventually going to be superseded by an ES* feature.
This is a niche concern that has an optimal path. Go with a validation schema designed for applying to a serialization format, which has widespread library support.
>YAML ... do not validate existing json blobs
YAML 1.2 is a strict superset of JSON. What do you mean by "validate" here?
>I would plead that everyone stop using YAML. It's terrible at everything.
Fair enough.
>jsonnet is a template language for a serialization format. Who would choose that nightmare?
People who are tired of Jinja2 and YAML but don't want to jump to a general purpose programming language?
>widespread library support
JSON5 is implemented in Javascript, Go, and C#. Not sure how "widespread" that is. Rust, C, Python, Lua, Java, and Haskell are missing out on the fun.
Git for Node.js and the browser using libgit2 compiled to WebAssembly
This looks useful. Are there pending standards for other browser storage mechanisms than an in-memory FS?
Would it be a security risk to grant limited local filesystem access by domain; with a storage quota?
... To answer my own question, it looks like the FileSystem API is still experimental and only browser extensions can request access to the actual filesystem: https://developer.mozilla.org/en-US/docs/Web/API/FileSystem
Actually, looks like emscripten has support for a few different options, including IndexedDB[1]. I have a use case where we'd like to both use the local filesystem via Node APIs as well as in-memory in the browser. I asked the author of wasm-git and it looks like this is possible with a custom build[2].
[1]: https://emscripten.org/docs/api_reference/Filesystem-API.htm...
Scientists use ML to find an antibiotic able to kill superbugs in mice
> That is an especially pressing challenge in the development of new antibiotics, because a lack of economic incentives has caused pharmaceutical companies to pull back from the search for badly needed treatments. Each year in the U.S., drug-resistant bacteria and fungi cause more than 2.8 million infections and 35,000 deaths, with more than a third of fatalities attributable to C. diff, according to the Centers for Disease Control and Prevention.
How big does the market have to be to be commercially viable for research and development? Nearly 3M potential patients at a couple hundred dollars per course is nearing $1B/year.
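The back-of-envelope arithmetic behind that figure ("a couple hundred dollars" is an assumed price point, not from the article):

```python
# US infections per year (from the CDC figure quoted above) times an
# assumed price per course of treatment.
infections_per_year = 2.8e6
price_per_course = 300  # assumption: "a couple hundred dollars"

revenue = infections_per_year * price_per_course
assert revenue == 840e6  # ~$0.84B/year, i.e. "nearing $1B/year"
```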
The second-order costs avoided by treatments developed this way could be included in a "value to society" estimation.
"Acknowledgements" lists the grant funders for this federally-funded open access study.
"A Deep Learning Approach to Antibiotic Discovery" (2020) https://doi.org/10.1016/j.cell.2020.01.021
> Mutant generation
> Chemprop code is available at: https://github.com/swansonk14/chemprop
> Message Passing Neural Networks for Molecule Property Prediction
> A web-based version of the antibiotic prediction model described herein is available at: http://chemprop.csail.mit.edu/
> This website can be used to predict molecular properties using a Message Passing Neural Network (MPNN). In order to make predictions, an MPNN first needs to be trained on a dataset containing molecules along with known property values for each molecule. Once the MPNN is trained, it can be used to predict those same properties on any new molecules.
Shit – An implementation of Git using POSIX shell
Hiya HN. I was ranting on Mastodon earlier today because I feel like people learn git the wrong way - from the outside in, instead of the inside out. I reasoned that git internals are pretty simple and easy to understand, and that the supposedly obtuse interface makes a lot more sense when you approach it with an understanding of the fundamentals in hand. I said that the internals were so simple that you could implement a workable version of git using only shell scripts inside of an afternoon. So I wrapped up what I was working on and set out to prove it.
Five hours later, it had turned into less of a simple explanation of "look how simple these primitives are, we can create them with only a dozen lines of shell scripting!" and more into "oh fuck, I didn't realize that the git index is a binary file format". Then it became a personal challenge to try and make it work anyway, despite POSIX shell scripts clearly being totally unsuitable for manipulating that kind of data.
Anyway, this is awful, don't use it for anything, don't read the code, don't look at it, just don't.
> the supposedly obtuse interface makes a lot more sense when you approach it with an understanding of the fundamentals in hand.
Agreed. I always said the best git tutorial is https://www.sbf5.com/~cduan/technical/git/
> The conclusion I draw from this is that you can only really use Git if you understand how Git works. Merely memorizing which commands you should run at what times will work in the short run, but it’s only a matter of time before you get stuck or, worse, break something.
I never really understood Git until I read this tutorial: https://github.com/susam/gitpr
Things began to click for me as soon as I read this in its intro section:
> Beginners to this workflow should always remember that a Git branch is not a container of commits, but rather a lightweight moving pointer that points to a commit in the commit history.
A---B---C
↑
(master)
> When a new commit is made in a branch, its branch pointer simply moves to point to the last commit in the branch.
A---B---C---D
↑
(master)
> A branch is merely a pointer to the tip of a series of commits. With this little thing in mind, seemingly complex operations like rebase and fast-forward merges become easy to understand and use.
This "moving pointer" model of Git branches led me to instant enlightenment. Now I can apply this model to other complicated operations too, like conflict resolution during rebase, interactive rebase, force pushes, etc.
If I had to select a single most important concept in Git, I would say it is this: "A branch is merely a pointer to the tip of a series of commits."
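The "moving pointer" model can be sketched as a toy in a few lines of Python (this models the concept, not git's actual on-disk format): commits form a parent-linked chain, and a branch is nothing but a name-to-commit-id entry that gets moved forward.

```python
commits = {}   # commit id -> {"parent": parent id or None}
branches = {}  # branch name -> commit id (the branch "pointer")

def commit(cid, branch):
    # A new commit records its parent; the branch pointer simply
    # moves to the new tip. That's all a branch is.
    parent = branches.get(branch)
    commits[cid] = {"parent": parent}
    branches[branch] = cid

for cid in ("A", "B", "C"):
    commit(cid, "master")

assert branches["master"] == "C"
assert commits["C"]["parent"] == "B"

# A "fast-forward merge" is just moving a pointer; no new commit is made:
branches["dev"] = "A"
branches["dev"] = branches["master"]
assert branches["dev"] == "C"
```

In real git the pointer is literally a 41-byte file (`.git/refs/heads/master`) containing a commit hash.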
And you can see this structure if you add to any "git log" command "--graph --oneline --decorate --color". IIRC some of those are unnecessary in recent versions of git, I just remember needing all of them at the point I started using it regularly.
I have a bash function for it (with a ton of other customizations, but it boils down to this):
function pwlog() {
git log "$@" --graph --oneline --decorate --color | less -SEXIER
}
pwlog --all -20
(...in that "less" command, "S" truncates instead of wrapping lines, one "E" exits at EOF, "X" prevents screen-clearing, and "R" keeps the color output. The second "E" does nothing special; it and "I" (case-insensitive search) are just there to complete the word.)
You can also set $GIT_PAGER/core.pager/$PAGER and create an alias to accomplish this:
#export PAGER='less -SEXIER'
#export GIT_PAGER='less -SEXIER'
git config --global core.pager 'less -SEXIER'
git config --global alias.l 'log --graph --oneline --decorate --color'
# git diff ~/.gitconfig
git l
core.pager: https://git-scm.com/docs/git-config#Documentation/git-config...
> The order of preference is the $GIT_PAGER environment variable, then core.pager configuration, then $PAGER, and then the default chosen at compile time (usually less).
HTTP 402: Payment Required
The RFC offers no reasoning for this besides
> The 402 (Payment Required) status code is reserved for future use.
Isn't this obviously domain-specific and something that should not be part of the transfer protocol?
> The new W3C Payment Request API [4] makes it easy for browsers to offer a standard (and probably(?) already accessible) interface for the payment data entry screen, at least. https://www.w3.org/TR/payment-request/
Salesforce Sustainability Cloud Becomes Generally Available
> - Reduce emissions with trusted analytics from a trusted platform. Analyzing carbon emissions from energy usage and company travel can be daunting and time-consuming. But with all your data flowing directly onto one platform, you can efficiently quantify your carbon footprint. Formulate a climate action plan for your company from a single source of truth, built on our trusted and secure data platform.
> - Take action with data-driven insights. Prove to customers, employees, and potential investors your commitment to carbon-conscious and sustainable practices. Offer regulatory agencies a clear snapshot of your energy usage patterns. Extrapolate energy consumption and track carbon emissions with cutting-edge analytics — and take action.
> - Tackle carbon accounting audits in weeks instead of months. Carbon analysis can be an overwhelming time commitment, even a barrier to action for companies that want to get it right. Use preloaded datasets from the U.S. EPA, IPCC, and others to accurately assess your carbon accounting. Streamline your data gathering and climate action plan with embedded guides and user flows.
> - Empower decision makers with executive-ready dashboard data. Evaluate corporate environmental impact with rich data visualization and dashboards. Track energy patterns and emission trends, then make the business case to executives. Once an organization understands its carbon footprint, decision makers can begin to drive sustainability solutions.
Are there similar services for Sustainability Reporting and accountability? https://en.wikipedia.org/wiki/Sustainability_reporting
Httpx: A next-generation HTTP client for Python
> Fully type annotated.
This is a huge win compared to requests. AFAICT requests is too flexible (read: easy to misuse) and difficult to retrofit with type annotations now. The author of requests gave a horrible type annotation example here [0].
IMO at this time when you evaluate a new Python library before adopting, "having type annotation" should be as important as "having decent unit test coverage".
The huge win over requests is that httpx is fully async: you can download 20 files in parallel without too much effort. Throughput with Python asyncio is amazing, comparable to Node or perhaps better... That's the main point.
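A stdlib-only toy of that concurrency win (to keep it runnable without httpx, `asyncio.sleep` stands in for network I/O; with httpx you would `await client.get(url)` inside `fetch` instead):

```python
import asyncio

async def fetch(i):
    # Placeholder for an awaitable network request.
    await asyncio.sleep(0.05)
    return i

async def main():
    # gather schedules all 20 coroutines concurrently, so total wall
    # time is roughly one "download", not twenty.
    return await asyncio.gather(*(fetch(i) for i in range(20)))

results = asyncio.run(main())
assert results == list(range(20))  # gather preserves submission order
```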
FWIW, requests3 has "Type-annotations for all public-facing APIs", asyncio, HTTP/2, connection pooling, timeouts, etc https://github.com/kennethreitz/requests3
Sorry, have you checked the source? Are these features there or only announced? Has requests added a timeout by default finally?
It looks like requests is now owned by PSF. https://github.com/psf/requests
But IDK why requests3 wasn't transferred as well, and why issues appear to be disabled on the repo now.
The docs reference a timeout arg (that appears to default to the socket default timeout) for connect and/or read https://3.python-requests.org/user/advanced/#timeouts
And the tests reference a timeout argument. If that doesn't work, I wonder how much work it would be to send a PR (instead of just talking smack to Ken and not contributing any code)
>But IDK why requests3 wasn't transferred as well, and
That's the thing... Who knows.
TIL requests3 beta works with httpx as a backend: https://github.com/not-kennethreitz/team/issues/21#issuecomm...
If requests3 is installed, `import requests` imports requests3
BlackRock CEO: Climate Crisis Will Reshape Finance
+1. From the letter: https://www.blackrock.com/us/individual/larry-fink-ceo-lette...
> The money we manage is not our own. It belongs to people in dozens of countries trying to finance long-term goals like retirement. And we have a deep responsibility to these institutions and individuals – who are shareholders in your company and thousands of others – to promote long-term value.
> Climate change has become a defining factor in companies’ long-term prospects. Last September, when millions of people took to the streets to demand action on climate change, many of them emphasized the significant and lasting impact that it will have on economic growth and prosperity – a risk that markets to date have been slower to reflect. But awareness is rapidly changing, and I believe we are on the edge of a fundamental reshaping of finance.
> The evidence on climate risk is compelling investors to reassess core assumptions about modern finance. Research from a wide range of organizations – including the UN’s Intergovernmental Panel on Climate Change, the BlackRock Investment Institute, and many others, including new studies from McKinsey on the socioeconomic implications of physical climate risk – is deepening our understanding of how climate risk will impact both our physical world and the global system that finances economic growth.
Environmental, social and corporate governance > Responsible investment: https://en.wikipedia.org/wiki/Environmental,_social_and_corp...
Corporate social responsibility: https://en.wikipedia.org/wiki/Corporate_social_responsibilit...
UN-supported PRI: Principles for Responsible Investment (2,350 signatories (2019-04)) https://en.wikipedia.org/wiki/Principles_for_Responsible_Inv...
A lot of complex “scalable” systems can be done with a simple, single C++ server
Many developers severely underestimate how much workload can be served by a single modern server and high-quality C++ systems code. I've scaled distributed workloads 10x by moving them to a single server and a different software architecture more suited for scale-up, dramatically reducing system complexity as a bonus. The number of compute workloads I see that actually need scale-out is vanishingly small even in industries known for their data intensity. You can often serve millions of requests per second from a single server even when most of your data model resides on disk.
We've become so accustomed to extremely inefficient software systems that we've lost all perspective on what is possible.
Can you expand on this? I have some pretty massive compute loads that need to be scaled onto a cluster with 100+ workers for most computations. This is after I use a library called dask, which builds a task graph and does its own mapreduce-style optimisation inside its modules. This is all for a relatively small 250GB raw data file that I keep in a csv (and need to convert to SQL at some point).
Are you saying this can be optimised to fit inside a single 10 core server in terms of compute loads?
Don't know why you're being downvoted but I'll assume your question is genuine.
You use a cluster when your data and compute requirements are large and parallel enough that the gains from parallelism outweigh the tax you pay on network latency and the loss of the 10-20X speedup you get from SSD and the 1000X speedup you get from just keeping data in RAM.
250 gigs is tiny enough that you could probably get much better performance running on a high-memory instance in AWS or GCP. You'll generally have to write your own multiprocessing code, though, which is fairly simple - your existing library may also be able to support it.
I once actually ran this kind of workload on just my laptop using a compiled language that performed better than pyspark on a cluster.
I'd love to keep it in RAM if I could. The problem is, the library I'm familiar with (pandas) typically seems to take more memory than the original csv file once it loads it onto memory. I know this is due to bad data types but in certain cases, I cannot get around those.
However, even if I could load it all into memory at once, and assuming it takes 200 gb, I'm still using a master's student access to a cluster. So I get preempted like it's nobody's business. Hence why I prefer a smaller memory footprint even if I take up cpus at variable rates through a single execution.
I did try to write my own multiprocessing code for this, but the operations are sometimes too complicated (like groupby) for me to rewrite everything from the ground up. If I'm not reliant on serial data communication between processes (like you'd need to sort a column), I can get it done pretty easily. In fact, I wrote my data cleaning code with this and cleaned up the entire file in half an hour because single chunks didn't rely on others.
However, if you have some idea of how to run these computational loads in parallel in python or any other language on single compute instances (like the size of a laptop's memory of 16 gb), I'd really love to see it. Thanks.
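One hedged sketch of why groupby (unlike sorting) doesn't need the whole file in memory: distributive aggregations (sum, count, min, max) can be computed per chunk and then combined, so each chunk stays independent. Stdlib-only toy (medians and global sorts are the genuinely hard cases; this doesn't cover them):

```python
from collections import Counter

def groupby_sum_chunked(chunks):
    # chunks: an iterable of in-memory chunks, each a list of (key, value)
    # pairs -- e.g. what pandas.read_csv(..., chunksize=N) would yield.
    total = Counter()
    for chunk in chunks:
        partial = Counter()
        for key, value in chunk:   # per-chunk aggregation (the "map" step)
            partial[key] += value
        total.update(partial)      # combine partials (the "reduce" step)
    return dict(total)

chunks = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
assert groupby_sum_chunked(chunks) == {"a": 4, "b": 2, "c": 4}
```

The same two-phase shape is what dask's `groupby` aggregations do under the hood, distributed across workers instead of sequential chunks.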
Numpy supports memory mapping `ndarrays` which can back a DataFrame in pandas. This lets you access a dataset far larger than will fit in RAM as if it lived in RAM. Provided it's on fast SSD storage you'll have speedy access to the data and can process huge chunks at once.
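A minimal memmap sketch (file path and sizes are illustrative): the array lives on disk and slices are paged in on demand, so the working set can be far larger than RAM.

```python
import os
import tempfile
import numpy as np

# Write a flat binary file of 1M float64s, then map it read-only.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
np.arange(1_000_000, dtype=np.float64).tofile(path)

mm = np.memmap(path, dtype=np.float64, mode="r")

# Indexing and slicing read lazily from disk; nothing is loaded up front.
assert mm.shape == (1_000_000,)
assert mm[123_456] == 123456.0
assert float(mm[:10].sum()) == 45.0
```

`pd.DataFrame` can be constructed over such arrays, though pandas operations that copy (and many do) will still materialize data in RAM.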
Can you provide a link to this please? My current knowledge is that all numpy data lives in memory, and pandas itself has a feature to fragment any data into iterables so I can read up to my memory limit. I cannot use this feature due to the serial nature of some of the operations that I alluded to (I'd have to almost rewrite the entire library for some of these complicated operations like groupby and sorting).
I do have fast SSD storage because it's on the scratch drive of a cluster and from what I've seen it can do ~300-400 MB/s easily. I haven't had a chance to test more than that since I'm mostly memory constrained in much of my testing.
My current attempt is to push this data into a pure database handling system like SQL so that I can query it. But like I said, I work with a less-than-stellar set of tools and I have to literally set up a postgres server from ground up to write to it. Which shouldn't be a big deal except when it's on a non-root user and I have to keep remapping dependencies (took 5-6 hours to set it up on the instance I have access to).
My other option was to write the entire 250 GB to a SQLite database using the SQLAlchemy library in Python, but that seems to fail whether I do it with parallel writes or serial writes. In both cases, it fails after I create ~64-70 tables.
Dask groupby example: https://examples.dask.org/dataframes/02-groupby.html
> Generally speaking, Dask.dataframe groupby-aggregations are roughly same performance as Pandas groupby-aggregations, just more scalable.
The dask.distributed scheduler can also run on one high-RAM instance (with threads or processes) https://docs.dask.org/en/latest/setup.html
Pandas docs > Ecosystem > Out-of-core: https://pandas.pydata.org/pandas-docs/stable/ecosystem.html#...
Reading from Parquet into Apache Arrow is much faster than CSV because the data can just be directly loaded into RAM. https://ursalabs.org/blog/2019-10-columnar-perf/
If you have GPU instances, cuDF has a Pandas-like API on top of Apache Arrow. https://github.com/rapidsai/cudf
> Built based on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.
> cuDF provides a pandas-like API that will be familiar to data engineers & data scientists, so they can use it to easily accelerate their workflows without going into the details of CUDA programming.
Dask-ML makes scalable scikit-learn, XGBoost, TensorFlow really easy. https://dask-ml.readthedocs.io/en/latest/
... re: the OT: While it's possible to write C++ code that's really fast, it's generally inflexible, expensive to develop, and dangerous to write, even for devs experienced in their respective domains. Much saner to put a Python API on top and optimize the code underneath at compile time.
There are a few C++ frameworks in the top quartile of the TechEmpower framework benchmarks. https://www.techempower.com/benchmarks/
Hardware/hosting is relatively cheap. Developers and memory vulnerabilities aren't.
Unfortunately, I'm not sure what's wrong with dask, but it doesn't work properly on my cluster. I tested it on an exceedingly simple operation - find all unique values in a very big column (5 billion rows, but I know for a fact that there are only 500-502 unique values in there). With 100 workers, it still failed. Now this is an embarrassingly parallel operation that can be implemented trivially. So I'm not sure if there's a problem with my cluster or if dask just does not work with slurm clusters very well.
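The embarrassingly parallel shape of that unique-values job, sketched without dask: each worker `set()`s its own chunk, and the tiny per-chunk sets (at most ~502 values each) are unioned at the end. (Threads shown for brevity; for CPU-bound work on CPython you'd use processes to sidestep the GIL.)

```python
from concurrent.futures import ThreadPoolExecutor

def unique_in_chunks(chunks, workers=4):
    # Map step: one small set per chunk, computed independently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial_sets = list(pool.map(set, chunks))
    # Reduce step: union of the tiny partial sets.
    out = set()
    for s in partial_sets:
        out |= s
    return out

chunks = [[1, 2, 2, 3], [3, 4], [1, 5]]
assert unique_in_chunks(chunks) == {1, 2, 3, 4, 5}
```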
https://docs.dask.org/en/latest/setup/hpc.html says dask-jobqueue handles "PBS, SLURM, LSF, SGE and other resource managers"
"Dask on HPC, what works and what doesn't" https://github.com/dask/dask-blog/issues/5
Maybe you should spend some time developing a job visualization system from scratch, for end users with lots of C, JS, and HTML experience https://jobqueue.dask.org/en/latest/interactive.html
Yes, I think the library still isn't ready for non HPC experts to use without tinkering. I don't have that level of expertise. I'm a data person who can handle working tools.
Warren Buffett is spending billions to make Iowa 'the Saudi Arabia of wind'
It's both cost-rational and environment-rational to invest heavily in clean energy (with or without the comparatively paltry tax incentives).
The long-term costs of climate change and inaction are unfortunately still mostly external costs to energy producers. We should expect that to change as we start developing competencies in evaluating the costs and frequency of weather disasters exacerbated by anthropogenic climate change. We all get to pay for floods, fires, tornados, hurricanes, landslides, blizzards, and the gosh darn heat.
Insurance firms clearly see these costs. Our military sees the costs of responding to natural disasters. Local economies see the costs of months and years spent on disaster relief; on just getting back up to speed so that they can generate profit from selling goods and services (and pay taxes to support disaster relief efforts essential to operational readiness).
The cost per kilowatt hour of wind (and solar) energy is now lower than operating existing dirty energy plants that dump soot on our crops, air, and water.
With wind, they talk about the "alligator curve". With solar, it's the "duck curve". Grid-scale energy storage is necessary for reaching 100% renewable energy as soon as possible.
Iowa's renewable energy tax incentives are logically aligned with international long-term goals:
UN Sustainable Development Goal 7: Affordable and Clean Energy https://www.globalgoals.org/7-affordable-and-clean-energy
Goal 13: Climate Action https://www.globalgoals.org/13-climate-action
SDG Target 12.6: "Encourage companies to adopt sustainable practices and sustainability reporting" (CSR; e.g. GRI Sustainability Reporting Standards that we can score portfolios with)
https://www.undp.org/content/undp/en/home/sustainable-develo... :
> Rationalize inefficient fossil-fuel subsidies that encourage wasteful consumption by removing market distortions, in accordance with national circumstances, including by restructuring taxation and phasing out those harmful subsidies, where they exist, to reflect their environmental impacts, taking fully into account the specific needs and conditions of developing countries and minimizing the possible adverse impacts on their development in a manner that protects the poor and the affected communities
...
> Thanks. How can I say "try and only run this [computational workload] in zones with 100% PPA offsets or 100% directly sourced #CleanEnergy"? #Goal7 #Goal11 #Goal12 #Goal13 #GlobalGoals #SDGs
It makes good business sense to invest in clean energy to take advantage of tax incentives, minimize future costs to other business units (e.g. insurance, taxes), and earn the support of investors choosing portfolios with long term environmental (and thus economic) sustainability as a primary objective.
Scientists Likely Found Way to Grow New Teeth for Patients
"Scientists Have Discovered a Drug That Fixes Cavities and Regrows Teeth" https://futurism.com/neoscope/scientists-have-discovered-thi...
Tideglusib https://en.wikipedia.org/wiki/Tideglusib
> A tooth repair mechanism that promotes dentine reinforcement of a sponge structure until the sponge biodegrades, leaving a solid dentine structure. In 2016, the results of animal studies were reported in which 0.14 mm holes in mouse teeth were permanently filled.
Very interesting mechanism.
GSK3 inhibitors are interesting, but we don't understand much about the subtypes that are likely to exist: "GSK-3 appears to both promote and inhibit apoptosis, and this regulation varies depending on the specific molecular and cellular context."
Still, I believe better GSK3i will find a role in autologous bone-marrow grafting therapies to fight senescence: with TERC/TERT overexpression (the opposite of de Grey's WILT ideas) to send new stem cells to the tissues - just like how you can find the donor's chromosomes in most tissues of grafted patients.
Announcing the New PubMed
I'm actually part of the team developing the new PubMed. Very curious and interested to know what the hacker news community thinks and feels about their experience. https://pubmed.gov/labs
This looks great: I like the search timeline, the ability to easily search for free full-text meta-analyses (a selection bias we should all be aware of), the MeSH term listing in a reasonably-sized font, and that there's schema.org/ImageObject metadata within the page, but there's no [Medical]ScholarlyArticle metadata?
I've worked with Google Scholar (:o) [1], Semantic Scholar (Allen Institute for AI) [2], Meta (Chan Zuckerberg Institute) [3], Zotero, Mendeley and a number of other tools for indexing and extracting metadata and graph relations from https://schema.org/ScholarlyArticle and MedicalScholarlyArticles . Without RDFa (or Microdata, or JSON-LD) in PDF, there's a lot of parsing that has to go down in order to get a graph from the citations in the article. Each service adds value to this graph of resources. Pushing forward on publishing linked research that's reproducible (#LinkedResearch, #LinkedReproducibility) is a worthwhile investment in meta-research that we have barely yet addressed:
> http://Schema.org/NewsArticle .citation: https://schema.org/citation ... Wouldn't it be great if NewsArticles linked to the ScholarlyArticle and/or Notebook CreativeWorks that they're .about (with reified relations)?
> A practical use case: Alice wants to publish a ScholarlyArticle [1] (in HTML with structured data, as a PDF) predicated upon Datasets [2] (as CSV, CSVW JSONLD, XLSX (DataDownload)) with static HTML (and no special HTTP headers). [1] https://schema.org/ScholarlyArticle [2] https://schema.org/Dataset
> B wants to build a meta-analysis: to collect a number of ScholarlyArticles and Dataset DataDownloads; review study controls and data; merge, join, & concatenate Datasets if appropriate, and inductively or deductively infer a conclusion and suggestions for further studies of variance
The Linked Open Data Cloud shows the edges, the relations, the structured data links between very many (life sciences) datasets: https://lod-cloud.net/ . https://5stardata.info/en/ lists TimBL's suggested 5-star deployment scheme for Open Data, which culminates in publishing linked open data in non-proprietary formats that use URIs to describe and link to things.
Could any of these [1][2][3][4][5] services cross-link the described resources, given a common URI identifier such as https://schema.org/identifier and/or https://schema.org/url ? ORCID is a service for generating stable identifiers for researchers and publishers who have names in common but different emails. W3C DID solves for this need in a different way.
When I check an article result page with the OpenLink OSDS extension (or any of a number of other tools for extracting structured data from HTML pages (and documents!) https://github.com/CodeForAntarctica/codeforantarctica.githu... ), there could be quite a bit more data there for search engines, browser extensions, and meta-research tools.
Is this something like ElasticSearch on the backend? It is possible to store JSON-LD documents in the search index. I threw together elasticsearchjsonld to "Generate JSON-LD @contexts from ElasticSearch JSON Mappings" for the OpenFDA FAERS data a few years ago. That's not GraphQL or SPARQL, but it's something and it's Linked Data.
re: "Canada's Decision To Make Public More Clinical Trial Data Puts Pressure On FDA" https://news.ycombinator.com/item?id=21232183
> We really could get more out of this data through international collaboration and through linked data (e.g. URIs for columns). See: "Open, and Linked, FDA data" https://github.com/FDA/openfda/issues/5#issuecomment-5392966... and "ENH: Adverse Event Count / 'Use' Count Heatmap" https://github.com/FDA/openfda/issues/49 . With sales/usage counts, we'd have a denominator with which we could calculate relative hazard.
W3C Web Annotations handle threaded comments and highlights; reviewing the reviewers is left as an exercise for the reader. Does Zotero still make it easy to save the bibliographic metadata for one or more ScholarlyArticles from PubMed to a collection in the cloud (and add metadata/annotations)?
Sorry to toot my own horn here. Great job on this. This opens up many new opportunities for research.
[1] https://scholar.google.com
[2] https://www.semanticscholar.org/
PubMed publishes its dataset for download. It's rather large, but update files come frequently. It's amazing. I believe NIH adds the MeSH terms.
ftp://ftp.ncbi.nlm.nih.gov/pubmed/
We had someone do a project with it: they downloaded the dataset and used it to create a tool to do some searches that we found useful for finding collaborators (last author, working on a specific gene, paper counts, most recent).
Searching by MeSH terms across species, and searching with orthologs.
The dataset sometimes has a hard time disambiguating names (I think the European dataset assigns IDs to names).
To make sure your feedback is heard, please use the "Feedback" button found in the bottom right corner of https://pubmed.gov/labs
Ask HN: Is it worth it to learn C in 2020?
The Linux kernel, the FreeBSD kernel, the Windows kernel, the macOS kernel, CPython, Ruby, Perl, PHP, Node.js, and NumPy are all written largely in C (or C++). If you want to review and contribute code, you'd need to learn C.
There are a number of coding guidelines e.g. for safety-critical systems where bounded running time and resource consumption are essential. These coding guidelines and standards are basically only available for C, C++, and Ada. https://github.com/stanislaw/awesome-safety-critical/blob/ma...
Even though modern languages have garbage collection that runs whenever it feels like it, it's helpful to learn about memory management in C (or C++). You'll appreciate object destructor methods that free memory, sockets, and file handles that much more. Reference cycles in object graphs are easier to handle with modern C++ than with C. Are there RAII ("Resource Acquisition Is Initialization") "smart pointers" that track reference counts in C?
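CPython is a handy place to watch reference counting and the cycle problem in action; a minimal sketch (Python here, but it's the same mechanism CPython implements in C):

```python
import gc
import weakref

class Node:
    """A node that can participate in a reference cycle."""
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a        # cycle: a -> b -> a
probe = weakref.ref(a)         # observe 'a' without keeping it alive

del a, b                       # refcounts never reach zero: the cycle keeps both alive
assert probe() is not None
gc.collect()                   # the cycle collector reclaims the pair
assert probe() is None
```

This is why pure reference counting (C++ shared_ptr, or a hypothetical C equivalent) needs a cycle detector or weak references to reclaim cyclic object graphs.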
Without OO namespacing in C, function names are often prefixed with a namespace. How many ways could a struct be initialized? When can I free that memory?
When strace prints a syscall, what is that?
Is it necessary to learn C? Somebody needs to maintain and improve the C-based foundation for most of our OSes and very many of our fancy scripting languages. C can be very unforgiving: it's really easy to do it wrong, and there's a lot to keep in mind at once: the cognitive burden is higher with C (and higher still with ASM and WebAssembly) than with an interpreted (or compiled) duck-typed 3GL scripting language with first-class functions.
What's a good progression that includes syntax, finding and reading the libc docs, Make/CMake/Autotools, secure recommended compiler flags for GCC (CPPFLAGS, CFLAGS, LDFLAGS) and LLVM Clang?
C: https://learnxinyminutes.com/docs/c/
C++: https://learnxinyminutes.com/docs/c++/
Links to the docs for Libc and other tools: https://westurner.github.io/tools/#libc
xeus-cling is a Jupyter kernel for C++ (and most of C) that works with nbgrader. https://github.com/QuantStack/xeus-cling
What's a better unit-testing library for C/C++ than gtest? https://github.com/google/googletest/
Amazing comment, thank you for your detailed input and the awesome resources. I want to get into C because I like network and system programming but lack the underlying knowledge and experience. These links will help me start on that new path. Very much appreciated sir :)
For network programming, you might consider asynchronous programming with coroutines. C++20 has them and they're already supported in LLVM. For C, there are a number of implementations of coroutines: https://en.wikipedia.org/wiki/Coroutine#Implementations_for_...
> Once a second call stack has been obtained with one of the methods listed above, the setjmp and longjmp functions in the standard C library can then be used to implement the switches between coroutines. These functions save and restore, respectively, the stack pointer, program counter, callee-saved registers, and any other internal state as required by the ABI, such that returning to a coroutine after having yielded restores all the state that would be restored upon returning from a function call. Minimalist implementations, which do not piggyback off the setjmp and longjmp functions, may achieve the same result via a small block of inline assembly which swaps merely the stack pointer and program counter, and clobbers all other registers. This can be significantly faster, as setjmp and longjmp must conservatively store all registers which may be in use according to the ABI, whereas the clobber method allows the compiler to store (by spilling to the stack) only what it knows is actually in use.
CPython's asyncio implementation (originally codenamed 'tulip') is written in C and IMHO much easier to use than callback-style code like Twisted, or JS before Promises and the inclusion of tulip-like async/await keywords in ECMAScript. uvloop – based on libuv, like Node – is apparently the fastest asyncio event loop. CPython asyncio C module source: https://github.com/python/cpython/blob/master/Modules/_async... Asyncio docs: https://docs.python.org/3/library/asyncio.html
(When things like file or network I/O are I/O bound, the program can yield to allow other asynchronous coroutines ('async') to run on that core. With network programming, we're typically waiting for things to send or reply.)
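A minimal sketch of that yield-while-waiting model (the function names are illustrative, with asyncio.sleep standing in for network I/O):

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for network I/O: awaiting yields control to the event loop
    await asyncio.sleep(delay)
    return name

async def main():
    # Both coroutines wait concurrently, so this takes ~0.02s, not ~0.03s
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02))

print(asyncio.run(main()))  # ['a', 'b']
```

asyncio.gather preserves the argument order in its result list regardless of which coroutine finishes first.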
Return-oriented-programming > Return-into-library technique is an interesting read regarding system programming :) https://en.wikipedia.org/wiki/Return-oriented_programming#Re...
Free and Open-Source Mathematics Textbooks
This is a good list of books. Unfortunately, many of the links are broken? Probably just my luck, but the first few "with Sage" books I excitedly selected 404'd. I'll send an email.
> Moreover, the American Institute of Mathematics maintains a list of approved open-source textbooks. https://aimath.org/textbooks/approved-textbooks/
I also like the (free) Green Tea Press books: Think Stats, Think Bayes, Think DSP, Think Complexity, Modeling and Simulation in Python, Think Python 2e: How To Think Like a Computer Scientist https://greenteapress.com/wp/
And IDK how many times I've recommended the book for the OCW "Mathematics for Computer Science" course: https://ocw.mit.edu/courses/electrical-engineering-and-compu...
There may be a newer edition than the 2017 version of the book: https://courses.csail.mit.edu/6.042/spring17/mcs.pdf
Does anyone have a good strategy for archiving and downloading the textbooks from aimath.org? They are all excellently formatted in HTML, but I am not certain what the best way to get the complete book would be.
When I have books in the form of webpages, I normally write a small crawler in Python, extract the text div with BeautifulSoup, add <hN> tags for chapter names, and throw them all together as HTML. Add a cover image and combine everything with pandoc.
Nothing fancy, but it works reliably in an automated fashion.
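A stdlib-only sketch of the extraction step (BeautifulSoup's find() is far more robust; the "content" class name here is illustrative):

```python
from html.parser import HTMLParser

class DivExtractor(HTMLParser):
    """Collect the text inside <div class="content">, including nested divs."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.text = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1 if tag == "div" else 0
        elif tag == "div" and ("class", "content") in attrs:
            self.depth = 1          # entered the target div

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.text.append(data)

parser = DivExtractor()
parser.feed('<html><div class="nav">skip</div><div class="content">Chapter 1</div></html>')
assert "".join(parser.text) == "Chapter 1"
```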
I've found MIT's "Mathematics for Computer Science" actually hard to read. In my opinion it can't really be called a proper book; it's more a collection of notes: it skips over many details, jumping to conclusions, definitions, and results without giving you an intuitive explanation. In that regard it's actually hard to follow.
It's fast-going. You may want to start with something more introductory, like:
Levin http://discrete.openmathbooks.org/dmoi3.html
Hammack https://www.people.vcu.edu/~rhammack/BookOfProof/
the early parts of Newstead https://infinitedescent.xyz/
Make CPython segfault in 5 lines of code
Applications Are Now Open for YC Startup School – Starts in January
I'm presently living in a fairly rural part of the US due to an illness in the family. In the town of 14,000 I currently reside in it's pretty difficult to network in a meaningful way and talk about my company with folks that can give guidance and feedback. Really excited for Startup School and so grateful that YC has put this together.
> In the town of 14,000 I currently reside in it's pretty difficult to network in a meaningful way and talk about my company with folks that can give guidance and feedback
GitLab and Zapier are examples of all remote former YC companies.
"GitLab Handbook" https://about.gitlab.com/handbook/
"The Ultimate Guide to Remote Work: Lessons from a team of over 200 remote workers" https://zapier.com/learn/remote-work/
Were those companies remote when they did YC? I can't recall exactly where offhand but I seem to remember Paul Graham (or was it Sam Altman?) saying that they were strongly against remote teams in the early stages. It was a few years ago though, I wonder if their opinions have changed.
Startup School is now designed as a remote program.
It'd be interesting to hear from them about building all remote team culture with transparency and accountability. Are text-chat "digital stand up meetings" with quality transcripts of each team member's responses to the three questions enough? ( Yesterday / Today and Tomorrow / Obstacles // What did I do since the last time we met? What will I do before the next time we meet? What obstacles are blocking my progress? )
Or are there longer-term planning sessions that focus on delivering value over a far longer horizon than first getting the MVP down and maximizing marginal profit by minimizing costs?
‘Adulting’ is hard. UC Berkeley has a class for that
+1 for Life Skills for Adulting and also Home Economics including Family and Meal Planning.
A bunch of resources from "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894632 : CS 007: Personal Finance for Engineers, r/personalfinance/wiki, Healthy Eating Plate, Khan Academy > Science > Health and Medicine
And also, Instant Pot. The Instant Pot pressure cooker is your key to nutrient preservation and ultimate happiness.
Founder came back after 8 years to rewrite flash photoshop in canvas/WebGL
Five cities account for vast majority of growth in U.S. tech jobs: study
> Boston, San Francisco, San Jose, Seattle, and San Diego—accounted for more than 90% of the nation’s innovation-sector growth during the years 2005 to 2017
Why 2005 and why'd it drop off post-2017?
Also surprised DC and NYC aren't on the list. Lots of new companies and growth in those areas, but I guess they may not qualify as "innovation-sector growth".
> To that end, the present paper proposes that Congress assemble and award to a select set of metropolitan areas a major package of federal innovation inputs and supports that would accelerate their innovation-sector scale-up. Along these lines, we envision Congress establishing a rigorous competitive process by which the most promising eight to 10 potential growth centers would receive substantial financial and regulatory support for 10 years to become self-sustaining new innovation centers. Such an initiative would not only bring significant economic opportunity to more parts of the nation, but also boost U.S. competitiveness on the global stage.
"Potential growth centers" sounds promising.
Don’t Blame Tech Bros for the Housing Crisis
If there is demand for housing, we would expect people to be finding land and building housing unless there are policies that prevent this (and/or long commutes that people don't want to suffer) or higher-value opportunities.
If the city wanted residential areas (over commercial tax revenue giants), the city should have zoned residential.
The people elect city leaders. The people all want affordable housing.
With $4.5b from corporations and nowhere to build but out or up, high rise residential is the most likely outcome. (Which is typical for dense urban areas that have prioritized and attracted corporate tax revenue over affordable housing)
... Effing scooter bros with their scooters and their gold rush money and their tiny houses.
[Edit: more than] One company says "I will pay you $10,000 to leave the Bay Area / Silicon Valley" Because there's a lot of tech talent (because universities and opportunities) but ridiculously high expenses.
What an effectual headline from NY.
Docker is just static linking for millennials
No, LXC does quite a bit more than static linking. An inability to recognize that likely has nothing to do with generation.
Can you launch a process in a chroot, with cgroups? Okay, now upgrade everything it's linked with (without breaking the host/build system)
Configure a host-only network for a few processes – running in separate cgroups – without DHCP.
Want to criticize Docker? Rootless builds and containers are essentially impossible. Buildah and Podman make rootless builds possible without a socket. Like sysvinit, though, IDK how well centralized logging (and logshipping, and logging crashes and restarts) works without that socket.
Given comments like this, it's likely that you've never built a chroot for a different distro, or launched a process with cgroups.
Show HN: Bamboolib – A GUI for Pandas (Python Data Science)
This looks excellent. The ability to generate the Python code for the pandas dataframe transformations looks to be more useful than OpenRefine, TBH.
How much work would it be to use Dask (and Dask-ML) as a backend?
I see the OneHotEncoder button. Have you considered integration with Yellowbrick? They've probably already implemented a few of your near-future and someday roadmap items involving hyperparameter selection and model selection and visualization? https://www.scikit-yb.org/en/latest/
This video shows more of the advanced bamboolib features: https://youtu.be/I0a58h1OCcg
The live histogram rebinning looks useful. Recently I read about a 'shadowgram' / ~KDE approach with very many possible bin widths translucently overlaid in one chart. https://stats.stackexchange.com/questions/68999/how-to-smear...
Yellowbrick also has a bin width optimization visualization in yellowbrick.target.binning.BalancedBinningReference: https://www.scikit-yb.org/en/latest/api/target/binning.html
Great work.
Thank you for your feedback and support :) Are you currently using OpenRefine?
We are currently thinking about supporting other dataframe libraries like Dask or PySpark. However, we are a little unsure how to gauge user demand before we implement it. It is not a complete rewrite, but it would require some additional abstractions at some points in the library. And we would need to check whether some features might no longer be available. Would Dask support be a reason to buy for you?
Great hint with Yellowbrick, and yes, we are considering some of those features as well if there is a useful place for them in the library.
In general, we are also thinking about ways for you to extend the library yourself, so that you can add your own analyses/charts of choice and have them come up at the right point in time – in case that is useful.
In the past, I've looked at OpenRefine and Jupyter integration. Once I've learned to do data transformation with pandas and sklearn with code, I'll report back to you.
Pandas-profiling has a number of cool descriptive statistics features as well. https://github.com/pandas-profiling/pandas-profiling
There's a new IterativeImputer in Scikit-learn 0.22 that it'd be cool to see visualizations of. https://twitter.com/TedPetrou/status/1197150813707108352 https://scikit-learn.org/stable/modules/impute.html
A plugin model would be cool; though configuring the container every time wouldn't be fun. Some ideas about how we could create a desktop version of binderhub in order to launch REES-compatible environments on our own resources: https://github.com/westurner/nbhandler/issues/1
Dask has only a subset of Pandas available.
Could you send me a link to the docs where they say which parts of Pandas are not included in Dask? Would love to take a closer look at this.
A set difference and/or intersection of dir(pd.DataFrame) and dir(dask.dataframe.DataFrame), with inspect.getfullargspec and inspect.getdoc, would be a useful document for either or both projects.
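Something like this could sketch that comparison. The two stand-in classes below are hypothetical; the same api_diff() call works on pandas.DataFrame and dask.dataframe.DataFrame when both libraries are installed:

```python
def api_diff(a, b):
    """Public attribute names on a but not b, and on b but not a."""
    names_a = {n for n in dir(a) if not n.startswith("_")}
    names_b = {n for n in dir(b) if not n.startswith("_")}
    return sorted(names_a - names_b), sorted(names_b - names_a)

class PandasLike:               # stand-in for pd.DataFrame
    def merge(self): ...
    def pivot(self): ...

class DaskLike:                 # stand-in for dask.dataframe.DataFrame
    def merge(self): ...

only_pandas, only_dask = api_diff(PandasLike, DaskLike)
assert only_pandas == ["pivot"] and only_dask == []
```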
pyfilemods generates a ReStructuredText document with introspected API comparisons. "Identify and compare Python file functions/methods and attributes from os, os.path, shutil, pathlib, and path.py" https://github.com/westurner/pyfilemods
Battery-Electric Heavy-Duty Equipment: It's Sort of Like a Cybertruck
> They’ve created a single platform that can be easily modified to do any number of jobs. For instance, their flagship product, the Dannar 4.00, can accept over 250 attachments from CAT, John Deere, or Bobcat. […] Having interoperability with so many different types of equipment, one platform can easily perform many tasks over the course of a year. This is a huge win for cash strapped municipalities. Why would a company or municipality opt to have a backhoe parked all winter long when it could be doing another job?
Does it have regenerative brakes?
Tools for turning descriptions into diagrams: text-to-picture resources
There's a sphinx extension (sphinxcontrib-sdedit) [1] for sdedit ("Quick Sequence Diagram Editor") which does require Java [2].
CSR: Corporate Social Responsibility
> Proponents argue that corporations increase long-term profits by operating with a CSR perspective, while critics argue that CSR distracts from businesses' economic role.
... The 3 Pillars of Corporate Sustainability: Environmental, Social, Economic https://www.investopedia.com/articles/investing/100515/three...
Three dimensions of sustainability: (Environment (Society (Economy))) https://en.wikipedia.org/wiki/Sustainability#Three_dimension...
What are some of the corporate sustainability reporting standards?
How can I score a candidate portfolio with sustainability metrics in order to impact invest with maximum impact?
> What are some of the corporate sustainability reporting standards?
From https://en.wikipedia.org/wiki/Sustainability_reporting#Initi... :
>> Organizations can improve their sustainability performance by measuring (EthicalQuote (CEQ)), monitoring and reporting on it, helping them have a positive impact on society, the economy, and a sustainable future. The key drivers for the quality of sustainability reports are the guidelines of the Global Reporting Initiative (GRI),[3] (ACCA) award schemes or rankings. The GRI Sustainability Reporting Guidelines enable all organizations worldwide to assess their sustainability performance and disclose the results in a similar way to financial reporting.[4] The largest database of corporate sustainability reports can be found on the website of the United Nations Global Compact initiative.
The GRI (Global Reporting Initiative) Standards are now aligned with the UN Sustainable Development Goals (#GlobalGoals). https://en.wikipedia.org/wiki/Global_Reporting_Initiative
>> In 2017, 63 percent of the largest 100 companies (N100), and 75 percent of the Global Fortune 250 (G250) reported applying the GRI reporting framework.[3]
> How can I score a candidate portfolio with sustainability metrics in order to impact invest with maximum impact?
Does anybody have solutions for this? AFAIU, existing cleantech funds are more hand-picked than screened according to sustainability fundamentals.
GTD Tickler file – a proposal for text file format
Taskwarrior is also built upon the todo.txt format. [1]
Taskw supports various task dates – { due: scheduled: wait: until: recur: } [2]
Taskw supports various named dates like soq/eocq, som/eom (start/end of [current] quarter, start/end of month), tomorrow, later [3]
Taskw recurring tasks (recur:) use the duration syntax: weekly/wk/w, monthly/mo, quarterly/qtr, yearly/yr, … [4]
Pandas has a "date offset" "frequency string" microsyntax that supports business days, quarters, and years; e.g. BQuarterEnd, BQuarterBegin [5]
IDK how usable these date-string parsers are by other tools.
W/ just a text editor, having `todo.txt`, `daily.todo.txt`, and `weekly.todo.txt` (and `cleanhome.todo.txt` and `hygiene.todo.txt` with "## heading" tasks that get lost @where +sorting) works okay.
I have physical 43 folders, too: A 12 month and a 31 day expanding file. [6]
[2] https://taskwarrior.org/docs/using_dates.html
[3] https://taskwarrior.org/docs/named_dates.html
[4] https://taskwarrior.org/docs/durations.html
[5] https://pandas.pydata.org/pandas-docs/stable/user_guide/time...
Ask HN: Any suggestion on how to test CLI applications?
Hello HN!
I've been looking at alternatives for testing command-line applications – specifically, for example, exit codes, output messages, and whatnot. I've seen "bats" https://github.com/sstephenson/bats and Bazel for testing, but I'm curious what other tools people use on a day-to-day basis. UI testing is nice with tools like Cypress.io, and maybe there's something out there that isn't as popular but is useful.
Thoughts?
pytest-docker-pexpect: https://github.com/nvbn/pytest-docker-pexpect
Pexpect: https://pexpect.readthedocs.io/en/stable/
pytest with subprocess.Popen (or Sarge) may be sufficient for checking return codes and the stdout and stderr output streams. pytest has tmp_path and tmpdir fixtures, which provide less test isolation than Docker containers: http://doc.pytest.org/en/latest/tmpdir.html
sarge.Capture.expect() takes a regex and returns None if there's no match: https://sarge.readthedocs.io/en/latest/tutorial.html#looking...
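A minimal stdlib sketch of that approach, using subprocess.run (the high-level wrapper around Popen) against a throwaway `python -c` one-liner standing in for the CLI under test:

```python
import subprocess
import sys

def run_cli(argv):
    """Run a command and capture its exit code and output streams."""
    return subprocess.run(argv, capture_output=True, text=True)

# A stand-in CLI: prints a message and exits with status 2
proc = run_cli([sys.executable, "-c", "import sys; print('hello'); sys.exit(2)"])

# pytest-style assertions on return code, stdout, and stderr
assert proc.returncode == 2
assert proc.stdout.strip() == "hello"
assert proc.stderr == ""
```

Drop `run_cli` into a pytest test function and swap in your real executable's argv.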
The Golden Butterfly and the All Weather Portfolio
The Golden Butterfly (is a modified All Weather Portfolio)
> Stocks: 20% Domestic Large Cap Fund (Vanguard’s VTI or Goldman Sach’s JUST), 20% Domestic Small Cap Value (Vanguard’s VBR)
> Bonds: 20% Long Term (Vanguard’s BLV), 20% Short Term (Vanguard’s BSV)
> Real Assets: 20% Gold (SPDR’s GLD)
The All Weather Portfolio:
> Stocks: 30% Domestic Total Stock Market (VG total stock)
> Bonds: 40% Long Term, 15% Intermediate-Term
> Real Assets: 7.5% Commodities, 7.5% Gold
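A quick sanity check that both allocations above sum to 100% (the dict keys just mirror the lists above):

```python
# Golden Butterfly: five equal 20% slices
golden_butterfly = {"VTI": 20, "VBR": 20, "BLV": 20, "BSV": 20, "GLD": 20}

# All Weather: stock/bond/real-asset split
all_weather = {"stocks": 30, "long_bonds": 40, "mid_bonds": 15,
               "commodities": 7.5, "gold": 7.5}

assert sum(golden_butterfly.values()) == 100
assert sum(all_weather.values()) == 100
```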
Canada's Decision To Make Public More Clinical Trial Data Puts Pressure On FDA
We really could get more out of this data through international collaboration and through linked data (e.g. URIs for columns). See: "Open, and Linked, FDA data" https://github.com/FDA/openfda/issues/5#issuecomment-5392966... and "ENH: Adverse Event Count / 'Use' Count Heatmap" https://github.com/FDA/openfda/issues/49
With sales/usage counts, we'd have a denominator with which we could calculate relative hazard.
Python Alternative to Docker
Shiv does not solve for what containers and Docker/Podman/Buildah/Containerd solve for: re-launching processes at boot and failure, launching processes in chroots or cgroups (with least privileges), limiting access to network ports, limiting access to the host filesystem, building chroots / images, [...]
You can run build tools like shiv with a RUN instruction in a Dockerfile and get some caching.
You can build a zipapp with shiv (in a build container) and run the zipapp in a container.
Should the zipapp contain the test suite(s) and test_requires so that the tests can be run in an environment most similar to production?
It's much easier to develop with code on the filesystem (instead of in a zipapp).
It's definitely faster to read the whole zipapp into RAM than to stat and read each imported module from the filesystem once at startup.
There may be a better post title than the current "Python Alternative to Docker"? Shiv is a packaging utility for building Python zipapps. Shiv is not an alternative to process isolation with containers (or VMs)
$6B United Nations Agency Launches Bitcoin, Ethereum Crypto Fund
"UNICEF launches Cryptocurrency Fund: UN Children’s agency becomes first UN Organization to hold and make transactions in cryptocurrency" https://www.unicef.org/press-releases/unicef-launches-crypto...
From https://www.unicefusa.org/ :
> UNICEF USA helps save and protect the world's most vulnerable children. UNICEF USA is rated one of the best charities to donate to: 89% of every dollar spent goes directly to help children.
Timsort, the Python sorting algorithm
Here are the Python 3 docs for sorting [1], in-place list.sort() [2], and sorted() [3] (which makes a sorted copy of the references). And the Timsort Wikipedia page [4].
[1] https://docs.python.org/3/howto/sorting.html#sort-stability-...
[2] https://docs.python.org/3/library/stdtypes.html#list.sort
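Timsort's stability is what makes multi-key sorting by successive single-key passes work; a small sketch:

```python
data = [("apple", 3), ("pear", 1), ("apple", 1)]

# sorted() returns a new list; list.sort() sorts in place and returns None
assert sorted(data, key=lambda t: t[1])[0] == ("pear", 1)

# Sort by the secondary key first, then the primary key:
# stability preserves the count order within each name.
data.sort(key=lambda t: t[1])
data.sort(key=lambda t: t[0])
assert data == [("apple", 1), ("apple", 3), ("pear", 1)]
```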
Supreme Court allows blind people to sue retailers if websites aren't accessible
"a11y": Accessibility
https://a11yproject.com/ has patterns, a checklist for checking web accessibility, resources, and events.
awesome-a11y has a list of a number of great resources for developing accessible applications: https://github.com/brunopulis/awesome-a11y
In terms of W3C specifications [1], you've got WAI-ARIA (Web Accessibility Initiative – Accessible Rich Internet Applications) [2] and WCAG (Web Content Accessibility Guidelines) [3]. The new W3C Payment Request API [4] makes it easy for browsers to offer a standard (and probably(?) already accessible) interface for the payment data entry screen, at least.
There are a number of automated accessibility testing platforms. "[W3C WAI] Web Accessibility Evaluation Tools List" [5] lists quite a few. Can someone recommend a good accessibility testing tool? Is Google Lighthouse (now included with Chrome DevTools and as a standalone script) a good tool for accessibility reviews?
[1] https://github.com/brunopulis/awesome-a11y/blob/master/topic...
[2] https://www.w3.org/TR/using-aria/
[3] https://www.w3.org/WAI/standards-guidelines/wcag/
Streamlit: Turn a Python script into an interactive data analysis tool
Cool!
requests_cache caches HTTP requests in one SQLite database. [1] pandas-datareader can cache external data requests with requests-cache. [2]
dask.cache can do opportunistic caching (of 2GB of data). [3]
How does streamlit compare to jupyter voila dashboards (with widgets and callbacks)? They just launched a new separate github org for the project. [4] There's a gallery of voila dashboard examples. [5]
> Voila serves live Jupyter notebooks including Jupyter interactive widgets.
> Unlike the usual HTML-converted notebooks, each user connecting to the Voila tornado application gets a dedicated Jupyter kernel which can execute the callbacks to changes in Jupyter interactive widgets.
> - By default, voila disallows execute requests from the front-end, preventing execution of arbitrary code.
[1] https://github.com/reclosedev/requests-cache
[2] https://pandas-datareader.readthedocs.io/en/latest/cache.htm...
[3] https://docs.dask.org/en/latest/caching.html
[4] https://github.com/voila-dashboards/voila
[5] https://blog.jupyter.org/a-gallery-of-voil%C3%A0-examples-a2...
Access control and resource exhaustion are challenges with building any {Flask, framework_x,} app [from Jupyter notebooks]. First it's "HTTP Digest authentication should be enough for now"; then it's "let's use SSO and LDAP" (and review every release); then it's "why is it so sloww?". JupyterHub has authentication backends, spawners, and per-user container/VM resource limits.
> Each user on your JupyterHub gets a slice of memory and CPU to use. There are two ways to specify how much users get to use: resource guarantees and resource limits. [6]
[6] https://zero-to-jupyterhub.readthedocs.io/en/latest/user-res...
Some notes re: voila and JupyterHub:
> The reason for having a single instance running voila only is to allow non JupyterHub users to have access to the dashboards. So without going through the Hub auth flow.
> What are the requirements in your case? Voila can be installed in the single user Docker image, so that each user can also use it on their own server (as a server extension for example). [7]
Scott’s Supreme Quantum Supremacy FAQ
Who even asked these questions?
I question this. All of this.
Literally people on HN have been asking many of these questions for years whenever QC is discussed.
Ask HN: How do you handle/maintain local Python environments?
I'm having some trouble figuring out how to handle my local Python. I'm not asking about 2 vs 3 – that ship has sailed – I'm confused about which binary to be using. The way I see it, there are at least 4 different Pythons I could be using:
1 - Python shipped with OS X/Ubuntu
2 - brew/apt install python
3 - Anaconda
4 - Getting Python from https://www.python.org/downloads/
And that's before getting into how you get numpy et al installed. What's the general consensus on which to use? It seems like the OS X default is compiled with Clang while brew's version is with GCC. I've been working through this book [1] and found this thread [2]. I really want to make sure I'm using fast/optimized linear algebra libraries, is there an easy way to make sure? I use Python for learning data science/bioinformatics, learning MicroPython for embedded, and general automation stuff - is it possible to have one environment that performs well for all of these?
[1] https://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1449319793
[2] https://www.reddit.com/r/Python/comments/46r8u0/numpylinalgsolve_is_6x_faster_on_my_mac_than_on/
I just use Anaconda. It's basically the final word in monolithic Python distributions.
For data science/numerical computation, all batteries are included. It also has fast, optimized linear algebra (MKL), plus extras like dask (parallelization) and numba out of the box. No fuss no muss. No need to fiddle with anything.
Everything else is a "pip install" or "conda install" away. Virtual envs? Has it. Run different Python versions on the same machine via conda environments? Has it. Web dev with Django etc.? All there. Need to containerize? miniconda.
The only downside? It's quite big and takes a while to install. But it's a one time cost.
I also prefer conda for the same reasons.
Precompiled MKL is really nice. Conda and conda-forge now build for aarch64. There are very few wheels for aarch64 on PyPI. Conda can install things like Qt (IPython-qt, Spyder) and NodeJS (JupyterLab extensions).
If I want to switch python versions for a given condaenv (instead of just creating a new condaenv for a different CPython/PyPy version), I can just run e.g. `conda install -y python=3.7` and it'll reinstall everything in the depgraph that depended on the previous python version.
I always just install miniconda instead of the whole anaconda distribution. I always create condaenvs (and avoid installing anything in the root condaenv) so that I can `conda-env export -f environment.yml` and clean that up.
BinderHub ( https://mybinder.org/ ) creates docker containers from {git repos, Zenodo, FigShare,} and launches them in free cloud instances also running JupyterLab by building containers with repo2docker (with REES (Reproducible Execution Environment Specification)). This means that all I have to do is add an environment.yml to my git repo in order to get Binder support so that people can just click on the badge in the README to launch JupyterLab with all of the dependencies installed.
REES supports a number of dependency specifications: requirements.txt, Pipfile.lock, environment.yml, aptSources, postBuild. With an environment.yml, I can install the necessary CPython/PyPy version and everything else.
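For illustration, a minimal environment.yml of the kind REES consumes might look like this (the env name and package names are hypothetical):

```yaml
name: example-project
channels:
  - conda-forge
dependencies:
  - python=3.7          # pin the CPython version
  - numpy               # conda-forge also builds aarch64 packages
  - pip
  - pip:
      - pypi-only-package   # hypothetical PyPI-only dependency
```

Dropping a file like this into a git repo is all BinderHub needs to rebuild the environment with repo2docker.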
...
In my dotfiles, I have a setup_miniconda.sh script that installs miniconda into per-CPython-version CONDA_ROOT and then creates a CONDA_ENVS_PATH for the condaenvs. It may be overkill because I could just specify a different python version for all of the conda envs in one CONDA_ENVS_PATH, but it keeps things relatively organized and easily diffable: CONDA_ROOT="~/-wrk/-conda37" CONDA_ENVS_PATH="~/-wrk/-ce37"
I run `_setup_conda 37; workon_conda|wec dotfiles` to work on the ~/-wrk/-ce37/dotfiles condaenv and set _WRD=~/-wrk/-ce37/dotfiles/src/dotfiles.
Similarly, for virtualenvwrapper virtualenvs, I run `WORKON_HOME=~/-wrk/-ve37 workon|we dotfiles` to set all of the venv cdaliases; i.e. then _WRD="~/-wrk/-ve37/dotfiles/src/dotfiles" and I can just type `cdwrd|cdw` to cd to the working directory. (Some of the other cdaliases are: {cdwrk, cdve|cdce, cdvirtualenv|cdv, cdsrc|cds}. So far, I have implemented cdalias support for bash, IPython, and vim)
One nice thing about defining _WRD is I can run `makew <tab>` and `gitw` to `cd $_WRD; make <tab>` and `git -C $_WRD` without having to change directory and then `cd -` to return to where I was.
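A rough sketch of what those _WRD helpers could look like as shell functions (these are my guesses at the shape, not the actual dotfiles implementations):

```shell
# Hypothetical sketches of the _WRD helpers described above
gitw()  { git -C "${_WRD}" "$@"; }    # run git against $_WRD without cd'ing
makew() { make -C "${_WRD}" "$@"; }   # likewise for make
cdwrd() { cd "${_WRD}"; }             # cd to the working directory
```

The `-C` flag (supported by both git and make) is what avoids the `cd $_WRD; ...; cd -` dance.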
So, for development, I use a combination of virtualenvwrapper, pipsi, conda, and some shell scripts in my dotfiles that I should get around to releasing and maintaining someday. https://westurner.github.io/dotfiles/venv
For publishing projects, I like environment.yml because of the REES support.
Is the era of the $100 graphing calculator coming to an end?
For $100, you can buy a Pinebook with an 11" or 14" screen, a multitouch trackpad, gigabytes of storage, WiFi, a keyboard without a numpad, and an ARM processor.
On this machine, you can create reproducible analyses with JupyterLab; do arithmetic with Python; work with multidimensional arrays with NumPy, SciPy, Pandas, xarray, Dask; do machine learning with Statsmodels, Scikit-learn, Dask-ML, TPOT; create books of these notebooks (containing code and notes (in Markdown, which is easily transformed to HTML) and LaTeX equations) with jupyter-book, nbsphinx, git + BinderHub; store the revision history of your discoveries; publish what you've discovered and learned to public or private git repositories; and complete graded exercises with nbgrader.
But the task is to prepare for a world of mental arithmetic, no validation, no tests, no reference materials, and no search engines; where CAS (Computer Algebra System) tools like SymPy and Sage are not allowed.
On this machine, you can write code, write papers, build spreadsheets and/or Jupyter notebooks, run physical simulations, explore the stars, and play games and watch videos. Videos like Khan Academy videos and exercises that you can watch and do, with validation, until you've achieved mastery and move on to the next task on your todo.txt list.
But the task is to preserve your creativity and natural curiosity despite the compulsory education system's demands for quality control and allocative efficiency; in an environment where drama and popularity are the solutions to relatedness and acceptance needs.
I have three of these $100 calculators in my toolbox. It's been so long since I've powered them on that I'm concerned that the rechargeable AAA batteries are leaking battery acid.
For $100, you can buy an ARM notebook and install conda and conda-forge packages and build sweet visualizations to collaborate with colleagues on (with Seaborn (matplotlib), HoloViews, Altair, Plotly).
"You must buy a $100 calculator that only runs BASIC and ASM, and only use it for arithmetic so that we can measure you."
Hand tools are fun, but please don't waste any more of my compulsory time.
I would LOVE to live in a world where Jupyter notebooks are the de facto standard for post-primary education. It would take some vision and leadership to bring about that sort of change, but it would far better prepare kids for the world they will be running one day.
Reinventing Home Directories
Full Disclosure: I am violently opposed to systemd and pulseaudio (before it was taken away from Poettering).
However, I think I'm seeing a pattern: Poettering identifies real issues and actually tries to fix them.
But it seems like his ambition is impeding his ability to consider those who do not want to approach the fix his way. It's never a measured approach; it /feels/ like a "let's write it now and figure out all the problems down the line and fix them" approach, which, for core pieces of software, is a dangerous mindset.
After watching the intro here, I agree with him: the current state of things is not ideal, and nobody _wants_ to touch user account management on Linux. On macOS, for example, it's much more thought out, and I think Linux could do it better.
As much as I hate Poettering's specific approaches (and the subsequent lockout), I have to give him credit for identifying these issues and actually trying to fix them; it's more than I do. Then again, I am a lowly sysadmin type.
I like the tone of your comment, unlike many others here. But to your remark:
"But it seems like his ambition is impeding his ability to consider those who do not want to approach the fix his way. It's never a measured approach; it /feels/ like a "let's write it now and figure out all the problems down the line and fix them" approach, which, for core pieces of software, is a dangerous mindset."
I'd like to say that he forces nobody, so we should not complain about his creative endeavors. The market will decide, and you will decide.
I disagree. You can argue the semantics of whether it's him specifically or another project that decides it will only target systemd/pulseaudio. But the fact that, if I want a stable distro for server usage in 2019, I _must_ use a systemd distro speaks to the fact that I am indeed forced to use it.
Maybe that's just adoption, though; typically the sysadmin community considers the "safe" options to be CentOS/RHEL or Debian Stable, both of which default to systemd. (And no sysadmin is going to change the init system on production servers unless it presents some abject nightmare, like eating 30% CPU/memory; it's core software, it won't be touched.)
Projects like Devuan are simply not considered safe choices by the majority of the sysadmin community, and there's no alternative for RHEL-esque sites.
So, semantically: it's not Poettering's fault. But at the same time, I'm still forced.
Sorry, but you'll need to find a word different from "forced".
What word would you recommend?
If I must do something, then I am forced, no? And I'm not being facetious; I'm genuinely curious.
If I wish to keep using production grade linux distributions (that my company has been using for 15+ years) then I must adopt systemd.
I guess I could convince my 25,000+ person organisation to change everything to a Debian based OS and subsequently move them to Debian, but that's not likely.
Why do you think that is? Why have production grade Linux distributions all chosen to adopt systemd?
With SysV init, how do you securely launch processes in cgroups, such that they'll consistently restart when the process happens to terminate, with stdout and stderr logged with consistent timestamps, and with process dependency models that allow for faster boots due to parallelization?
(edit)
Journalctl is far better than `tail -f /var/log/**` and parsing all of those timestamps and inconsistently escaped logfile formats. There's no good way to modify everything in /etc/init.d in order to log to syslog-ng or rsyslog. Systemd and journald solve that: unified logging.
IMO, there's no question that systemd is the better way, and I have zero nostalgia for spawning everything from whatever shell is specified in an /etc/init.d shebang, without process restarting (and logging thereof), cgroups, or consistent logging.
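As a minimal sketch of what that buys (the `exampled` daemon and paths here are hypothetical), each of these unit-file directives replaces custom scripting under SysV init:

```ini
# /etc/systemd/system/example.service (illustrative)
[Unit]
Description=Example daemon
After=network.target          # dependency ordering enables parallel boot

[Service]
ExecStart=/usr/local/bin/exampled
Restart=on-failure            # consistent restart when the process dies
StandardOutput=journal        # stdout/stderr captured with timestamps
StandardError=journal

[Install]
WantedBy=multi-user.target
```

The service also runs in its own cgroup automatically, so all of its child processes are tracked and killed together.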
> Why do you think that is? Why have production grade Linux distributions all chosen to adopt systemd?
Because RH funds a lot of software development, and Poettering works for them.
> Journalctl is far better than `tail -f /var/log/**`
Except when journald shits the bed and corrupts its own log file. Or when you want to ship logs off-machine, yet journald does not support any standards-compliant transport, so you end up having to run a second logging daemon (e.g., rsyslog) - in which case, what's the damn point of the first? Or when you do a "service mysql start" and it dies, but there is no output anywhere of what went wrong, and you have to guess that you have a typo in your "my.cnf".
But other than that Mrs. Lincoln, how was the play?
> ... in an /etc/init.d shebang without process restarting (and logging thereof), cgroups, and consistent logging.
The problem is not systemd-as-init-replacement. The problem is systemd-as-kitchen-sink. udevd is an "essential" part of systemd? Really? It can't be its own stand-alone project? Really?
When journald logfile corruption occurs, it's detected and it starts writing a new logfile.
When flatfile logfile corruption occurs, it's not detected and there are multiple logfile formats to contend with. And multiple haphazard logrotate configs.
Here's how to use a separate process to ship journald logs - from one file handle - to a remote logging service: https://unix.stackexchange.com/questions/394822/how-should-i...
While there is a systemd-journal-remote, it's not necessary for journald to try and replicate what's already solved and tested in rsyslog and syslog-ng.
It's quite a bit more work to add every new service to the syslog-ng or rsyslog configuration than to just ship one journald log.
Furthermore, service start/stop events are already in the same stream (with the same timestamp format) with the services' stdout and stderr.
Why hasn't anyone written fsck for corrupted journald recovery?
...
I have not needed to makedev and chown and chattr and chcon anything in very many years. When you accidentally, newbishly delete something from a static /dev and rebooting doesn't work and you have no idea what the major/minor numbers are or were, it sucks bad.
When you're trying to boot a system on a different machine but it doesn't work because the NIC is on a different bus, it's really annoying to have to symlink /dev or modify /etc. With udevd, all you need to do is define a rule to map the bus ID device name to e.g. eth0. I can remember encountering the devfs race condition that resulted in eth0 and eth1 being mapped to different devices on different boots, which was dangerous because firewall rules are applied to device names.
Udev has been standard since kernel 2.6.
"What problems does udev actually solve?" https://superuser.com/questions/686774/what-problems-does-ud...
With integrated udev and systemd, I have no reason to run a separate hotplugd with a different config format (again with no cgroup support) and a different logstream.
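For reference, a persistent-naming rule of the kind described above is a one-liner (the filename and MAC address are placeholders):

```
# /etc/udev/rules.d/70-persistent-net.rules (illustrative)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
```

The rule matches on the NIC's MAC address rather than its bus position, so the name survives a move to a different slot or machine.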
Perhaps ironically, here's a link to the presentation PDF that was posted yesterday: https://news.ycombinator.com/item?id=21036020
And my comments there:
> What a good idea.
> Here's the hyperlinkified link to the {systemd-homed.service, systemd-userdbd.service, homectl, userdbctl} sources from the PDF: https://github.com/poettering/systemd/tree/homed
> Hadn't heard of varlink: https://varlink.org/
> Is there a FIPS-like subset of the most-widely-available LUKS configs? Otherwise home directories won't work on systems that have a limited set of LUKS modules.
Serverless: slower and more expensive
It'd be interesting to see how much this same workload would cost with e.g. OpenFaaS on k8s with autoscaling to zero; but there also you'd need to include maintenance costs like OS and FaaS stack upgrades. https://docs.openfaas.com/architecture/autoscaling/
Entropy can be used to understand systems
Maximum entropy: https://en.wikipedia.org/wiki/Maximum_entropy
Here's a quote of a tweet about a comment (my own) on a schema:BlogPost: https://twitter.com/westurner/status/1048125281146421249:
> “When Bayes, Ockham, and Shannon come together to define machine learning” https://towardsdatascience.com/when-bayes-ockham-and-shannon...
> Comment: "How does this relate to the Principle of Maximum Entropy? How does Minimum Description Length relate to Kolmogorov Complexity?"
Thanks for sharing! Despite the fact that Shannon's "A Mathematical Theory of Communication" is so accessible, I find that most in our field (stats/ML) don't often think through information-theoretic tools in a "first principles way."
Yes, KL divergences show up everywhere, but they are not derived from scratch often enough. Maybe I'm stifled by my campus bubble though :)
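To ground the maximum-entropy principle, a minimal sketch: over a finite set of outcomes, Shannon entropy H(p) = -Σ p_i log2 p_i is maximized by the uniform distribution.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits; 0 * log(0) is taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4            # maximum entropy over 4 outcomes
skewed = [0.7, 0.1, 0.1, 0.1]   # more predictable, so lower entropy

print(shannon_entropy(uniform))  # 2.0 bits = log2(4)
print(shannon_entropy(skewed))   # strictly less than 2.0 bits
```

Any deviation from uniform makes some outcomes more predictable and lowers the entropy, which is why "maximum entropy" is the least-committal prior.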
New Query Language for Graph Databases to Become International Standard
Graph query languages are nice and all, but what about Linked Data here? Queries of schemaless graphs miss lots of data, because without a schema this graph calls it "color" and that graph calls it "colour" and that graph calls it "色" or "カラー". (Of course this is also an issue even when there is a defined schema; but it's hardly possible to just happen to have comprehensible inter- or even intra-organizational cohesion without e.g. RDFS and/or OWL and/or SHACL for describing (and changing) the shape of the data.)
So, the task is then to compile schema-aware SPARQL to GQL or GraphQL or SQL or interminable recursive SQL queries or whatever it is.
For GraphQL, there's GraphQL-LD (which somewhat unfortunately contains a hashtag-indeterminate dash). I cite this in full here because it's very relevant to the GQL task at hand:
"GraphQL-LD: Linked Data Querying with GraphQL" (2018) https://comunica.github.io/Article-ISWC2018-Demo-GraphQlLD/
> GraphQL is a query language that has proven to be popular among developers. In 2015, the GraphQL framework [3] was introduced by Facebook as an alternative way of querying data through interfaces. Since then, GraphQL has been gaining increasing attention among developers, partly due to its simplicity in usage, and its large collection of supporting tools. One major disadvantage of GraphQL compared to SPARQL is the fact that it has no notion of semantics, i.e., it requires an interface-specific schema. This therefore makes it difficult to combine GraphQL data that originates from different sources. This is then further complicated by the fact that GraphQL has no notion of global identifiers, which is possible in RDF through the use of URIs. Furthermore, GraphQL is however not as expressive as SPARQL, as GraphQL queries represent trees [4], and not full graphs as in SPARQL.
> In this work, we introduce GraphQL-LD, an approach for extending GraphQL queries with a JSON-LD context [5], so that they can be used to evaluate queries over RDF data. This results in a query language that is less expressive than SPARQL, but can still achieve many of the typical data retrieval tasks in applications. Our approach consists of an algorithm that translates GraphQL-LD queries to SPARQL algebra [6]. This allows such queries to be used as an alternative input to SPARQL engines, and thereby opens up the world of RDF data to the large amount of people that already know GraphQL. Furthermore, results can be translated into the GraphQL-prescribed shapes. The only additional requirement is their queries would now also need a JSON-LD context, which could be provided by external domain experts.
> In related work, HyperGraphQL [7] was introduced as a way to expose access to RDF sources through GraphQL queries and emit results as JSON-LD. The difference with our approach is that HyperGraphQL requires a service to be set up that acts as an intermediary between the GraphQL client and the RDF sources. Instead, our approach enables agents to directly query RDF sources by translating GraphQL queries client-side.
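To make the idea concrete: a GraphQL-LD query like `{ label color }` would carry a JSON-LD context that maps each field name to a URI - roughly like this sketch (the `color` URI is a made-up example vocabulary):

```json
{
  "@context": {
    "label": "http://www.w3.org/2000/01/rdf-schema#label",
    "color": "http://example.org/vocab#color"
  }
}
```

With that context, an engine can expand the query's fields into SPARQL triple patterns over RDF data, per the translation algorithm the paper describes.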
All of these RDFS vocabularies and OWL ontologies provide structure that minimizes the costs of merging and/or querying multiple datasets: https://lov.linkeddata.es/dataset/lov/
All of these schema.org/Dataset s in the "Linked Open Data Cloud" are easier to query than a schemaless graph: https://lod-cloud.net/ . Though one can query schemaless graphs with SPARQL, as well.
For reference, RDFLib has a bunch of RDF graph implementations over various key/value and SQL store backends. RDFLib-sqlalchemy does query parametrization correctly in order to minimize the risk of query injection. For the record, SQL injection is #1 on the CWE Top 25 list of most prevalent security weaknesses; which is something that any new spec and implementation should really consider before launching anything other than, e.g., an overly-verbose JSON-based query language that people end up bolting a micro-DSL onto. https://github.com/RDFLib/rdflib-sqlalchemy
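The parametrization point generalizes beyond RDFLib-sqlalchemy. A minimal stdlib sketch of binding parameters instead of interpolating strings (sqlite3 here is just for illustration; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE things (id INTEGER, colour TEXT)")
conn.execute("INSERT INTO things VALUES (1, 'red')")

user_input = "red' OR '1'='1"   # hostile input

# Unsafe: string interpolation invites CWE-89 (SQL injection):
#   f"SELECT id FROM things WHERE colour = '{user_input}'"

# Safe: the driver binds the value; it is never parsed as SQL
rows = conn.execute(
    "SELECT id FROM things WHERE colour = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the hostile string matches nothing
```

The same discipline applies to SPARQL engines: build queries from bound parameters, never from concatenated user strings.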
Most practically, I frequently want to read a graph of objects into RAM; update, extend, and interlink; and then transactionally save the delta back to the store. This requires a few things: (1) an efficient binary serialization protocol like Apache Arrow (SIMD), Parquet, or any of the BSON binary JSONs; (2) a transactional local store that can be manually synchronized with the remote store until it's consistent.
SPARQL Update was somewhat of an out-of-scope afterthought. Here's SPARQL 1.1 Update: https://www.w3.org/TR/sparql11-update/
Here's SOLID, which could be implemented with SPARQL on GQL, too; though all the re-serialization really shouldn't be necessary for EAV triples with a named graph URI identifier: https://solidproject.org/
5 star data: PDF -> XLS -> CSV -> RDF (GQL, AFAIU (but with no URIs(!?))) -> LOD https://5stardata.info/en/
Linked Data tends to live in a semantic web world that has a lot of open world assumptions. While there are a few systems like this out there, there aren't many. More practically focused systems collapse this worldview down into a much simpler model, and property graphs suit just fine.
There's nothing wrong with enabling linked data use cases, but you don't need RDF+SPARQL+OWL and the like to do that.
The "semantic web stack" I think has been shown by time and implementation experience to be an elegant set of standards and solutions for problems that very few real world systems want to tackle. In the intervening 2 full generations of tech development that have happened since a lot of those standards were born, some of the underlying stuff too (most particularly XML and XML-NS) went from indispensable to just plain irritating.
> Linked Data tends to live in a semantic web world that has a lot of open world assumptions. While there are a few systems like this out there, there aren't many. More practically focused systems collapse this worldview down into a much simpler model, and property graphs suit just fine.
Data integration is cost-prohibitive. In n years' time, the task becomes: "Let's move all of these data silos into a data lake housed in our singular data warehouse; and then synchronize and also copy data around to efficiently query it in one form or another."
Linked data enables data integration from day one: it enables the linking of tragically silo'd records within disparate databases.
There are very very many systems that share linked data. Some only label some of the properties with URIs in templates. Some enable federated online querying.
When you develop a schema for only one application implementation, you're tragically limiting the future value of the data.
> There's nothing wrong with enabling linked data use cases, but you don't need RDF+SPARQL+OWL and the like to do that.
Can you name a property graph use case that cannot be solved with RDFS and SPARQL?
> The "semantic web stack" I think has been shown by time and implementation experience to be an elegant set of standards and solutions for problems that very few real world systems want to tackle.
TBH, I think the problem is that people don't understand the value in linking our data silos through URIs; and so they don't take the time to learn RDFS or JSON-LD (which is pretty simple and useful for very important things like SEO: search engine result cards come from linked data embedded in HTML attributes (RDFa, Microdata) or JSON-LD)
The action buttons to 'RSVP', 'Track Package', and 'View Issue' on Gmail emails are schema.org JSON-LD.
Applications can use linked data in any part of the stack: the database, the messages on the message queue, in the UI.
You might take a look at all of the use cases that SOLID solves for and realize how much unnecessary re-work has gone into indexing structs and forms validation. These are all the same app with UIs for interlinked subclasses of https://schema.org/Thing with unique inferred properties and aggregations thereof.
> In the intervening 2 full generations of tech development that have happened since a lot of those standards were born, some of the underlying stuff too (most particularly XML and XML-NS) went from indispensable to just plain irritating.
Without XSD, for example, we have no portable way to share complex fractions.
There's a compact representation of JSON-LD that minimizes per-record schema overhead (which gzip or lzma generally handle anyway).
https://lod-cloud.net is not a trivial or insignificant amount of linked data: there's real value in structuring property graphs with standard semantics.
Are our brains URI-labeled graphs? Nope, and we spend a ton of time talking to share data. Eventually, it's "well, let's just get a spreadsheet and define some columns" for these property graph objects. And then the other teams' spreadsheets have very similar columns with different labels and no portable datatypes (instead of URIs).
> Can you name a property graph use case that cannot be solved with RDFS and SPARQL?
No - that's not the point. Of course you can do it with RDFS + SPARQL. For that matter you could do it with redis. Fully beside the point.
What's important is which way of doing things is more fluent and easy. People vote with their feet, and property graphs are demonstrably easier to work with for most use cases.
“Easier” is completely subjective, no way you can demonstrate that.
RDF solves a much larger problem than just graph data model and query. It addresses data interchange on the web scale, using URIs, zero-cost merge, Linked Data etc.
> “Easier” is completely subjective, no way you can demonstrate that.
I agree it's subjective. While there's no exact measurement for this sort of thing, the proxy measure people usually use is adoption; and if you look at, for example, Cypher vs. SPARQL adoption, or Neo4j vs. RDF store adoption, people are basically voting with their feet.
From my personal experiences developing software with both, I've found property graphs much simpler and a better map for how people think of data.
It's true that RDF tries to solve data interchange on the web scale. That's what it was designed for. But the original design vision, in my view, hasn't come to fruition. There are bits and pieces that have been adopted to great effect (things like RDF microformats for tagging HTML docs) but nothing like what the vision was.
What was the vision?
The RDFJS "Comparison of RDFJS libraries" wiki page lists a number of implementations; though none for React or AngularJS yet, unfortunately. https://www.w3.org/community/rdfjs/wiki/Comparison_of_RDFJS_...
There's extra work to build general purpose frameworks for Linked Data. It may have been hard for any firm with limited resources to justify doing it the harder way (for collective returns)
Dokieli (SOLID (LDP,), WebID, W3C Web Annotations,) is a pretty cool - if deceptively simple-looking - showcase of what's possible with Linked Data; it just needs some CSS and a revenue model to pay for moderation. https://dokie.li/
> property graphs are demonstrably easier to work with for most use cases.
How do you see property graphs as distinct from RDF?
People build terrible apps without schema or validation and leave others to clean that up.
> How do you see property graphs as distinct from RDF?
This is the full answer: https://stackoverflow.com/a/30167732/2920686
I added an answer in context to the comments on the answer you've linked but didn't add a link from the comments to the answer. Here's that answer:
> (in reply to the comments on this answer: https://stackoverflow.com/a/30167732 )
> When an owl:inverseOf production rule is defined, the inverse property triple is inferred by the reasoner either when adding or updating the store, or when selecting from the store. This is a "materialized relation"
> Schema.org - an RDFS vocabulary - defines, for example, https://schema.org/isPartOf as the inverse property of hasPart. If both are specified, it's not necessary to run another graph pattern query to traverse a directed relation in the other direction. (:book1 schema:hasPart ?o), (?o schema:isPartOf :book1), (?s schema:hasPart :chapter2)
> It's certainly possible to use RDFS and OWL to describe schema for and within neo4j property graphs; but there's no reasoner to e.g. infer inverse properties or do schema validation.
> Is there any RDF graph that neo4j cannot store? RDF has datatypes and languages for objects: you'd need to reify properties where datatypes and/or languages are specified (and you'd be re-implementing well-defined semantics)
> Can every neo4j graph be represented with RDF? Yes.
> RDF is a representation for graphs for which there are very many store implementations that are optimized for various use cases like insert and query performance.
> Comparing neo4j to a particular triplestore (with reasoning support) might be a more useful comparison given that all neo4j graphs can be expressed as RDF.
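In Turtle, the hasPart/isPartOf materialization from that answer looks roughly like this (:book1 and :chapter2 are illustrative identifiers):

```turtle
@prefix :       <http://example.org/> .
@prefix schema: <https://schema.org/> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .

schema:isPartOf owl:inverseOf schema:hasPart .

:book1 schema:hasPart :chapter2 .
# An OWL reasoner materializes the inverse triple:
#   :chapter2 schema:isPartOf :book1 .
```

With the inverse triple materialized, traversals in either direction are single graph-pattern lookups.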
And then, some time later, I realize that I want/need to: (3) apply production rules to do inference at INSERT/UPDATE/DELETE time or SELECT time (and indicate which properties were inferred (x is a :Shape and a :Square, so x is also a :Rectangle; x is a :Rectangle and :width and :height are defined, so x has an :area)); (4) run triggers (that execute code written in a different language) when data is inserted, updated, modified, or linked to; (5) asynchronously yield streaming results to message queue subscribers who were disconnected when the cached pages were updated
A Python Interpreter Written in Python
What an excellent 500-line introduction to the byterun bytecode interpreter / virtual machine: https://github.com/nedbat/byterun
Also, proceeds from optional purchases of the AOSA books go to Amnesty International. https://aosabook.org/
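For a quick look at the bytecode that a VM like byterun interprets, the stdlib's dis module disassembles any function (exact opcode names vary across CPython versions):

```python
import dis

def add(a, b):
    return a + b

# Print the human-readable disassembly of add()'s CPython bytecode
dis.dis(add)

# Or inspect the instruction stream programmatically
opnames = [ins.opname for ins in dis.Bytecode(add)]
print(opnames)
```

A byterun-style interpreter is essentially a loop over this instruction stream, dispatching on each opname against a value stack.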
Reinventing Home Directories – systemd-homed [pdf]
What a good idea.
Here's the hyperlinkified link to the {systemd-homed.service, systemd-userdbd.service, homectl, userdbctl} sources from the PDF: https://github.com/poettering/systemd/tree/homed
Hadn't heard of varlink: https://varlink.org/
Is there a FIPS-like subset of the most-widely-available LUKS configs? Otherwise home directories won't work on systems that have a limited set of LUKS modules.
Weld: Accelerating numpy, scikit and pandas as much as 100x with Rust and LLVM
There's also RustPython, a Rust implementation of CPython 3.5+: https://news.ycombinator.com/item?id=20686580
> https://github.com/RustPython/RustPython
Is this basically what Cython and PyPy are trying to do, but with Rust?
PyPy is a JIT compiler. RustPython is an interpreter.
And Cython is an AOT compiler for a superset of Python.
RustPython seems to be modestly aiming for a reimplementation of CPython.
Maybe "dialect" would be more accurate than "superset"? I don't think Cython is technically a superset of Python, since I think runtime metaprogramming features like __dict__ and monkey-patching are significantly altered or restricted?
Cython is a reification of the interpretation of a Python program. I.e., it converts the Python code into the equivalent CPython API calls (which are all in C), thereby allowing the developer to intersperse real C code. Anything you could do in Python you could technically do in Cython, although it would be much more verbose.
Yeah, I was going to make a similar comment. It's a dialect of CPython, and certainly there are extensions required to make it usable. But I'm not sure it is a strict superset of the full Python language.
Convention would lean towards calling it RPython.
Why?
Craftsmanship–The Alternative to the 4 Hour Work Week
> To be successful over the course of a career requires the application and accumulation of expertise. This assumes that for any given undertaking you either provide expertise or you are just a bystander. It’s the experts that are the drivers — an expertise that is gained from a curiosity, and a mindset of treating one’s craft very seriously.
Solar and Wind Power So Cheap They’re Outgrowing Subsidies
USD$5.2 trillion was spent globally on fossil fuel subsidies in 2017
https://www.imf.org/en/Publications/WP/Issues/2019/05/02/Glo...
"hey fossil fuels, let's 1v1" sincerely, solar
(From Forbes: United States Spends Ten Times More on Fossil Fuel Subsidies Than Education)
A few things:
1) That's not what was spent, it's what this paper projected was spent.
2) I think this paper is defining subsidy in a way that we don't usually use that word. They're calling the costs associated with anthropogenic climate change and pollution "subsidies", most people would just call those things "costs."
Using the word "subsidy" implies that governments are actively taking tax dollars and giving it to fossil fuel companies and consumers. That doesn't appear to be what's going on for the most part.
Look at Figure 4, the bulk of the so-called "subsidies" are "global warming" and "local pollution" those aren't what most people would call subsidies, they're costs.
I'm saying this just to clarify things, personally I am supportive of massive tax increases on carbon and massive subsidies for renewables.
It’s an unconventional way to use “subsidy,” but I think it fits.
Imagine a garbage company. The government pays them so they can buy land where they dump their garbage. Obviously a subsidy.
Now, let’s say the government buys the land themselves then gives it to the company for dumping. No money changes hands but this is still pretty clearly a subsidy.
Instead of giving the company land, the government retains ownership, but lets the company dump there for free. Still a pretty clear subsidy.
Instead of buying the land, the government just takes it. Now no money is involved at all, but it’s still a subsidy.
Instead of taking the land, the government just declares that it’s legal for the garbage company to dump trash on other people’s land, and the owners just have to deal with it. This is quite different in the details from the original subsidy, but the overall effect is essentially the same.
Polluters are in a situation that’s exactly like this last scenario. They get to dump their trash on everyone’s property and don’t have to pay for the privilege. They’re being subsidized in an amount equal to whatever payment it would take to get everyone to willingly accept this trash.
This is wrong because subsidies aren't just externalized costs. They are defined as monetary gifts. You're extending the definition by analogy and arguing that a company externalizing its costs is the same thing as a subsidy, but a subsidy implies direct and deliberate support.
This is important for a couple of reasons: The main one is that it's confusing to redefine subsidy when you can just say "cost" and have people understand you. People reading this survey might see the $5.2T number and assume that that cost is in addition to whatever the cost of climate change is and will have to read the paper to understand otherwise. This is unnecessarily confusing even if one were to grant the logic of it.
In addition, when people discuss subsidies, they are often most interested in government policy. The purpose of a subsidy often is to increase a certain kind of business, so we might worry about unnecessarily funding fossil fuels and thus encouraging climate change in a manner above and beyond simply allowing them to be used the way we've always done, but that's not quite what's going on here.
One thing to keep in mind is that it's a lot easier to measure a genuine government subsidy than an externality. So the distinction matters in that regard as well. Any measure of the cost of climate change is to at least some degree speculation, whereas any attempt to measure the direct amount of money given to fossil fuel companies can probably be much more exact.
Gifts in kind aren’t subsidies? If the government gives equipment or resources or labor, rather than money, it’s no longer a subsidy? That sure doesn’t fit how I understand the term. What would you call those?
"Gifts in kind aren’t subsidies?"
They can be but that's less common. The key difference is that a subsidy is direct and an active policy.
For example, when people talk about subsidies for renewables, they aren't talking about any externalized costs of manufacture, which do exist, they are talking about direct government gifts and tax breaks deliberately put in place to encourage investment in renewables. When people talk about subsidies given to fossil fuels the same is true, especially when they are being compared to renewables as is the case in this discussion.
Edit: removed word "decision" to clarify my meaning
Then it's a gift in kind and a subsidy.
When I litter, I get fined, but those companies are not when they litter e.g. carbon all over. When I dump chemicals into nature, I get fined for polluting the environment or even imprisoned outright; those companies do not, e.g. when it's in the form of "emissions". Cars in my country get taxed directly or indirectly (through fuel) based in part on emissions, while a lot of commercial vehicles and fuels for these vehicles get a reduced rate or are even excluded from taxation. But cars here are taxed far less than in other European countries. etc.
The governments are clearly aware that pollution and dumping your garbage are things you should not do or at least minimize. They made laws against it, but actively decided to exclude certain business sectors and/or certain types of pollution, or actively decided not to regulate or tax certain types of pollution while regulating/taxing others.
"Cars in my country get taxed directly or indirectly (through fuel) based in part on emissions"
Unless these emission taxes are calibrated to the cost of climate change then this argument is missing the point. Taxes and subsidies are often instituted in response to negative and positive externalities, but that doesn't change the fact that they are different things. This is important when trying to draw policy comparisons, which is our situation here.
You seem to want to argue that subsidizing a business and not taxing them on an externality is somehow morally the same thing, and that's an entirely different discussion, but it doesn't mean that they are factually the same thing. There is a practical difference in terms of how things are measured and how policies are compared across industries and governments and that difference matters.
>Unless these emission taxes are calibrated to the cost of climate change then this argument is missing the point.
The stated policy goal in part is to reduce emissions to meet climate targets to fight climate change, so yes.
>You seem to want to argue that subsidizing a business and not taxing them on an externality is somehow morally the same thing
It is.
>but it doesn't mean that they are factually the same thing.
First of all, the meanings of words and political concepts are never factual.
But I'd still argue that at a high level they are the same, and both are subsidies. In both cases the government refuses money it would otherwise collect from different parties, thereby gifting those entities value you can put a price tag on.
Those decisions are active decisions NOT to do something (while doing something about the same thing or very similar things when it comes to other parts of the population), at least at this point.
The only distinction I'd make is between direct subsidies (the government forks over money) and indirect/implicit subsidies (the government decides not to make certain entities pay for certain things for which other entities have to pay the government).
">but it doesn't mean that they are factually the same thing.
First of all, the meaning of words and political concepts are never factual."
You're still missing the point. The point is that there is a distinction between a subsidy and an externality and that distinction is important. It's important for measurement reasons (The exact monetary amount a government spends on something is easier to measure than the indirect cost of a policy,) and for simple communication reasons. It makes no sense to talk about instituting a tax to cover a subsidy. You institute a tax to cover an externality. It also matters because there are ways of dealing with externalities other than taxes and subsidies and reducing the language makes this more confusing. It's especially confusing when the distinction is made in one discussion (about renewables) but not in the other (about fossil fuels.)
There is a term for using an unexpected definition for a word that already has a widely used definition during a discussion: a 'stipulative definition'. It's dishonest to do so without being clear upfront, or in response to a discussion where the original definition is in use; this results in equivocation. Whether or not a subsidy is morally equivalent to an externality is a moot point if you're willing to toss the language about. My work involves financial reporting, and if my employer asked for one set of numbers and I gave him another that I argued was 'morally equivalent', I would pretty clearly be in the wrong, even if the point I was making about moral equivalence was correct.
No, you're splitting hairs.
There are direct and indirect subsidies. Indirect subsidies include externalities: external costs paid by everyone else (costs the government should be incentivizing reductions in by requiring the folks causing them to pay).
Semantic digressions aside, they're earning while everyone else pays costs resultant from their operations (and from our apparent inability to allocate with e.g. long term security, health, and prosperity as primary objectives for the public sphere)
Handing money to polluters helps polluters, and failing to disincentivize externalities also helps polluters, but it's OK to call one thing a subsidy and the other thing poor governance.
"Subsidy" carries a connotation of purposeful action to help something. Subsidizing a bad thing is worse than merely allowing it to happen or looking the other way. It seems like you want to re-label things in "B" by the label for "A" to make it sound worse.
>"Subsidy" carries a connotation of purposeful action to help something.
Purposeful action implies awareness and intention.
Externality implies not yet recognized.
Having 2 words for similar concepts does not mean they are different concepts. "heavy" and "massive" are two different words but they convey essentially the same thing.
I would agree that unrecognized externalities are not subsidies, say CO2 pollution before it was recognized, but then again that's because they were literally not recognized as externalities (yet).
But the very moment CO2 pollution is recognized, the previously unrecognized externality is to be instantly viewed as a subsidy.
History repeats itself, the first time as a tragedy, from then on as a farce.
[deleted]
Why do subsidies have to be an active policy decision?
You’re right, when people talk about fossil fuel subsidies, that’s thinking of government gifts and tax breaks. My point is that this is a deeply inadequate way to think about it. If you only look at these, and compare fossil fuels to renewables, you’ll come away thinking that fossil fuels are far cheaper than they actually are.
"If you only look at these, and compare fossil fuels to renewables, you’ll come away thinking that fossil fuels are far cheaper than they actually are."
In the case of this discussion, you're looking at it backwards. The subject of fossil fuel subsidies was brought up in comparison to subsidies for renewables, which in this case were already defined as specific government programs. If it weren't for that, I wouldn't be such a stickler. However, given that we are comparing renewables to fossil fuels, we need to be sure that we are measuring the same thing, and to include externalities when measuring fossil fuels but not when measuring renewables is not to measure the same thing. (I actually got confused when I first saw the IMF link, so I know my concern isn't hypothetical.)
Yes, I get that the case can be made that the cost of using fossil fuels is more than the sum of direct gifts by the government to fossil fuel companies (fucking duh!), but that can be expressed without implying that governments gave $5.2T in tax dollars directly to fossil fuel companies in 2017. Where confusion is possible, it's best to make distinctions and clarify what is meant.
"subsidies" includes both direct and indirect subsidies.
We can measure direct subsidies by measuring real and effective tax rates.
We can measure indirect subsidies like healthcare costs paid by Medicare with subjective valuations of human life and rough estimates of the value of a person's health and contribution to growth in GDP, and future economic security.
But who has the time for this when we're busy paying to help folks who require disaster relief services from the government and NGOs (neither of which are preventing further escalations in costs)
Problem is that if the word "subsidies" is used exclusively to mean a specific and narrow concept around cash transfers when talking about renewables, but then you use the term to include a much broader set of scenarios when talking about fossil fuels, while directly comparing the derived numbers, then you're misusing the language in order to deceive, regardless of all else. You're communicating that the numbers represent the same calculations for both forms of energy, when you know that they actually don't.
Define the terms however you like, as long as you are careful not to induce people to believe something that is untrue.
Show HN: Python Tests That Write Themselves
This actually gave me another idea. What do you all think, I’d be up for trying to build it.
It would watch your program during execution and record each function call's inputs and outputs, and then create tests for each function using those inputs and outputs.
You could always go over the created tests manually and fix or review them, but if nothing else it could be a good start.
Instagram created 'MonkeyType' to do something similar for type annotations.
A lower barrier way of trying this out could be to run MonkeyType in combination with OP's hypothesis-auto (which uses type annotations to generate the tests).
pytype (Google) [1], PyAnnotate (Dropbox) [2], and MonkeyType (Instagram) [3] all do dynamic / runtime PEP-484 type annotation type inference [4]
[1] https://github.com/google/pytype
[2] https://github.com/dropbox/pyannotate
Most Americans see catastrophic weather events worsening
The stratifications on this are troubling.
> But there are wide differences in assessments by partisanship. Nine in 10 Democrats think weather disasters are more extreme, compared with about half of Republicans.
It's not a partisan issue: we all pay these costs.
> Majorities of adults across demographic groups think weather disasters are getting more severe, according to the poll. College-educated Americans are slightly more likely than those without a degree to say so, 79 percent versus 69 percent.
Weather disasters are getting more severe. It is objectively, quantitatively true that weather disasters are getting more frequent and more severe.
> Weather disasters are getting more severe. It is objectively, quantitatively true that weather disasters are getting more frequent and more severe.
Source? What definitions are being used for severity? How is the sample of events selected? Is there a statistically-significant effect or might it be random variation?
> Source? What definitions are being used for severity? How is the sample of events selected? Is there a statistically-significant effect or might it be random variation?
These are great questions that any good skeptic / data scientist should always be asking. Here are some summary opinions based upon meta analyses with varyingly stringent inclusion criteria.
( I had hoped that the other top-level post I posted here would develop into a discussion, but these excerpts seem to have bubbled up. https://news.ycombinator.com/item?id=20919368 )
"Scientific consensus on climate change" lists concurring, non-commital, and opposing groups of persons with and without conflicting interests: https://en.wikipedia.org/wiki/Scientific_consensus_on_climat...
USGCRP, "2017: Climate Science Special Report: Fourth National Climate Assessment, Volume I" [Wuebbles, D.J., D.W. Fahey, K.A. Hibbard, D.J. Dokken, B.C. Stewart, and T.K. Maycock (eds.)]. U.S. Global Change Research Program, Washington, DC, USA, 470 pp, doi: 10.7930/J0J964J6.
"Chapter 8: Droughts, Floods, and Wildfire" https://science2017.globalchange.gov/chapter/8/
"Chapter 9: Extreme Storms" https://science2017.globalchange.gov/chapter/9/
"Appendix A: Observational Datasets Used in Climate Studies" https://science2017.globalchange.gov/chapter/appendix-a/
The key findings in this report do list supporting evidence and degrees of confidence in predictions about the frequency and severity of severe weather events.
I'll now proceed to support the challenged claim that disaster severity and frequency are increasing by citing disaster relief cost charts, which do not directly support the claim. Unlike your typical televised debate or congressional session, I have visual aids, a computer, and links to the sources I've referenced. Finding the datasets ( https://schema.org/Dataset ) for these charts may be something that someone has time for; meanwhile, the costs to taxpayers and insurance holders are certainly increasing for a number of reasons.
"Taxpayer spending on U.S. disaster fund explodes amid climate change, population trends" (2019) has a nice chart displaying "Disaster-relief appropriations, 10-year rolling median" https://www.washingtonpost.com/us-policy/2019/04/22/taxpayer...
"2018's Billion Dollar Disasters in Context" includes a chart from NOAA: "Billion-Dollar Disaster Event Types by Year (CPI-Adjusted)" with the title embedded in the image text - which I searched for - and eventually found the source of: [1] https://www.climate.gov/news-features/blogs/beyond-data/2018...
[1] "Billion-Dollar Weather and Climate Disasters: Time Series" (1980-2019) https://www.ncdc.noaa.gov/billions/time-series
Thank you!
It seems these data are mostly counted in dollars of damage or dollars of relief, which is a proxy for the severity.
Would it be correct to say there is still some question about whether dollars are a good measure of severity?
EDIT: As I am browsing the data, it's hard to disentangle actual weather events from things like lava, fires, unsound building decisions, and just the politics of money moving around.
> It is objectively, quantitatively true that weather disasters are getting more frequent and more severe.
From the article, storms may be getting slightly more severe - but "more frequent" is incorrect:
> Scientific studies indicate a warming world has slightly stronger hurricanes, but they don’t show an increase in the number of storms hitting land, Colorado State University hurricane researcher Phil Klotzbach said. He said the real climate change effect causing more damage is storm surge from rising seas, wetter storms dumping more rain and more people living in vulnerable areas.
Now, we all should do what we can to address it - but we all should also examine whether we're picking and choosing only scientific observations that support our feelings/pre-conceived notions while discarding the rest of the data.
The article seems to have focused on perceptions of persons who aren't concerned with taking an evidence-based look at various types of storms (floods, cyclones (i.e. hurricanes), severe thunderstorms, windstorms). Regardless, costs are increasing. I've listed a few sources here: https://news.ycombinator.com/item?id=20925127
"2017: Climate Science Special Report: Fourth National Climate Assessment, Volume I" > "Chapter 9: Extreme Storms" lists a number of relevant Key Findings with supporting evidence (citations) and degrees of confidence: https://science2017.globalchange.gov/chapter/9/
> we all pay these costs
... if there actually are costs to pay. You seem to be sidestepping the actual question here.
"Billion-Dollar Weather and Climate Disasters: Time Series" (1980-2019) https://www.ncdc.noaa.gov/billions/time-series
> The stratifications on this are troubling.
They are, yet almost no one I know of (in leadership positions, or the general public) seems to think they're troubling enough to put any serious effort into determining with some level of certainty why there is so much stratification along political lines on so many important issues, this being only one of them. "Conservatives are uneducated" sounds about right so that's what we'll go with it seems, regardless of whether it's actually correct. That belief may be right, but it may not.
> It's not a partisan issue: we all pay these costs.
In one perspective, but as is usually the case, there are several perspectives involved. Another non-trivial perspective is that the solution requires non-partisan cooperation, and from that perspective partisanship is not only an issue, it is a crucially important issue, so we might be well served by putting some effort into understanding the true nature of it.
> Weather disasters are getting more severe. It is objectively, quantitatively true that weather disasters are getting more frequent and more severe.
For certain definitions of "more", "severe", "objectively", "true", and "frequent".
So if the problem isn't simply that "conservatives are uneducated", what else might it be? I think the answer lies in the incredibly complex manner in which people perceive the world in general, what information they consume, how they perceive that information, and how they integrate it into their personal internal overall worldview. People think they think in facts, but they actually think in stories, and in turn this affects how they consume new information.
Take this simple article as an example, and notice the variety of perspectives we see already with just a few (14 at the time of my writing) comments posted in this thread. Notice how people aren't discussing just the content of the article, but rather including related ideas from past information they have consumed and stored (the mechanics of which we do not understand) within the mental model of the world they hold in their brain. An even better example of this behavior can be seen in the recent discussion on the CDC report on vaping: https://news.ycombinator.com/item?id=20915520 Observe all the complex perspectives based on the very same "facts" in the article. Observe all the assertions of related "facts", that are actually only opinions. Observe all the mind reading.
If you start looking for this phenomenon you will notice it everywhere, and if you pay closer attention you may also notice that certain topics are particularly prone to devolving into narrative (rather than fact) based conversation. The usual suspects are obvious: religion, gender, sexuality, etc, but smoking/vaping seems to me like somewhat of an interesting outlier. The former examples are very closely tied to personal identity, but while smoking/vaping shares an identity attribute to some degree, I feel like there's some other unseen psychological issue in play that results in such high polarization of beliefs.
My guess is you and I see very different things despite consuming the very same article. I lean conservative/libertarian (generally speaking), and I am deeply distrustful of government (for extremely good reasons I believe), so I know for a fact that my interpretation of the article is going to be heavily distorted by that. Any logical inconsistency, ambiguousness, disingenuousness, technical dishonesty, or anything else along those lines is going to get red flagged in my mind, whereas others will read it in a much more forgiving fashion. And in an article on a different political hot topic, we will switch our behaviors.
In such threads, I think it would be extremely interesting for people with opposing views to post excerpts of the parts that "catch your attention", with an explanation of why. This is kind of what happens anyway, but I'm thinking with a completely different motive: rather than quoting excerpts with commentary to argue your ~political side of the issue with the goal of "winning the argument", take an unemotional, more abstract view of your personal cognitive processing of the article, and post commentary on ~why/how you believe you feel you consider that important on a psychological level. Psychological self-analysis is famously difficult, but even with moderate success I suspect some very interesting things would rise to the surface.
> My guess is you and I see very different things despite consuming the very same article. I lean conservative/libertarian (generally speaking),
HN specifically avoids politics. In the context of the article, when you say "conservative/libertarian" do you mean: fiscally conservative (haven't seen a deficit hawk in decades other than "Read my lips. No new taxes" followed by responsibly raising taxes), socially libertarian (Liberty as a fundamental right; if you're not violating the rights of others the government is not obligated or even granted the right to intervene at all), or conservative as in imposing your particular traditional standard of moral values which you believe are particular to a particular side of the aisle?
Or, do you mean that you're libertarian in regards to the need and the right to regulate business and industry in the interest of consumers ("laissez faire")? I'm certainly not the only person to observe that lack of regulation results in smog-filled cities due to un-costed 'externalities' in a blind pursuit of optimization for short-term profit.
At issue here, I think, is whether we think we can avert future escalations of costs by banding together to address climate change now; and how best to achieve the Paris Agreement targets that we set for ourselves (despite partisan denial, delusion, and indifference to increasing YoY costs [1]) https://en.wikipedia.org/wiki/Paris_Agreement
I'm personally and financially far more concerned about the long-term costs of climate change than a limited number of special interests who can very easily diversify and/or divest to take advantage of the exact same opportunities.
> and I am deeply distrustful of government (for extremely good reasons I believe), so I know for a fact that my interpretation of the article is going to be heavily distorted by that. Any logical inconsistency, ambiguousness, disingenuousness, technical dishonesty, or anything else along those lines is going to get red flagged in my mind, whereas others will read it in a much more forgiving fashion. And in an article on a different political hot topic, we will switch our behaviors.
While governments (and militaries (TODO)) do contribute substantially to emissions and resultant climate change, I think unregulated decisions by industry should be the primary focus here. Industry has done far more to cause climate change than governments (which can more efficiently provide certain services useful to all citizens)
> In such threads, I think it would be extremely interesting for people with opposing views to post excerpts of the parts that "catch your attention", with an explanation of why. This is kind of what happens anyway, but I'm thinking with a completely different motive: rather than quoting excerpts with commentary to argue your ~political side of the issue with the goal of "winning the argument", take an unemotional, more abstract view of your personal cognitive processing of the article,
These people aren't doing jack about the problem because they haven't reviewed this chart: "Billion-Dollar Weather and Climate Disasters: Time Series" (1980-2019) https://www.ncdc.noaa.gov/billions/time-series
Maybe they want insurance payouts, which result in higher premiums. Maybe the people who built in those locations should be paying the costs.
> and post commentary on ~why/how you believe you feel you consider that important on a psychological level. Psychological self-analysis is famously difficult, but even with moderate success I suspect some very interesting things would rise to the surface.
They don't even care because they refuse to accept that it's a problem.
The article was ineffectual at addressing the very real problem.
From https://news.ycombinator.com/item?id=20925127 :
> ( I had hoped that the other top-level post I posted here would develop into a discussion, but these excerpts seem to have bubbled up. https://news.ycombinator.com/item?id=20919368 )
In this observational study of perceptions, college education was less predictive than party affiliation.
Maybe reframing this as a short-term money problem [1] would result in compassion for people who are suffering billions of dollars of loss every year.
> HN specifically avoids politics.
I'm not saying this to be argumentative, but I suspect that is once again merely your perception. HN avoids (dang aggressively shuts down, to the detriment of the world imho, because if smart people can't find a way to discuss these things objectively, how do we expect the average person on the street to) political discussions that have a caustic odor, but there's plenty of political shit talking on HN.
> "conservative/libertarian" do you mean: fiscally conservative
Yes, but the real kind, not the phonies we've had for decades.
> socially libertarian (Liberty as a fundamental right; if you're not violating the rights of others the government is not obligated or even granted the right to intervene at all)
Yes.
> or conservative as in imposing your particular traditional standard of moral values which you believe are particular to a particular side of the aisle
To a degree, but here I think you're dealing with an interpretation of reality more than reality itself (not meant as an insult; it's how humans are). But yes, somewhat, and this is a whole other interesting and important conversation, that I think society should be having in a more serious / less polarized way.
> Or, do you mean that you're libertarian in regards to the need and the right to regulate business and industry in the interest of consumers ("laissez faire")?
I used to be very laissez faire, but in many specific areas I am now the exact opposite - if I was in charge, corporations would be in for a very rude awakening. But laissez faire is my default stance until facts suggest it is excessively harmful to the greater good.
> At issue here, I think, is whether we think we can avert future escalations of costs by banding together to address climate change now
100% agree. But if we willfully ignore the realities of human nature/psychology, I predict we'll never be able to even remotely band together, especially in the hyper-weaponized-meme world we now find ourselves in. Averting future escalations is the larger goal, but it is completely dependent upon cooperation, which is dependent on communication & perception. I believe perception is where we are failing most.
> I'm personally and financially far more concerned about the long-term costs of climate change than a limited number of special interests who can very easily diversify and/or divest to take advantage of the exact same opportunities.
Me too.
>> and I am deeply distrustful of government (for extremely good reasons I believe), so I know for a fact that my interpretation of the article is going to be heavily distorted by that. Any logical inconsistency, ambiguousness, disingenuousness, technical dishonesty, or anything else along those lines is going to get red flagged in my mind, whereas others will read it in a much more forgiving fashion. And in an article on a different political hot topic, we will switch our behaviors.
> While governments (and militaries (TODO)) do contribute substantially to emissions and resultant climate change, I think unregulated decisions by industry should be the primary focus here. Industry has done far more to cause climate change than governments (which can more efficiently provide certain services useful to all citizens)
I think you misunderstood. Here I'm not talking about government/military damage to the environment (although that's a very big deal, the hypocrisy of which reinforces my distrust even more), I'm saying that I don't trust what they're up to at all. With a few exceptions (Bernie Sanders, etc), I am very distrustful of the true honesty and sincerity of all politicians regardless of affiliation. I suppose this is less that they're fundamentally dishonest and more that the nature of our system is such that you have to be dishonest.
>> In such threads, I think it would be extremely interesting for people with opposing views to post excerpts of the parts that "catch your attention", with an explanation of why. This is kind of what happens anyway, but I'm thinking with a completely different motive: rather than quoting excerpts with commentary to argue your ~political side of the issue with the goal of "winning the argument", take an unemotional, more abstract view of your personal cognitive processing of the article,
> These people aren't doing jack about the problem because they haven't reviewed this chart: "Billion-Dollar Weather and Climate Disasters: Time Series" (1980-2019) https://www.ncdc.noaa.gov/billions/time-series Maybe they want insurance payouts, which result in higher premiums. Maybe the people who built in those locations should be paying the costs.
I'm afraid you've completely misunderstood. I was referring to a meta discussion, on human psychology and the nature of perception - the distinctly different way in which you and I consume, perceive, store, integrate, and recall the information in an article, not the information itself. I believe this is where the solution to these problems is hiding.
> They don't even care because they refuse to accept that it's a problem.
Here you are acting as if you are able to read the minds of other people. Intuition, which is kind of a dimensional compression of perceived reality, has evolved in humans because it is extremely useful. However, when dealing with high complexity, it can be dangerous.
> ( I had hoped that the other top-level post I posted here would develop into a discussion, but these excerpts seem to have bubbled up. https://news.ycombinator.com/item?id=20919368 )
Oh, you're not wrong on those facts, I'm sure, but people don't think in facts. They think they do, but they don't actually. If we want to do something about these and other problems, you have to look for the invisible blockages, and some of them are within you and me, despite the sincerity of our intentions.
Unfortunately, even genuinely smart people (like much of the HN crowd) seem to be extremely resistant to even considering this notion. I suspect the problem is that although a person may be highly educated, at the subconscious level they are still highly tribal. But I believe if we can get people to start to realize these things, to see the world in this abstract form, then much of the rest might fall into place far easier than the current apparent polarization of beliefs would suggest.
> It is objectively, quantitatively true that weather disasters are getting more frequent and more severe.
There is no data to support such an assertion. It's confirmation bias stemming from the belief that climate change will lead to such an effect.
> "2017: Climate Science Special Report: Fourth National Climate Assessment, Volume I" > "Chapter 9: Extreme Storms" lists a number of relevant Key Findings with supporting evidence (citations) and degrees of confidence: https://science2017.globalchange.gov/chapter/9/
How about a link to a chart indicating frequency and severity of severe weather events?
The Paris Agreement is predicated upon the link between human actions, climate change, and severe weather events. 195 countries have signed the Paris Agreement with consensus that what we're doing is causing climate change.
Here are some climate-relevant poll questions:
Do you think the costs of disaster relief will continue to increase due to frequency and severity of severe weather events?
Does it make sense to spend more on avoiding further climate change now rather than even more on disaster relief later?
How can you help climate refugees? Do you donate to DoD and National Guards? Do you donate to NGOs? How can we get better at handling more frequent and more severe disasters?
Emergent Tool Use from Multi-Agent Interaction
Inkscape 1.0 Beta 1
Where Dollar Bills Come From
Monetary Policy Is the Root Cause of the Millennials’ Struggle
Volatility works out for people who save (who park capital in liquid assets that aren't doing work in order to have wheat for the eventual famine). These guys. They save, short like heck when the market is falling, and swoop in to save the day. What a great time to be selling 0% loans.
Personal Savings Rate (PSR) stratified by greatest generation and not greatest generation is also relevant. Are relatively fixed living expenses higher now? Yes. Is my generation just blowing what they could invest into interest-bearing investments on unnecessary stuff from Amazon? Yes. And expensive meals and drinks.
How have corporate profits and wages changed?
In their day, you put your gosh-danged money aside. For later. So that you have money later.
And that is why you should buy my book, entitled: "Invest in things with long term returns: don't buy shtuff you don't f need, save for tomorrow; and other financial advice"
Which brings me to: the cost of college textbooks and a college education in terms of average hourly wages.
By the way, over the longer term, index funds are likely to outperform actively managed funds. Gold may outperform the stock market. And, over the recent term -- this is for all you suckers out there -- cryptocurrencies have outperformed all stock and commodities markets. How much total wealth is being created on an annual basis here?
Payday loans have something like 300% APY.
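The headline rate depends on whether the fee is annualized simply (APR) or compounded (APY). A minimal sketch of the arithmetic, assuming a typical $15 fee per $100 borrowed over a 14-day term (my illustrative numbers, not from the thread):

```python
# Assumed terms: $15 fee per $100 borrowed, repaid after 14 days.
fee_rate = 15 / 100            # 15% per 14-day period
periods_per_year = 365 / 14    # about 26 periods per year

# APR: simple annualization, no compounding
apr = fee_rate * periods_per_year

# APY: effective annual yield if the fee compounds each period
apy = (1 + fee_rate) ** periods_per_year - 1

print(f"APR: {apr:.0%}")   # about 391%
print(f"APY: {apy:.0%}")
```

Rolling the same loan over all year compounds the fee, which is why the effective rates quoted for payday lending vary so widely.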
How does 2% inflation affect trade when other central banking cabals haven't chosen the same target? "Devaluation"! "Treachery"!
Non-root containers, Kubernetes CVE-2019-11245 and why you should care
> At the same time, all the current implementations of rootless containers rely on user namespaces at their core. Not to be confused with what is referred to as non-root containers in this article, rootless containers are containers that can be run and managed by unprivileged users on the host. While Docker and other runtimes require a daemon running as root, rootless containers can be run by any user without additional capabilities.
non-root / rootless
How do black holes destroy information and why is that a problem?
Man, watch the PBS YouTube channel SpaceTime. The episodes are short, informative, and very well done. They dedicate a handful of episodes to getting you ready for Hawking radiation.
"Why Quantum Information is Never Destroyed" re: determinism and T-Symmetry ("time-reversal symmetry") by PBS SpaceTime https://youtu.be/HF-9Dy6iB_4
Classical information is 'collapsed' quantum information, so that would mean that classical information is never lost either.
There appear to be multiple solutions for Navier-Stokes; i.e. somewhat chaotic.
If white holes are on the other side of black holes, Hawking radiation would not account for the entirety of the collected energy/information. Is our visible universe within a white hole? Is everything that's ever been embedded in the sidewall of a black hole shredder?
Maybe even recordings of dinosaurs walking; or is that lemurs walking in reverse?
Do 1/n, 1/∞, and n/∞ approach a symbolic limit where scalars should not be discarded; with piecewise operators?
Banned C standard library functions in Git source code
FWIW, here's awesome-static-analysis > Programming Languages > C/C++: https://github.com/mre/awesome-static-analysis/blob/master/R...
These tools have lists of functions not to use. Most of them (at least the security-focused ones) likely also include strcpy, strcat, strncpy, strncat, sprintf, and vsprintf, just like banned.h
Ask HN: What's the hardest thing to secure in a web-app?
"OWASP Top 10 Most Critical Web Application Security Risks" https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Proje...
> A1:2017-Injection, A2:2017-Broken Authentication, A3:2017-Sensitive Data Exposure, A4:2017-XML External Entities (XXE), A5:2017-Broken Access Control, A6:2017-Security Misconfiguration, A7:2017-Cross-Site Scripting (XSS), A8:2017-Insecure Deserialization, A9:2017-Using Components with Known Vulnerabilities, A10:2017-Insufficient Logging & Monitoring
"OWASP Top 10 compared to SANS CWE 25" https://www.templarbit.com/blog/2018/02/08/owasp-top-10-vs-s...
Crystal growers who sparked a revolution in graphene electronics
> This seven-metre-tall machine can squeeze carbon into diamonds
OT but, is this a thing now? Diamonds can be entangled.
Don't know what you mean by entangled. They mention forming diamonds using the heat and pressure of the press, which is a well known technique to alter the crystal lattice between carbon atoms.
Does it take more energy than mining for diamonds?
> Quantum Entanglement Links 2 Diamonds: Usually a finicky phenomenon limited to tiny, ultracold objects, entanglement has now been achieved for macroscopic diamonds at room temperature (2011) https://www.scientificamerican.com/article/room-temperature-...
It takes less human suffering.
Things to Know About GNU Readline
I map <up> to history-search-backward in my .inputrc; so I can type 'sudo ' and press <up> to cycle through everything starting with sudo:
# <up> -- history search backward (match current input)
"\e[A": history-search-backward
# <down> -- history search forward (match current input)
"\e[B": history-search-forward
https://github.com/westurner/dotfiles/blob/develop/etc/.inpu...
I do the exact same thing. This will probably be considered sacrilege by some, but I also remap tab completion to cycle through all matches instead of stopping at the first ambiguous character:
# Set up tab and Shift-tab to cycle through completion options
"\e[Z": "\e-1\t"
TAB: menu-complete
# Alt-S is normal style TAB complete
"\es": complete
> I also remap tab completion to cycle through all matches instead of stopping at the first ambiguous character
Windows PowerShell does this and I love it. I don’t understand why people wouldn’t want this
Because instead of helping me enter a long filename by its distinctive parts, it forces a flat mental read-decide-skip loop based on the first few letters, which is slow and cannot be narrowed down without typing out additional letters by hand. It is like selecting from a popup menu through a 1-line window.
If it presented a real popup for file selection, combining both worlds (tab to longest match, untab to undo, up-down to navigate), that would be great. But it doesn’t.
> which is slow and cannot be narrowed down without typing out additional letters by hand.
In a PowerShell environment specifically, you usually end up having to backspace a dozen or more characters completed from a long command name before you can resume narrowing the search. It's less of a problem on a Unix shell where commands tend to be short.
Interestingly, that's one of the things I hate about powershell (and dos)
zsh works like that too.
I did this a lot, too, until I started using fzf and mapped <Control-r> to fuzzy history search. It is really useful, you might like it!
I came here to post exactly this! Very useful.
Two more quick tips
1. Undo is ^_ (ctrl+shift+-)
2. To learn more tricks...
bind -p
> Undo is ^_ (ctrl+shift+-)
I use ctrl-/. I don't recall whether it's a default binding or not, but at least you don't have to press the shift key :-)
Both are valid undo bindings in emacs, so this is probably the reason that they are both supported. (I think this is an artifact of a time when the keys were indistinguishable, and I think they are still mostly indistinguishable to terminal apps.)
Is this macro from the article dangerous because it doesn't quote the argument?
Control-j: "\C-a$(\C-e)"
I can never remember how expansion and variable substitution work in shells.
The macro means:
\C-a: beginning of line
$: self-insert
(: self-insert
\C-e: end of line
): self-insert
So if your prompt (with | for cursor) looks like this: grep 'blah blah' |some_file
And you execute that macro, you get: $(grep 'blah blah' some_file)|
Which is correctly quoted, and I think it always correctly quotes whatever is at the prompt (unless you e.g. get halfway through a string and press enter so the beginning of the prompt is halfway through a string, or maybe if you have multiple lines).
Show HN: Termpage – Build a webpage that behaves like a terminal
This looks useful.
FWIW, you can build a curses-style terminal GUI with Urwid (in Python) and use that through the web. AFAIU, it requires Apache; but it's built on Tornado (which is now built on Asyncio) so something more lightweight than Apache on a Pi should definitely be doable. Termpage with like a Go or Rust REST API may still be more lightweight, but more work.
Vimer - Avoid multiple instances of GVim with gvim –remote[-tab]-silent wrapper
I have a shell script I named 'e' (for edit) that does basically this. If VIRTUAL_ENV_NAME is set (by virtualenvwrapper), e opens a new tab in that gui vim remote if gvim or macvim are on PATH, or just in a console vim if not. https://github.com/westurner/dotfiles/blob/develop/scripts/e
'editwrd'/'ewrd'/'ew' does tab-completion relative to whatever $_WRD (working directory) is set to (e.g. by venv) and calls 'e' with that full path: https://github.com/westurner/dotfiles/blob/develop/scripts/_...
It's unfortunately not platform portable like vimer, though.
Electric Dump Truck Produces More Energy Than It Uses
Ask HN: Let's make an open source/free SaaS platform to tackle school forms
I have 4 kids. I am filling out all the start of school forms for each kid. I have to fill out these same forms each year. Are you doing the same thing? Let's make this year the last year we are manually filling out forms -- let's build a SaaS platform for school forms. Community built, open-sourced, free.
Brief sketch of the idea: survey monkey + docusign, but with a 100 pre-built templates for K-12 school situations. Medical emergency form. Carpool form. Field trip permission form. Backend gives schools an easy way to customize and track forms. Forms are emailed to parents and filled out online. Parent's information is saved so that any new form is pre-filled in with as much known info as possible.
Anyone feeling the same pain? Anyone want to join with me and do it?
Technically, a checkbox may qualify as a digital signature; however, identification / authentication and storage integrity are fairly challengeable (just as a written signature on a piece of paper with a date written on it is challengeable)
Given that notarization is not required for parental consent forms, I'm not sure what sort of server security expense is justified or feasible.
How much does processing all of the paper forms cost each school? Per-student?
In terms of storing a digital record of authorization, a private set of per-student OpenBadges with each OpenBadge issued by the school would be easy enough. W3C Verifiable Claims (and Linked Data Signatures) are the latest standards for this sort of thing.
We could evaluate our current standards for chain of custody in regards to the level of trust we place in commercial e-signature platforms.
The school could send home a sheet with a QR code and a shorturl, but that would be more expensive than running hundreds of copies of the same sheet of paper.
The school could require a parent or guardian's email address for each student in the SIS Student Information System and email unique links to prefilled forms requesting authorization(s).
Just as with e-Voting, assuring that the person who checks a checkbox or tries to scribble their signature with a mouse or touchscreen is the authorized individual may be more difficult than verifying that a given written signature is that of the parent or guardian authorized to authorize.
AFAIU, Google Forms for School can include the logged-in user's username; but parents don't have school domain accounts with Google Apps for Education or Google Classroom.
How would the solution integrate with schools' existing SIS (Student Information Systems)? Upload a CSV of (student, {student info}, {guardian email(s)})? This is private information that deserves security, which costs money.
Which users can log in for the school and/or district to check the state of the permission/authorization requests and the PII (personally identifiable information)?
While cryptographic signatures may be overkill as a substitute for permission slips, FWIW, a timestamp within a cryptographically-signed document only indicates what the local clock was set to at the time. Blockchains have relatively indisputable timestamps ("certainly no later than the time that the tx made it into a block"), but blockchains don't solve for proving the key-person relation at a given point in time.
And also, my parent or guardian said you can take me on field trips if you want. https://backpack.openbadges.org/
Ask HN: Is there a CRUD front end for databases (especially SQLite)?
I'm currently looking for a program (a simple executable) that "opens" an SQLite database and (via introspection of the schema) without any further configuration allows simple CRUD operations on the database.
Yes, there is DB Browser and a gazillion other database administration frontends, but it should really be limited to CRUD operations. No changing the table, the schema, the indexes. Simple UI.
For users that have no idea about SQL or databases.
Is there anything like that already done and ready to use?
There are lots of apps that do database introspection. Some also generate forms on the fly, but eventually it's necessary to: specify a forms widget for a particular field because SQL schema only describes the data and not the UI; and specify security authorization restrictions on who can create, read, update, or delete data.
And then you want to write arbitrary queries to filter on columns that aren't indexed; but it's really dangerous to allow clients to run arbitrary SQL queries because there basically are no row/object-level database permissions (the application must enforce row-level permissions).
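As a sketch of how far bare introspection gets you: SQLite's PRAGMA table_info exposes enough schema to generate generic create/read helpers. The table here is hypothetical, and note that table and column names interpolated into SQL must come from introspection or a whitelist, never from user input; that gap is part of why blind CRUD exposure is unsafe:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contact (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def columns(con, table):
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    return [row[1] for row in con.execute(f"PRAGMA table_info({table})")]

def create(con, table, **values):
    # CAUTION: table/column names are interpolated; only values are
    # parameterized, so identifiers must be whitelisted by the application.
    cols = ", ".join(values)
    marks = ", ".join("?" for _ in values)
    cur = con.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                      tuple(values.values()))
    return cur.lastrowid

def read(con, table, rowid):
    row = con.execute(f"SELECT * FROM {table} WHERE id = ?", (rowid,)).fetchone()
    return dict(zip(columns(con, table), row)) if row else None

rowid = create(con, "contact", name="Ada", email="ada@example.com")
print(read(con, "contact", rowid))
# {'id': 1, 'name': 'Ada', 'email': 'ada@example.com'}
```

This covers the data layer only; as noted above, the schema says nothing about form widgets or per-row permissions.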
Datasette is a great tool for read-only database introspection and queries of SQLite databases. https://github.com/simonw/datasette
Sandman2 generates a REST API for an arbitrary database. https://github.com/jeffknupp/sandman2
You can generate Django models and then write admin.py files for each model/table that you want to expose in the django.contrib.admin interface.
There are a number of apps for providing a GraphQL API given introspection of a database that occurs at every startup or at runtime; but that doesn't solve for row-level permissions (or web forms)
If you have an OpenAPI spec for the REST API that runs atop the database, you can generate forms ("scaffolding") from the OpenAPI spec and then customize those with form widgets; optionally with something like json-schema.
It's not safe to allow introspected CRUD like e.g. phpMyAdmin for anything but development. If there are no e.g. foreign-key constraints specified in the SQL schema, a blindly introspected UI very easily results in database corruption due to invalid foreign key references (because the SQL schema doesn't specify which table.column a foreign key references).
Django models, for example, unify SQL schema and forms UI in models.py; admin.py is optional but really useful for scaffolding (such as when you're doing manual testing because you haven't yet written automated tests) https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#mod...
California approves solar-powered EV charging network and electric school buses
> The press release from the company said, “heavy-duty vehicles produce more particulate matter than all of the state’s power plants combined”.
> […] for instance why only “10 school buses”?
IARC has recognized diesel exhaust as carcinogenic (lung cancer) since 2012.
Are there other electric school bus programs in the US?
(edit)
https://www.trucks.com/2019/03/22/can-electric-school-buses-...
> Most school systems don’t have sufficient capital to finance the high initial costs of electric bus purchases and charging infrastructure development, he said.
> In the U.S., the school bus market is about 33,000 to 35,000 vehicles per year – about six times more than transit buses.
You May Be Better Off Picking Stocks at Random, Study Finds
How is this not the same conclusion as other studies that find that index funds outperform others?
In addition to diversification that reduces risk of overexposure to down sectors or typically over-performing assets, index funds have survivorship bias: underperforming assets are replaced by assets that meet the fund's criteria.
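A toy illustration of the survivorship effect, with made-up returns: dropping the underperformer raises the index's reported average even though holders of the full universe did much worse.

```python
# Hypothetical one-year returns for five assets; one cratered.
all_returns = [0.08, 0.05, 0.12, -0.30, 0.06]

universe_avg = sum(all_returns) / len(all_returns)

# The index replaces assets that no longer meet its criteria,
# so only the "survivors" contribute to the reported figure.
survivors = [r for r in all_returns if r > -0.10]
index_avg = sum(survivors) / len(survivors)

print(f"full universe: {universe_avg:.2%}")   # 0.20%
print(f"survivors:     {index_avg:.2%}")      # 7.75%
```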
Yes, it's sad that everyone jumps on index funds; they can be a good idea, but they're not the right choice 100% of the time.
Root: CERN's scientific data analysis framework for C++
Root has some features that are unique and powerful.
It's used in particle physics today mostly because it allows performant out-of-memory, on-disk data processing.
With frameworks like Python pandas, you always end up having to manually partition your data if it doesn’t fit in memory. And of course, it’s C++, so by default the data analysis code is pretty performant. This makes a difference when you can iterate your analysis in one hour instead of 20.
That being said, when I last worked with it, Root was a scrambled mess with terrible interfaces and way too many fringe features, e.g. around plotting, that are better handled by Python nowadays. It even has a C++ command line!!!
I wrote a blog post back then about how I thought it could be fixed: https://www.konstantinschubert.com/2016/06/18/root8-what-roo...
> With frameworks like Python pandas, you always end up having to manually partition your data if it doesn’t fit in memory.
"Pandas Docs > Pandas Ecosystem > Out of Core" lists a number of solutions for working with datasets that don't fit into RAM: Blaze, Dask, Dask-ML (dask-distributed; Scikit-Learn, XGBoost, TensorFlow), Koalas, Odo, Ray, Vaex https://pandas-docs.github.io/pandas-docs-travis/ecosystem.h...
The dask API is very similar to the pandas API.
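The manual partitioning described above boils down to streaming chunks that fit in memory, keeping partial aggregates, and combining them at the end; dask automates exactly this pattern behind a pandas-like API. A framework-free sketch of the idea:

```python
def chunked(values, size):
    # Yield successive slices that each fit in memory.
    for i in range(0, len(values), size):
        yield values[i:i + size]

data = list(range(1000))   # stand-in for an on-disk dataset

total, count = 0, 0
for chunk in chunked(data, size=100):
    total += sum(chunk)    # partial aggregate per chunk
    count += len(chunk)

mean = total / count
print(mean)   # 499.5
```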
Are there any plans for ROOT to gain support for Apache Parquet, and/or Apache Arrow zero-copy reads and SIMD support, and/or https://RAPIDS.ai (Arrow, numba, Dask, pandas, scikit-learn, XGboost, spark, CUDA-X GPU acceleration, HPC)? https://arrow.apache.org/
https://root.cern.ch/root-has-its-jupyter-kernel (2015)
> Yet another milestone of the integration plan of ROOT with the Jupyter technology has been reached: ROOT now offers a Jupyter kernel! You can try it already now.
> ROOT is the 54th entry in this list and this is pretty cool. Now not only the PyROOT, the ROOT Python bindings, are integrated with notebooks but it's also possible to express your data mining in C++ within a notebook, taking advantage of all the powerful features of ROOT - plotting (now also interactive thanks to [JavaScript ROOT](https://root.cern.ch/js/)), multivariate analysis, linear algebra, I/O and reflection: all available within a notebook.
Does this work with JupyterLab now? (edit) Here's the JupyterLab extension developer guide: https://jupyterlab.readthedocs.io/en/stable/developer/extens... (edit) here's the gh issue: https://github.com/root-project/jsroot/issues/166
...
ROOT is now installable with conda: `conda install -c conda-forge root metakernel jupyterlab # notebook`
For c/c++ in Jupyter, see xeus-cling https://github.com/QuantStack/xeus-cling
Coincidentally, cling (wrapped by xeus-cling) is also a product from CERN.
Many of the CERN researchers are pretty deep into C++.
It was there that I got my template meta-programming baptism, back in 2002, when gcc was still trying to cope with template heavy code.
And curiously, also where I got my first safety heavy code reviews of C++ best practices.
I don't believe those are coincidences, but rather collaborations and the right people working together :-)
MesaPy: A Memory-Safe Python Implementation based on PyPy (2018)
I gave this a spin a while back but, unfortunately, didn't get very far since the software and dependencies have grown long in the byte. Since then, I've found RustPython [0], which is progressing toward feature parity with CPython but is entirely written in Rust (!). A side benefit is that it compiles to WebAssembly, so you could sandbox it without too much extra overhead.
> Since then, I've found RustPython [0], which is progressing toward feature parity with CPython but is entirely written in Rust (!). A side benefit is that it compiles to WebAssembly, so you could sandbox it without too much extra overhead.
It's now possible to run JupyterLab entirely within a browser with jyve (JupyterLab + pyodide) https://github.com/iodide-project/pyodide/issues/431
Pyodide:
> Pyodide brings the Python runtime to the browser via WebAssembly, along with the Python scientific stack including NumPy, Pandas, Matplotlib, parts of SciPy, and NetworkX. The packages directory lists over 35 packages which are currently available.
Is the RustPython WASM build more performant or otherwise preferable to brython or pyodide?
Ask HN: Configuration Management for Personal Computer?
Hello HN,
Every couple of years I find myself facing the same old tired routine: migrating my stuff off some laptop or desktop to a new one, usually combined with an OS upgrade. Is there anything like the kind of luxuries we now consider normal on the server side (IaaS; Terraform; maybe Ansible) that can be used to manage your PC and that would make re-imaging it as easy as it is on the server side?
Ansible is worth the extra few minutes, IMHO.
+ (minimal) Bootstrap System playbook
+ Complete System playbook (that references group_vars and host_vars)
+ Per-machine playbooks stored alongside the ansible inventory, group_vars, and host_vars in a separate repo (for machine-specific kernel modules and e.g. touchpad config)
+ User playbook that calls my bootstrap dotfiles shell script
+ Bootstrap dotfiles shell script, which creates symlinks and optionally installs virtualenv+virtualenvwrapper, gitflow and hubflow, and some things with pipsi. https://github.com/westurner/dotfiles/blob/develop/scripts/b...
+ setup_miniconda.sh that creates a CONDA_ROOT and CONDA_ENVS_PATH for each version of CPython (currently py27-py37)
Over the years, I've worked with Bash, Fabric, Puppet, SaltStack, and now Ansible + Bash
I log shell commands with a script called usrlog.sh that creates a $USER and per-virtualenv tab-delimited logfiles with unique per-terminal-session identifiers and ISO8601 timestamps; so it's really easy to just grep for the apt/yum/dnf commands that I ran ad-hoc when I should've just taken a second to create an Ansible role with `ansible-galaxy init ansible-role-name ` and referenced that in a consolidated system playbook with a `when` clause. https://westurner.github.io/dotfiles/usrlog.html#usrlog
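A small parser for that kind of log might look like this. The tab-delimited layout here (ISO 8601 timestamp, session id, command) is an assumption for illustration, not usrlog.sh's actual format:

```python
from datetime import datetime

# Hypothetical log lines in the assumed layout.
log_lines = [
    "2019-08-09T10:01:22\t#abc123\tsudo apt install vim-enhanced",
    "2019-08-09T10:02:10\t#abc123\tls -la",
    "2019-08-09T10:05:40\t#def456\tsudo dnf install ansible",
]

def package_commands(lines, managers=("apt", "yum", "dnf")):
    # Yield (timestamp, session, command) for package-manager invocations.
    for line in lines:
        ts, session, cmd = line.split("\t", 2)
        if any(m in cmd.split() for m in managers):
            yield datetime.fromisoformat(ts), session, cmd

for ts, session, cmd in package_commands(log_lines):
    print(ts.date(), session, cmd)
```

Anything this turns up is a candidate for promotion into an Ansible role, per the workflow above.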
A couple weeks ago I added an old i386 netbook to my master Ansible inventory and system playbook and VScode wouldn't install because VScode Linux is x86-64 only and the machine doesn't have enough RAM; so I created when clauses to exclude VScode and extensions on that box (with host_vars). Gvim with my dotvim works great there too though. Someday I'll merge my dotvim with SpaceVim and give SpaceMacs a try; `git clone; make install` works great, but vim-enhanced/vim-full needs to be installed with the system package manager first so that the vimscript plugin installer works and so that the vim binary gets updated when I update all.
I've tested plenty of Ansible server configs with molecule (in docker containers), but haven't yet taken the time to do a full workstation build with e.g. KVM or VirtualBox or write tests with testinfra. It should be easy enough to just run Ansible as a provisioner in a Vagrantfile or a Packer JSON config. VirtualBox supports multi-monitor VMs and makes USB passthrough easy, but lately Docker is enough for everything but Windows (with a PowerShell script that installs NuGet packages with chocolatey) and MacOS (with a few setup scripts that download and install .dmg's and brew) VMs. Someday I'll write or adapt Ansible roles for Windows and Mac, too.
I still configure browser profiles by hand; but it's pretty easy because I just saved all the links in my tools doc: https://westurner.github.io/tools/#browser-extensions
Someday, I'll do bookmarks sync correctly with e.g. Chromium and Firefox; which'll require extending westurner/pbm to support Firefox SQLite or a rewrite in JS with the WebExtension bookmarks API.
A few times, I've decided to write docs for my dotfiles and configuration management policies like someone else is actually going to use them; it seemed like a good exercise at the time, but invariably I have to figure out what the ultimate command sequence was and put that in a shell script (or a Makefile, which adds a dependency on GNU make that's often worth it)
Clonezilla is great and free, but things get out of date fast in a golden master image. It's actually possible to PXE boot clonezilla with Cobbler, but, AFAICT, there's no good way to secure e.g. per-machine disk or other config with PXE. Apt-cacher-ng can proxy-cache-mirror yum repos, too. Pulp requires a bit of RAM but looks like a solid package caching system. I haven't yet tested how well Squid works as a package cache when all of the machines are simultaneously downloading the exact same packages before a canary system (e.g. in a VM) has populated the package cache.
I'm still learning to do as much as possible with Docker containers and Dockerfiles or REES (Reproducible Execution Environment Specifications) -compatible dependency configs that work with e.g. repo2docker and https://mybinder.org/ (BinderHub)
GitHub Actions now supports CI/CD, free for public repositories
This is great news for developers. The trend has been to combine version control and CI for years now. For a timeline see https://about.gitlab.com/2019/08/08/built-in-ci-cd-version-c...
This is bad news for the CI providers that depend on GitHub, in particular CircleCI. Luckily for them (or maybe they saw this coming), they recently raised a series D https://circleci.com/blog/we-raised-a-56m-series-d-what-s-ne... and are already looking to add support for more platforms. It is hard to depend on a marketplace when it starts competing with you, from planning (Waffle.io), to dependency scanning (Gemnasium, acquired by us), to CI (the Travis CI layoffs were especially sad).
It is interesting that a lot of the things GitHub is shipping are already part of Azure DevOps https://docs.microsoft.com/en-us/azure/architecture/example-... The overlap between Azure DevOps and GitHub seems to be increasing rather than decreasing. I wonder what the integration story is and what will happen to Azure DevOps.
It's a horrible trend. CI should not be tied to version control. I mean we all have to deal with it now, but I'd much rather have my CI agnostic and not have config files for it checked into the repo.
I've browsed through the article you linked to; one of the subtitles was "Realizing the future of DevOps is a single application". That is also a horrible idea: I think it locks developers into a certain workflow which is hard to escape. If you have an issue with your setup you can't figure out (this happened to me with GitLab CI), you're out of luck. Every application is different; a DevOps process is something to be carefully crafted for each particular case with many considerations: large/small company, platform, development cycle, people's preferred workflows, etc. What I like to do is have small, well-tested parts constitute my DevOps setup. It's a bad idea to adopt something just because everyone is doing it.
To sum it up, code should be separate from testing, deployment etc. On our team, I make sure developers don't have to think about devops. They know how to deploy and test and they know the workflow and commands. But that's about it.
I'm an operations guy, but I think I have a different perspective. The developers I work with don't have to think about CI/CD, but the configuration still lives in the repo; I'm just a contributor to that repo like they are.
Having CI configuration separate from the code sounds like a nightmare when a code change requires CI configurations to be updated. A new version of code requires a new dependency for instance, there needs to be a way to tie the CI configuration change with a commit that introduced that dependency. That comes automatically when they're in the same repo.
Having CI configuration inside the codebase also sounds like a nightmare when changes to the CI or deployment environment require configuration changes or when multiple CI/deployment environments exist.
For example as a use case: Software has dozens of tagged releases; organization moves from deploying on AWS to deploying in a Kubernetes cluster (requiring at least one change to the deployment configuration). Now, to deploy any of the old tagged releases, every release now has to be updated with the new configuration. This gets messy because there are two different orthogonal sets of versions involved. First, the code being developed has versions and second, the environments for testing, integration, and deployment also change over time and have versions to be controlled.
Even more broadly, consider multiple organizations using the same software package. They will each almost certainly have their own CI infrastructure, so there is no one "CI configuration" that could ever be checked into the repository along with the code without each user having to maintain their own forks/patchsets of the repo with all the pain that entails.
You can create a separate repo with your own CI config that pulls in the code you want to test; and thus ignore the code's CI config file. When something breaks, you'd then need to determine in which repo something changed: in the CI config repo, or the code repo. And then, you have CI events attached to PRs in the CI config repository.
IMHO it makes sense to have CI config version controlled in the same repo as the code. Unless there's a good tool for bisecting across multiple repos and subrepos?
The Fed is getting into the Real-Time payments business
This system will need to interface with other domestic and international settlement and payments networks.
There is thus an opportunity for standards, a need for federation, and a need to make it easy for big players to offer liquidity.
As far as I understand, e.g. Ripple and Stellar solve basically exactly the 24x7x365 RTGS problem that FedNow intends to solve; and, they allow all sorts of assets to be plugged into the network. Could FedNow just use a different UNL (Unique Node List) with participating banks operating trusted validators and/or offering liquidity ("liquidity provisioning")?
Notably, Ripple is specifically positioned to do international interbank real time gross settlement (RTGS) and remittances. Ripple could integrate with FedNow directly. Most efficiently, if it complies with KYC/AML requirements, FedNow could operate an XRP Ledger. Or, each bank could operate XRP Ledgers. https://xrpl.org/become-an-xrp-ledger-gateway.html
Getting thousands of banks to comply with an evolving API / EDI spec is no small task. Blockchain solutions require API compliance, have solutions for governance where there are a number of stakeholders seeking to reach consensus, and lack single points of failure.
Here's to hoping that we've learned something about decentralizing distributed systems for resiliency.
>> In contrast, the XRP Ledger requires 80 percent of validators on the entire network, over a two-week period, to continuously support a change before it is applied. Of the approximately 150 validators today, Ripple runs only 10. Unlike Bitcoin and Ethereum — where one miner could have 51 percent of the hashing power — each Ripple validator only has one vote in support of an exchange or ordering a transaction. https://news.ycombinator.com/item?id=19195050
So, you want to get banks onboard with only one s'coin USD stablecoin; but you don't want to deal with exchanges or FOREX or anything because that's a different thing? And, this is not just yet another ACH with lower clearance time?
> Interledger Architecture
https://interledger.org/rfcs/0001-interledger-architecture/
> Interledger provides for secure payments across multiple assets on different ledgers. The architecture consists of a conceptual model for interledger payments, a mechanism for securing payments, and a suite of protocols that implement this design.
> The Interledger Protocol (ILP) is the core of the Interledger protocol suite. Colloquially, the whole Interledger stack is sometimes referred to as "ILP". Technically, however, the Interledger Protocol is only one layer in the stack.
> Interledger is not a blockchain, a token, nor a central service. Interledger is a standard way of bridging financial systems. The Interledger architecture is heavily inspired by the Internet architecture described in RFC 1122, RFC 1123 and RFC 1009.
[...]
> You can envision the Interledger as a graph where the points are individual nodes and the edges are accounts between two parties. Parties with only one account can send or receive through the party on the other side of that account. Parties with two or more accounts are connectors, who can facilitate payments to or from anyone they're connected to.
> Connectors provide a service of forwarding packets and relaying money, and they take on some risk when they do so. In exchange, connectors can charge fees and derive a profit from these services. In the open network of the Interledger, connectors are expected to compete among one another to offer the best balance of speed, reliability, coverage, and cost.
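The graph picture above can be made concrete with a toy routing sketch (the names and topology are invented for illustration; real ILP connectors also weigh fees, liquidity, and risk, not just connectivity):

```python
from collections import deque

# Nodes are parties; edges are accounts between two parties. A payment
# can be routed through any chain of connectors linking sender to receiver.
accounts = {
    'alice': ['connector1'],
    'connector1': ['alice', 'connector2'],
    'connector2': ['connector1', 'bob'],
    'bob': ['connector2'],
}

def find_route(graph, sender, receiver):
    """Breadth-first search for a chain of accounts from sender to receiver."""
    queue, seen = deque([[sender]]), {sender}
    while queue:
        path = queue.popleft()
        if path[-1] == receiver:
            return path
        for peer in graph.get(path[-1], []):
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None  # no chain of accounts connects the two parties

route = find_route(accounts, 'alice', 'bob')
# route == ['alice', 'connector1', 'connector2', 'bob']
```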
Why should we prefer an immutable, cryptographically-signed blockchain solution over SQL/BigTable/MQ for FedNow?
Blockchain and payments standards: https://news.ycombinator.com/item?id=19813340
... Here's the notice and request for comment PDF: "Docket No. OP – 1670: Federal Reserve Actions to Support Interbank Settlement of Faster Payments" https://www.federalreserve.gov/newsevents/pressreleases/file...
"Federal Reserve announces plan to develop a new round-the-clock real-time payment and settlement service to support faster payments" https://www.federalreserve.gov/newsevents/pressreleases/othe...
A Giant Asteroid of Gold Won’t Make Us Richer
> this example shows that real wealth doesn’t actually come from golden hoards. It comes from the productive activities of human beings creating things that other human beings desire.
Value, Price, and Wealth
???
I'd suggest a different triad: cost, price, value.
https://old.reddit.com/r/dredmorbius/comments/48rd02/cost_va...
Good call. I don't know where I was going with that. Cost, price, value, and wealth.
Are there better examples for illustrating the differences between these kind of distinct terms?
Less convertible collectibles like coins and baseball cards (that require energy for exchange) have (over time t): costs of production, marketing, and distribution; retail sales price; market price; and 'value' which is abstract relative (opportunity cost in terms of fiat currency (which is somehow distinct from price at time t (possibly due to 'speculative information')))
Wealth comes from relationships, margins between costs and prices, long term planning, […]
Are there better examples for illustrating the differences between these kind of distinct terms?
For concepts as intrinsically fundamental to economics as these are, the agreement and understanding of what they are, even among economists, is surprisingly poor. It's not even clear whether "wealth" refers to a flow or a stock -- Adam Smith uses the term both ways. And much contemporary mainstream "wealth creation" discussion addresses accounting profit rather than economic wealth, or broader terms such as ecological wealth (or natural capital). There's some progress, and Steve Keen has been synthesizing much of it recently, but the terms fare poorly.
A key issue is that "price" and "exchange value" are often conflated, creating confusion with use/ownership value.
Addressing your terms, "cost" and "price", and typically "value", indicate some metric of exchange or opportunity cost (or benefit). Whilst "wealth", as typically used, tends to relate to some store or accumulation. In electrical terms (a potentially, so to speak, useful analogue) the difference between voltage and charge, with current representing some other property, possibly material flows of goods or energy.
The whole question of media for exchange (currency, and the like), and durable forms of financial wealth (land, art, collectibles), is another interesting one, with the discussion by Ricardo and Jevons being both useful and flawed.
And don't even get me started on the near total discounting of accumulated natural capital, say, the 100-300 million year factor of time embodied in fossil fuels. The reasons and rationales for excluding that being fascinating (Ricardo, Tolstoy, Gray, Hotelling, Boulding, Soddy, Georgescu-Roegen, Daly, Keen).
You are correct that all value (and hence wealth) is relative, and hence relational.
TL;DR: Not that I'm aware.
Abusing the PHP Query String Parser to Bypass IDS, IPS, and WAF
Hrm... the solution is to keep your PHP codebase up to date...?
The solution is to throw out your WAF.
Possible solutions:
(1) Change all underscores in WAF rule URL attribute names to the appropriate non-greedy regex. Though I'm not sure about the regex the article suggests: '.' only matches one character, AFAIU.
(2) Add a config parameter to PHP that turns off the magical URL parameter name mangling that no webapp should ever depend on (and have it default to off; if you rely on this 'feature', you should have to change a setting in php.ini anyway).
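For option (1), the rewrite of existing rules can be mechanical. A hedged sketch (the function name is hypothetical, and it assumes PHP folds '.', ' ', and '[' in incoming query-string keys into '_', per the article):

```python
import re

def widen_waf_pattern(param_name):
    """Rewrite a WAF rule's literal parameter name into a regex that also
    matches the spellings PHP normalizes to the same key. Assumption: PHP
    converts '.', ' ', and '[' in query-string keys to '_'."""
    escaped = re.escape(param_name)
    # each literal underscore may have arrived as '_', '.', '[', or ' '
    return escaped.replace('_', r'[_.\[ ]')

pattern = widen_waf_pattern('user_id')  # -> r'user[_.\[ ]id'
```

A rule compiled from `pattern` then fires on `user.id` and `user id` as well as `user_id`, closing the normalization gap the article describes.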
Ask HN: Scripts/commands for extracting URL article text? (links -dump but)
I'd like to have a Unix script that basically generates a text file named, with the page title, with the article text neatly formatted.
This seems to me to be something that would be so commonly desired by people that it would've been done and done and done a hundred times over by now, but I haven't found the magic search terms to dig up people's creations.
I imagine it starts with "links -dump", but then there's using the title as the filename, and removing the padded left margin, wrapping the text, and removing all the excess linkage.
I'm a beginner-amateur when it comes to shell scripting, python, etc. - I can Google well and usually understand script or program logic but don't have terms memorized.
Is this exotic enough that people haven't done it, or as I suspect does this already exist and I'm just not finding it? Much obliged for any help.
Just for the record, in case anyone digs this up in a later Google search: install the newspaper3k and unidecode Python libraries (pip3 install; re is in the standard library), then:
from sys import argv
import os
import re

from unidecode import unidecode
from newspaper import Article

script, arturl = argv
url = arturl

article = Article(url)
article.download()
article.parse()

# slugify the title for the filename: strip punctuation, join words with '-'
title2 = unidecode(article.title)
fname2 = re.sub(r"[^\w\s]", '', title2.lower())
fname2 = re.sub(r"\s+", '-', fname2)

# collapse runs of blank lines in the article text
text2 = unidecode(article.text)
text2 = re.sub(r'\n\s*\n', '\n\n', text2)

# open() does not expand '~', so expand it explicitly
path = os.path.expanduser('~/Desktop/' + fname2 + '.txt')
with open(path, 'w') as f:
    f.write(title2 + '\n\n')
    f.write(text2 + '\n')
I execute it from the shell via a wrapper script: #!/bin/bash
/usr/local/opt/python3/Frameworks/Python.framework/Versions/3.7/bin/python3 ~/bin/url2txt.py "$1"
If I want to run it on all the URLs in a text file: #!/bin/bash
while IFS='' read -r l || [ -n "$l" ]; do
~/bin/u2t "$l"
done < "$1"
I'm sure most of the coders here are wincing at one or multiple mistakes or badly formatted items I've done here, but I'm open to feedback ...
There could be collisions where `fname2` is the same for different pages, resulting in unintentional overwriting. A few possible solutions: generate a random string and append it to the filename, set fname2 to a hash of the URL, or replace unsafe filename characters like '/', '\', and '\n' with e.g. underscores. IIRC, URLs can be longer than the maximum filename length of many filesystems, so hashed filenames are the safest solution. You can generate an index of the fetched URLs and store it with JSON or e.g. SQLite (with Records and/or SQLAlchemy, for example).
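A minimal sketch of the hash-suffix idea (helper name hypothetical): slugify the title as before, but append a short SHA-256 digest of the URL, so distinct URLs can never overwrite each other even when their titles match:

```python
import hashlib

def safe_filename(url, title, max_len=80):
    """Build a filesystem-safe name from a slugified, truncated title
    plus a short hash of the URL to guarantee uniqueness per URL."""
    digest = hashlib.sha256(url.encode('utf-8')).hexdigest()[:12]
    # keep alphanumerics, turn everything else into word separators
    slug = ''.join(c if c.isalnum() else ' ' for c in title.lower())
    slug = '-'.join(slug.split())[:max_len]
    return f"{slug}-{digest}.txt"

name = safe_filename('https://example.com/a', 'Hello, World!')
# name looks like 'hello-world-<12 hex chars>.txt'
```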
If or when you want to parallelize (to do multiple requests at once, because most of the time is spent waiting for responses from the network), write contention for the index may be an issue that SQLite handles better than a flat-file locking mechanism like creating and deleting an index.json.lock. requests3 and aiohttp-requests support asyncio; requests3 also supports HTTP/2 and connection pooling.
SQLite can probably handle storing the text of as many pages as you throw at it with the added benefit of full-text search. Datasette is a really cool interface for sqlite databases of all sorts. https://datasette.readthedocs.io/en/stable/ecosystem.html#to...
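A minimal sketch of that SQLite route, assuming an SQLite build with the FTS5 extension compiled in (standard in most modern distributions; the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # use a file path for a persistent index
# an FTS5 virtual table indexes all of its columns for full-text search
conn.execute("CREATE VIRTUAL TABLE pages USING fts5(url, title, body)")
conn.execute(
    "INSERT INTO pages VALUES (?, ?, ?)",
    ('https://example.com/a', 'An example', 'full article text here'),
)
rows = conn.execute(
    "SELECT url, title FROM pages WHERE pages MATCH ?", ('article',)
).fetchall()
# rows == [('https://example.com/a', 'An example')]
```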
...
Apache Nutch + ElasticSearch / Lucene / Solr are production-proven crawling and search applications: https://en.m.wikipedia.org/wiki/Apache_Nutch
> I imagine it starts with "links -dump", but then there's using the title as the filename,
The title tag may exceed the filename length limit, be the same for nested pages, or contain newlines that must be escaped.
These might be helpful for your use case:
"Newspaper3k: Article scraping & curation" https://github.com/codelucas/newspaper
lazyNLP "Library to scrape and clean web pages to create massive datasets" https://github.com/chiphuyen/lazynlp/blob/master/README.md#s...
scrapinghub/extruct https://github.com/scrapinghub/extruct
> extruct is a library for extracting embedded metadata from HTML markup.
> It also has a built-in HTTP server to test its output as JSON.
> Currently, extruct supports:
> - W3C's HTML Microdata
> - embedded JSON-LD
> - Microformat via mf2py
> - Facebook's Open Graph
> - (experimental) RDFa via rdflib
NPR's Guide to Hypothesis-Driven Design for Editorial Projects
HDD – Hypothesis-Driven Development – Research, Plan, Prototype, Develop, Launch, Review.
The article lists (and links to!) "Lean UX" [1] and Google Ventures' Design Sprint Methodology as inspirations.
[1] "Lean UX: Applying Lean Principles to Improve User Experience" http://shop.oreilly.com/product/0636920021827.do
[2] https://www.gv.com/sprint/
"How To Write A Technical Paper" [3][4] has: (Related Work, System Model, Problem Statement), (Your Solution), (Analysis), (Simulation, Experimentation), (Conclusion)
Gryphon: An open-source framework for algorithmic trading in cryptocurrency
Hey folks, I'm the primary maintainer of Gryphon. The backstory here is: I was one of the founders of Tinker, the trading company that built Gryphon as our in-house trading infrastructure. We operated 2014-18, starting with just a simple arbitrage bot, and slowly grew the operation until our trades were peaking above 20% of daily trading volume on the big exchanges.
The company has since wound down (founders moved on to other projects), but I always thought the code deserved to be shared, so we've open sourced it. Someone trying to do what we did will probably save 1.5 years of engineering effort if they build on Gryphon vs. make their own. As far as I know there isn't anything out there like this, in any market (not just cryptocurrencies).
Hope you guys like it!
> As far as I know there isn't anything out there like this, in any market (not just cryptocurrencies).
How does Gryphon compare to Catalyst (Zipline)? https://github.com/enigmampc/catalyst
They list a few example algorithms: https://enigma.co/catalyst/example-algos.html
"Ask HN: Why would anyone share trading algorithms and compare by performance?" https://news.ycombinator.com/item?id=15802834 (pyfolio, popular [Zipline] algos shared through Quantopian)
"Superalgos and the Trading Singularity" https://news.ycombinator.com/item?id=19109333 (awesome-quant,)
Looks great!
Though looking at the code -> https://github.com/garethdmm/gryphon/tree/master/gryphon/lib...
Seems quite hard to extend it with new exchanges.
Shouldn't be, you just need to write a wrapper for the exchange API with the interface defined in 'gryphon.lib.exchange.exchange_api_wrapper'. I'll add an article to the docs with more about that soon.
Would CCXT be useful here? https://github.com/ccxt/ccxt
> The ccxt library currently supports the following 135 cryptocurrency exchange markets and trading APIs:
It's possible CCXT could be used to easily wrap other exchanges into gryphon. I'm not familiar with the library so hard to guess if it would be a net win or not.
Thanks for taking the time to open source this.
So is its use case limited to arb? Or are other HFT strategies supported?
It's perfectly general: arb, market making, signal trading, ml, etc. Whatever strategy class you're thinking of, you can probably implement it on Gryphon.
Can you please explain to me how a tool written in python can be used for HFT or market making?
I’m asking because we generally used ASICs and C++ in the past, or more recently Rust. Even GPUs are often difficult because they introduce milliseconds of latency.
If you want to restrict the definition of HFT to only sub-millisecond strategies you're correct. But then, all HFT is impossible in crypto, since with web request latency and rate limits, it would be very difficult to get tick speeds even in the 10s of milliseconds. It's fine if you want to call this "algo trading" instead of HFT, but I think a common understanding of the term would include Gryphon's capabilities.
In any case Gryphon uses Cython to compile itself down to C, which isn't quite as good as writing in native C but is a good chunk of the way there.
> In any case Gryphon uses Cython to compile itself down to C, which isn't quite as good as writing in native C but is a good chunk of the way there.
Would there be any advantage to asyncio with uvloop (written in Cython, like pandas, and built on libuv, like Node)? https://github.com/MagicStack/uvloop
IDK how many e.g. signals routines benefit from asyncio yet.
Wow. That’s pretty cool.
Been toying with the idea of algo trading on stock market. Nice to have a reference work
I've been playing around with using lstm recurrent nets to find patterns in forex trades, with no real expectation of anything other than learning about recurrent nets (and td convolutional nets). I was able to access 15 years of historical tick data. I would imagine lack of historical pricing data would be an issue for any machine learning approach to crypto trading. Even with 15 years of daily prices I only have ~5500 samples per major currency pair. I've toyed with learning off hourly prices rather than daily, and I've also thought about creating more samples by shifting prices up or down, since perhaps the general patterns would be the same.
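For what it's worth, a common way to get more than ~5500 samples out of a daily series is overlapping windows over returns rather than shifted copies of prices; a sketch under that assumption (all names hypothetical, synthetic prices standing in for real data):

```python
import numpy as np

def make_windows(prices, window=30, horizon=1):
    """Turn one long price series into overlapping (input, target) samples:
    each input is `window` consecutive log-returns, each target is the
    return `horizon` steps after the window ends."""
    returns = np.diff(np.log(prices))  # log-returns are closer to stationary
    X, y = [], []
    for i in range(len(returns) - window - horizon + 1):
        X.append(returns[i:i + window])
        y.append(returns[i + window + horizon - 1])
    return np.array(X), np.array(y)

# synthetic upward-drifting price series standing in for daily closes
prices = np.cumsum(np.random.rand(200)) + 100.0
X, y = make_windows(prices, window=30, horizon=1)
# X.shape == (169, 30), y.shape == (169,)
```

The windows overlap, so the samples are far from independent, but it is the standard starting point for feeding a fixed-length recurrent or convolutional net.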
Ouch. Out of all the possible subjects to learn NNs on, you have picked by far the most difficult possible. Seriously. If you think of an analogue to rocketry, with the easiest being launching fireworks from a bottle and the other being a mission to Mars, you have picked a Moon landing.
I don’t even know where to begin. Financial data has an extremely low signal-to-noise ratio and is fraught with pitfalls. It is highly non-normal, heteroscedastic, non-stationary, and frequently changes behavioural regimes. It is irregular, the information content is itself irregular, and the prices sold by vendors often have hard-to-detect issues that will taint your results until you actually start trading and realise that a fundamental assumption was wrong. You may train a model on one period and find that the market behaviour has changed and your model is rubbish. Cross-validation and backtesting on black-box algorithms with heavy parameter tuning is a field of study on its own, with so many issues that endless papers have been written on each specific nuance.
Successfully building ML models for trading is an extremely difficult discipline that requires a deep understanding of the markets, the idiosyncrasies of market data, statistics, and programming. Most quant shops who run successful ML algos (they are quite rare) have dedicated data teams whose entire remit is to source and clean data. The saying of rubbish in, rubbish out is very true. Even data providers like Reuters or Bloomberg frequently have crap data. We pay nearly 500k a year to Reuters and find errors in their tick data every week. Data like spot forex is a special beast because the market is decentralized: there is no exchange that could provide an authoritative price feed. Trades have been rolled back in the past, and if your data feed does not reflect this, you are effectively analysing junk data.
I don’t even want to get started about the fact that trying to train an RNN on 5500 observations is folly. Did you treat the data in any way? The common way to regularise market data for ML is to resample it to information bars. This is not going to work on a daily basis, so you should start off with actual tick data.
Nearly every starry eyed junior quant goes in with the notion that you can just run some fancy ML models on some market data and you’ll get a star trading algo. That a small handful of statistical tests will tell you whether your results are meaningful, whether your data has autocorrelation or mean reverting properties. In reality, ML models are very difficult to train on financial data. Most statistical forecasting tools fail to find relationships and blindly training models on past data very rarely results in more than spurious performance in a back test.
I don’t want to discourage you by any means, but I’d start off with something easier than what you are proposing. Finance firms have entire teams dedicated to what you are trying to do and even they often fail to find anything.
Thanks for the great feedback. I have no expectations for this other than the learning, and it's already been successful on that front. Just seemed like a fun thing to poke at when most other hobbyists seem to be doing image analysis and language modeling. I've crawled a couple of forums and I get that there are a lot of people out there who think they can readily use these techniques to make money. I doubt very much that this will be the outcome in my case :).
Where I am now I am just trying to figure out how to treat the data, whether to normalize or stationarize and how to encode inputs, etc. The reason that I am working with daily prices is that the fantasy output of this would be a model that can inform a one day grid trading strategy. It may very well be that daily prices won't work for this.
Whether there's anything like an equilibrium in cryptoasset markets, where there are no underlying fundamentals, is debatable. While there's no book price, PoW coin prices might be rationally describable in terms of (average estimated cost of energy + cost per GH/s + 'speculative value').
A proxy for energy costs, chip costs, and speculative information
Are there standard symbols for this?
Can cryptoasset market returns be predicted with quantum harmonic oscillators as well? What NN topology can learn a quantum harmonic model? https://news.ycombinator.com/item?id=19214650
"The Carbon Footprint of Bitcoin" (2019) defines a number of symbols that could be standard in [crypto]economics texts. Figure 2 shows the "profitable efficiency" (which says nothing of investor confidence and speculative information, and how we maybe overvalued the security in 2007-2009). Figure 5 lists upper and lower estimates for the BTC network's electricity use. https://www.cell.com/joule/fulltext/S2542-4351(19)30255-7
Here's a cautionary dialogue about correlative and causal models that may also be relevant to a cryptoasset price NN learning experiment: https://news.ycombinator.com/item?id=20163734
Wind-Powered Car Travels Downwind Faster Than the Wind
NOAA upgrades the U.S. global weather forecast model
> Working with other scientists, Lin developed a model to represent how flowing air carries these substances. The new model divided the atmosphere into cells or boxes and used computer code based on the laws of physics to simulate how air and chemical substances move through each cell and around the globe.
> The model paid close attention to conserving energy, mass and momentum in the atmosphere in each box. This precision resulted in dramatic improvements in the accuracy and realism of the atmospheric chemistry.
Global Forecast System > Future https://en.wikipedia.org/wiki/Global_Forecast_System#Future
A plan to change how Harvard teaches economics
This isn't an economics class. It's a public policy class. Here are the course topics:[1]
- Part I: Equality of Opportunity
- Part II: Education
- Part III: Racial Disparities
- Part IV: Health
- Part V: Criminal Justice
- Part VI: Climate Change
- Part VII: Tax Policy
- Part VIII: Economic Development and Institutional Change
This really belongs in Harvard's "JFK School of Government", not economics.
Possible topics for a modern economics intro class:
- Instability and equilibrium, or why markets oscillate.
- From zero to one, the tendency to and effects of monopoly and near-monopoly.
- Externalities, their uses and discontents.
- Debt vs. equity vs. what tax policy rewards
- Scarce resources that don't map to money - attention and time.
- Finance as a system decoupled from productive activity
[1] https://opportunityinsights.org/wp-content/uploads/2019/05/E...
Is there a single book that covers all that? That would be awesome.
Schools like Harvard don’t teach from a book.
Wrong side of the Atlantic. I didn’t have to buy a single textbook my entire undergrad because we had libraries and lecture notes. Required textbooks are far more a US thing than most other countries.
Harvard absolutely has courses taught from a book. I’m sure Mankiw requires his textbook for Harvard’s intro econ course. He wrote it, he thinks it’s good.
And he's made $42 million in royalties on the book. It's almost endearing how economists claim they are somehow immune to incentives, until you pause to consider they are primarily employed as apologists for the continuation of rent-seeking policies that entrench the rich and mighty.
> apologists for the continuation of rent-seeking policies that entrench the rich and mighty.
This.
"THE IMF CONFIRMS THAT 'TRICKLE-DOWN' ECONOMICS IS, INDEED, A JOKE" https://psmag.com/economics/trickle-down-economics-is-indeed...
> INCREASING THE INCOME SHARE TO THE BOTTOM 20 PERCENT OF CITIZENS BY A MERE ONE PERCENT RESULTS IN A 0.38 PERCENTAGE POINT JUMP IN GDP GROWTH.
> The IMF report, authored by five economists, presents a scathing rejection of the trickle-down approach, arguing that the monetary philosophy has been used as a justification for growing income inequality over the past several decades. "Income distribution matters for growth," they write. "Specifically, if the income share of the top 20 percent increases, then GDP growth actually declined over the medium term, suggesting that the benefits do not trickle down."
"Causes and Consequences of Income Inequality: A Global Perspective" (2015) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22...
I'll add that we tend to overlook the level of government spending during periods of trickle-down economics, which confounds the comparison. Change in government spending (somewhat unfortunately, regardless of revenues) is a relevant factor.
Let's make this economy great again? How about you identify the decade(s) you're referring to, and I'll show you the tax revenue (on income and now capital gains), federal debt per capita, and the growth in GDP.
The difference between big data/data science/empirical study and 'classical' economics (by which I'd include any system of economics that seeks to explain human behavior via an underlying metatheory) is that a primarily empirical approach obscures the underlying theory necessarily present in any experiment where you're trying to fit data to a curve.
For example, when you run a science experiment and plot the data, you may find that you're looking at a line. While this is an interesting finding, it has zero predictive value for anything other than the exact situation you've collected data for. In order to formulate a scientific law, you first must (a) believe that such a thing exists and (b) have some theory as to what shape the curve ought to fit. For example, a naive look at physics using an 'empirical' approach might incorrectly conclude that force is simply mass times acceleration. While moderately useful for many problems, this offers little predictive power in the general case. In order to actually formulate a law of predictive value, you have to first consider various other laws and axioms (such as the constancy of the speed of light), at which point, by deduction and without any need of empiricism, you determine that this is wrong and you need another kind of equation to fit your data to.
I don't know if the simplistic demand curves drawn in the original textbook are correct or not. However, at least those are based on a particular set of assumptions that can be validated or not. The kind of empiricism put forth by Mr Chetty does not offer this at all.
All this is to say that, while data is useful for validation, it is not useful for prediction. The last thing we need is a black-box machine learning model to make major economic decisions off of. What we do need is proper models that are then validated, which don't necessarily need 'big data.'
> All this is to say that, while data is useful for validation, it is not useful for prediction. The last thing we need is a black-box machine learning model to make major economic decisions off of. What we do need is proper models that are then validated, which don't necessarily need 'big data.'
Hand-wavy theory, predicated upon physical-world models of equilibrium which are themselves classical and incomplete, without validation is preferable to empirical models? Please.
Estimating the predictive power of some LaTeX equations is a different task than measuring error of a trained model.
If the model does not fit all of the big data, the error term is higher, regardless of whether the model was pulled out of a hat in front of a captive audience or deduced through inference from actual data fed through an unbiased analysis pipeline.
If the 'black-box predictive model' has lower error for all available data, the task is then to reverse the model! Not to argue for unvalidated theory.
Here are a few discussions regarding validating economic models, some excellent open econometric lectures (as notebooks that are unfortunately not in an easily-testable programmatic form), the lack of responsible validation, and some tools and datasets that may be useful for validating hand-wavy classical economic theories:
"When does the concept of equilibrium work in economics?" https://news.ycombinator.com/item?id=19214650
> "Lectures in Quantitative Economics as Python and Julia Notebooks" https://news.ycombinator.com/item?id=19083479 (data sources (pandas-datareader, pandaSDMX), tools, latex2sympy)
That's just an equation in a PDF.
(edit) Here's another useful thread: "Ask HN: Data analysis workflow?" https://news.ycombinator.com/item?id=18798244
Most of the interesting economic questions are inference problems, not prediction problems. The question is not "what is the best guess of y[i] given these values of x[i]'s", but what would y[i] have been for this very individual i (or country, in macroeconomics) if we could have wound back the clock and changed the values of x[i]'s for this individual. The methods that economists know and use may not be the best, but the standard ML prediction methods do not address the same questions, and data scientists without a social/economic/medical background are often not even aware of the distinction.
Economists and social scientists try to do non-experimental causal inference. Maybe they're not good at it, maybe the very problem is unsolvable, but it's not because they don't know how Random Forests or RNNs work. Economists already know that students from single-parent families do worse at school than those from married families. If the problem is just to predict individual student results, the number of parents in the household is certainly a good predictor. The problem facing economists is: would encouraging marriage or discouraging divorce improve student results? Nothing in PyTorch or TensorFlow will help with the answer.
Backtesting algorithmic trading algorithms is fairly simple: what actions would the model have taken given the available data at that time, and how would those trading decisions have affected the single objective dependent variable. Backtesting, paper trading, live trading.
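That walk-forward loop can be sketched in a few lines (a deliberately naive illustration: one unit per trade, no fees, slippage, or position sizing):

```python
def backtest(prices, signal, initial_cash=10_000.0):
    """Walk forward through the series; at each step the strategy sees
    only prices up to that point and returns a target position
    (-1 short, 0 flat, +1 long). P&L accrues from the next price move."""
    cash = initial_cash
    equity = []
    for t in range(1, len(prices)):
        # the strategy decides using only data available *before* the move
        position = signal(prices[:t])
        cash += position * (prices[t] - prices[t - 1])
        equity.append(cash)
    return equity

# buy-and-hold as a trivial strategy
curve = backtest([100, 101, 103, 102], lambda history: 1)
# curve == [10001.0, 10003.0, 10002.0]
```

The key discipline is in the slice `prices[:t]`: letting the strategy peek at `prices[t]` or later is the classic look-ahead bias that makes backtests lie.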
Medicine (and also social sciences) is indeed more complex; but classification and prediction are still the basis for making treatment recommendations, for example.
Still, the task really is the same. A NN (like those that Torch, Theano, TensorFlow, and PyTorch produce; now with the ONNX standard for neural network model interchange) learns complex relations and really doesn't care about causality: minimize the error term. Recent progress in reducing the size of NN models e.g. for offline natural language classification on mobile devices has centered around identifying redundant neuronal connections ("from 100GB to just 0.5GB"). Reversing a NN into a far less complex symbolic model (with variable names) is not a new objective. NNs are being applied for feature selection, XGBoost wins many Kaggle competitions, and combinations thereof appear to be promising.
Actually testing second-order effects of evidence-based economic policy recommendations is certainly a complex highly-multivariate task (with unfortunate ideological digression that presumes a higher-order understanding based upon seeming truisms that are not at all validated given, in many instances, any data). A causal model may not be necessary or even reasonably explainable; and what objective dependent variables should we optimize for? Short term growth or long-term prosperity with environmental sustainability?
... "Please highly weight voluntary sustainability reporting metrics along with fundamentals" when making investments and policy decisions?
Were/are the World3 models causal? Many of their predictions have subsequently been validated. Are those policy recommendations (e.g. in "The Limits to Growth") even more applicable today, or do we need to add more labeled data and "Restart and Run All"?
...
From https://research.stlouisfed.org/useraccount/fredcast/faq/ :
> FREDcast™ is an interactive forecasting game in which players make forecasts for four economic releases: GDP, inflation, employment, and unemployment. All forecasts are for the current month—or current quarter in the case of GDP. Forecasts must be submitted by the 20th of the current month. For real GDP growth, players submit a forecast for current-quarter GDP each month during the current quarter. Forecasts for each of the four variables are scored for accuracy, and a total monthly score is obtained from these scores. Scores for each monthly forecast are based on the magnitude of the forecast error. These monthly scores are weighted over time and accumulated to give an overall performance.
> Higher scores reflect greater accuracy over time. Past months' performances are downweighted so that more-recent performance plays a larger part in the scoring.
The #GlobalGoals Targets and Indicators may be our best set of variables to optimize for from 2015 through 2030; I suppose all of them are economic.
Using predictive models for policy is not new, in fact it was the standard approach long before more inferential models, and the famed Lucas critique precisely targets a primitive approach similar to what you are proposing.
The issue is the following: in economics, one is interested in an underlying parameter of a complex equilibrium system (or, if you wish, a non-equilibrium complex system of multi-agent behavior). This may be, for example, some pricing parameter for a given firm, say, how your sold units react to setting a price.
Economics faces two basic issues:
First, any predictive model (like a NN or a simple regression) that takes price as an input will not correctly estimate the sensitivity of revenue to price. It is usually the case that the inference is reversed.
A model where price is input, and sold units or revenue is output (or vice-versa) will predict (you can check that using pretty much any dataset of prices and outputs) that higher prices lead to higher outputs, because that is the association in the data. Of course we know that in truth, prices and outputs are co-determined. They are simultaneous phenomena, and regressing one on the other is not sufficient to "causally identify" the correct effect.
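A minimal simulation of this reversed inference (all coefficients and noise scales below are hypothetical choices, not estimates from any real dataset): when demand shocks dominate, equilibrium prices and quantities trace out the supply curve, so a naive regression of quantity on price comes out positive even though the true demand slope is negative.

```python
# Sketch: simultaneity bias in a toy supply/demand equilibrium.
import random
import statistics

random.seed(0)
a, b = 100.0, 2.0   # demand: q = a - b*p + u  (true demand slope is -2)
c, d = 10.0, 1.5    # supply: q = c + d*p + v

prices, quantities = [], []
for _ in range(10_000):
    u = random.gauss(0, 5)    # demand shock (large)
    v = random.gauss(0, 0.5)  # supply shock (small)
    p = (a - c + u - v) / (b + d)  # equilibrium price
    q = c + d * p + v              # equilibrium quantity
    prices.append(p)
    quantities.append(q)

# Naive OLS of quantity on price: slope = cov(p, q) / var(p)
mp = statistics.fmean(prices)
mq = statistics.fmean(quantities)
cov_pq = sum((p - mp) * (q - mq) for p, q in zip(prices, quantities)) / len(prices)
slope = cov_pq / statistics.pvariance(prices)
print(f"naive OLS slope of q on p: {slope:.2f} (true demand slope: {-b})")
```

The fitted slope lands near the supply slope (+1.5) rather than the demand slope (-2), and no amount of extra nonlinearity in the fit changes that.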
This is independent of how sophisticated your model is otherwise. Fitting a better non-linear representation does not help.
The solution is of course to reduce down these "endogenous" phenomena to their basic ingredients. Say you have cost data, and some demand parameters. Then, using a regression model (or NN) to predict the vector of endogenous outcome variables will work, and roughly give you the right inference.
Then, as a firm, you are able to use these (more) exogenous predictive variables to find your correct pricing.
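One hedged sketch of that reduction: if we observe a cost shock that shifts supply but is independent of demand shocks (my assumption, purely for illustration), a simple instrumental-variables (Wald) estimator recovers the demand slope that a naive regression gets wrong.

```python
# Sketch: recovering the demand slope with an observed, exogenous cost shifter.
import random
import statistics

random.seed(0)
a, b = 100.0, 2.0   # demand: q = a - b*p + u   (true demand slope: -2)
c, d = 10.0, 1.5    # supply: q = c + d*p - z + v

zs, prices, quantities = [], [], []
for _ in range(50_000):
    u = random.gauss(0, 5)             # demand shock
    v = random.gauss(0, 0.5)           # supply shock
    z = random.gauss(0, 3)             # observed cost shifter (exogenous by assumption)
    p = (a - c + z + u - v) / (b + d)  # equilibrium price
    q = a - b * p + u                  # equilibrium quantity
    zs.append(z); prices.append(p); quantities.append(q)

def cov(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# Simple IV (Wald) estimator: cov(z, q) / cov(z, p)
iv_slope = cov(zs, quantities) / cov(zs, prices)
print(f"IV estimate of demand slope: {iv_slope:.2f} (truth: {-b})")
```

The exogeneity of z is exactly the kind of assumption that cannot be verified from the (p, q) data alone; it has to come from outside the model.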
This is not new, pops up everywhere in social science, is the basis of a gigantic literature called econometrics, and really has nothing to do with how you do the prediction.
The only thing that NN add are better predictions (better fitting) and the ability to deal with more data. As this inferential problem shows, using more (and more fine-grained) data is indeed crucial to predicting what a firm should do.
BUT, it is crucial to understand and reason about the underlying causality FIRST, because otherwise even the most sophisticated statistical approach will simply give you wrong results.
Secondly, the counterfactual data for economic issues is usually very scarce. The approach taken by machine learning is problematic, not only because of potentially wrong inference, but also because two points in time may simply not be based on comparable data-generating processes.
In fact, this is exactly the blindness that led people to miss the financial crisis. Of course, with enough data and long enough samples, one should expect to become pretty good at predicting economic outcomes. But experience has shown that in economics, these data are simply too scarce. The unobserved variation between two quarters, two years, two countries, two firms (etc.) is simply very large and has fat tails. This leads to spontaneous breakdowns of such predictive models.
Taking these two issues together, we see that better non-linear function approximation is not the solution to our problems. Instead, it is a methodological improvement that must be used in conjunction with what we have learned about causality.
Indeed, the literature is moving in a different direction. Good economic science nowadays means identifying effects via natural experiments and other exogenous shifts that can plausibly show causality.
Of course such experiments are more rare, and more difficult, the larger the scale becomes. Which is why Macroeconomics is arguably the "worst science" in economics, while things like auctions and microstructure of markets are actually surprisingly good science (nowadays).
Doors are wide open for ML techniques, but really only to the point that they are useful in operationalizing more and better data.
Anyone trying to understand economic phenomena needs to be keenly aware of how inference can be done, which requires an understanding (or an approach to) - that is, a theory - of the underlying mechanisms.
Yes, some combination of variables/features grouped and connected with operators that correlate to an optima (some of which are parameters we can specify) that occurs immediately or after a period of lag during which other variables of the given complex system are dangerously assumed to remain constant.
> In fact, this is exactly the blindness that led to people missing the financial crisis
ML was not necessary to recognize the yield curve inversion as a strongly predictive signal correlating to subsequent contraction.
An NN can certainly learn to predict according to the presence or magnitude of a yield curve inversion and which combinations of other features.
- [ ] Exercise: Learning this and other predictive signals by cherry-picking data and hand-optimizing features may be an extremely appropriate exercise.
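A toy sketch of that feature engineering (the yields below are made up; in practice the 10-year and 3-month series could be pulled from FRED, e.g. via pandas-datareader):

```python
# Hypothetical monthly yields (%); turning the term spread into a binary signal.
ten_year = [2.9, 2.7, 2.5, 2.4, 2.3, 2.2]
three_month = [2.0, 2.2, 2.4, 2.5, 2.4, 2.4]

spread = [round(t - s, 2) for t, s in zip(ten_year, three_month)]
inverted = [s < 0 for s in spread]  # the classic inversion signal

print(spread)    # [0.9, 0.5, 0.1, -0.1, -0.1, -0.2]
print(inverted)  # the last three months flag an inversion
```

A NN (or anything else) could then take the spread, its magnitude, and its duration as input features alongside others.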
"This field is different because it's nonlinear, very complex, there are unquantified and/or uncollected human factors, and temporal"
Maybe we're not in agreement about whether AI and ML can do causal inference just as well if not better than humans manipulating symbols with human cognition and physical world intuition. The time is nigh!
In general, while skepticism and caution are appropriate, many fields suffer from a degree of hubris which prevents them from truly embracing stronger AI in their problem domain. (A human person cannot mutate symbol trees and validate with shuffled and split test data all night long)
> Anyone trying to understand economic phenomena needs to be keenly aware of how inference can be done, which requires an understanding (or an approach to) - that is, a theory - of the underlying mechanisms.
I read this as "must be biased by the literature and willing to disregard an unacceptable error term"; but also caution against rationalizing blind findings which can easily be rationalized as logical due to any number of cognitive biases.
Compared to AI, we're not too rigorous about inductive or deductive inference; we simply store generalizations about human behavior and predict according to syntheses of activations in our human NNs.
If you're suggesting that the information theory that underlies AI and ML is insufficient to learn what we humans have learned in a few hundred years of observing and attempting to optimize, I must disagree (regardless of the hardness or softness of the given complex field). Beyond a few combinations/scenarios, our puny little brains are no match for our department's new willing AI scientist.
> ML was not necessary to recognize the yield curve inversion as a strongly predictive signal correlating to subsequent contraction.
> An NN can certainly learn to predict according to the presence or magnitude of a yield curve inversion and which combinations of other features.
> - [ ] Exercise: Learning this and other predictive signals by cherry-picking data and hand-optimizing features may be an extremely appropriate exercise.
If the financial crisis has not yet occurred, how will the NN learn a relationship that does not exist in the data?
The exercise of cherry-picking data and hand-optimizing is equivalent to applying theory to your statistical problem. It is what is required if you lack data points, using ML or otherwise. Nevertheless, we (as in humans) are bad at it. Speaking of the financial crisis: it was not AIs that picked up on it, it was a few people applying a sophisticated and deep understanding of causal relationships. And that so few people did this shows how bad we humans are at doing this implicitly and automatically by just looking at data!
> Maybe we're not in agreement about whether AI and ML can do causal inference just as well if not better than humans manipulating symbols with human cognition and physical world intuition. The time is nigh! In general, while skepticism and caution are appropriate, many fields suffer from a degree of hubris which prevents them from truly embracing stronger AI in their problem domain. (A human person cannot mutate symbol trees and validate with shuffled and split test data all night long)
ML and AI certainly can do causal inference. But then you have to do causal inference. Again, prediction on historical data is not equivalent to causal analysis, and neither is backtesting or validation. At the end of the day, AI and ML improves on predictions, but the distinction of causal analysis is a qualitative one.
> I read this as "must be biased by the literature and willing to disregard an unacceptable error term"; but also caution against rationalizing blind findings which can easily be rationalized as logical due to any number of cognitive biases.
No. My point is that for causal analysis, you have to leverage assumptions that are beyond your data set. Where these come from is besides the point. You will always employ a theory, implicitly or explicitly.
The major issue is not that we use theories, but rather that we might do it implicitly, hiding the assumptions about the DGP that allow causal inference. This is where humans are bad. Theories are just theories. With precise assumptions giving us causal identification, we are in a good position to argue about where we stand.
If we just run algorithms without really understanding what is going on, we are just repeating the mistakes of the last forty years!
> If you're suggesting that the information theory that underlies AI and ML is insufficient to learn what we humans have learned in a few hundred years of observing and attempting to optimize, I must disagree (regardless of the hardness or softness of the given complex field). Beyond a few combinations/scenarios, our puny little brains are no match for our department's new willing AI scientist.
All the information theory I have seen in any of the machine learning textbooks I have picked up is methodologically equivalent to statistics. In particular, the standard textbooks' (Elements, Murphy, etc.) treatment of information theory would only allow causal identification under the exact same conditions that the statistics literature treats.
I fail to see the difference, or what AI in particular adds. The issue of causal inference is a "hot topic" in many fields, including AI, but the underlying philosophical issues are not exactly new. This includes information theory.
You seem to think that ML has somehow solved this problem. From my reading of these books, I certainly disagree. Causal inference is certainly POSSIBLE - just as in statistics, but ML doesn't give it to you for free!
In particular, note the following issue: To show causal identification, you need to make assumptions on your DGP (exogenous variation, timing, graphical causal relations ... whatever). Even if these assumptions are very implicit, they do exist. Just by looking at data, and running a model, you do not get causal inference. It can not be done "within" the system/model.
If you bake these things into your AI, then it, too, makes these assumptions. There really is no difference. For example, I could imagine an AI that can identify likely exogenous variations in the data and use them to predict counterfactuals. That's probably not too far off, if it doesn't exist already. But this is still based on the assumption that these variations are, indeed, exogenous, which can never be proven within the DGP.
In contrast, I find that most "AI scientists" care very much about prediction and very little about causal inference. I don't mean that this subfield doesn't exist; but it is a subfield. In contrast, for many non-AI scientists, causal inference IS the fundamental question, and prediction is only an afterthought. ML in practice involves doing correct experiments (A/B testing), at best. It will sooner or later also adopt all the other causal inference techniques. But my point stands: I have yet to see what ML adds. Enlighten me!
AI, ML and stats will merge, if they haven't already. The distinction will disappear. I believe the issues will not. I employ a lot of AI/ML techniques in my scientific work. Never have they solved the underlying issue of causal inference for me!
> AI, ML and stats will merge, if they haven't already. The distinction will disappear. I believe the issues will not.
All tools are misapplied; including economics professionals and their advice.
Here's a beautiful Venn diagram of "Colliding Web Sciences" which includes economics as a partially independent category: https://www.google.com/search?q=colliding+web+sciences&tbm=i...
A causal model is a predictive model. We must validate the error of a causal model.
Why are theoretic models hand-wavy? "That's just because noise, the model is correct." No, such a model is insufficient to predict changes in dependent variables when in the presence of noise; which is always the case. How does validating a causal model differ from validating a predictive model with historical and future data?
Yield-curve inversion as a signal can be learned by human and artificial NNs. Period. There are a few false positives in historical data: indeed, describe the variance due to "noise" by searching for additional causal and correlative relations in additional datasets.
I searched for "python causal inference" and found a few resources on the first page of search results: https://www.google.com/search?q=python+causal+inference
CausalInference: https://pypi.org/project/CausalInference/
DoWhy: https://github.com/microsoft/dowhy
CausalImpact (Python port of the R package): https://github.com/dafiti/causalimpact
"What is the best Python package for causal inference?" https://www.quora.com/What-is-the-best-Python-package-for-ca...
Search: graphical model "information theory" [causal] https://www.google.com/search?q=graphical+model+%22informati...
Search: opencog causal inference https://www.google.com/search?q=opencog+causal+inference (MOSES, PLN,)
If you were to write a pseudocode algorithm for an econometric researcher's process of causal inference (and also their cognitive processes (as executed in a NN with a topology)), how would that read?
(Edit) Something about the sufficiency of RL (Reinforcement Learning) for controlling cybernetic systems. https://en.wikipedia.org/wiki/Cybernetics
What's the point of dumping a bunch of Google results here? At least half the results are about implementations of pretty traditional statistical/econometric inference techniques. The Rubin causal inference framework requires either randomized controlled trials or, for propensity score models, an essentially unverifiable separate modeling step.
Google's CausalImpact model, despite having been featured on Google's AI blog, is a statistical/econometric model (essentially the same as https://www.jstor.org/stable/2981553). It leaves it up to the user to find and designate a set of control variables that are assumed to be unaffected by the treatment. This is not done algorithmically, and has very little to do with RNNs, Random Forests or regression regularization.
> If you were to write a pseudocode algorithm for an econometric researcher's process of causal inference (and also their cognitive processes (as executed in a NN with a topology)), how would that read?
[1] Set up a proper RCT, that is, randomly assign the treatment to different subjects. [2] Calculate the outcome differences between the treated and untreated.
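A minimal sketch of those two steps on synthetic data (the true effect of 2.0 is baked in by construction, so we know what the answer should be):

```python
# Sketch: randomize a treatment, then take the difference in mean outcomes.
import random
import statistics

random.seed(1)
treated, control = [], []
for _ in range(1000):
    t = random.random() < 0.5                       # [1] random assignment
    y = random.gauss(10, 3) + (2.0 if t else 0.0)   # outcome; true effect = 2.0
    (treated if t else control).append(y)

# [2] difference in means estimates the average treatment effect
ate = statistics.fmean(treated) - statistics.fmean(control)
print(f"estimated average treatment effect: {ate:.2f}")
```

Randomization is what makes the subtraction in step [2] a causal estimate rather than a mere association.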
For A/B testing your website, the work division between [1] and [2] might be 50-50, or at least at similar order of magnitudes.
For the questions that academic economists wrestle with (say, estimating the effect of increasing school funding / decreasing class size, the effect of shifts between tax deductions vs. tax credits vs. changing tax rates or bands, or the different outcomes on GDP growth and unemployment of monetary vs. fiscal expansion), [1] would be 99.9999% of the work, or completely impossible.
Faced with the impracticality/impossibility of proper experiments, academic micro-economists have typically resorted to instrumental variable regressions. AFAICT, finding (or rather, convincing the audience that you have found) a proper instrument is not very amenable to automation or data mining.
In academic macro-economics (and hence at Serious Institutions such as central banks and the IMF), the most popular approaches to building causal models in the last 3 or 4 decades have probably been: 1) making a bunch of unrealistic assumptions about the behaviour of individual agents (microfoundations/DSGE models); 2) making a bunch of uninterpretable and unverifiable technical assumptions on the parameters in a generic dynamic stochastic vector process fitted to macro-aggregates (structural VAR with "identifying restrictions"); 3) manually grouping different events in different countries from different periods in history as "similar enough" to support your pet theory: lowering interest rates can lead to a) high inflation, high unemployment (USA 1970s), b) high inflation, low unemployment (Japan 1970s), c) low inflation, high unemployment (EU 2010s), d) low inflation, low unemployment (USA, Japan past 2010s).
I really don't see how RL would help with any of this. Care to come up with something concrete?
> What's the point of dumping a bunch of Google results here? At least half the results are about implementations of pretty traditional etatistical / econometric inference techniques.
Here are some tools for causal inference (and a process for finding projects to contribute to instead of arguing about insufficiency of AI/ML for our very special problem domain here). At least one AGI implementation doesn't need to do causal inference in order to predict the outcomes of actions in a noisy field.
Weather forecasting models don't / don't need to do causal inference.
> A/B testing
Is a multi-armed bandit feasible for the domain? Or, in practice, are there too many concurrent changes in variables to have any sort of controlled experiment? Then aren't you trying to do causal inference with mostly observational data?
> I really don't see how a RL would help with any of this. Care to come up with something concrete?
The practice of developing models and continuing on with them when they seem to fit, and when citations or impact reinforce them, is very much an exercise in RL. This is a control system with a feedback loop. A "cybernetic system." It's not unique. It's not too hard for symbolic or neural AI/ML. Stronger AI can or could do [causal] inference.
I am at loss at what you want to say to me, but let me reiterate:
Any learning model by itself is a statistical model. Statistical models are never automatically causal models, although causal models are statistical models.
Several causal models can be observationally equivalent to a single statistical model, but the substantive (inferential) implications on doing "an intervention" on the DGP differ.
It is therefore not enough to run and validate a model on data. Several causal models WILL validate on the same data, but their implications are drastically different. The data ALONE provides you no way to differentiate (we say, identify) the correct causal model without further restrictions.
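A small construction (my own, purely illustrative) of this observational equivalence: two opposite causal stories, x → y and y → x, tuned to generate the same bivariate Gaussian, so any fit statistic computed on (x, y) data scores them identically, even though intervening on x moves y only in the first model.

```python
# Sketch: two causal models, one joint distribution.
import random
import statistics

def corr(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    c = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return c / (statistics.pstdev(xs) * statistics.pstdev(ys))

random.seed(2)
n = 100_000

# Model A (x -> y): x ~ N(0,1), y = 0.5*x + noise with variance 0.75
xa = [random.gauss(0, 1) for _ in range(n)]
ya = [0.5 * x + random.gauss(0, 0.75 ** 0.5) for x in xa]

# Model B (y -> x): y ~ N(0,1), x = 0.5*y + noise with variance 0.75
yb = [random.gauss(0, 1) for _ in range(n)]
xb = [0.5 * y + random.gauss(0, 0.75 ** 0.5) for y in yb]

# Both models imply unit variances and correlation 0.5, so they are
# indistinguishable from observational (x, y) samples alone.
print(round(corr(xa, ya), 2), round(corr(xb, yb), 2))
```

Setting x to a fixed value by intervention would shift y's mean in Model A and leave it untouched in Model B; that difference is what the observational data cannot reveal.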
By extension, it is impossible for any ML mechanism to predict unobserved interventions without being a causal model.
ML and AI models CAN be causal models, which is the case if they are based on further assumptions about the DGP. For example, they may be graphical models, SCM/SEM etc. These restrictions can be derived algorithmically, based on all sorts of data, tuning, coding and whatever. It really doesn't change the distinction between causal and statistical analysis.
The way these models become causal is based on assumptions that constitute a theory in the scientific sense. These theories can then of course also be validated. But this is not based on learning from historical data alone. You always have to impose sufficient restrictions on your model (e.g. the DGP) to make such causal inference.
This is not new, but for your benefit, I basically transferred the above from an AI/ML book on causal analysis.
AI/ML can do causal analysis, because its statistics. AI/ML are not separate from these issues, do not solve these issues ex-ante, are not "better" than other techniques except on the dimensions that they are better as statistical techniques, AND, most importantly, causal application necessarily implies a theory.
Whether this is implicit or explicit is up to the researcher, but there are dangers associated with implicit causal reasoning.
And as Pearl wrote (and he is not a fan of econometrics by any means!), the issue of causal inference was FIRST raised by econometricians, BASED on combining the structure of economic models with statistical inference, in the 1940s.
I mean I get the appeal to trash talk social sciences, but when it comes to causal inference, you probably picked exactly the wrong one.
You are free to disregard economic theory. But you can not claim to do causal analysis without any theory. Doing so implicitly is dangerous. Furthermore, you are wrong in the sense that economic theory has put causal inference issues at the forefront of econometric research, and is therefore good for science even if you dislike those theories.
And by the way, I can come up with a good number of (drastic, hypothetical) policy interventions that would break your inference about a market crash, an inference you were only able to make once you had seen such a market crash at least once.
If this dependence is broken, your non-causal model will no longer work, because the relationship between the yield curve and a market crash is not a constant physical fact. What you did to make it a causal inference is implicitly assume a theory about how markets work (e.g., as they do right now) and that it will stay this way. Actually, you did a lot more, but that's enough.
Now, you and me, we can both agree that your model with yield curves is good enough. We could even agree that you would have found it before the financial crashes, and are a billionaire. But the commonality we agree upon is a context that defines a theory.
Some alien that has been analyzing financial systems all across the universe may disagree, saying that your statistical model is in fact highly sensitive to Earth's political, societal and natural context.
Such is the difficulty of causal analysis.
> By extension, it is impossible for any ML mechanism to predict unobserved interventions without being a causal model.
In lieu of a causal model, when I ask an economist what they think is going to happen and they aren't aware of any historical data - there is no observational data collected following the given combination of variables we'd call an event or an intervention - is it causal inference that they're doing in their head? (With their NN)
> Now, you and me, we can both agree that your model with yield curves is good enough.
Yield curves alone are insufficient due to the rate of false positives. (See: ROC curves for model evaluation, just like everyone else.)
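For concreteness, a toy sketch of ROC-style scoring for an inversion signal; the spreads and recession labels below are invented, including one false-positive inversion:

```python
# Sketch: AUC (area under the ROC curve) via the rank statistic:
# the probability that a random positive outscores a random negative
# (ties count half).
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical term spreads; the -0.25 inversion is a false positive
# (no recession followed). Deeper inversion = stronger signal, so score = -spread.
spreads = [0.9, 0.5, -0.25, -0.3, 0.2, -0.2]
recession_within_18m = [0, 0, 0, 1, 0, 1]
print(auc([-s for s in spreads], recession_within_18m))  # 0.875, not 1.0
```

The single false positive pulls the AUC below 1.0, which is exactly the degradation that adding further causal/correlative features would aim to recover.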
> We could even agree that you would have found it before the financial crashes,
The given signal was disregarded as a false positive by the appointed individuals at the time; why?
> Some alien that has been analyzing financial systems all across the universe may disagree,
You're going to run out of clean water and energy, and people will be willing to pay for unhealthy sugar water and energy-inefficient transaction networks with a perception of greater security.
That we need a "Martian scientist" approach is, IMHO, necessary because of our learned biases: we've inferred relations that have been reinforced, which cloud our assessment of new and novel solutions.
> Such is the difficulty of causal analysis.
What a helpful discussion. Thanks for explaining all of this to me.
Now, I need to go write my own definitions for counterfactual and DGP and include graphical models in there somewhere.
A further hint: here is a great book about causal analysis from an ML/AI perspective.
https://mitpress.mit.edu/books/elements-causal-inference
I feel like you will benefit from reading this!
It's free!
> In lieu of a causal model, when I ask an economist what they think is going to happen and they aren't aware of any historical data - there is no observational data collected following the given combination of variables we'd call an event or an intervention - is it causal inference that they're doing in their head? (With their NN)
It's up for debate if NN's represent what is going on in our heads. But let's for a moment assume it is so.
Then indeed, an economist leverages a big set of data and assumptions about causal connections to speculate how this intervention would change the DGP (the modules in the causal model) and therefore how the result would change.
An AI could potentially do the same (if that is really what we humans do), but so far, we certainly lack the ability to program such a general AI. The reason is, in part, because we have difficulty creating causal AI models even for specialized problems. In that sense, humans are much more sophisticated right now.
It is important to note that such a hypothetical AI would create a theory, based on all sorts of data, analogies, prior research and so forth, just like economists do.
It does not really matter if a scientist, or an AI, does the theorizing. The distinction is between causal and non-causal analysis.
The value of formal theory is to lay down assumptions and tautological statements that leave no doubt about what the theory is. If we see that the theory is wrong, because we disagree on the assumptions, this is actually very good and speaks for the theory. Lots of social science is plagued by "general theories" that can never really be shown to be false ex ante. And given that theories can never be empirically "proven," only validated in the statistical sense, this leads to many parallel theories of doubtful value. Take a gander into sociology if you want to see this in action.
Secondly, and this is very important, is that we learn from models. This is not often recognized. What we learn from writing down models is how mechanics or modules interact. These interactions, highly logical, are USUALLY much less doubtful than the prior assumptions. For example, if price and revenues are equilibrium phenomena, we LEARN from the model that we CAN NOT estimate them with a standard regression model!
This is exactly what led to causal analysis in this case, because earlier we would literally regress price on quantity, or production on price, etc., and be happy about it. But the results were often in the entirely wrong direction!
Instead, looking at the theory, we understood the mechanical intricacies of the process we supposedly modeled, and saw that we estimated something completely different than what we interpreted. Causal analysis, among other things, tackles this issue by asking "what it is really that we estimate here?".
> Hand-wavy theory - predicated upon physical-world models of equillibrium which are themselves classical and incomplete - without validation is preferable to empirical models? Please.
My friend, you are strawmanning.
I said,
> What we do need is proper models that are then validated, which don't necessarily need 'big data.'
Which agrees with you. I said we need both and, not one or the other.
> If the model does not fit all of the big data, the error term is higher; regardless of whether the model was pulled out of a hat in front of a captive audience or deduced though inference from actual data fed through an unbiased analysis pipeline.
Big data without a model is still only valid for the scenario the data were collected in
> If the 'black-box predictive model' has lower error for all available data, the task is then to reverse the model! Not to argue for unvalidated theory.
Certainly, but we should simultaneously recognize that any model so conceived is still only valid in the situations the data were collected in, which makes them not necessarily useful for the future. You could turn such an equation into an economic philosophy, but you'd have to do a lot more non-metric work.
How can you possibly be arguing that we should not be testing models with all available data?
All models are limited by the data they're trained on, regardless of whether they are derived through rigorous, standardized, unbiased analysis or through laudable divine inspiration.
From https://news.ycombinator.com/item?id=19084622 :
> pandas-datareader can pull data from e.g. FRED, Eurostat, Quandl, World Bank: https://pandas-datareader.readthedocs.io/en/latest/remote_da...
> pandaSDMX can pull SDMX data from e.g. ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank; with requests-cache for caching data requests: https://pandasdmx.readthedocs.io/en/latest/#supported-data-p...
> All models are limited by the data they're trained on, regardless of whether they are derived through rigorous, standardized, unbiased analysis or through laudable divine inspiration.
Some of the data we have isn't training data. Purely data-driven models tend to be ensnared by Goodhart's law.
For example, suppose we're issuing 30-year term loans and we have some data that shows that people with things like country club memberships and foie gras on their credit card statements have a higher tendency not to miss payments. So we use that information to make our determination.
But people are aware we're doing this and the same data is externally available, so now people start to waste resources on extravagant luxuries in order to qualify for a loan or a low interest rate, and that only makes it more likely that they ultimately default. However, that consequence doesn't become part of the data set until years have passed and the defaults actually occur, and in the meantime we're using flawed reasoning to issue loans. When we finally figure that out after ten years, the new data we use then will have some fresh different criteria for people to game, because the data is always from the past rather than the future.
We've already seen the kind of damage this can do. Politicians see data that college-educated people are better off, so they subsidize college loans, only to discover that the signal from having a degree that made it result in such gainful employment is diluted as it becomes more common, that subsidizing loans results in price inflation, and that making a degree a prerequisite for jobs that shouldn't require it creates incentives for degree mills that pump out credentials but not real education.
To get out of this we have to consider not only what people have done in the past but how they are likely to respond to a given policy change, for which we have no historical data prior to when the policy is enacted, and so we need to make those predictions based on logic in addition to data or we go astray.
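The loan example above can be sketched as a toy simulation (all numbers are invented): a proxy that predicts perfectly in the historical data stops predicting once applicants optimize for it.

```python
import random

random.seed(42)

N = 10_000
wealthy = [random.random() < 0.3 for _ in range(N)]

# Historical data: only the wealthy buy luxuries, and only the
# non-wealthy default, so "luxury on the statement" looks like a
# perfect repayment signal.
luxury_before = wealthy[:]
hist_default = (
    sum(1 for w, l in zip(wealthy, luxury_before) if l and not w)
    / sum(luxury_before)
)

# Once lenders condition on the signal, half of the non-wealthy
# start buying luxuries just to qualify; the proxy is now gamed.
luxury_after = [w or random.random() < 0.5 for w in wealthy]
new_default = (
    sum(1 for w, l in zip(wealthy, luxury_after) if l and not w)
    / sum(luxury_after)
)

print(f"default rate among luxury buyers, historical: {hist_default:.2f}")
print(f"default rate among luxury buyers, after gaming: {new_default:.2f}")
```

The historical default rate among luxury buyers is zero, so the criterion looks airtight; in the gamed population it is over 50%, and none of that shows up in the data until the defaults actually occur.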
> To get out of this we have to consider not only what people have done in the past but how they are likely to respond to a given policy change, for which we have no historical data prior to when the policy is enacted, and so we need to make those predictions based on logic in addition to data or we go astray.
"Pete, it's a fool who looks for logic in the chambers of the human heart."
Logically, we might have said "prohibition will reduce substance abuse harms", but the actual data indicate that black-market margins increased. Then we look at the success of Portugal's decriminalization efforts and cannot at all validate our logical models.
Similarly, we might have logically claimed that "deregulation of the financial industry will help everyone" or "lowering taxes will help everyone", and the data do not support those claims.
So, while I share the concerns about Responsible AI and encoding biases (and second-order effects of making policy recommendations according to non-causal models without critically, logically thinking first), I am very skeptical about our ability to deduce causal relations without e.g. blind, randomized, longitudinal, interventional studies (which are unfortunately basically impossible to do with [economic] policy, because there is no "ceteris paribus").
https://personalmba.com/second-order-effects/
"Causal Inference Book" https://news.ycombinator.com/item?id=17504366
> https://www.hsph.harvard.edu/miguel-hernan/causal-inference-...
> Causal inference (Causal reasoning) https://en.wikipedia.org/wiki/Causal_inference ( https://en.wikipedia.org/wiki/Causal_reasoning )
The virtue of logic isn't that your model is always correct or that it should be adhered to without modification despite contrary evidence, it's that it allows you to have one to begin with. It's a method of choosing which experiments to conduct. If you think prohibition will reduce substance abuse but then you try it and it doesn't, well, you were wrong, so end prohibition.
This is also a strong argument for "laboratories of democracy" and local control -- if everybody agrees what to do then there is no dispute, but if they don't then let each local region have their own choice, and then we get to see what happens. It allows more experiments to be run at once. Then in the worst case the damage of doing the wrong thing is limited to a smaller area than having the same wrong policy be set nationally or internationally, and in the best case different choices are good in different ways and we get more local diversity.
> If you think prohibition will reduce substance abuse but then you try it and it doesn't, well, you were wrong, so end prohibition.
Maybe we're at a local optimum, though. Maybe this is a sign that we should just double down, surge on in there, and get the job done by continuing to do the same thing and expecting different results. Maybe it's not the spec but the implementation.
Recommend a play according to all available data, and logic.
> This is also a strong argument for "laboratories of democracy" and local control -- if everybody agrees what to do then there is no dispute, but if they don't then let each local region have their own choice, and then we get to see what happens. It allows more experiments to be run at once. Then in the worst case the damage of doing the wrong thing is limited to a smaller area than having the same wrong policy be set nationally or internationally, and in the best case different choices are good in different ways and we get more local diversity.
"Adjusting for other factors," the analysis began.
- [ ] Exercise / procedure to be coded: Brainstorm and identify [non-independent] features that may create a more predictive model (a model with a lower error term). Search for confounding variables outside of the given data.
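A minimal sketch of the confounding-variable exercise, with invented data: a hidden common cause induces a strong raw correlation between two features that shrinks sharply once the confounder is held roughly constant.

```python
import random
from math import sqrt

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(
        sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    )

# Toy data: a hidden confounder (summer heat) drives both features.
heat = [random.random() for _ in range(5_000)]
ice_cream = [h + random.gauss(0, 0.1) for h in heat]
drownings = [h + random.gauss(0, 0.1) for h in heat]

print(f"raw correlation: {pearson(ice_cream, drownings):.2f}")

# Stratifying on the confounder (looking at hot days only)
# shrinks the spurious association sharply.
hot = [i for i, h in enumerate(heat) if h > 0.8]
print(f"hot days only:   {pearson([ice_cream[i] for i in hot], [drownings[i] for i in hot]):.2f}")
```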
The New York Times course to teach its reporters data skills is now open-source
It's more work to verify all formulas that reference unnamed variables in a spreadsheet than to review the code inputs and outputs in a notebook.
"Teaching Pandas and Jupyter to Northwestern journalism students" [in DC] https://www.californiacivicdata.org/2017/06/07/dc-python-not...
> http://www.firstpythonnotebook.org/
You can also develop d3.js visualizations — just like NYT — with jupyter notebooks and whichever language(s).
"Data-Driven Journalism" ("ddj") https://en.wikipedia.org/wiki/Data-driven_journalism
http://datadrivenjournalism.net/
"The Data Journalism Handbook 1" https://datajournalism.com/read/handbook/one
"The Data Journalism Handbook 2" https://datajournalism.com/read/handbook/two
While there are a number of ScholarlyArticle journals that can publish notebooks, I'm not aware of any newspapers that are prepared to publish notebooks as NewsArticles. It's pretty easy to `jupyter nbconvert --to html` or `--to markdown`, or just 'Save as'.
Regarding expressing facts as verifiable claims with structured data in HTML and/or blockchains: "Fact Checks" https://news.ycombinator.com/item?id=15529140
Does this course recommend linking to every source dataset and/or including full citations (with DOI) in the article? Does this course recommend getting a free DOI for the published revision of an e.g. GitHub project repository (containing data, and notebooks and/or the article text) with Zenodo?
No Kings: How Do You Make Good Decisions Efficiently in a Flat Organization?
Group decision-making > Formal systems: https://en.wikipedia.org/wiki/Group_decision-making#Formal_s...
> Consensus decision-making, Voting-based methods, Delphi method, Dotmocracy
Consensus decision-making: https://en.wikipedia.org/wiki/Consensus_decision-making
There's a field that some people are calling "Collaboration Engineering". I learned about this in a university course on collaboration.
6 Patterns of Collaboration [GRCOEB] — Generate, Reduce, Clarify, Organize, Evaluate, Build Consensus
7 Layers of Collaboration [GPrAPTeToS] — Goals, Products, Activities, Patterns of Collaboration, Techniques, Tools, Scripts
The group decision making processes described in the article may already be defined with the thinkLets design pattern language.
A person could argue against humming for various unspecified reasons.
I'll just CC this here from my notes, which everyone can read here [1]:
“Collaboration Engineering: Foundations and Opportunities” de Vreede (2009) http://aisel.aisnet.org/jais/vol10/iss3/7/
“A Seven-Layer Model of Collaboration: Separation of Concerns for Designers of Collaboration Systems” Briggs (2009) http://aisel.aisnet.org/icis2009/26/
Six Patterns of Collaboration “Defining Key Concepts for Collaboration Engineering” Briggs (2006) http://aisel.aisnet.org/amcis2006/17/
“ThinkLets: Achieving Predictable, Repeatable Patterns of Group Interaction with Group Support Systems (GSS)” http://www.academia.edu/259943/ThinkLets_Achieving_Predictab...
https://scholar.google.com/scholar?q=thinklets
[1] https://wrdrd.github.io/docs/consulting/team-building#collab...
4 Years of College, $0 in Debt: How Some Countries Make Education Affordable
It at least makes sense to pay for doctors and nurses to go to school, right? If you want to care for others and you do the work to earn satisfactory grades, I think that investing in your education would have positive ROI.
We had plans here in the US to pay for two years of community college for whoever ("America's College Promise"). IDK what happened to that? We should have called it #ObamaCollege so that everyone could attack corporate welfare and bad investments with no ROI.
New York has the Excelsior scholarship for CUNY and SUNY. Tennessee pays for college with lottery proceeds. Are there other state-level efforts to fund higher education in the US such that students can finish school debt-free or close to it?
There are MOOCs (online courses) which are worth credit hours for the percentage of people that commit to finishing the course. https://www.classcentral.com/
Khan Academy has free SAT, MCAT, NCLEX-RN, GMAT, and LSAT test prep and primary and supplementary learning resources. https://www.khanacademy.org/test-prep
Free education: https://en.wikipedia.org/wiki/Free_education
Ask HN: What jobs can a software engineer take to tackle climate change?
I'm a software engineer with a diverse background in backend, frontend development.
How do I find jobs related to tackling global warming and climate change in Europe for an English speaker?
Open to ideas and thoughts.
> I'm a software engineer with a diverse background in backend, frontend development.
> How do I find jobs related to tackling global warming and climate change in Europe for an English speaker?
While not directly answering the question, here are some ideas for purchasing, donating, creating new positions, and hiring people that care:
Write more efficient code. Write more efficient compilers. Optimize interpretation and compilation so that the code written by people with domain knowledge who aren't that great at programming who are trying to solve other important problems is more efficient.
Push for PPAs (Power Purchase Agreements) that offset energy use. Push for directly sourcing clean energy.
Use services that at least have 100% PPAs for the energy they use: services that run on clean energy sources.
Choose green datacenters.
- [ ] Add the capability for cloud resource schedulers like Kubernetes and Terraform to prefer or require clean energy datacenters.
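As a sketch of that scheduler idea: Kubernetes has no built-in notion of a node's energy source, but the preference could be approximated today with node labels and scheduling affinity. The `energy-source` label key and value here are hypothetical:

```yaml
# Hypothetical: nodes in clean-energy datacenters would be labeled with
#   kubectl label node <name> energy-source=renewable
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: energy-source
                operator: In
                values: ["renewable"]
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "echo hello"]
```

`preferred...` (rather than `required...`) keeps the workload schedulable when no green capacity is available.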
Choose to work with companies that voluntarily choose to do sustainability reporting.
Work to help develop (and popularize) blockchain solutions that are more energy efficient and that have equal or better security assurances as less efficient chains.
Advocate for clean energy. Donate to NGOs working for our environment and for clean energy.
Invest in clean energy. There are a number of clean energy ETFs, for example. Better energy storage is a good investment.
Push for certified green buildings and datacenters.
- [ ] We should create some sort of a badge and structured data (JSONLD, RDFa, Microdata) for site headers and/or footers that lets consumers know that we're working toward '200% green' so that we can vote with our money.
Do not vote for people who are rolling back regulations that protect our environment. Pay an organization that pays lobbyists to work the system: that's the game.
Help explain why it's both environment-rational and cost-rational to align with national and international environmental sustainability and clean energy objectives.
Argue that we should make external costs internal in order that markets will optimize for what we actually want.
Thermodynamics is part of the physics curriculum for many software engineering and computer science degrees.
There are a number of existing solutions that solve for energy inefficiency due to unreclaimed waste heat.
"Thermodynamics of Computation Wiki" https://news.ycombinator.com/item?id=18146854
"Why Do Computers Use So Much Energy?" https://news.ycombinator.com/item?id=18139654
YC's request for startups: Government 2.0
There's money to be earned in solving for the #GlobalGoals Goals, Targets, and Indicators:
The Global Goals
1. No Poverty
2. Zero Hunger
3. Good Health & Well-Being
4. Quality Education
5. Gender Equality
6. Clean Water & Sanitation
7. Affordable & Clean Energy
8. Decent Work & Economic Growth
9. Industry, Innovation & Infrastructure
10. Reduced Inequalities
11. Sustainable Cities and Communities
12. Responsible Consumption & Production
13. Climate Action
14. Life Below Water
15. Life on Land
16. Peace, Justice and Strong Institutions
17. Partnerships for the Goals
Almost 40% of Americans Would Struggle to Cover a $400 Emergency
I always wonder what proportion of that group is due to insufficient income, and what proportion is due to terrible financial literacy.
> I always wonder what proportion of that group is due to insufficient income
According to the Social Security Administration [1]:
2017 average net compensation: $48,251.57
2017 median net compensation: $31,561.49
The FPL (Federal Poverty Level) income numbers for Medicaid and the Children's Health Insurance Program (CHIP) eligibility [2]:
>> $12,140 for individuals, $16,460 for a family of 2, $20,780 for a family of 3, $25,100 for a family of 4, $29,420 for a family of 5, $33,740 for a family of 6, $38,060 for a family of 7, $42,380 for a family of 8
Wages are not keeping up with corporate profits. That can't all be due to automation.
The minimum wage is only one factor linked to price inflation. We can raise wages and still keep inflation down to an ideal range.
Maybe it's that we don't understand what it's like to live on $12K or $32K a year (without healthcare due to lack of Medicaid expansion; due to our collective failure to instill charity as a virtue and getting people back on their feet as a good investment). How could we learn (or remember!) about what it's like to be in this position (without zero-interest bank loans to bail us out)?
> and what proportion is due to terrible financial literacy.
The r/personalfinance wiki is one good resource for personal finance. From [3]:
>> Personal Finance (budgets, interest, growth, inflation, retirement)
Personal Finance https://en.wikipedia.org/wiki/Personal_finance
Khan Academy > College, careers, and more > Personal finance https://www.khanacademy.org/college-careers-more/personal-fi...
"CS 007: Personal Finance For Engineers" https://cs007.blog
https://reddit.com/r/personalfinance/wiki
... How can we make personal finance a required middle and high school curriculum component? [4]
"What are some ways that you can save money in order to meet or exceed inflation?"
Dave Ramsey's 7 Baby Steps to financial freedom [5] seem like good advice? Is the debt snowball method ideal for minimizing interest payments?
[1] https://www.ssa.gov/OACT/COLA/central.html
[2] https://www.healthcare.gov/glossary/federal-poverty-level-fp...
[3] "Ask HN: How can you save money while living on poverty level?" https://news.ycombinator.com/item?id=18894582
[4] "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894632
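On the snowball question above: a toy amortization (invented balances and rates; minimum payments ignored) suggests the avalanche method (highest rate first) pays less total interest than the snowball (smallest balance first), while the snowball trades some interest for quicker psychological wins.

```python
def total_interest(debts, payment, key):
    """Toy amortization: debts are (balance, annual_rate) tuples; each
    month, interest accrues on every balance, then the whole `payment`
    goes to the debt ranked first by `key` (minimums are ignored)."""
    debts = sorted(([b, r] for b, r in debts), key=key)
    paid_interest = 0.0
    while debts:
        for d in debts:                      # accrue one month of interest
            charge = d[0] * d[1] / 12
            d[0] += charge
            paid_interest += charge
        budget = payment
        while budget > 0 and debts:          # pay down the targeted debt(s)
            pay = min(budget, debts[0][0])
            debts[0][0] -= pay
            budget -= pay
            if debts[0][0] < 1e-9:
                debts.pop(0)
    return paid_interest

debts = [(3_000, 0.05), (9_000, 0.24)]       # invented balances and APRs
snowball = total_interest(debts, 400, key=lambda d: d[0])    # smallest balance first
avalanche = total_interest(debts, 400, key=lambda d: -d[1])  # highest rate first
print(f"snowball total interest:  ${snowball:,.0f}")
print(f"avalanche total interest: ${avalanche:,.0f}")
```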
Congress should grow the Digital Services budget, it more than pays for itself
> The U.S. Digital Service isn’t perfect, but it is clearly working. The team estimates that for every $1 million invested in USDS that the government will avoid spending $5 million and save thousands of labor hours. Over a five-year period, the team’s efforts will save $1.1 billion, redirect almost 2,000 labor years towards higher value work, and generate over 400 percent return on investment. Most importantly, USDS will continue to deliver better government services for the American people, including Veterans who deserve better.
> In the private sector, these kinds of numbers would not lead to a 50 percent cut in budget. Instead, you’d clearly invest further with that kind of return. Considering the ambitious goals set out in the President’s Management Agenda, the Trump Administration should double down on better support for the public, our troops, and our veterans. The best way to do that is clearly through investments like USDS.
Why would you halve the budget of a team that's yielding a more than 400% ROI (in terms of cost savings)?
https://en.wikipedia.org/wiki/United_States_Digital_Service
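For reference, the 400% figure follows directly from the $1M-invested / $5M-avoided estimate quoted above:

```python
# ROI on cost avoidance: (benefit - cost) / cost
invested = 1_000_000
avoided_spend = 5_000_000
roi = (avoided_spend - invested) / invested
print(f"ROI: {roi:.0%}")  # 400%
```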
400% ROI to whom?
More data makes hiding corruption harder, so I'm not sure if it really gives 400% to the decision makers.
USDS reports 400% ROI in savings to the taxpayers who fund the government with tax revenue (instead of kicking the can down the road with debt financing) and improvements in customer service quality.
https://www.usaspending.gov (Federal Funding Accountability and Transparency Act of 2006 (Obama, McCain, Carper, Coburn)) has more fine-grained spending data, but not credit-free immutable distributed ledger transaction IDs, quantitative ROI stats, or performance.gov and #globalgoals goal alignment. We'd need a metadata field on spending bills to link to performance.gov and SDG Goals, Targets, and Indicators.
"Transparency and Accountability"
IIRC, here on HN, I've mentioned a number of times -- and quoted in full from -- the 13 plays of the USDS Digital Services Playbook; all of which are applicable to, and should probably be required reading for, all government IT and govtech: https://playbook.cio.gov/
There are forms with workflow states that need human review sometimes. USDS helps with getting those processes online in order to reduce costs, increase cost-efficiency, and increase quality of service.
The Trillion-Dollar Annual Interest Payment
> Given the recent actions of Congress, and the years of prior inaction in changing the nation’s fiscal path, the U.S. government’s annual interest payment will eclipse annual defense spending in only six years. By 2025, annual interest costs on the national debt will reach $724 billion, while annual defense spending will reach $706 billion. To put that into perspective, in the 2018 fiscal year, the U.S. government spent $325 billion in interest payments and spent $622 billion in defense (Exhibit 2).
Why would you cut taxes and debt finance our nation's future?
Oak, a Free and Open Certificate Transparency Log
Great use case for blockchain technology
CT logs are already chained
> Great use case for blockchain technology
>> CT logs are already chained
Trillian is a centralized Merkle tree: it doesn't support native replication (AFAIU?), and there is still a password that can delete or recreate the chain. We can watch for any such inappropriate or errant modifications (due to e.g. solar flares) by manually replicating and verifying every entry in the chain, or by trusting that everything before whatever we consider to be a known hash (which could be colliding) is unmodified, even though we never actually verified those entries.
According to the trillian README, trillian depends upon MySQL/MariaDB and thus internal/private replication is as good as the SQL replication model (which doesn't have a distributed consensus algorithm like e.g. paxos).
A Merkle tree alone is not a blockchain. Though it provides more assurance of data integrity than a regular tree, verifying that the whole chain of hashes actually is good, and distributed replication without manually configuring e.g. SSL certs, are primary features of blockchains.
There are multiple certificate issuers, multiple logs, and multiple log verifiers. With no single point of failure, that doesn't sound centralized to me?
Which components of the system are we discussing?
PKI is necessarily centralized: certs depend upon CA certs, which can depend upon further CA certs. If any CA is compromised (e.g. by key theft or brute force, which is presumably infeasible given that current ASIC resources prefer legit income), that CA's key can be used to sign certs for any domain, and any CRL. A CT log and a CT log verifier can help us discover that a redundant, and so possibly unauthorized, cert has been issued for a given domain listed in an x.509 cert CN/SAN.
The CT log itself - trillian, for Google and now LetsEncrypt, too - though, runs on MySQL; which has one root password.
The system of multiple independent, redundant CT logs is built upon databases that depend upon presumably manually configured replication keys.
Does my browser call a remote log verifier API over (hopefully pinned with a better fingerprint than MD5) HTTPS?
There are multiple issuers, so from an availability point of view, if one is down, you could choose another. They submit to at least two logs, so if one log is unavailable you could read the other one. This is a form of decentralization.
Now, from a security point of view, it only takes breaking into one issuer to issue bad certificates. But maybe classifying everything as either centralized or decentralized is too simple?
Centralized and decentralized are overloaded terms. We could argue that every system that depends upon DNS is centralized (and thus has a single point of failure).
We could describe replication models as centralized or decentralized. Master/master SQL replication is still not decentralized (regardless of whether there are multiple A records or multiple static IPs configured in the client).
With PKI, we choose the convenience of trusting a CA bundle over having to manually check every cert fingerprint.
Whether a particular chain is centralized or decentralized is often bandied about. When there are a few mining pools that effectively choose which changes are accepted, that's not decentralized either.
That there are multiple redundant independent CT logs is a good thing.
How do I, as a concerned user, securely download (and securely mirror?) one or all of the CT logs and verify that every record hash depends upon the previous hash? If the browser relies upon a centralized API for checking hash fingerprints, how is that decentralized?
Looks like there is a bit here about how to get started: https://security.stackexchange.com/questions/167366/how-can-...
Most people aren't going to do it, but I think that's not really the point, any more than every user needs to review Linux kernel patches. But I wonder if there are enough "eyes" on this and how would we check?
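A simplified sketch of the integrity check being discussed: real CT logs use a Merkle tree (RFC 6962) with signed tree heads rather than a linear chain, but the core idea, that each entry commits to everything before it, can be shown with a plain hash chain:

```python
import hashlib

def entry_hash(prev_hash: bytes, payload: bytes) -> bytes:
    """Each record's hash commits to the previous record's hash."""
    return hashlib.sha256(prev_hash + payload).digest()

def build_chain(payloads):
    h, chain = b"\x00" * 32, []          # all-zero genesis hash
    for p in payloads:
        h = entry_hash(h, p)
        chain.append((p, h))
    return chain

def verify_chain(chain):
    h = b"\x00" * 32
    for payload, recorded in chain:
        h = entry_hash(h, payload)
        if h != recorded:
            return False
    return True

chain = build_chain([b"cert-A", b"cert-B", b"cert-C"])
assert verify_chain(chain)

# Tampering with any earlier entry invalidates every later hash.
tampered = [(b"cert-EVIL", chain[0][1])] + chain[1:]
assert not verify_chain(tampered)
```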
Death rates from energy production per TWh
Apparently the deaths are justified because energy.
Are the subsidies and taxes (incentives and penalties) rational in light of the relative harms of each form of energy?
"Study: U.S. Fossil Fuel Subsidies Exceed Pentagon Spending" https://www.rollingstone.com/politics/politics-news/fossil-f...
> The IMF found that direct and indirect subsidies for coal, oil and gas in the U.S. reached $649 billion in 2015. Pentagon spending that same year was $599 billion.
> The study defines “subsidy” very broadly, as many economists do. It accounts for the “differences between actual consumer fuel prices and how much consumers would pay if prices fully reflected supply costs plus the taxes needed to reflect environmental costs” and other damage, including premature deaths from air pollution.
IDK whether they've included the costs of responding to requests for help with natural disasters that are more probable due to climate change caused by these "externalities" / "external costs" of fossil fuels.
Energy saves lives so some risk probably is justified.
Why isn't the market choosing the least harmful, least lethal energy sources? Energy is largely substitutable: switching costs for consumers like hospitals are basically zero.
(Everyone is free to invest in clean energy at any time)
Switching costs for the entire society are far from zero.
100% Renewable Energy https://en.wikipedia.org/wiki/100%25_renewable_energy
> The main barriers to the widespread implementation of large-scale renewable energy and low-carbon energy strategies are political rather than technological. According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.
We need to make the external costs of energy production internal in order to create incentives to prevent these fossil fuel deaths and other costs.
Use links not keys to represent relationships in APIs
A thing may be identified by a URI (/person/123) for which there are zero or more URL routes (/person/123, /v1/person/123). Each additional route complicates caching; redirects are cheap for the server but slower for clients.
JSONLD does define a standard way to indicate that a value is a link: @id (which can be specified in a/an @context) https://www.w3.org/TR/json-ld11/
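A minimal sketch of that @id convention (the `knows` term is borrowed from schema.org for illustration):

```json
{
  "@context": {
    "knows": {"@id": "http://schema.org/knows", "@type": "@id"}
  },
  "@id": "/person/123",
  "knows": "/person/456"
}
```

The `"@type": "@id"` coercion in the context tells JSON-LD processors that the value of `knows` is a link (an IRI), not a plain string.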
One additional downside to storing URIs instead of bare references is that it's more complicated to validate a URI template than a simple regex like \d+ or [abcdef\d]+.
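To make the validation-cost point concrete (both patterns here are illustrative):

```python
import re

# Validating a bare surrogate key is a one-liner...
key_ok = re.fullmatch(r"\d+", "123") is not None

# ...while validating a link also means checking the route shape
# (and optionally the scheme/host); this route is invented.
link_pattern = re.compile(r"(https://api\.example\.com)?/v1/person/(\d+)")
link_ok = link_pattern.fullmatch("/v1/person/123") is not None

assert key_ok and link_ok
assert link_pattern.fullmatch("/v1/person/abc") is None
```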
From a more fundamental standpoint, I get that disk space is cheap these days. But you've just doubled (if not worse) the storage space required for a key, for a vague reason.
It may never make any difference on a small dataset, where the storage layer was unaware of the difference between integer and text anyway. But it would be hiding in the dark, and maybe in a few years a new recruit will have the outstanding idea of converting the text links to bigint to save some space...
No Python in Red Hat Linux 8?
/usr/bin/python can point to either /usr/bin/python3 or (as PEP 394 currently recommends) /usr/bin/python2
$ alternatives --config python
FWIU, there are ubi8/python-27 and ubi8/python-36 docker images. IDK if they set /usr/bin/python out of the box? Changing existing shebangs may not be practical for some applications (which will need to specify 'python4' whenever that occurs over the next 10 supported years of RHEL/CentOS 8).
JMAP: A modern, open email protocol
What are the optimizations in JMAP that make it faster than, say, Solid? Solid is built on a bunch of W3C Web, Security, and Linked Data standards: LDP (Linked Data Platform), JSON-LD (JSON for Linked Data), WebID-TLS, REST, WebSockets, and LDN (Linked Data Notifications). [1][2] Different worlds, I suppose.
There's no reason you couldn't represent RFC5322 data with RDF as JSONLD. There's now a way to do streaming JSON-LD.
LDP does paging and querying.
Solid supports pubsub with WebSockets and LDN. It may or may not (yet?) be as efficient for synchronization as JMAP, but it's definitely designed for all types of objects with linked data web standards; and client APIs can just parse JSON-LD.
[1] https://github.com/solid/information#solid-specifications
[2] https://github.com/solid/solid-spec/issues/123 "WebSockets and HTTP/2" SSE (Server-Sent Events)
JMAP: JSON Meta Application Protocol https://en.wikipedia.org/wiki/JSON_Meta_Application_Protocol
Is there a OpenAPI Specification for JMAP? There are a bunch of tools for Swagger / OpenAPIs: DRY interactive API docs, server implementations, code generators: https://swagger.io/tools/open-source/ https://openapi.tools/
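AFAIK there's no official OpenAPI document for JMAP. Since JMAP batches method calls through a single endpoint (RFC 8620's Request object has `using` and `methodCalls`), a hypothetical sketch might look like:

```yaml
# Hypothetical sketch only; not an official JMAP OpenAPI description.
openapi: "3.0.0"
info:
  title: JMAP core (sketch)
  version: "0.1"
paths:
  /jmap/api:
    post:
      summary: Submit a batch of JMAP method calls
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                using:
                  type: array
                  items: {type: string}
                methodCalls:
                  type: array
                  items: {type: array}
      responses:
        "200":
          description: methodResponses for the submitted batch
```

Because all methods share one POST route, per-method tooling (interactive docs, codegen) gains less from OpenAPI here than it would for a conventional REST API.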
Does JMAP support labels, such that I don't need to download a message and an attachment (and mark it as read) twice, as with labels over IMAP?
How does this integrate with webauthn; is that a different layer?
(edit) Other email things: openpgpjs; Web Key Directory /.well-known/openpgpkey/*; if there's no webserver on the MX domain, you can use the ACME DNS challenge to get free 3-month certs from LetsEncrypt.
I would say your comment, and the first reply, both demonstrate quite effectively why JMAP is probably a better choice for email.
> It may or may not (yet?) be as efficient for synchronization as JMAP, but it's definitely designed for all types of objects
If we hypothetically allow for equal adoption & mindshare of both, and assume both are non-terrible designs, I'd guess the one designed for "all types of objects" is less likely to ever be as efficient as the one designed with a single use-case in mind.
And narrow focus is not only good for optimising specific use-cases, it's also good for adoption as people immediately understand what your protocol is for and how to use it when it has a single purpose and a single source of truth for reference spec, rather than a series of disparate links and vague all-encompassing use-cases.
Solid has brilliant people behind it, but it's too broad, too ambitious, and very much lacks focus, and that will impair adoption because it isn't the "one solution" for anyone's "one problem".
--
To take another perspective on this, there are other commenters in this thread bemoaning the loss of non-HTTP-based protocols. Funnily enough, HTTP itself is a broadly used, broadly useful protocol that can be used for pretty much anything (and had TBL behind it also). The big difference was that Tim wasn't proposing that HTTP be the solution to all our internet problems and needs in 1989—it was just for hypertext documents. It's only now, post-adoption, that it is used for so much more than that.
> If we hypothetically allow for equal adoption & mindshare of both, and assume both are non-terrible designs, I'd guess the one designed for "all types of objects" is less likely to ever be as efficient as the one designed with a single use-case in mind.
This is a generalization that is not supported by any data.
Standards enable competing solutions. Competing solutions often result in performance gains and efficiency.
Hopefully, there will be performant implementations and we won't need to reinvent the wheel in order to synchronize and send notifications for email, contacts, and calendars.
> There's no reason you couldn't represent RFC5322 data with RDF as JSONLD. There's now a way to do streaming JSON-LD.
Is there a reason you'd want to? I clicked all your links but I still have no idea what Solid is.
To eliminate the need for domain-specific parser implementations on both server and client, make it easy to index and search this structured data, and to link things with URIs and URLs like other web applications that also make lots of copies.
Solid is a platform for decentralized linked data storage and retrieval with access controls, notifications, WebID + OAuth/OpenID. The Wikipedia link and spec documents have a more complete description that could be retrieved and stored locally.
Grid Optimization Competition
From "California grid data is live – solar developers take note" https://news.ycombinator.com/item?id=18855820 :
>> It looks like California is at least two generations of technology ahead of other states. Let’s hope the rest of us catch up, so that we have a grid that can make an asset out of every building, every battery, and every solar system.
> +1. Are there any other states with similar grid data available for optimization; or any plans to require or voluntarily offer such a useful capability?
How do these competitions and the live actual data from California-only (so far; AFAIU) compare?
Are there standards for this grid data yet? Without standards, how generalizable are the competition solutions to real-world data?
Blockchain's present opportunity: data interchange standardization
What are the current standards efforts for blockchain data interchange?
W3C JSON-LD, ld-signatures + lds-merkleproof2017 (normalize the data before signing it so that the signature is representation-independent (JSONLD, RDFa, RDF, n-triples)), W3C DID Decentralized Identifiers, W3C Verifiable Claims, Blockcerts.org
W3C Credentials Community Group: https://w3c-ccg.github.io/community/work_items.html#draft-sp... (DID, Multihash (IETF), [...])
"Blockchain Credential Resources; a gist" https://gist.github.com/westurner/4345987bb29fca700f52163c33...
Specifically for payments:
https://www.w3.org/TR/?title=payment (the W3C Payment Request API standardizes browser UI payment/checkout workflows)
ILP: Interledger Protocol https://interledger.org/rfcs/0027-interledger-protocol-4/
> W3C JSON-LD
https://www.w3.org/TR/json-ld/ (JSON-LD 1.0)
https://www.w3.org/TR/json-ld11/ (JSON-LD 1.1)
> ld-signatures + lds-merkleproof2017 (normalize the data before signing it so that the signature is representation-independent (JSONLD, RDFa, RDF, n-triples))
https://w3c-dvcg.github.io/ld-signatures/
https://w3c-dvcg.github.io/lds-merkleproof2017/ (2017 Merkle Proof Linked Data Signature Suite)
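As a much-simplified stand-in for those suites: real LD signatures canonicalize the RDF graph (e.g. URDNA2015) before hashing, not the JSON text, but sorting JSON keys shows why normalization makes the digest independent of the serialization:

```python
import hashlib
import json

def canonical_digest(doc: dict) -> str:
    """Simplified stand-in for RDF dataset normalization: serialize
    with sorted keys and no extra whitespace, then hash."""
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two serializations of the same data (different key order)...
a = {"name": "Alice", "id": "did:example:123"}
b = {"id": "did:example:123", "name": "Alice"}

# ...produce the same digest once canonicalized, so a signature over
# the digest is representation-independent.
assert canonical_digest(a) == canonical_digest(b)
```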
> W3C DID Decentralized Identifiers
https://w3c-ccg.github.io/did-primer/
>> A Decentralized Identifier (DID) is a new type of identifier that is globally unique, resolveable with high availability, and cryptographically verifiable. DIDs are typically associated with cryptographic material, such as public keys, and service endpoints, for establishing secure communication channels. DIDs are useful for any application that benefits from self-administered, cryptographically verifiable identifiers such as personal identifiers, organizational identifiers, and identifiers for Internet of Things scenarios. For example, current commercial deployments of W3C Verifiable Credentials heavily utilize Decentralized Identifiers to identify people, organizations, and things and to achieve a number of security and privacy-protecting guarantees.
> W3C Verifiable Claims
https://github.com/w3c/verifiable-claims
https://w3c.github.io/vc-data-model/ (Data Model)
https://w3c.github.io/vc-use-cases/ (Use Cases: Education, Healthcare, Professional Credentials, Legal Identity,)
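A minimal sketch of the credential shape those specs describe, per the vc-data-model draft; the `did:example` identifiers and all field values below are illustrative placeholders, not a real issuer or subject:

```python
# Sketch of a Verifiable Credential keyed by DIDs (vc-data-model draft).
# All identifiers and values are hypothetical placeholders.
import json

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",
    "credentialSubject": {
        "id": "did:example:subject456",
        "degree": "BSc Computer Science",
    },
}
# A real credential would also carry a `proof` block produced by a
# signature suite (e.g. ld-signatures) over the normalized document.
print(json.dumps(credential, indent=2))
```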
Ask HN: Value of “Shares of Stock options” when joining a startup
I got an offer from a US start-up (well +25 employees) which has an office in EU where I would join them.
The offer's base salary is good (ie. higher than average for senior positions for that location) but I intend to negotiate it further, as I have possible other options. patio11's negotiation guide was a great read in that regard.
However, I'm relocating from a non-EU/US country, and I don't have a single idea about the financial systems, stock markets, or how to evaluate "15k shares of stock options" or what a "Stock Option and Grant Plan" means, so I'm asking you fellow HNers about this part.
Do I just treat them as worthless and focus on base salary (as some internet sources suggest), or is there a formula to estimate what they would be worth in, say, 2 years?
There are a number of options/equity calculators:
https://tldroptions.io/ ("~65% of companies will never exit", "~15% of companies will have low exits*", "~20% of companies will make you money")
https://comp.data.frontapp.com/ "Compensation and Equity Calculator"
http://optionsworth.com/ "What are my options worth?"
http://foundrs.com/ "Co-Founder Equity Calculator"
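As a rough sketch (not financial advice), you can turn bucket probabilities like tldroptions.io's into an expected value; the strike price, exit prices, and probabilities below are entirely hypothetical:

```python
# Back-of-envelope expected value of an options grant.
# Every number here is hypothetical: strike, per-share exit values,
# and the outcome probabilities (tldroptions.io-style buckets).
shares = 15_000
strike = 1.00            # assumed exercise price per share
outcomes = [             # (probability, assumed per-share value at exit)
    (0.65, 0.00),        # no exit
    (0.15, 0.50),        # low exit: options under water
    (0.20, 4.00),        # exit that makes you money
]
# Options are worth max(exit_value - strike, 0) per share.
ev = sum(p * shares * max(v - strike, 0) for p, v in outcomes)
print(f"expected value ~ ${ev:,.0f}")
```

Dilution, vesting cliffs, taxes, and preference stacks all push the real number lower, which is why "treat them as worthless and negotiate base" is common advice.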
CMU Computer Systems: Self-Grading Lab Assignments (2018)
These look fun; in particular the "Attack Lab".
Dockerfiles might be helpful and easy to keep updated. Alpine Linux or just busybox are probably sufficient?
The instructor set could extend FROM the assignment image and run a few tests with e.g. testinfra (pytest)
You can also test code written in C with gtest.
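As a sketch of what a testinfra/pytest-style autograder step might look like — the command and expected output here are stand-ins, not part of the CMU labs:

```python
# Minimal autograder-step sketch: run a candidate program and check
# its exit status and output. The "student solution" below is simulated
# with `python -c` so the example is self-contained.
import subprocess
import sys

def check_output(cmd, expected_stdout):
    """Run cmd; return True iff it exits 0 and prints expected_stdout."""
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    return proc.returncode == 0 and proc.stdout.strip() == expected_stdout

ok = check_output([sys.executable, "-c", "print(6*7)"], "42")
print("PASS" if ok else "FAIL")
```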
I haven't read through all of the materials: are there suggested (automated) fuzzing tools? Does OSS-Fuzz solve this?
Are there references to CWE and/or the SEI CERT C Coding Standard rules? https://wiki.sei.cmu.edu/confluence/plugins/servlet/mobile?c...
"How could we have changed our development process to catch these bugs/vulns before release?"
"If we have 100% [...] test coverage, would that mean we've prevented these vulns?"
What about 200%?
What on earth would 200% test coverage mean?
Coverage for all the code that was written, and all the code that has yet to be typed, of course.
All thanks to quantum computing :)
Show HN: Debugging-Friendly Tracebacks for Python
pytest also has helpful tracebacks; though only for test runs.
With nose-progressive, you can specify --progressive-editor or update the .noserc so that traceback filepaths are prefixed with your preferred editor command.
vim-unstack parses paths from stack traces / tracebacks (for a number of languages including Python) and opens each in a split at that line number. https://github.com/mattboehm/vim-unstack
Here's the Python regex from my hackish pytb2paths.sh script:
'\s+File "(?P<file>.*)", line (?P<lineno>\d+), in (?P<modulestr>.*)$'
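Applied to a CPython traceback line, that pattern pulls out the pieces a tool like vim-unstack needs (the sample line below is made up):

```python
import re

# The regex above, matched against a hypothetical traceback line.
TB_LINE = re.compile(
    r'\s+File "(?P<file>.*)", line (?P<lineno>\d+), in (?P<modulestr>.*)$')

line = '  File "app/views.py", line 42, in get_context_data'
m = TB_LINE.match(line)
print(m.group("file"), m.group("lineno"), m.group("modulestr"))
```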
https://github.com/westurner/dotfiles/blob/develop/scripts/p...
Why isn't 1 a prime number?
Funny true story about this: one time a man came up to me, handed me his phone, and asked me to call his mom. He was about to pass out and he asked me not to call an ambulance (God bless America); he appeared to have a concussion. By this time there were about 10-15 people around him and he could barely talk. I asked him what his phone code was and instead of just giving it to me he said "it's the first four prime numbers". Immediately, about five people shouted "1, 2, 3, 5". I was no longer holding the phone, because I had handed it to someone else to make sure it was okay. Sure enough, I was in a mathematical proofs class and we had just discussed this topic. So, I said "one is not a prime number". Of course, we got the phone unlocked on the second try with "2, 3, 5, 7" and the guy's mom was on the way. Everyone thought I was a genius, a hero.
You can also dial emergency contacts without unlocking the phone. They are accessible from the medical ID page on iOS, I assume Android has similar.
> You can also dial emergency contacts without unlocking the phone. They are accessible from the medical ID page on iOS, I assume Android has similar.
You can set a Lock Screen Message by searching for "Lock Screen Message" in the Android Settings.
You can also create an "ICE (In Case of Emergency)" contact.
How do we know when we’ve fallen in love? (2016)
Where is the feeling felt? Is there an associated color or a shape or a kinesiologic position?
"Love styles" https://en.wikipedia.org/wiki/Love_styles
"Greek words for love" https://en.wikipedia.org/wiki/Greek_words_for_love
Rare and strange ICD-10 codes
These are too funny: https://www.empr.com/home/features/the-strangest-and-most-ob...
Choice selection:
V91.07X - Burn Due to Water Skis on Fire(?!)
W61.42XA. Struck By Turkey, initial encounter. If a duck is involved, there's a code for that too. (W61.62X)
R46.1. Bizarre Personal Appearance.
My favorite: T63.012D: Toxic effect of rattlesnake venom, intentional self-harm, subsequent encounter
So basically you have a case where someone not only intentionally wanted to harm themselves with rattlesnake venom once, but at least twice!
> So basically you have a case where someone not only intentionally wanted to harm themselves with rattlesnake venom once, but at least twice!
No, you misunderstand the terminology. "Subsequent encounter" means with the doctor not with the rattlesnake. AKA followup care during or after recovery.
> No, you misunderstand the terminology. "Subsequent encounter" means with the doctor not with the rattlesnake
You can reference ICD codes with the schema.org/code property of schema.org/MedicalEntity and subclasses. https://schema.org/docs/meddocs.html
"Subsequent encounter" is poorly defined. IMHO, there should be a code for this.
> "Subsequent encounter" is poorly defined.
"Poorly defined" is poorly defined. Explanations of when to use the D make perfect sense to me.
"The 7th character for “subsequent encounter” is to be used for all encounters after the patient has received active treatment of the condition and is receiving routine care for the condition during the healing or recovery phase. Examples of subsequent encounters include cast change or removal, x-ray to check healing status of a fracture, removal of external or internal fixation device, medication adjustments, and other aftercare and follow-up visits following active treatment of the injury or condition. Encounters for rehabilitation, such as physical and occupational therapy, are another example of the use of the “subsequent encounter” 7th character. For aftercare following an injury, the acute injury code should be assigned with the 7th character for subsequent encounter."
This parses the ICD 10 CM with lxml: https://github.com/westurner/pycd10api/blob/master/pycd10api...
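For flavor, a minimal stdlib-only sketch of the kind of parsing pycd10api does with lxml; the `<diag>`/`<name>`/`<desc>` element names follow the CMS tabular XML layout but should be treated as assumptions here, and the sample codes are from the thread above:

```python
# Sketch: extract code -> description pairs from ICD-10-CM-style XML.
# Element names (<diag>, <name>, <desc>) are assumed from the CMS
# tabular layout; pycd10api uses lxml where this uses the stdlib.
import xml.etree.ElementTree as ET

xml_doc = """
<chapter>
  <diag><name>W61.62</name><desc>Struck by duck</desc></diag>
  <diag><name>V91.07</name><desc>Burn due to water-skis on fire</desc></diag>
</chapter>
"""

root = ET.fromstring(xml_doc)
codes = {d.findtext("name"): d.findtext("desc") for d in root.iter("diag")}
print(codes["W61.62"])
```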
Python Requests III
So what's the difference to the predecessor?
asyncio, HTTP/2, connection pooling, timeouts, Python 3.6+
README > "Feature Support" https://github.com/kennethreitz/requests3/blob/master/README...
https://github.com/aio-libs/aiohttp already supports asyncio. Why do I need this for asyncio?
Anyway, I see that Issues are disabled for the repo. Is that the new way to develop? /s
Tools built with aiohttp: https://libraries.io/pypi/aiohttp/usage
Tools built with requests: https://libraries.io/pypi/requests/usage
Post-surgical deaths in Scotland drop by a third, attributed to a checklist
My former roommate is a pilot. When I first met him, I noticed that he uses checklists for just about everything, even the most basic everyday tasks.
After some time, I decided to apply that same mentality to my own life. Both in private and work situations.
I get it now. Checklists reduce cognitive load tremendously well, even for basic tasks. As an example: I have a checklist for when I need to travel, it contains stuff like what to pack, asking someone to feed my cat, check windows are closed, dishwasher empty, heating turned down, etc. Before the checklist, I would always be worried I forgot something, now I can relax.
Also, checklists are a great way to improve processes. Basically a way to debug your life. For instance: I once forgot to empty the trash bin before a long trip, I added that to my checklist and haven't had a smelly surprise ever since ;)
This is described in the book called "the checklist manifesto". Very good book by the way.
https://www.amazon.com/Checklist-Manifesto-How-Things-Right/...
The medical community uses checklists. They've been using checklists for a while. A number of studies found that they only help for a few months, while they're new.
That said, "the medical community" is not a homogeneous monolith, and you can absolutely find regional variation in what checklists are used for, how detailed they are, how closely they're followed, how people are accountable for keeping to them, etc.
"The Checklist Manifesto" chose to overlook the studies about how transient the benefit of checklists is.
Are these paper based? Any checklist apps out there for our daily use?
GitHub and GitLab support task checklists in Markdown and also project boards which add and remove labels like 'ready' and 'in progress' when cards are moved between board columns; like kanban:
- [ ] not complete
- [x] completed
Other tools support additional per-task workflow states:
- [o] open
- [x (2019-04-17)] completed on date
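Since these are just text, checklist state is easy to extract mechanically; a sketch with a regex covering the plain GitHub-flavored checkboxes (the `[o]` and dated states above are tool-specific extensions and aren't handled here):

```python
import re

# Count completed vs. total task-list items in Markdown text.
# Handles the standard "- [ ]" / "- [x]" forms plus a bare "[o]" state.
TASK = re.compile(r'^\s*-\s*\[(?P<state>[ xXo])\]\s*(?P<text>.*)$')

md = """\
- [ ] not complete
- [x] completed
- [o] open
"""
items = [TASK.match(l).groupdict() for l in md.splitlines() if TASK.match(l)]
done = sum(1 for item in items if item["state"].lower() == "x")
print(f"{done}/{len(items)} complete")
```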
I worked on a large hospital internal software project where the task was to build a system for reusable checklists editable through the web that prints them out in duplicate or triplicate at nearby printers. People really liked having the tangible paper copy.
"The Checklist Manifesto" by Atul Gawande was published while I worked there. TIL pilots have been using checklists for process control in order to reduce error for many years.
Evernote, RememberTheMilk, Google Tasks, and Google Keep all support checklists. Asana and Gitea and TaskWarrior support task dependencies.
A person could carry around a Hipster PDA with Bullet Journal style task lists with checkboxes; printed from a GTD service with an API and a @media print CSS stylesheet: https://en.wikipedia.org/wiki/Hipster_PDA
I'm not aware of very many tools that support authoring reusable checklists with structured data elements and data validation.
...
There are a number of configuration management systems like Puppet, Chef, Salt, and Ansible that build a graph of completable and verifiable tasks and then depth-first traverse said graph (either with hash randomization resulting in sometimes-different traversals, or with source order as an implicit ordering).
Resource scheduling systems like operating systems and conference room schedulers can take ~task priority into account when optimally ordering tasks given available resources; like triage.
Scheduling algorithms: https://news.ycombinator.com/item?id=15267146
TodoMVC catalogs Todo list implementations with very many MV* JS Frameworks: http://todomvc.com
Reusable Checklists could be done with a simple text document that you duplicate every time you need it no ?
For sure. Though many tools don't read .txt (or .md/.markdown) files.
GitHub and GitLab support (multiple) Issue and Pull Request templates:
Default: /.github/ISSUE_TEMPLATE.md || Configure in web interface
/.github/ISSUE_TEMPLATE/Name.md || /.gitlab/issue_templates/Name.md
Default: /.github/PULL_REQUEST_TEMPLATE.md || Configure in web interface
/.github/PULL_REQUEST_TEMPLATE/Name.md || /.gitlab/merge_request_templates/Name.md
There are template templates in awesome-github-templates [1] and checklist template templates in github-issue-templates [2].
I really want slack to adopt these as well...
I'll often be on call with customer and create a checklist on MacOS Notes on the fly. Then will copy paste that in slack or github for simple tracking.
Mattermost supports threaded replies and Markdown with checklist checkboxes.
You can post GitHub/GitLab project updates to a Slack/Mattermost channel with webhooks (and search for and display GH/GL issues with /slash commands); though issue edits and checkbox state changes aren't (yet?) included in the events that channels receive.
Apply to Y Combinator
The next Startup School course begins in July: https://www.startupschool.org/
> Learn how to start a startup with YC’s free 10-week online course.
> Here is a list of the top 100 Y Combinator companies by valuation (why valuation?), as of October 2018. We also included YC’s top 12 exits.
Here's the list of the 1,900 Y Combinator companies through Winter 2019 (W19) https://www.ycombinator.com/companies/
"Startup Playbook" by Sam Altman (YC Founder) and Illustrated by Gregory Koberger is also a good read: https://playbook.samaltman.com/
Trunk-Based Development vs. Git Flow
One major advantage of the gitflow/hubflow git workflows is that there is a standard way of merging across branches. For example, a 'hotfix' branch is merged into the stable master branch and also develop with one standard command; there's no need to re-explain and train new devs on how the branches were supposed to work here. I even copied the diagram(s) into my notes: https://westurner.github.io/tools/#hubflow
IMHO, `git log` on the stable master branch containing each and every tagged release is preferable to having multiple open release branches.
Requiring tests to pass before a PR gets merged is a good policy that's independent of the trunk or gitflow workflow decision.
Ask HN: Anyone else write the commit message before they start coding?
I feel like I just learned how to use Git: writing the message first thing has made me a lot more productive. I'm wondering if anyone else does this; I know test driven development is a thing, where people write tests before code, and this seems like a logical extension.
Ask HN: Datalog as the only language for web programming, logic and database
Can Datalog be used as the only language which we can use for writing server-side web application, complex domain business logic and database querying?
Are there any efforts made in this direction?
To quote myself from a post the other day https://news.ycombinator.com/item?id=19407170 :
> PyDatalog does Datalog (which is ~Prolog, but similar and very capable) logic programming with SQLAlchemy (and database indexes) and apparently NoSQL support. https://sites.google.com/site/pydatalog/
> Datalog: https://en.wikipedia.org/wiki/Datalog
> ... TBH, IDK about logic programming and bad facts. Resilience to incorrect and incredible information is - I suppose - a desirable feature of any learning system that reevaluates its learnings as additional and contradictory information makes its way into the datastores.
I'm not sure that Datalog is really necessary for most CRUD operations; SQLAlchemy and the SQLAlchemy ORM are generally sufficient for standard database querying CRUD.
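In the spirit of the PyDatalog pointer above, here is what Datalog evaluation amounts to, sketched in plain Python (no pyDatalog or SQLAlchemy): forward-chain the rules over the facts until a fixed point. The `parent`/`ancestor` relations are the classic textbook example, not anything from PyDatalog's API:

```python
# Naive bottom-up Datalog evaluation, as a plain-Python sketch.
# Facts: parent(alice, bob), parent(bob, carol).
parent = {("alice", "bob"), ("bob", "carol")}

# Rule 1: ancestor(X, Y) :- parent(X, Y).
ancestor = set(parent)

# Rule 2: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
# Apply repeatedly until no new facts are derived (fixed point).
changed = True
while changed:
    changed = False
    for (x, y) in parent:
        for (y2, z) in list(ancestor):
            if y == y2 and (x, z) not in ancestor:
                ancestor.add((x, z))
                changed = True

print(sorted(ancestor))
```

Real Datalog engines do this with semi-naive evaluation and indexes (which is where SQLAlchemy comes in for PyDatalog), but the fixed-point idea is the same.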
The cortex is a neural network of neural networks
Is there a program like codeacademy but for learning sysadmin?
if not, anyone wanna build one?
A few sysadmin and devops curriculum resources; though AFAIU none but Beaker and Molecule are interactive with any sort of testing:
"System Administrator" https://en.wikipedia.org/wiki/System_administrator
"Software Configuration Management" (SCM) https://en.wikipedia.org/wiki/Software_configuration_managem...
"DevOps" https://en.wikipedia.org/wiki/DevOps
"OpsSchool Curriculum" http://www.opsschool.org
- Soft Skills 101, 201
- Labs Exercises
- Free. Contribute
awesome-sysadmin > configuration-management https://github.com/kahun/awesome-sysadmin/blob/master/README...
- This could list reusable module collections such as Puppet Forge and Ansible Galaxy;
- And module testing tools like Puppet Beaker and Ansible Molecule (that can use Vagrant or Docker to test a [set of] machines)
https://github.com/stack72/ops-books
- I'd add "Time Management for System Administrators" (2005)
https://landing.google.com/sre/books/
- There's now a "Site Reliability Workbook" to go along with the Google SRE book. Both are free online.
https://response.pagerduty.com
- The PagerDuty Incident Response Documentation is also free online.
- OpsGenie has a free plan also with incident response alerting and on-call management.
There are a number of awesome-devops lists.
Minikube and microk8s package Kubernetes into a nice bundle of distributed systems components that'll run on Lin, Mac, Win. You can convert docker-compose.yml configs to Kubernetes pods when you decide that it should've been HA with a load balancer SPOF and x.509 certs and a DRP (Disaster Recovery Plan) from the start!
Maybe You Don't Need Kubernetes
The argument that Kubernetes adds complexity is, in my opinion, bogus. Kubernetes is "define once, forget about it" infrastructure. You define the state you want your infrastructure to be in, and Kubernetes takes care of maintaining that state. Tools like Ansible and Puppet, as great as they are, do not guarantee your infrastructure will end up in the state you defined, and you can easily end up with broken services. The only complexity in Kubernetes is the fact that it forces you to think and carefully design your infra in a way people aren't used to, yet. More upfront, careful thinking isn't complexity. It can only benefit you in the long run.
There is, however, a learning curve to Kubernetes, but it isn't that sharp. It does require you to sit down and read the docs for 8 hours, but that's a small price to pay.
A few months back I wrote a blog post[1] that, by walking through the few different infrastructures my company experimented with over the years, surfaces many reasons one would want to use [a managed] Kubernetes. (For a shorter read, you can probably start at [2].)
[1]: https://boxunix.com/post/bare_metal_to_kube
[2]: https://boxunix.com/post/bare_metal_to_kube/#_hardware_infra...
It's pretty common for new technologies to advertise themselves as "adopt, and forget about it", but in my experience it's unheard of that any actually deliver on this promise.
Any technology you adopt today is a technology you're going to have to troubleshoot tomorrow. (I don't think the 15,000 Kubernetes questions on StackOverflow are all from initial setup.) I can't remember the last [application / service / file format / website / language / anything related to computer software] that was so simple and reliable that I wasn't searching the internet for answers (and banging my head against the wall because of) the very next month. It was probably something on my C=64.
As Kernighan said back in the 1970's, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" I've never used Kubernetes, but I've read some articles about it and watched some videos, and despite the nonstop bragging about its simplicity (red flag #1), I'm not sure I can figure out how to deploy with it. I'm fairly certain I wouldn't have any hope of fixing it when it breaks next month.
Hearing testimonials only from people who say "it doesn't break!" is red flag #2. No technology works perfectly for everyone, so I want to hear from the people who had to troubleshoot it, not the people who think it's all sunshine and rainbows. And those people are not kind, and make it sound like the cost is way more than just "8 hours reading the docs" -- in fact, the docs are often called out as part of the problem.
> As Kernighan said back in the 1970's, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
What a great quote. Thanks
Some more quotes taken from http://typicalprogrammer.com/what-does-code-readability-mean that you might find interesting.
Any fool can write code that a computer can understand. Good programmers write code that humans can understand. – Martin Fowler
Just because people tell you it can’t be done, that doesn’t necessarily mean that it can’t be done. It just means that they can’t do it. – Anders Hejlsberg
The true test of intelligence is not how much we know how to do, but how to behave when we don’t know what to do. – John Holt
Controlling complexity is the essence of computer programming. – Brian W. Kernighan
The most important property of a program is whether it accomplishes the intention of its user. – C.A.R. Hoare
No one in the brief history of computing has ever written a piece of perfect software. It’s unlikely that you’ll be the first. – Andy Hunt
> Tools like Ansible and Puppet, as great as they are, do not guarantee your infrastructure will end up in the state you defined and you easily end up with broken services.
False dilemma. Ansible and Puppet are great tools for configuring kubernetes, kubernetes worker nodes, and building container images.
Kubernetes does not solve for host OS maintenance; though there are a number of host OS projects which remove most of what they consider to be unnecessary services, there's still need to upgrade kubernetes nodes and move pods out of the way first (which can be done with e.g. Puppet or Ansible).
As well, it may not be appropriate for monitoring to depend upon kubernetes; there again you have nodes to manage with an SCM tool.
Quantum Machine Appears to Defy Universe’s Push for Disorder
Maybe an example of, or related to, time crystals?
"Scar (physics)": https://en.wikipedia.org/wiki/Scar_(physics)
> Scars are unexpected in the sense that stationary classical distributions at the same energy are completely uniform in space with no special concentrations along periodic orbits, and quantum chaos theory of energy spectra gave no hint of their existence
Pytype checks and infers types for your Python code
How does pytype compare with the PyAnnotate [1] and MonkeyType [2] dynamic / runtime PEP-484 type annotation type inference tools?
How I'm able to take notes in mathematics lectures using LaTeX and Vim
As a feat in itself this is definitely very impressive, but I wonder if it's really worth anyone's time to spend precious lecture time with your mind fully occupied in the mechanical task of taking notes rather than actually absorbing and engaging with the content. Especially for an extremely content-dense subject like Mathematics, where you need all your concentration just to process what you are reading and follow along the logic.
There is absolutely no dearth of study material on the internet. It's one thing to take notes as part of your study process. There it helps solidify your understanding. But surely taking notes on the fly when you have been barely introduced to the subject isn't going to help with that.
> mechanical task of taking notes rather than actually absorbing and engaging with the content
The mechanical task of taking notes is one of the most important parts of actually absorbing the material. It is not an either-or. Hearing/seeing the information, processing it in a way that makes sense to you individually, and then mechanically writing it down in a legible manner is one of the main methods that your brain learns. It's one of the primary reasons that taking notes is important in the first place. This is referred to as the "encoding hypothesis" [1].
There are actually even studies [2] that show that tools that assist in more efficient note taking, such as taking notes via typing rather than by hand, are actually detrimental to absorbing information, as it makes it easier for you to effectively pass the information directly from your ears to your computer without actually doing the processing that is required when writing notes by hand. This is why many universities prefer (or even require) notes to be taken by hand, and disallow laptops in class.
1: https://www.sciencedirect.com/science/article/pii/0361476X78...
2: https://www.npr.org/2016/04/17/474525392/attention-students-...
> The mechanical task of taking notes is one of the most important parts of actually absorbing the material. It is not an either-or. Hearing/seeing the information, processing it in a way that makes sense to you individually, and then mechanically writing it down in a legible manner is one of the main methods that your brain learns. It's one of the primary reasons that taking notes is important in the first place. This is referred to as the "encoding hypothesis" [1].
There's almost certainly an advantage to learning to think about math using a publishable symbol set like LaTeX.
We learn by reinforcement; with feedback loops that may take until weeks later in a typical university course.
> There are actually even studies [2] that show that tools that assist in more efficient note taking, such as taking notes via typing rather than by hand, are actually detrimental to absorbing information, as it makes it easier for you to effectively pass the information directly from your ears to your computer without actually doing the processing that is required when writing notes by hand.
Handwriting notes is impractical for some people due to e.g. injury and illegibility.
The linked study regarding retention and handwritten versus typed notes has been debunked with references that are referenced elsewhere in comments on this post. There have been a few studies with insufficient controls (lack of randomization, for one) which have been widely repeated by educators who want to be given attention.
Doodling has been shown to increase information retention. Maybe doodling as a control really would be appropriate.
Banning laptops from lectures is not respectful of students with injury and illegible handwriting. Asking people to put their phones on silent (so they can still make and take emergency calls) and refrain from distracting other students with irrelevant content on their computers is reasonable and considerate.
(What a cool approach to math note-taking. I feel a bit inferior because I haven't committed to learning that valuable, helpful skill, and so that's stupid and you're just wasting your time because that's not even necessary when all you need to do is retain the information you've paid for for the next few months at most. Of course, once you get on the job, you'll never be using that tool and e.g. latex2sympy to actually apply that theory to solving a problem that people are willing to pay for. So, thanks for the tips and kudos, idiot)
LHCb discovers matter-antimatter asymmetry in charm quarks
So, does this disprove all of supersymmetry? https://en.wikipedia.org/wiki/Supersymmetry
No, supersymmetry and charge-parity symmetry are different.
Ah, thanks.
"CPT Symmetry" https://en.wikipedia.org/wiki/CPT_symmetry
"CP Violations" https://en.wikipedia.org/wiki/CP_violation
"Charm quark" https://en.wikipedia.org/wiki/Charm_quark :
> The antiparticle of the charm quark is the charm antiquark (sometimes called anticharm quark or simply anticharm), which differs from it only in that some of its properties have equal magnitude but opposite sign.
React Router v5
Using react router on one of my personal projects (~ 30k loc) was probably one of my largest regrets. At every turn it seemed designed to do the thing I wouldn’t expect, or have arbitrary restrictions that made my life tougher.
Some examples:
* there's no relative routes https://github.com/ReactTraining/react-router/issues/2172
* there's no way to refresh the page https://github.com/ReactTraining/react-router/issues/1982 ("that's your responsibility, not ours")
* The scroll position will stick when you navigate to a new route, causing you to need to create a custom component wrapper to manage scrolling https://github.com/ReactTraining/react-router/issues/3950
* React Router's <Link> does not allow you to link outside of the current site https://github.com/ReactTraining/react-router/issues/1147 so you have to fall back to a plain <a> tag in those cases. This doesn't sound bad, but it's particularly frustrating when dealing with dynamic or user-generated content. Why can't they handle this simple case?
> there's no relative routes
> React router's <link> do not allow you to link outside of the current site
Both valid points. Not show stoppers though. And not enough to make me regret using the library. The solution to the external links issue is a one-liner.
> there's no way to refresh the page https://github.com/ReactTraining/react-router/issues/1982 ("that's your responsibility, not ours")
That is your responsibility. Why do you need the routing library to handle page refreshing for you?
> the scroll position will stick when you navigate to a new route
Making the page scroll to the top when navigating to a new page is trivial. I would 100% rather have this problem instead of the opposite: scroll jumping to the top when I don't want it to. That's so much harder to fix.
> Navigating to the same page would not actually reload the page, it would just trigger a componentDidMount() on all components in the page again, which led me to have a lot of bugs when I did some initialization in my constructor
That's exactly what it's supposed to do. It's a client side routing solution. (I'm also pretty sure that it doesn't remount)
From the issues that you've had with the library, it seems like client side routing is not actually what you're looking for. If you regret using it so much, may I ask what the alternative would be?
IMO a project either needs client side routing, or it doesn't. If it does, then React Router is the obvious choice. Otherwise, of course, don't use it and save yourself from unnecessary complexity.
> The solution to the external links issue is a one-liner.
It is most definitely not a one-liner.
> Making the page scroll to the top when navigating to a new page is trivial.
So do it for me. There’s no reason a routing library should break default behavior.
> That's exactly what it's supposed to do. It's a client side routing solution. (I'm also pretty sure that it doesn't remount)
Then it’s “supposed” to have inconsistent behavior. Navigating anywhere else in my site will call constructors to all my components. Navigating to the same page won’t.
> From the issues that you've had with the library, it seems like client side routing is not actually what you're looking for.
My site was a music site. I wanted clientside routing so I could keep music playing while you navigated between links. Seemed like a slam dunk for clientside routing to me.
> may I ask what the alternative would be?
Quite honestly I would go onto GitHub and look for any routing solution that didn’t have multiple closed unfixed issues with hundreds of thumbs up.
> So do it for me.
I think this will work:
Put this wherever you put your reusable functions:
`const scrollToTop = () => document.getElementById('root').scrollIntoView();`
And put this in the components / functions you want the scrollToTop effect to work on:
`useEffect(() => { scrollToTop() }, []);`
And put this in your CSS:
`html { scroll-behavior: smooth }`
(Edit to add: Not saying React Router is something you should / shouldn’t use. Just wanted to share that code in case it helps unblock anyone.)
Accidentally downvoted on mobile (and upvoted two others). Thanks for this.
"Scroll Restoration" https://reacttraining.com/react-router/web/guides/scroll-res...
Experimental rejection of observer-independence in the quantum world
Objective truth!? A question for epistemologists to decide.
How could they record their high-entropy (?) solipsistic observations in an immutable datastore in such a way as to have probably zero knowledge of the other party's observations?
Anyways, that's why I only read the title and the abstract.
Wigner's friend experiment: https://en.wikipedia.org/wiki/Wigner%27s_friend
Show HN: A simple Prolog Interpreter written in a few lines of Python 3
Cool tests! PyDatalog does Datalog (which is ~Prolog, but similar and very capable) logic programming with SQLAlchemy (and database indexes) and apparently NoSQL support. https://sites.google.com/site/pydatalog/
Datalog: https://en.wikipedia.org/wiki/Datalog
... TBH, IDK about logic programming and bad facts. Resilience to incorrect and incredible information is - I suppose - a desirable feature of any learning system that reevaluates its learnings as additional and contradictory information makes its way into the datastores.
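For readers new to Datalog, here's a toy forward-chaining evaluator in plain Python (to be clear: this is not PyDatalog's actual API, and the predicate and term names are made up) showing the core idea: apply rules to facts until nothing new can be derived.

```python
def is_var(term):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(term, str) and term[:1].isupper()

def unify(pattern, fact, bindings):
    """Match a pattern tuple against a ground fact, extending bindings."""
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in bindings and bindings[p] != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def solve(facts, rules):
    """Naive fixpoint: apply rules until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # collect all variable bindings satisfying every body atom
            envs = [{}]
            for atom in body:
                envs = [b2 for b in envs for f in facts
                        if len(f) == len(atom) and f[0] == atom[0]
                        and (b2 := unify(atom[1:], f[1:], b)) is not None]
            for env in envs:
                new = (head[0],) + tuple(env.get(t, t) for t in head[1:])
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

facts = {('parent', 'alice', 'bob'), ('parent', 'bob', 'carol')}
rules = [
    (('ancestor', 'X', 'Y'), [('parent', 'X', 'Y')]),
    (('ancestor', 'X', 'Z'), [('parent', 'X', 'Y'), ('ancestor', 'Y', 'Z')]),
]
derived = solve(facts, rules)
# derives ancestor(alice, bob), ancestor(bob, carol), and,
# transitively, ancestor(alice, carol)
```

Real Datalog engines (and PyDatalog via SQL indexes) do this far more efficiently with semi-naive evaluation, but the fixpoint loop is the whole semantics.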
Thanks for the feedback :) I'll definitely check out Datalog - I didn't realize they had logic programming integrated with SQLAlchemy, so it definitely sounds interesting!
How to earn your macroeconomics and finance white belt as a software developer
Thanks for the wealth of resources in this post. Here are a few more:
"Python for Finance: Analyze Big Financial Data" (2014, 2018) https://g.co/kgs/qkY8J6 ... https://pyalgo.tpq.io also includes the "Finance with Python" course and this book as a PDF and Jupyter notebooks.
Quantopian put out a call for the best Value Investing algos (implemented in quantopian/zipline) awhile back. This post links to those and other value investing resources: https://westurner.github.io/hnlog/#comment-19181453 (Ctrl-F "econo")
"Lectures in Quantitative Economics as Python and Julia Notebooks" https://news.ycombinator.com/item?id=19083479 links to these excellent lectures and a number of tools for working with actual data from FRED, ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank, Quandl.
One thing that many finance majors, courses, and resources fail to identify is the role that startups and small businesses play in economic growth and actual value creation: jobs, GDP, return on direct capital investment. Most do not succeed, but it is possible to do better than index funds, and to have far more impact in terms of sustainable investment, than as an owner of a nearly-sure-bet index fund that holds some shares and takes a hands-off approach to business management, research, product development, and operations.
Is it possible to possess a comprehensive understanding of finance and economics but still not have personal finance down? Personal finance: r/personalfinance/wiki, "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894632
Ask HN: Relationship between set theory and category theory
I have an idea about the relationship between set theory and category theory and I would like some feedback. I would like others to see it too, and I don't know how to do it. I think it's at least interesting to look at as a slightly crazy collage, but I was a bit more excited than normal when the idea hit, so I just had to dump it all at once in this image: https://twitter.com/FamilialRhino/status/1101777965724168193 (You will have to zoom the picture in order to be able to read the scribbles.)
It has to do with resonance in the energy flowing in emergent networks. Can't quite put my finger on it, so I'll be here to answer any questions.
Thanks for reading.
"Categorical set theory" > "References" https://en.wikipedia.org/wiki/Categorical_set_theory#Referen...
From "Homotopy category" > "Concrete categories" https://en.wikipedia.org/wiki/Homotopy_category#Concrete_cat... :
> While the objects of a homotopy category are sets (with additional structure), the morphisms are not actual functions between them, but rather classes of functions (in the naive homotopy category) or "zigzags" of functions (in the homotopy category). Indeed, Freyd showed that neither the naive homotopy category of pointed spaces nor the homotopy category of pointed spaces is a concrete category. That is, there is no faithful functor from these categories to the category of sets.
My "understanding" of category theory is extremely shallow, but that's exactly why I think my proposal makes sense. It is the kind of thing that everybody ignores for decades precisely because it's transparently obvious, like a fish that doesn't understand water.
Here is the statement:
The meaning of no category is every category.
reference: https://terrytao.wordpress.com/2008/02/05/the-blue-eyed-isla...
This was already understood by everybody in the field, no doubt. It's just that somebody has to actually say it to someone else in order for the symmetry to break. The link above has the exact description of this, from Terence Tao.
The most popular docker images each contain at least 30 vulnerabilities
Although vulnerability scanners can be a useful tool, I find it very troublesome that you can utter the sentence "this package contains XX vulnerabilities, and that package contains YY vulnerabilities" and then stop talking. You've provided barely any useful information!
The quantity of vulnerabilities in an image is not really all that useful. A large number of vulnerabilities in a Docker image does not necessarily imply that there's anything insecure going on. Many people don't realize that a vulnerability is usually defined as "has a CVE security advisory", and that CVEs get assigned based on a worst-case evaluation of the bug. As a result, having a CVE in your container barely tells you anything about your actual vulnerability position. In fact, most of the time you will find that having a CVE in some random utility doesn't matter. Most CVEs in system packages don't apply to most containers' threat models.
Why not? Because an attacker is very unlikely to be able to use vulnerabilities in these system libraries or utilities. Those utilities are usually not in active use in the first place. Even if they are used, you are not usually in a position to exploit these vulnerabilities as an attacker.
Just as an example, a hypothetical outdated version of grep in one of these containers can hypothetically contain many CVEs. But if your Docker service doesn't use grep, then you would need to manually run grep to be vulnerable. And an attacker that is able to run grep in your Docker container has already owned you - it doesn't make a difference that your grep is vulnerable! This hypothetical vulnerable version of grep therefore makes no difference in the security of your container, despite containing many CVEs.
It's the quality of these vulnerabilities that matters. Can an attacker actually exploit the vulnerabilities to do bad things? The answer for almost all of these CVEs is "no". But that's not really the product that Snyk sells - Snyk sells a product to show you as many vulnerabilities as possible. Any vulnerability scanner company thinks it can provide most business value (and make the most money) by reporting as many vulnerabilities as it can. For sure it can help you to pinpoint those few vulnerabilities that are exploitable, but that's where your own analysis comes in.
I'm not saying there's not a lot to improve in terms of container security. There's a whole bunch to improve there. But focusing on quantities like "amount of CVEs in an image" is not the solution - it's marketing.
This entire depending on a base container for setup and then essentially throwing away everything you get from the package manager is part of the issue. There is no real package management for Docker. Hell, there isn't even an official way to determine if your image needs an upgrade (I wrote a ruby script that gets the latest tags from a docker repo, extracts the ones with numbers and sorts them in order and compares them to what I have running).
Relying on an Alpine/Debian/Ubuntu base helps to get dependencies installed quickly. Docker could have just created their own base distro and some mechanism to track package updates across images, but they did not.
There are guides for making bare containers, they contain nothing .. no ip, grep, bash .. only the bare minimum libraries and requirements to run your service. They are minimal, but incredibly difficult to debug (sysdig still sucks unless you shell out money for enterprise).
I feel like containers are alright, but Docker is a partial dumpster fire. cgroup isolation is good, the crazy way we deal with packages in container systems is not so good.
Sure, if you're just checking base-distro packages for security vulnerabilities, you're going to find security issues that don't apply (e.g. an exploit in libpng even though your container runs nothing that even links to libpng), but that doesn't excuse the whole issue with the way containers are constructed.
I think this space is really open too, for people to find better systems that are also portable: image formats that are easy to build, easy to maintain dependencies for, and easy to run as FreeBSD jails OR Linux cgroup managed containers (Docker for FreeBSD translated images to jails, but it's been unmaintained for years).
I agree the tooling is a tire fire :(
> e.g. an exploit in libpng even though your container runs nothing that even links to libpng
It's a problem because some services or APIs could be abused, giving the attacker a path to these vulnerable resources; the attacker can then use that vulnerability regardless of whether it is currently used by your service.
I like my images to contain only what's absolutely needed to do the job. It's not so difficult to do, provided people are willing to architect systems from the ground up, instead of pulling in a complete Debian or Fedora installation and then removing things (that should be outlawed imho lol). Not only do I get a smaller attack surface, but also smaller updates (which is an incentive to update more often), less complexity, fewer logs, easier auditing (now every log file or even log line might give valid clues), faster incident response, easier troubleshooting... sorry for going on and on ...
It's a cultural problem too: people work in environments where it's normal to have every command available on a production system (really?), where there is no barrier to installing anything new that is "required" without discussion & peer review (what are we pair programming for?), and where nobody even tracks the dead weight in production or whether configs are locked down.
I sometimes think many companies lost control over this long ago. [citation needed] :(
I'm not familiar with Docker infrastructure but what is the alternative to "pulling in a complete debian or fedora installation and then removing things"? Compiling your own kernel and doing the whole "Linux From Scratch" thing? Isn't that incredibly time-intensive to do for every single container?
Just have an image with a very minimal userland. Compiling your own kernel is irrelevant because you need the host kernel to run the container, and container images don't contain a kernel.
The busybox image is a good starting point. Take that, then copy your executables and libraries. If you are willing to go further, you can rather easily compile your own busybox with most utilities stripped out. It's not time intensive because you need to do it just once, and it takes just an afternoon to figure out how.
I don't think this is a tooling problem at all.
"The tooling makes it too easy to do it wrong." Compared to shell scripts with package manager invocations? Nobody configures a system with just packages: there are always scripts to call, chroots to create, users and groups to create, passwords to set, firewall policies to update, etc.
There are a bunch of ways to create LXC containers: shell scripts, Docker, ansible. Shell scripts preceded Docker: you can write a function to stop, create an intermediate tarball, and then proceed (so that you don't have to run e.g. debootstrap without a mirror every time you manually test your system build script; so that you can cache build steps that completed successfully).
With Docker images, the correct thing to do is to extend FROM the image you want to use, build the whole thing yourself, and then tag and store your image in a container repository. You also shouldn't rely upon months-old liveCD images.
"You should just build containers on busybox." So, no package management? A whole ensemble of custom builds to manually maintain (with no AppArmor or SELinux labels)? Maintainers may prefer for distros to field bug reports for their own common build configurations and known-good package sets. Please don't run as root in a container ("because it's only a container that'll get restarted someday"). Busybox is not a sufficient OS distribution.
It's not the tools, it's how people are choosing to use them. They can, could, and should try and use idempotent package management tasks within their container build scripts; but they don't and that's not Bash/Ash/POSIX's fault either.
> With Docker images, the correct thing to do is to extend FROM the image you want to use, build the whole thing yourself, and then tag and store your image in a container repository. You also shouldn't rely upon months-old liveCD images.
This should rebuild it all. There should be an e.g. `apt-get upgrade -y && rm -rf /var/lib/apt/lists` in there somewhere (because base images are usually not totally current, and neither are install ISOs).
`docker build --no-cache --pull`
You should check that each Dockerfile extends FROM `tag:latest` or the latest version of the tag that you support. It's not magical; you do have to work at it.
Also, IMHO, Docker SHOULD NOT create another Linux distribution.
Tinycoin: A small, horrible cryptocurrency in Python for educational purposes
The 'dumbcoin' jupyter notebook is also a good reference: "Dumbcoin - An educational python implementation of a bitcoin-like blockchain" https://nbviewer.jupyter.org/github/julienr/ipynb_playground...
When does the concept of equilibrium work in economics?
"Modeling stock return distributions with a quantum harmonic oscillator" (2018) https://iopscience.iop.org/article/10.1209/0295-5075/120/380...
> We propose a quantum harmonic oscillator as a model for the market force which draws a stock return from short-run fluctuations to the long-run equilibrium. The stochastic equation governing our model is transformed into a Schrödinger equation, the solution of which features "quantized" eigenfunctions. Consequently, stock returns follow a mixed χ distribution, which describes Gaussian and non-Gaussian features. Analyzing the Financial Times Stock Exchange (FTSE) All Share Index, we demonstrate that our model outperforms traditional stochastic process models, e.g., the geometric Brownian motion and the Heston model, with smaller fitting errors and better goodness-of-fit statistics. In addition, making use of analogy, we provide an economic rationale of the physics concepts such as the eigenstate, eigenenergy, and angular frequency, which sheds light on the relationship between finance and econophysics literature.
"Quantum harmonic oscillator" https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
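The paper's "Gaussian and non-Gaussian features" claim is easy to illustrate with a toy sketch (parameters below are made up, not from the paper): mixing two volatility regimes produces the fat tails (positive excess kurtosis) that a single Gaussian lacks.

```python
import random
import statistics

random.seed(0)

def excess_kurtosis(xs):
    """Sample excess kurtosis: ~0 for a Gaussian, > 0 for fat tails."""
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return statistics.fmean([((x - mu) / sd) ** 4 for x in xs]) - 3.0

# single-volatility Gaussian "returns"
gauss = [random.gauss(0.0, 0.01) for _ in range(100_000)]

# two-regime mixture: calm 90% of the time, volatile 10% (toy parameters)
mix = [random.gauss(0.0, 0.03 if random.random() < 0.1 else 0.01)
       for _ in range(100_000)]

# the pure Gaussian's excess kurtosis is near 0;
# the mixture's is clearly positive (fat tails)
```

The mixed chi distribution in the paper arises differently (from quantized eigenfunctions), but the qualitative point is the same: superposing simple distributions reproduces the heavy tails seen in real return data.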
The QuantEcon lectures have a few different multiple agent models:
"Rational Expectations Equilibrium" https://lectures.quantecon.org/py/rational_expectations.html
"Markov Perfect Equilibrium" https://lectures.quantecon.org/py/markov_perf.html
"Robust Markov Perfect Equilibrium" https://lectures.quantecon.org/py/rob_markov_perf.html
"Competitive Equilibria of Chang Model" https://lectures.quantecon.org/py/chang_ramsey.html
... "Lectures in Quantitative Economics as Python and Julia Notebooks" https://news.ycombinator.com/item?id=19083479 (data sources (pandas-datareader, pandaSDMX), tools, latex2sympy)
"Econophysics" https://en.wikipedia.org/wiki/Econophysics
> Indeed, as shown by Bruna Ingrao and Giorgio Israel, general equilibrium theory in economics is based on the physical concept of mechanical equilibrium.
Simdjson – Parsing Gigabytes of JSON per Second
> Requirements: […] A processor with AVX2 (i.e., Intel processors starting with the Haswell microarchitecture released 2013, and processors from AMD starting with Ryzen)
Also noteworthy that on Intel at least, using AVX/AVX2 reduces the frequency of the CPU for a while. It can even go below base clock.
iirc, it's complicated. Some instructions don't reduce the frequency; some reduce it a little; some reduce it a lot.
I'm not sure AVX2 is as ubiquitous as the README says: "We assume AVX2 support which is available in all recent mainstream x86 processors produced by AMD and Intel."
I guess "mainstream" is somewhat subjective, but some recent Chromebooks have Celeron processors with no AVX2:
https://us-store.acer.com/chromebook-14-cb3-431-c5fm
https://ark.intel.com/products/91831/Intel-Celeron-Processor...
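If you're unsure whether your own machine qualifies, here's a quick Linux-only sketch (a hypothetical helper, not part of simdjson; it just reads /proc/cpuinfo, so it won't work on macOS or Windows):

```python
def has_avx2(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the 'avx2' flag appears in /proc/cpuinfo (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx2" in line.split(":", 1)[1].split()
    except OSError:
        pass  # file missing or unreadable: assume no AVX2
    return False
```

On other platforms you'd check CPUID directly (e.g. leaf 7, EBX bit 5), which is what simdjson's own runtime dispatch does in C++.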
Because someone wanting 2.2GB/s JSON parsing is deploying to a chromebook...
It doesn't seem that laughable to me to want faster JSON parsing on a Chromebook, given how heavily JSON is used to communicate between webservers and client-side Javascript.
"Faster" meaning faster than Chromebooks do now; 2.2 GB/s may simply be unachievable hardware-wise with these cheap processors. They're kinda slow, so any speed increase would be welcome.
AVX2 also incurs some pretty large penalties for switching between SSE and AVX2. Depending on the amount of time taken in the library between calls, it could be problematic.
This looks mostly applicable to server scenarios where the runtime environment is highly controlled.
There is no real penalty for switching between SSE and AVX2, unless you do it wrong. What are you referring to specifically?
Are you talking about state transition penalties that can occur if you forget a vzeroupper? That's the only thing I'm aware of which kind of matches that.
A faster, more efficient cryptocurrency
Full disclosure: I work on the cryptocurrency in this article, Algorand.
There are a lot of questions and speculation here about this paper and Algorand. I would be happy to try and answer them to your satisfaction. Some context may be helpful first, though. This paper is an innovation about one aspect of our technology. Algorand has a very fast consensus mechanism and can add blocks as quickly as the network can deliver them. We become a victim of our own success: the blockchain will grow very rapidly. A terabyte a month is possible. The storage issue associated with our performance can quickly become an issue. The Vault paper is focused on solving this and other storage scaling problems.
The Algorand pure proof-of-stake blockchain and associated cryptocurrency has many novel innovations aside from Vault. It possesses security and scalability properties beyond what any other blockchain technology allows while still being completely decentralized. Our website, algorand.com, and whitepaper are great places to start to learn more.
If you learn best from videos then I suggest you watch Turing award winner and cryptographic pioneer, Silvio Micali, talk about Algorand: https://youtu.be/NykZ-ZSKkxM. He is a captivating speaker and the founder of Algorand.
Are there reasons that e.g. Bitcoin and Ethereum and Stellar could not implement some of these more performant approaches that Algorand [1] and Vault [2] have developed, published, and implemented? Which would require a hard fork?
[2] https://dspace.mit.edu/handle/1721.1/117821
My understanding is that PoS approaches follow normal byzantine agreement theory which states that adversaries cannot control more than 1/3rd of the accounts (or money in the case of algorand). You can also delay new blocks more easily.
Ethereum is scared of that, so they are implementing some hybrid form.
Bitcoin is doomed from my perspective, because of the focus on proof of work and the confirmation times. When you realize that algorand is super fast, there is no "confirmation time", and there is no waste in energy to mine, then it is hard to back up any cryptocurrency focusing on proof of work.
And what of decentralized premined chains (with no PoW, no PoS, and far less energy use) that release coins with escrow smart contracts over time such as Ripple and Stellar (and close a new ledger every few seconds)?
> Algorand has a very fast consensus mechanism and can add blocks as quickly as the network can deliver them. We become a victim of our success. The blockchain will grow very rapidly. A terabyte a month is possible. The storage issue associated with our performance can quickly become an issue. The Vault paper is focused on solving this and other storage scaling problems.
What prevents a person from using a chain like IPFS?
Ethereum Casper PoS has been under review for quite some time.
Why isn't all Bitcoin on Lightning Network?
Bitcoin could make bootstrapping faster by choosing a considered-good blockhash and balances, but AFAIU, re-verifying transactions like Bitcoin and derivatives do prevents hash collision attacks that are currently considered infeasible for SHA-256 (especially given a low block size).
There was an analysis somewhere where they calculated the cloud server instance costs of mounting a ~51% attack (which applies to PoW chains) for various blockchains.
Bitcoin is not profitable to mine in places without heavily subsidized dirty/clean energy anymore: energy and Bitcoin commodity costs and prices have intersected. They'll need any of: inexpensive clean energy, more efficient chips, higher speculative value.
Energy arbitrage (grid-scale energy storage) may be more profitable now. We need energy storage in order to reach 100% renewable energy (regardless of floundering policy support).
Ripple is not decentralized. I don't know enough about Stellar to answer.
Bitcoin is software and can easily implement these features but the community is divided and can't reach consensus on anything. Lightning Network as layer two solution is pretty good from what I know.
Ethereum improvements are coming along very slowly and that's good. They're the only blockchain with active engagement by thousands of multiple parties.
Algorand and Vault's papers might sound good, but who knows how they'll turn out in production.
People argue this all day. There's a lot of FUD.
Ripple itself only runs ~7% of validator nodes, which is far less centralized control than the major Bitcoin mining pools and businesses (who do the deciding in regards to the many Bitcoin hard forks); that's one form of decentralization.
Ripple clients can use their own UNL or use the Ripple-approved UNL.
Ripple is traded on a number of exchanges (though fewer than Bitcoin for certain); that's another form of decentralization.
As an open standard, ILP will further reduce vendor lock in (and increase interoperability between) networks that choose to implement it.
There are forks of Ripple (e.g. Stellar) just like there are forks of Bitcoin and Ethereum.
From https://ripple.com/insights/the-inherently-decentralized-nat... :
> In contrast, the XRP Ledger requires 80 percent of validators on the entire network, over a two-week period, to continuously support a change before it is applied. Of the approximately 150 validators today, Ripple runs only 10. Unlike Bitcoin and Ethereum — where one miner could have 51 percent of the hashing power — each Ripple validator only has one vote in support of an exchange or ordering a transaction.
How does your definition of 'decentralized' differ?
[deleted]
Git-signatures – Multiple PGP signatures for your commits
Is there anything out there that doesn't need GPG? Having a working GPG install is a huge lift for developers.
I take this to mean: apart from the barnacles on GPG, could there be a system which does what GPG does for software development (signing), without the non-functioning web-of-trust of GPG, or the hierarchical system of x509 signing? Something that deals with lost keys, compromised keys/accounts, loss of DNS control, MitMing, MitBing, etc?
I think it is probably in the class of problems where there are no great foolproof solutions. However, I can imagine that techniques like certificate transparency (all signed x509 certificates pushed to a shared log) would be quite useful. Even blockchain techniques. Maybe send someone to check on me, I'm feeling unwell having written that.
> I think it is probably in the class of problems where there are no great foolproof solutions. However, I can imagine that techniques like certificate transparency (all signed x509 certificates pushed to a shared log) would be quite useful.
Securing DNS: "https://news.ycombinator.com/item?id=19181362"
> Certs on the Blockchain: "Can we merge Certificate Transparency with blockchain?" https://news.ycombinator.com/item?id=18961724
> Namecoin (decentralized blockchain DNS): https://en.wikipedia.org/wiki/Namecoin
(Your first link is broken.)
My main problem with blockchains is the excessive energy consumption of PoW. I know there are PoS efforts, but they seem problematic.
I like the recent CertLedger paper: https://eprint.iacr.org/2018/1071.pdf
My mistake. How ironic. Everything depends upon the red wheelbarrow. Here's that link without the trailing quotation mark: https://news.ycombinator.com/item?id=19181362
> My main problem with blockchain is the excessive energy consumption of PoW. I know there are PoS efforts, but they seem problematical.
One report said that 78% of Bitcoin energy usage is from renewable sources (many of which would otherwise be curtailed and go unfunded due to flat-to-falling demand for electricity). But PoW really is expensive, and hopefully the market will choose more energy-efficient solutions from among the existing and future blockchain options while keeping equal or better security assurances.
>> Proof of Work (Bitcoin, ...), Proof of Stake (Ethereum Casper), Proof of Space, Proof of Research (GridCoin, CureCoin)
The spec should be: DDoS resilient (without a SPOF), no one entity with control over API and/or database credentials and database backups and the clock, and immutable.
Immutability really cannot be ensured with hashed records that incorporate the previous record's hash as a salt in a blocking centralized database because someone ultimately has root and the clock and all the backups and code vulnerable to e.g. [No]SQL injection; though distributed 'replication' and detection of record modification could be implemented. git push -f may be detected if it's on an already-replicated branch; but git depends upon local timestamps. google/trillian does Merkle trees in a centralized database (for Certificate Transparency).
In quickly reading the git-signatures shell script sources, I wasn't certain whether the git-notes branch with the .gitsigners that are fetched from all n keyservers (with DNS) is also signed?
I also like the "Table 1: Security comparison of Log Based Approaches to Certificate Management" in the CertLedger paper. Others are far more qualified to compare implementations.
You read my mind. I'd love if it could be rooted in a Yubikey.
Decoupling the "signing" and "verifying" parts seems like a good idea. When a random person signs something, how someone else figures out whether to trust that signature is a separate problem.
> I'd love if it could be rooted in a Yubikey.
Yubico and the FIDO Alliance helped develop the new W3C WebAuthn standard (part of FIDO2): https://en.wikipedia.org/wiki/WebAuthn
But WebAuthn does not solve for WoT or PKI or certificate pinning.
> Decoupling the "signing" and "verifying" parts seems like a good idea. When a random person signs something, how someone else figures out whether to trust that signature is a separate problem.
Someone can probably help with terminology here. There's identification (proving that a person has the key AND that it's their key (biometrics, challenge-response)); signing (using a key to create a cryptographic signature – for the actual data or a reasonably secure cryptographic hash of said data – that could only have been created with the given key); signature verification (checking that the signature was created by the claimed key for the given data); and then there's trusting that the given key is authorized for a specific purpose (Web of Trust (key-signing parties), PKI, ACME, exchange of symmetric keys over a different channel such as QKD), e.g. by signing a structured document that links cryptographic keys with keys for specific authorized functions and trusting the key(s) used to sign said authorizing document.
Private (e.g. Zero Knowledge) blockchains can be used for key exchange and key rotation. Public blockchains can be used for sharing (high-entropy) key components; also with an optional exchange of money to increase the cost of key compromise attempts.
There's also WKD: "Web Key Directory"; which hosts GPG keys over HTTPS from a .well-known URL for a given user@domain identifier: https://wiki.gnupg.org/WKD
Compared to existing PGP/GPG keyservers, WKD does rely upon HTTPS.
TUF is based on Thandy. TUF: "The Update Framework" does not presume channel security (is designed to withstand channel compromise) https://en.wikipedia.org/wiki/The_Update_Framework_(TUF)
The TUF spec doesn't mention PGP/GPG: https://github.com/theupdateframework/specification/blob/mas...
There's a derivative of TUF for automotive applications called Uptane: https://uptane.github.io
The Bitcoin article on multisignature; 1-of-2, 2-of-2, 2-of-3, 3-of-5, etc.: https://en.bitcoin.it/wiki/Multisignature
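The m-of-n threshold logic can be sketched in a few lines, using stdlib HMACs as a stand-in for real signatures (Bitcoin actually uses ECDSA in script; the keys and message here are toy values, and this only illustrates the counting rule, not Bitcoin's multisig script):

```python
import hmac
import hashlib

def sign(key, message):
    """Toy 'signature': an HMAC stands in for a real asymmetric signature."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_m_of_n(message, signatures, keys, m):
    """Accept if at least m of the n keys produced a valid signature."""
    valid = sum(
        any(hmac.compare_digest(sig, sign(k, message)) for sig in signatures)
        for k in keys
    )
    return valid >= m

keys = [b"alice-key", b"bob-key", b"carol-key"]    # toy 2-of-3 setup
msg = b"pay 1 BTC to dave"

sigs = [sign(keys[0], msg), sign(keys[2], msg)]    # alice and carol sign
assert verify_m_of_n(msg, sigs, keys, m=2)         # 2-of-3 passes
assert not verify_m_of_n(msg, sigs[:1], keys, m=2) # 1 signature fails
```

Counting matching keys rather than matching signatures prevents one signer from being double-counted by submitting the same signature twice.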
Running an LED in reverse could cool future computers
"Near-field photonic cooling through control of the chemical potential of photons" (2019) https://www.nature.com/articles/s41586-019-0918-8
Compounding Knowledge
Buffett’s approach to life is interesting for the same reason an Olympic gymnast is interesting. He has specialized to an extreme and is taking advantage of the rewards of that specialization and natural talent in a unique way.
It’s easy for me to feel shame that I don’t read 8 hours per day, as Warren and Charlie do. Buffett is a phenomenal investor but by all accounts, rather odd. He eats like crap, doesn’t exercise, had a profoundly weird relationship with his wife, and seems addicted to his work at the expense of everything else.
My point is this: his practice of reading income statements and business reports 8 hours per day for 60 years doesn’t make him the kind of person I wish to emulate.
Don’t get me wrong, I respect his levelheadedness toward money, lack of polish wrt PR, and generous philanthropic efforts. There’s a lot of good. But it’s easy to idolize the guy.
Controversial take: Buffett is actually not a great investor in the way people think. Via the float in his insurance companies, he receives a 0% infinite maturity loan to plow into the market. That financial leverage gives him the ability to beat the market year after year - not his own stockpicking prowess.
If you were to start with $1B, then get an extra $2B that you never had to pay back, you too would do quite well holding Coca-Cola and other safe companies with moats. Your returns would be 3x everyone else's.
Buffett himself has tried to explain the impact of float on Berkshire Hathaway but it never seems to sink in with people.
I'll take it even further: Buffett isn't statistically different from the average. He just found a strategy that happened to work, was stubborn enough to stick to it through bad times, and used copious amounts of leverage to juice returns.
A really interesting paper called "Buffett's Alpha" talks about this, and was able to replicate his performance by following a few simple rules. They found that he produced very little actual alpha. To his credit, he seems to have been observant enough to stumble into factors (value, quality, and low beta) before anyone else knew they existed, which is his real strength and contribution.
A summary of that paper from the authors is here: https://www.aqr.com/Insights/Research/Journal-Article/Buffet....
They work at AQR, a firm which is notable for being one of the only successful hedge funds to actually publish meaningful research.
The paper's conclusion is noteworthy:
> The efficient market counterargument is that Buffett was simply lucky. Our findings suggest that Buffett’s success is neither luck nor magic but is a reward for a successful implementation of value and quality exposures that have historically produced high returns. Second, we illustrated how Buffett’s record can be viewed as an expression of the practical implementability of academic factor returns after transaction costs and financing costs. We simulated how investors can try to take advantage of similar investment principles. Buffett’s success shows that the high returns of these academic factors are not simply “paper” returns; these returns can be realized in the real world after transaction costs and funding costs, at least by Warren Buffett. Furthermore, Buffett’s exposure to the BAB factor and his unique access to leverage are consistent with the idea that the BAB factor represents reward to the use of leverage.
BTW, AQR funded the initial development of pandas, which now powers tools like alphalens (predictive factor analysis) and pyfolio.
There's your 'compounding knowledge'.
(Days later)
"7 Best Community-Built Value Investing Algorithms Using Fundamentals" https://blog.quantopian.com/fundamentals-contest-winners/
(The Zipline backtesting library also builds upon Pandas)
How can we factor ESG/sustainability reporting into these fundamentals-driven algorithms in order to save the world?
I wonder to what extent Warren Buffett is Warren Buffett because of how he thinks and acts (like all of these non-fiction authors selling books by using his name would have us believe), and to what extent he is the product of media selection bias. -- If you take a large enough group of people who take risky stakes that are large enough (like the world of financial asset management), then one of them is bound to be as successful as Warren Buffett, even if they all behave randomly.
Funnily enough, Buffett actually calculated this in his essay "The Superinvestors of Graham and Doddsville"
Basically, advocates of the Efficient Market Hypothesis believed that it was impossible for anyone to deliberately, repeatedly generate alpha. The market was rational, and success was probabilistically distributed. With a large enough population, you will get Buffett-level returns - therefore Buffett is a fluke.
However, Buffett calculated that there weren't enough investors for his success to be a product of random distribution, plus there were two dozen others who followed a similar strategy and also consistently generated alpha.
"The Superinvestors of Graham and Doddsville" (1984) https://scholar.google.com/scholar?cluster=17265410477248371...
From https://en.wikipedia.org/wiki/The_Superinvestors_of_Graham-a... :
> The speech and article challenged the idea that equity markets are efficient through a study of nine successful investment funds generating long-term returns above the market index.
This book probably doesn't mention that he's given away over 71% to charity since Y2K. Or that it's really cold and windy and snowy in Omaha, which makes for lots of reading time.
"Warren Buffett and the Interpretation of Financial Statements: The Search for the Company with a Durable Competitive Advantage" (2008) [1], "Buffettology" (1999) [2], and "The Intelligent Investor" (1949, 2009) [3] are more investment-strategy-focused texts.
[1] https://smile.amazon.com/Warren-Buffett-Interpretation-Finan...
[2] https://smile.amazon.com/Buffettology-Previously-Unexplained...
[3] https://smile.amazon.com/Intelligent-Investor-Definitive-Inv...
Value Investing: https://en.wikipedia.org/wiki/Value_investing https://www.investopedia.com/terms/v/valueinvesting.asp
Why CISA Issued Our First Emergency Directive
There are a number of efforts to secure DNS (and SSL/TLS which generally depends upon DNS; and upon which DNS-over-HTTPS depends) and the identity proof systems which are used for record-change authentication and authorization.
Domain registrars can and SHOULD implement multi-factor authentication. https://en.wikipedia.org/wiki/Multi-factor_authentication
Are there domain registrars that support FIDO/U2F or the new W3C WebAuthn spec? https://en.wikipedia.org/wiki/WebAuthn
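For context on what TOTP-style second factors (the rotating codes from an authenticator app, a weaker cousin of U2F/WebAuthn) actually compute, here is a minimal RFC 6238 sketch using only the standard library; the key and expected code below are the published RFC test vector, not production values:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from wall-clock time."""
    return hotp(key, unix_time // step, digits)

# RFC 6238 test vector: SHA-1 key "12345678901234567890", time 59 -> 94287082
print(totp(b"12345678901234567890", 59, digits=8))
```

Even so, TOTP is phishable (a user can be tricked into typing the code into a fake site), which is the problem U2F/WebAuthn's origin-bound challenge signing solves.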
Credentials and blockchains (and biometrics): https://gist.github.com/westurner/4345987bb29fca700f52163c33...
DNSSEC: https://en.wikipedia.org/wiki/Domain_Name_System_Security_Ex...
ACME / LetsEncrypt certs expire after 90 days and require various proofs of domain ownership: https://en.wikipedia.org/wiki/Automated_Certificate_Manageme...
Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency
Certs on the Blockchain: "Can we merge Certificate Transparency with blockchain?" https://news.ycombinator.com/item?id=18961724
Namecoin (decentralized blockchain DNS): https://en.wikipedia.org/wiki/Namecoin
DNSCrypt: https://en.wikipedia.org/wiki/DNSCrypt
DNS over HTTPS: https://en.wikipedia.org/wiki/DNS_over_HTTPS
DNS over TLS: https://en.wikipedia.org/wiki/DNS_over_TLS
Chrome will Soon Let You Share Links to a Specific Word or Sentence on a Page
I would like to urge browser developers/makers to adopt the existing proposals, which came through open consensus and which cover precisely the same use cases (and more!):
W3C Reference Note on Selectors and States: https://www.w3.org/TR/selectors-states/
It is part of the suite of specs that came through the W3C Web Annotation Working Group: https://www.w3.org/annotation/
More examples in W3C Note Embedding Web Annotations in HTML: https://www.w3.org/TR/annotation-html/
Different kinds of Web resources can combine multiple selectors and states. Here is a simple one using `TextQuoteSelector` handled by the https://dokie.li/ clientside application:
http://csarven.ca/dokieli-rww#selector(type=TextQuoteSelecto...
A screenshot/how-to: https://twitter.com/csarven/status/981924087843950595
"Integration with W3C Web Annotations" https://github.com/bokand/ScrollToTextFragment/issues/4
> It would be great to be able to comment on the linked resource text fragment. W3C Web Annotations [implementations] don't recognize the targetText parameter, so AFAIU comments are then added to the document#fragment and not the specified text fragment. [...]
> Is there a simplified mapping of W3C Web Annotations to URI fragment parameters?
Guidelines for keeping a laboratory notebook
My paper notebooks were always horrific. Writing has always been painful and awkward for me. But I could type like nobody's business thanks to programming. I survived college in the early 80s by being one of the first students to get a word processor.
By the time I was keeping a notebook, my work was generating mountains of computer readable data, source code, and so forth. We managed by agreeing on a format for data files, where the filename referenced a notebook page, and it worked OK.
Today, it's unavoidable that people are going to keep their notes electronically, and there are no perfect solutions for doing this. Wet chemists still like paper notebooks, since it's hard to get a computer close to the bench, and to type while wearing rubber gloves. Academic workers are expected to supply their own computers, and are nervous about getting them damaged or contaminated. Plus, drawing pictures and writing equations on a computer are both awkward.
Computation related fields lend themselves well to purely electronic notebooks, no surprise. Today, a lot of my work fits perfectly in a Jupyter notebook.
Commercial notebook software exists, but it tends to be sold largely for enterprise use, i.e., the problem it solves is how to control lab workers and secure their results, not how to enable independent, creative work.
> Computation related fields lend themselves well to purely electronic notebooks, no surprise. Today, a lot of my work fits perfectly in a Jupyter notebook.
Some notes and ideas regarding Jupyter notebooks as lab notebooks from "Keeping a Lab Notebook [pdf]": https://news.ycombinator.com/item?id=15710815
Superalgos and the Trading Singularity
Though others didn't, you might find this interesting: "Ask HN: Why would anyone share trading algorithms and compare by performance?" https://news.ycombinator.com/item?id=15802785 ( https://westurner.github.io/hnlog/#story-15802785 )
I think there is value in a back-testing module; however, sharing an algo doesn't make sense to me, unless someone wants to buy mine for an absurd amount.
I think part of the value of sharing knowledge and algorithmic implementations comes from getting feedback from other experts; like peer review and open science and teaching.
Case in point: the first algorithm on this list [1] of community contributed algorithms that were migrated to their new platform is "minimum variance w/ constraint" [2]. Said algorithm showed returns of over 200% as compared with 77% returns from the SPY S&P 500 ETF over the same period, ceteris paribus. In the 69 replies, there are modifications by community members and the original author that exceed 300%.
Working together on open algorithms has positive returns that may exceed advantages of closed algorithmic development without peer review.
[1] https://www.quantopian.com/posts/community-algorithms-migrat...
[2] https://www.quantopian.com/posts/56b6021b3f3b36b519000924
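This is not the linked Quantopian algorithm itself, but the core step of a minimum-variance strategy reduces to a small closed-form computation: weights proportional to the inverse covariance matrix times a vector of ones. A sketch with a made-up 3-asset covariance matrix (a real algorithm would estimate it from trailing returns and add constraints):

```python
import numpy as np

# Toy 3-asset covariance matrix, invented for illustration only.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# Unconstrained minimum-variance portfolio: w ∝ Σ⁻¹·1, normalized to sum to 1.
ones = np.ones(cov.shape[0])
raw = np.linalg.solve(cov, ones)
weights = raw / raw.sum()
print(weights.round(3))
```

The "w/ constraint" part of the community algorithm (long-only bounds, sector caps, etc.) turns this into a quadratic program instead of a single linear solve.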
How well does it do in production though and what happens when multiple algos execute the same trades? Does it cause the rest of the algos to adapt and change results? It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and work against it. I'd be curious to see the same algo do 300% in production, and if so, then my bias would be uncalled for.
> How well does it do in production though and what happens when multiple algos execute the same trades?
Price inflation.
> Does it cause the rest of the algos to adapt and change results?
Trading index ETFs? IDK
> It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and work against it.
Why does it need to do lots of trades? Is it possible for anyone other than e.g. SEC to review trades by buyer or seller?
> I'd be curious to see the same algo do 300% in production, and if so, then my bias would be uncalled for.
pyfolio does tear sheets with Zipline algos: pyfolio/examples/zipline_algo_example.ipynb https://nbviewer.jupyter.org/github/quantopian/pyfolio/blob/...
alphalens does performance analysis of predictive factors: alphalens/examples/pyfolio_integration.ipynb https://nbviewer.jupyter.org/github/quantopian/alphalens/blo...
awesome-quant lists a bunch of other tools for algos and superalgos: https://github.com/wilsonfreitas/awesome-quant
What's a good platform for paper trading (with e.g. zipline or moonshot algorithms)?
I disagree with price inflation just because everything is hedged, but it may be true.
The "too many trades" concern: if there are 300 algos, and I look in the order book and see different orders from different exchanges at the same price point, then I would be adapting to see what's happening. Not me personally, but there are people who watch order flows.
I don't paper trade; either it works in production with real money or it doesn't. You have to get a feel for spreads, commissions, and so on.
Also, in my case, I am hesitant to even use paid services as someone can be watching, so most of my tools are made by me. Good luck with your trading though; if it works out, let me know, I'd pay to use it alongside my other trades.
Crunching 200 years of stock, bond, currency and commodity data
Coming from a CS/Econ/Finance background....
Efficiency as “the market fully processes all information” is decidedly untrue.
Efficiency as “Its very hard to arb markets without risk better than a passive strategy” is very true.
Most professional money managers don’t beat the market once you factor in fees. Many private equity funds produce outsize returns, but it’s based on higher risk and taking advantage of tax laws.
Over time, positive performance for mutual funds doesn't persist. (If you're in the top decile of performance one year, you're no more likely to be there next year.)
Despite all of this, there are ways people make money. But it's a small subset of professionals. It's high frequency traders who find ways to cut the line on mounds of pennies. It's inside information from hedge funds. Or earlier access to non-public information. But it's generally not in areas that normal people like you and me can access.
> It inside information from hedge funds. Or earlier access to non public information. But it’s generally not in areas that normal people like you and me can access.
Asymmetric information is pretty far from what used to be said about the perfect market and rational actors. It's "there's a sucker born every minute" and "if it seems too good to be true it probably is" economics.
I might be misunderstanding what you're saying here, but are you sure you're right? Fama originally predicated the model of the efficient market (the efficient market hypothesis) on the idea of informational efficiency. Information asymmetry is a fundamental measure involved in the idealized model of an efficient market.
What you're mentioning about rational actors is actually a different topic altogether in economics.
Or have I misunderstood what you're getting at?
I was interested, so I did some research here.
Rational Choice Theory https://en.wikipedia.org/wiki/Rational_choice_theory
Rational Behavior https://www.investopedia.com/terms/r/rational-behavior.asp
> Most mainstream academic economics theories are based on rational choice theory.
> While most conventional economic theories assume rational behavior on the part of consumers and investors, behavioral finance is a field of study that substitutes the idea of “normal” people for perfectly rational ones. It allows for issues of psychology and emotion to enter the equation, understanding that these factors alter the actions of investors, and can lead to decisions that may not appear to be entirely rational or logical in nature. This can include making decisions based primarily on emotion, such as investing in a company for which the investor has positive feelings, even if financial models suggest the investment is not wise.
Behavioral finance https://www.investopedia.com/terms/b/behavioralfinance.asp
Bounded rationality > Relationship to behavioral economics https://en.wikipedia.org/wiki/Bounded_rationality
Perfectly rational decisions can be and are made without perfect information; bounded by the information available at the time. If we all had perfect information, there would be no entropy and no advantage; just lag and delay between credible reports and order entry.
Information asymmetry https://en.wikipedia.org/wiki/Information_asymmetry
Heed these words wisely: What foolish games! Always breaking my heart.
https://deepmind.com/blog/game-theory-insights-asymmetric-mu...
> Asymmetric games also naturally model certain real-world scenarios such as automated auctions where buyers and sellers operate with different motivations. Our results give us new insights into these situations and reveal a surprisingly simple way to analyse them. While our interest is in how this theory applies to the interaction of multiple AI systems, we believe the results could also be of use in economics, evolutionary biology and empirical game theory among others.
https://en.wikipedia.org/wiki/Pareto_efficiency
> A Pareto improvement is a change to a different allocation that makes at least one individual or preference criterion better off without making any other individual or preference criterion worse off, given a certain initial allocation of goods among a set of individuals. An allocation is defined as "Pareto efficient" or "Pareto optimal" when no further Pareto improvements can be made, in which case we are assumed to have reached Pareto optimality.
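The Pareto-improvement test in that quote is mechanical; a toy check over allocations expressed as utility tuples (one entry per individual, all names and numbers invented for illustration):

```python
def pareto_improves(new, old):
    """True if `new` makes at least one individual better off and none worse off."""
    return (all(n >= o for n, o in zip(new, old))
            and any(n > o for n, o in zip(new, old)))

def pareto_efficient(alloc, candidates):
    """`alloc` is Pareto optimal if no candidate Pareto-improves on it."""
    return not any(pareto_improves(c, alloc) for c in candidates)

allocations = [(3, 3), (4, 3), (2, 5)]
print(pareto_improves((4, 3), (3, 3)))         # True: one better off, none worse
print(pareto_efficient((3, 3), allocations))   # False: (4, 3) improves on it
print(pareto_efficient((4, 3), allocations))   # True: nothing dominates it
```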
Which, I think, brings me to equitable availability of maximum superalgo efficiency and limits of real value creation in capital and commodities markets; which'll have to be a topic for a different day.
Show HN: React-Schemaorg: Strongly-Typed Schema.org JSON-LD for React
I have a slightly longer post describing this work and the reasoning behind it on dev.to[1].
[1]: https://dev.to/eyassh/react-schemaorg-strongly-typed-schemaorg-json-ld-for-react-4lhd
Is there a good way to generate JSONschema and thus forms from schema.org RDFS classes and (nested, repeatable) properties?
By JSONschema do you mean [this standard](https://json-schema.org/)? I don't know of a tool that does that yet, but JSON Schema is general enough (with allOf/anyOf) that it should be able to express the schema.org schema as well.
Depends on the purpose here. With this, my goal was to speed up the Write-Update-Debug development loop. Depends on the use case, simply using Google's Structured Data Testing Tool [1] might be a better way to verify schema than JSON-schema?
[1]: https://search.google.com/structured-data/testing-tool
There are a number of tools for generating forms and requisite client and serverside data validations from JSONschema; but I'm not aware of any for RDFS (and thus the schema.org schema [1]). A different use case, for certain.
https://schema.org/docs/developers.html#defs
Definitely a cool area for exploration. I'm not aware of JSON Schema generators from RDFS either.
It should be possible to model the basics (nested structure, repeated structure, defining the "domain" and "range" of a value).
Schema.org definitions, however, have no conception of "required" values[*], so some of the cool form validation we see in some of these tools might not apply here.
[*] _Consumers_ of Schema.org JSON-LD or microdata, however, might define their own data requirements. E.g. Google has some concept of required fields, which you can see when using the Structured Data Testing Tool.
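To make the RDFS-to-JSON-Schema idea concrete, here is a sketch. The property names (`name`, `birthDate`, `knows`) and ranges are real schema.org terms, but the mini description format and the translation rules are my assumptions, not an official mapping; a real tool would read the RDFS/JSON-LD definitions from schema.org directly:

```python
import json

# Hand-written mini description of a schema.org-style class. The real
# definitions live in the machine-readable files at schema.org/docs/developers.html#defs.
person_props = {
    "name": {"range": "Text"},
    "birthDate": {"range": "Date"},
    "knows": {"range": "Person"},  # nested, repeatable, refers to another class
}

# Assumed mapping from schema.org datatypes to JSON Schema fragments.
RANGE_TO_JSON = {
    "Text": {"type": "string"},
    "Date": {"type": "string", "format": "date"},
}

def to_json_schema(class_name, props):
    out = {"$schema": "http://json-schema.org/draft-07/schema#",
           "title": class_name, "type": "object", "properties": {}}
    for prop, meta in props.items():
        base = RANGE_TO_JSON.get(meta["range"],
                                 {"$ref": "#/definitions/" + meta["range"]})
        # schema.org properties are repeatable, so allow a value or an array.
        out["properties"][prop] = {"anyOf": [base,
                                             {"type": "array", "items": base}]}
    return out

schema = to_json_schema("Person", person_props)
print(json.dumps(schema, indent=2))
```

Note there is no `required` key anywhere, matching the point above: that would have to come from a consumer's profile (e.g. Google's required fields), not from schema.org itself.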
Consumer Protection Bureau Aims to Roll Back Rules for Payday Lending
From the article:
> The way payday loans work is that payday lenders typically offer small loans to borrowers who promise to pay the loans back by their next paycheck. Interest on the loans can have an annual percentage rate of 390 percent or more, according to a 2013 report by the CFPB. Another bureau report from the following year found that most payday loans — as many as 80 percent — are rolled over into another loan within two weeks. Borrowers often take out eight or more loans a year.
390%
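The 390% figure follows directly from annualizing a typical two-week fee. A sketch using the commonly cited $15-per-$100 fee (the exact fee varies by lender and state):

```python
fee = 15.00        # typical fee per $100 borrowed (varies by lender and state)
principal = 100.00
term_days = 14     # loan is due at the next paycheck

period_rate = fee / principal        # 15% for two weeks
apr = period_rate * 365 / term_days  # simple (non-compounding) annualization
print(f"{apr:.0%}")                  # ~391%, matching the CFPB's ~390% figure
```

Rolling the loan over (as the report says ~80% of borrowers do) compounds the fee, so the effective annual cost is even higher than this simple APR.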
From https://www.npr.org/2019/02/06/691944789/consumer-protection... :
> TARP recovered funds totalling $441.7 billion from $426.4 billion invested, earning a $15.3 billion profit or an annualized rate of return of 0.6% and perhaps a loss when adjusted for inflation.[2][3]
0.6%
Is your point that the US government should get into the payday lending business?
Lectures in Quantitative Economics as Python and Julia Notebooks
It's amazing how we are watching use cases for notebooks and spreadsheets converging. I wonder what the killer feature will be to bring a bigger chunk of the Excel world into a programmatic mindset... Or alternatively, whether we will see notebook UIs embedded in Excel in the future in place of e.g. VBA.
That’s not a bad idea. Spreadsheets are pure functional languages that use literal spaces (the grid) instead of namespaces.
Notebooks are cells of logic. You could conceivably change the idea of notebook cells to be an instance of a function that points to raw data and returns raw data.
Perhaps this is just Alteryx, though.
This is brilliant.
I'm picturing the ability to write a Python function with the parameters being just like the parameters in an Excel function. You can drag the cell and have it duplicated throughout a row, updating the parameters to correspond to the rows next to it.
It would exponentially expand the power of Excel. I wouldn't be limited to horribly unmaintainable little Excel functions.
VBA can't be used to do that, can it? As far as I understand (and I haven't investigated VBA too much) VBA works on entire spreadsheets.
Essentially, replace the excel formula `=B3-B4` with a Python function `subtract(b3, b4)` where Subtract is defined somewhere more conveniently (in a worksheet wide function definition list?).
> Essentially, replace the excel formula `=B3-B4` with a Python function `subtract(b3, b4)`
This would require a reactive recomputation of cells to be anything like a spreadsheet.
As of now, Jupyter/IPython would not recompute `subtract(b3, b4)` if you change b3 or b4; this has positive and negative effects (reliance on hidden state and order of execution).
I too would really like something like this, but I think it is pretty far away from where Jupyter is now.
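The missing piece, reactive recomputation, can at least be sketched outside of any notebook machinery. A toy dependency graph where setting a cell re-runs every formula that reads it, in plain Python (all names invented for illustration; it only propagates one level, unlike a real spreadsheet engine):

```python
class Sheet:
    """Toy reactive spreadsheet: formulas re-run when their direct inputs change."""

    def __init__(self):
        self.values, self.formulas, self.deps = {}, {}, {}

    def set(self, name, value):
        self.values[name] = value
        # Recompute every formula cell that directly depends on `name`.
        for cell, inputs in self.deps.items():
            if name in inputs:
                self.values[cell] = self.formulas[cell](
                    *(self.values[i] for i in inputs))

    def formula(self, name, inputs, fn):
        self.formulas[name], self.deps[name] = fn, inputs
        self.values[name] = fn(*(self.values[i] for i in inputs))

s = Sheet()
s.set("b3", 10)
s.set("b4", 4)
s.formula("b5", ("b3", "b4"), lambda a, b: a - b)  # like =B3-B4
print(s.values["b5"])  # 6
s.set("b4", 1)         # unlike a bare Jupyter cell, b5 updates automatically
print(s.values["b5"])  # 9
```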
You can build something like this with Jupyter today.
> Traitlets is a framework that lets Python classes have attributes with type checking, dynamically calculated default values, and ‘on change’ callbacks. https://traitlets.readthedocs.io/en/stable/
> Traitlet events. Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the observe method of the widget can be used to register a callback https://ipywidgets.readthedocs.io/en/stable/examples/Widget%...
You can definitely build interactive notebooks with Jupyter Notebook and JupyterLab (and ipywidgets or Altair or HoloViews and Bokeh or Plotly for interactive data visualization).
> Qgrid is a Jupyter notebook widget which uses SlickGrid to render pandas DataFrames within a Jupyter notebook. This allows you to explore your DataFrames with intuitive scrolling, sorting, and filtering controls, as well as edit your DataFrames by double clicking cells. https://github.com/quantopian/qgrid
Qgrid's API includes event handler registration: https://qgrid.readthedocs.io/en/latest/
> neuron is a robust application that seamlessly combines the power of Visual Studio Code with the interactivity of Jupyter Notebook. https://marketplace.visualstudio.com/items?itemName=neuron.n...
"Excel team considering Python as scripting language: asking for feedback" (2017) https://news.ycombinator.com/item?id=15927132
OpenOffice Calc ships with Python 2.7 support: https://wiki.openoffice.org/wiki/Python
Procedural scripts written in a general purpose language with named variables (with no UI input except for chart design and persisted parameter changes) are reproducible.
What's a good way to review all of the formulas and VBA and/or Python and data ETL in a spreadsheet?
Is there a way to record a reproducible data transformation script from a sequence of GUI interactions in e.g. OpenRefine or similar?
OpenRefine/OpenRefine/wiki/Jupyter
"Within the Python context, a Python OpenRefine client allows a user to script interactions within a Jupyter notebook against an OpenRefine application instance, essentially as a headless service (although workflows are possible where both notebook-scripted and live interactions take place)." https://github.com/OpenRefine/OpenRefine/wiki/Jupyter
Are there data wrangling workflows that are supported by OpenRefine but not Pandas, Dask, or Vaex?
This is interesting; I need to have a closer look. Possibly Refine can be more efficient? I haven't used it enough to know, just played around with it a bit. Didn't realise you could combine it with Jupyter.
There are undergraduate and graduate courses in each language:
Python version: https://lectures.quantecon.org/py/
Julia version: https://lectures.quantecon.org/jl/
Does anyone else find it strange that there is no real-world data in these notebooks? It's all simulations or abstract problems.
This gives me the sense, personally, that economists aren't interested in making accurate predictions about the world. Other fields would, I think, test their theories against observations.
pandas-datareader can pull data from e.g. FRED, Eurostat, Quandl, World Bank: https://pandas-datareader.readthedocs.io/en/latest/remote_da...
pandaSDMX can pull SDMX data from e.g. ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank; with requests-cache for caching data requests: https://pandasdmx.readthedocs.io/en/latest/#supported-data-p...
The scikit-learn estimator interface includes a .score() method. "3.3. Model evaluation: quantifying the quality of predictions" https://scikit-learn.org/stable/modules/model_evaluation.htm...
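For reference, the default `.score()` of scikit-learn regressors is the coefficient of determination R², which reduces to a few lines of numpy (a sketch of the formula, not sklearn's actual implementation):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """R² = 1 - SS_res / SS_tot: what sklearn regressors' .score() returns."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r2_score([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0: a perfect fit
print(r2_score([1, 2, 3, 4], [2, 2, 2, 2]))  # negative: worse than the mean
```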
statsmodels also has various functions for statistically testing models: https://www.statsmodels.org/stable/
"latex2sympy parses LaTeX math expressions and converts it into the equivalent SymPy form" and is now merged into SymPy master and callable with sympy.parsing.latex.parse_latex(). It requires antlr-python-runtime to be installed. https://github.com/augustt198/latex2sympy https://github.com/sympy/sympy/pull/13706
IDK what Julia has for economic data retrieval and model scoring / cost functions?
If Software Is Funded from a Public Source, Its Code Should Be Open Source
From the US Digital Services Playbook [1]:
> PLAY 13
> Default to open
> When we collaborate in the open and publish our data publicly, we can improve Government together. By building services more openly and publishing open data, we simplify the public’s access to government services and information, allow the public to contribute easily, and enable reuse by entrepreneurs, nonprofits, other agencies, and the public.
> Checklist
> - Offer users a mechanism to report bugs and issues, and be responsive to these reports
> [...]
> - Ensure that we maintain contractual rights to all custom software developed by third parties in a manner that is publishable and reusable at no cost
> [...]
> - When appropriate, publish source code of projects or components online
> [...]
> Key Questions
> [...]
> - If the codebase has not been released under an open source license, explain why.
> - What components are made available to the public as open source?
> [...]
Apache Arrow 0.12.0
> Apache Arrow is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware. It also provides computational libraries and zero-copy streaming messaging and interprocess communication. Languages currently supported include C, C++, C#, Go, Java, JavaScript, MATLAB, Python, R, Ruby, and Rust.
Statement on Status of the Consolidated Audit Trail (2018)
U.S. Federal District Court Declared Bitcoin as Legal Money
"Application of FinCEN's Regulations to Persons Administering, Exchanging, or Using Virtual Currencies" (2013) https://www.fincen.gov/resources/statutes-regulations/guidan...
"Legality of bitcoin by country or territory" https://en.wikipedia.org/wiki/Legality_of_bitcoin_by_country...
"Know your customer" https://en.wikipedia.org/wiki/Know_your_customer
"Anti-money-laundering measures by region" https://en.wikipedia.org/wiki/Money_laundering#Anti-money-la...
"Anti-money-laundering measures by region > United States" https://en.wikipedia.org/wiki/Money_laundering#United_States
Post Quantum Crypto Standardization Process – Second Round Candidates Announced
> As the latest step in its program to develop effective defenses, the National Institute of Standards and Technology (NIST) has winnowed the group of potential encryption tools—known as cryptographic algorithms—down to a bracket of 26. These algorithms are the ones NIST mathematicians and computer scientists consider to be the strongest candidates submitted to its Post-Quantum Cryptography Standardization project, whose goal is to create a set of standards for protecting electronic information from attack by the computers of both tomorrow and today.
> “These 26 algorithms are the ones we are considering for potential standardization, and for the next 12 months we are requesting that the cryptography community focus on analyzing their performance,”
Links to the 17 public-key encryption and key-establishment algorithms and 9 digital signature algorithms are here: "Round 2 Submissions" https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Rou...
"Quantum Algorithm Zoo" has moved to https://quantumalgorithmzoo.org .
Ask HN: How do you evaluate security of OSS before importing?
What tools can I use to evaluate the security posture of an OSS project before I approve its usage with high confidence?
Oddly, whether a project has at least one CVE reported could be interpreted in favor of the project. https://www.cvedetails.com
Do they have a security disclosure policy? A dedicated security mailing list?
Do they pay bounties or participate in e.g Pwn2own?
Do they cryptographically sign releases?
Do they cryptographically sign VCS tags (~releases)? commits? `git tag -s` / `git commit/merge -S` https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work
Downstream packagers do sometimes/often apply additional patches and then sign their release with the repo (and thus system global) GPG key.
Whether they require "Signed-off-by" may indicate that the project has mature controls and possibly a formal code review process requirement. (Look for "Signed-off-by:" in the release branch (`git commit/merge -s/--signoff`).)
How have they integrated security review into their [iterative] release workflow?
Is the software formally verified? Are parts of the software implementation or spec formally verified?
Does the system trust the channel? The host? Is it a 'trustless' system?
What are the single points of failure?
How is logging configured? To syslog?
Do they run the app as root in a Docker container? Does it require privileged containers?
If it has to run as root, does it drop privileges at startup?
Does the package have an SELinux or AppArmor policy? (Or does it say e.g. "just set SELinux to permissive mode"?)
Is there someone you can pay to support the software in an enterprise environment? Open or closed, such contacts basically never accept liability; but if there is an SLA, do you get a pro-rated bill?
As far as indicators of actual software quality:
How much test coverage is there? Line coverage or branch coverage?
Do they run static analysis tools for all pull requests and releases? Dynamic analysis? Fuzzing?
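As a toy illustration of what a static-analysis pass looks for (real projects would run dedicated tools such as bandit or a compiler's analyzers, not this), Python's `ast` module makes a ten-line checker that flags calls to `eval`:

```python
import ast

def find_eval_calls(source: str):
    """Return line numbers of eval() calls: a toy static-analysis pass."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

sample = "x = 1\ny = eval(input())\nprint(y)\n"
print(find_eval_calls(sample))  # [2]
```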
Of course, closed or open source projects may do none or all of these and still be secure or insecure.
This is a pretty extensive list. Thanks for sharing!
Ask HN: How can I use my programming skills to support nonprofit organizations?
Lately I've been thinking about doing programming for nonprofits, both because I want to help out with what I'm good at but also to hone my skills and potentially get some open source credit.
So far I've had a hard time finding nonprofit projects where I can just pick up something and start programming. I know about freecodecamp.org, but they force you to go through their courses, and as I already have multiple years of experience as a developer, I feel like that would be a waste of time.
Isn't there a way to contribute to nonprofit organization in a more direct and simple manner like how you would contribute to an open source project on GitHub?
There are lots of project management systems with issue tracking and kanban boards with swimlanes. Because it's unreasonable to expect all volunteers to have a GH account or even understand what GH is for, support for external identity management and SSO may be essential to getting people to actually log in and change their password regularly.
Saddling a nonprofit with custom-built software with no other maintainers is not what they need. Build (and pay for development, maintenance, timely security upgrades and security review) or Buy (where is our data? who backs it up? how much does it cost for a month or a few years? Is it open source with a hosted option, so that we can pay a developer to add or fix what we need?)
"Solutions architect" may be a more helpful objective title for what's needed. https://en.wikipedia.org/wiki/Solution_architecture
What are their needs? Marketing, accounting, operations, HR:
Marketing: web site, maps service, directions, active social media presence that speaks to their defined audience
Accounting: Revenue and expenses, payroll/benefits/HR, projections, "How can we afford to do more?", handle donations and send receipts for tax purposes, reports to e.g. https://charitynavigator.org/ and infographics for wealth-savvy donors
Operations: Asset inventory, project management, volunteer scheduling
HR: payroll, benefits, volunteer scheduling, training, turnover, retaining selfless and enlightenedly-self-interested volunteers
Create a spreadsheet. Rows: needs/features/business processes. Columns: essential, nice to have, software products and services.
Create another spreadsheet. Rows: APIs. Columns: APIs.
Training: what are the [information systems] processes/workflows/checklists? How can I suggest a change? How do we reach consensus that there's a better way to do this? Is there a wiki? Is there a Q&A system?
"How much did you sink on that? Probably seemed like the best option according to the information available at the time, huh? Do you have a formal systems acquisition process? Who votes according to what type of prepared analysis? How much would it cost to switch? What do we need to do to ETL (extract, transform, and load) into a newer better system?"
When estimating TCO for a nonprofit, turnover is a very real consideration. People move. Chances are, as with most organizations TBH, there's a patchwork of partially-integrated and maybe-integrable systems that it may or may not be more cost-effective and maintainable to replace with a cloud ERP specifically designed for nonprofits.
Who has access rights to manually update which parts of the website? How can we include dynamic ([other] database-backed) content in our website? What is a CMS? What is an ERP? What is a CRM? Are these customers, constituents, or both? When did we last speak with those guys? How can people share our asks with social media networks?
If you're not willing or able to make a long-term commitment, the more responsible thing to do is probably to disclose any conflicts of interest and recommend a SaaS solution hosted in a compliant data center.
q="nonprofit erp"
q="nonprofit crm"
q="nonprofit cms" + donation campaign visibility
What time of day are social media posts most likely to get maximum engagement from which segments of our audience? What is our ~ARPU "average revenue per user/follower"?
... As a volunteer and not an FTE, it may be a worthwhile exercise to build a prototype of the new functionality with whatever tools you happen to be familiar with, with the expectation that they'll figure out a way to accomplish the same objectives with their existing systems. If that's not possible, there may be a business opportunity: are there other organizations with the same need? Is there a sustainable market for such a solution? You may be building to be acquired.
Ask HN: Steps to forming a company?
Hey guys, I'm leaving my firm very shortly to form a startup.
Does anyone have a checklist of the proper way to do things?
E.g.: 1. Form a Delaware C corp with Clerky; 2. Hire payroll company X; 3. Use this company for patents.
any info there?
From "Ask HN: What are your favorite entrepreneurship resources" https://news.ycombinator.com/item?id=15021659 :
> USA Small Business Administration: "10 steps to start your business." https://www.sba.gov/starting-business/how-start-business/10-...
> "Startup Incorporation Checklist: How to bootstrap a Delaware C-corp (or S-corp) with employee(s) in California" https://github.com/leonar15/startup-checklist
> FounderKit has reviews for Products, Services, and Software for founders: https://founderkit.com
... I've heard good things about Gusto for payroll, HR, and benefits through Guideline: https://gusto.com/product/pricing
A Self-Learning, Modern Computer Science Curriculum
Outstanding resource.
jwasham/coding-interview-university also links to a number of other helpful OER resources: https://github.com/jwasham/coding-interview-university
MVP Spec
> The criticism of the MVP approach has led to several new approaches, e.g. the Minimum Viable Experiment MVE[19] or the Minimum Awesome Product MAP[20].
https://en.wikipedia.org/wiki/Minimum_viable_product#Critici...
Can we merge Certificate Transparency with blockchain?
From "REMME – A blockchain-based protocol for issuing X.509 client certificates" https://news.ycombinator.com/item?id=18868540 :
""" In no particular order, there are a number of blockchain PKI (and DNS (!)) proposals and proofs of concept.
"CertLedger: A New PKI Model with Certificate Transparency Based on Blockchain" (2018) https://arxiv.org/pdf/1806.03914 https://scholar.google.com/scholar?q=related:LF9PMeqNOLsJ:sc...
"TABLE 1: Security comparison of Log Based Approaches to Certificate Management" (p.12) lists a number of criteria for blockchain-based PKI implementations:
- Resilient to split-world/MITM attack
- Provides revocation transparency
- Eliminates client certificate validation process
- Eliminates trusted key management
- Preserves client privacy
- Require external auditing
- Monitoring promptness
... These papers also clarify why a highly-replicated decentralized trustless datastore — such as a blockchain — is advantageous for PKI. WoT is not mentioned.
"Blockchain-based Certificate Transparency and Revocation Transparency" (2018) https://fc18.ifca.ai/bitcoin/papers/bitcoin18-final29.pdf
https://scholar.google.com/scholar?q=related:oEsKmJvdn-MJ:sc...
Who can update and revoke which records in a permissioned blockchain (or a plain old database, for that matter)?
Letsencrypt has a model for proving domain control with ACME; which AFAIU depends upon DNS, too. """
TLA references "Certificate Transparency Using Blockchain" (2018) https://eprint.iacr.org/2018/1232.pdf https://scholar.google.com/scholar?q="Certificate+Transparen...
Thanks for the references! The main issue isn't the support and maintenance of such a distributed network, but its integration with current solutions and avoiding centralized middleware services that would weaken the scheme described in the documents.
> The main issue isn't the support and maintenance of such a distributed network,
Running a permissioned blockchain is nontrivial. "Just fork XYZ and call it a day" doesn't quite describe the amount of work involved. There's read latency at scale. There's merging things to maintain vendor strings, and so on.
> but its integration with current solutions
- Verify issuee identity
- Update (domain/CN/subjectAltName, date) index
- Update cached cert and CRL bundles
- Propagate changes to all clients
> and avoiding centralized middleware services that would weaken the scheme described in the documents.
Eventually, a CDN will look desirable. IPFS may fit the bill, IDK?
> Running a permissioned blockchain is nontrivial.
You're right. It needs a relevant BFT protocol, a lot of work with the masternode community, and a smart economic system inside. You can look at an example of such a protocol: https://github.com/Remmeauth/remme-core/tree/dev
google/trillian https://github.com/google/trillian
> Trillian is an implementation of the concepts described in the Verifiable Data Structures white paper, which in turn is an extension and generalisation of the ideas which underpin Certificate Transparency.
> Trillian implements a Merkle tree whose contents are served from a data storage layer, to allow scalability to extremely large trees.
Why Don't People Use Formal Methods?
Which universities teach formal methods?
- q=formal+verification https://www.class-central.com/search?q=formal+verification
- q=formal-methods https://www.class-central.com/search?q=formal+methods
Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?
Is there a certification for formal methods? Something like for engineer-status in other industries?
What are some examples of tools and [OER] resources for teaching and learning formal methods?
- JsCoq
- Jupyter kernel for Coq + nbgrader
- "Inconsistencies, rolling back edits, and keeping track of the document's global state" https://github.com/jupyter/jupyter/issues/333 (jsCoq + hott [+ IJavascript Jupyter kernel], STLC: Simply-Typed Lambda Calculus)
- TDD tests that run FV tools on the spec and the implementation
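Not formal verification proper, but a minimal sketch of that last point: randomized checking of an implementation against an executable spec, a lightweight property-testing-style stand-in for FV tools in CI (toy function and spec; real tools would prove the property rather than sample it):

```python
import random

def spec_sorted(xs, ys):
    """Executable spec: ys is an ordered permutation of xs."""
    return sorted(xs) == ys and all(a <= b for a, b in zip(ys, ys[1:]))

def my_sort(xs):
    # Implementation under test (the builtin, as a stand-in).
    return sorted(xs)

random.seed(0)
for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
    assert spec_sorted(xs, my_sort(xs))
print("1000 random cases passed")
```

The spec/implementation split is the part worth keeping: the spec stays declarative while the implementation is free to be clever.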
What are some examples of open source tools for formal verification (that can be integrated with CI to verify the spec AND the implementation)?
What are some examples of formally-proven open source projects?
- "Quark : A Web Browser with a Formally Verified Kernel" (2012) (Coq, Haskell) http://goto.ucsd.edu/quark/
What are some examples of projects using narrow and strong AI to generate perfectly verified software from bad specs that make the customers and stakeholders happy?
From reading through the comments here, people don't use formal methods because: they're cost-prohibitive and inflexible; they're perceived as incompatible with agile/iterative methods, which are more likely to keep customers who don't know what formal methods are happy; there's a lack of industry-appropriate regulation; and there's the cognitive burden of often-incompatible shorthand notations.
Almost a University of New South Wales (Sydney, Australia) alum, and it's the biggest difference between the Software Engineering course and Comp Sci. There are two mandatory formal methods courses, with more available as electives. They are really hard, and a lot of the SENG cohort ends up graduating as COMPSCI, since they can't hack it and don't want to repeat formal methods.
Steps to a clean dataset with Pandas
To add to the three points in the article:
Data quality https://en.wikipedia.org/wiki/Data_quality
Imputation https://en.wikipedia.org/wiki/Imputation_(statistics)
Feature selection https://en.wikipedia.org/wiki/Feature_selection
datacleaner can drop NaNs, do imputation with "the mode (for categorical variables) or median (for continuous variables) on a column-by-column basis", and encode "non-numerical variables (e.g., categorical variables with strings) with numerical equivalents" with Pandas DataFrames and scikit-learn. https://github.com/rhiever/datacleaner
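A stdlib-only sketch of the mode-for-categorical / median-for-continuous imputation strategy described above (toy data; real use would go through datacleaner or pandas/scikit-learn):

```python
from statistics import median, mode

# Toy columns with missing values (None): one continuous, one categorical.
ages = [22, 35, None, 41, 35]
colors = ["red", None, "blue", "red", "red"]

# Median for the continuous column, mode for the categorical one.
age_fill = median(v for v in ages if v is not None)
color_fill = mode(v for v in colors if v is not None)

ages = [age_fill if v is None else v for v in ages]
colors = [color_fill if v is None else v for v in colors]
print(ages, colors)
```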
sklearn-pandas "[maps] DataFrame columns to transformations, which are later recombined into features", and provides "A couple of special transformers that work well with pandas inputs: CategoricalImputer and FunctionTransformer" https://github.com/scikit-learn-contrib/sklearn-pandas
Featuretools https://github.com/Featuretools/featuretools
> Featuretools is a python library for automated feature engineering. [using DFS: Deep Feature Synthesis]
auto-sklearn does feature selection (with e.g. PCA) in a "preprocessing" step; as well as "One-Hot encoding of categorical features, imputation of missing values and the normalization of features or samples" https://automl.github.io/auto-sklearn/master/manual.html#tur...
auto_ml uses "Deep Learning [with Keras and TensorFlow] to learn features for us, and Gradient Boosting [with XGBoost] to turn those features into accurate predictions" https://auto-ml.readthedocs.io/en/latest/deep_learning.html#...
Reahl – A Python-only web framework
I feel bad about projects like these, I really do. I'm sure a lot of effort has been put in, and it probably satisfies what OP wants to do. But in the end, no one serious would ever consider using it.
>But in the end, no one serious would ever consider using it.
Why not?
I would say it is the other way around. For any web application project of decent size there is almost always a transpilation step that converts whatever programming language your source code is in, into JavaScript (usually ES5). We also had very successful and widely used projects like GWT for years (GWT was first released in 2006!!!).
Before GWT, there was the Wt framework (C++), and then JWt (Java), which handle both the server and client sides (with widgets in a tree).
Wt: https://en.wikipedia.org/wiki/Wt_(web_toolkit)
JWt: https://en.m.wikipedia.org/wiki/JWt_(Java_web_toolkit)
GWT: https://en.wikipedia.org/wiki/Google_Web_Toolkit
Now we have Babel, ES YYYY, and faster browser release cycles.
[deleted]
Ask HN: How can you save money while living on poverty level?
I freelance remotely, making roughly $1200 a month as a programmer because I only work 10 hours maximum each week (limited by my contract). I share the apartment with my mom, and it's a Section 8 apartment, so our rent contributions are based on the income we make. My contribution towards rent is $400 a month.
Although I make more money than my mom (she's of retirement age and only works 1-2 days a week), while I'm looking for more work I want to figure out how to move out and live more independently on only $1200 a month.
I need to live frugally and want to know what I can cut more easily. I own a used car (already paid in full), and pay my own car insurance, electricity, phone and internet. After all that I have about $400 left each month which can be eaten up by going out or some emergency funds.
More recently I had to pay for my new city parking sticker so that's $100 more in expenses this particular month. I would be satisfied just living in a far off town paying the same $400 a month, I feel my dollars would stretch further since I now get 100% more privacy for the same price.
On top of that, this job is a contract job, so I need to put money aside to pay my own taxes. This $1200 is basically living at the poverty level. Any ideas to make saving work? Is it really possible for people in the US to save while living in poverty?
That's not a living wage (or a full time job). There are lots of job search sites.
Spending some time on a good resume / CV / portfolio would probably be a good investment with positive ROI.
Is there a nonprofit that you could volunteer with to increase your hireability during the other 158 hours of the week?
Or an online course with a credential that may or may not have positive ROI as a resume item?
Is there a code school in your city with a "you don't pay unless you land a full time job with a living wage and benefits" guarantee?
What is your strategy for business and career networking?
From https://westurner.github.io/hnlog/#comment-17894632 :
> Personal Finance (budgets, interest, growth, inflation, retirement)
Personal Finance https://en.wikipedia.org/wiki/Personal_finance
Khan Academy > College, careers, and more > Personal finance https://www.khanacademy.org/college-careers-more/personal-fi...
"CS 007: Personal Finance For Engineers" https://cs007.blog
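Those topics (interest, growth, inflation, retirement) mostly reduce to compounding. A rough sketch, with made-up figures of a 7% nominal return and 2% inflation:

```python
# Real (inflation-adjusted) growth of a monthly contribution,
# assuming a hypothetical 7% nominal return and 2% inflation.
nominal, inflation, monthly, years = 0.07, 0.02, 100.0, 30
real_rate = (1 + nominal) / (1 + inflation) - 1
r = real_rate / 12  # crude monthly approximation

balance = 0.0
for _ in range(years * 12):
    balance = balance * (1 + r) + monthly
print(f"~${balance:,.0f} in today's dollars")
```

Even $100/month compounds meaningfully over decades, which is why the courses above spend so much time on starting early.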
A DNS hijacking wave is targeting companies at an almost unprecedented scale
> The National Cybersecurity and Communications Integration Center issued a statement [1] that encouraged administrators to read the FireEye report. [2]
[1] https://www.us-cert.gov/ncas/current-activity/2019/01/10/DNS...
[2] https://www.fireeye.com/blog/threat-research/2019/01/global-...
Show HN: Generate dank mnemonic seed phrases in the terminal
From https://github.com/lukechilds/doge-seed :
> The first four words will be a randomly generated Doge-like sentence.
The seed phrases are fully valid checksummed BIP39 seeds. They can be used with any cryptocurrency and can be imported into any BIP39 compliant wallet.
> […] However there is a slight reduction in entropy due to the introduction of the doge-isms. A doge seed has about 19.415 fewer bits of entropy than a standard BIP39 seed of equivalent length.
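The BIP39 bit accounting behind "checksummed seeds" is simple: each word indexes 11 bits, and the checksum is ENT/32 bits appended to ENT bits of entropy. (The ~19.4-bit reduction mentioned above comes from constraining the first four words, per the project README; it isn't derived here.)

```python
# BIP39 bit accounting: each mnemonic word encodes 11 bits;
# the checksum is ENT/32 bits appended to ENT bits of entropy.
def bip39_words(ent_bits):
    checksum_bits = ent_bits // 32
    total_bits = ent_bits + checksum_bits
    assert total_bits % 11 == 0
    return total_bits // 11

for ent in (128, 160, 192, 224, 256):
    print(ent, "bits ->", bip39_words(ent), "words")
```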
Can you sign a quantum state?
> Abstract. Cryptography with quantum states exhibits a number of surprising and counterintuitive features. In a 2002 work, Barnum et al. argued informally that these strange features should imply that digital signatures for quantum states are impossible [6].
> In this work, we perform the first rigorous study of the problem of signing quantum states. We first show that the intuition of [6] was correct, by proving an impossibility result which rules out even very weak forms of signing quantum states. Essentially, we show that any non-trivial combination of correctness and security requirements results in negligible security.
> This rules out all quantum signature schemes except those which simply measure the state and then sign the outcome using a classical scheme. In other words, only classical signature schemes exist.
> We then show a positive result: it is possible to sign quantum states, provided that they are also encrypted with the public key of the intended recipient. Following classical nomenclature, we call this notion quantum signcryption. Classically, signcryption is only interesting if it provides superior efficiency to simultaneous encryption and signing. Our results imply that, quantumly, it is far more interesting: by the laws of quantum mechanics, it is the only signing method available.
> We develop security definitions for quantum signcryption, ranging from a simple one-time two-user setting, to a chosen-ciphertext-secure many-time multi-user setting. We also give secure constructions based on post-quantum public-key primitives. Along the way, we show that a natural hybrid method of combining classical and quantum schemes can be used to “upgrade” a secure classical scheme to the fully-quantum setting, in a wide range of cryptographic settings including signcryption, authenticated encryption, and chosen-ciphertext security.
"Quantum signcryption"
Lattice Attacks Against Weak ECDSA Signatures in Cryptocurrencies [pdf]
From the paper:
Abstract. In this paper, we compute hundreds of Bitcoin private keys and dozens of Ethereum, Ripple, SSH, and HTTPS private keys by carrying out cryptanalytic attacks against digital signatures contained in public blockchains and Internet-wide scans.
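The degenerate case of these attacks is an outright repeated nonce; the lattice techniques in the paper generalize this to merely biased nonces. A sketch of the repeated-nonce algebra only, skipping the curve arithmetic entirely (the key, nonce, and r below are arbitrary toy values):

```python
# ECDSA nonce-reuse key recovery, algebra only (no curve operations).
# s = k^-1 * (z + r*d) mod n; two signatures sharing k (and hence r)
# leak k = (z1 - z2)/(s1 - s2), then d = (s1*k - z1)/r.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order
d = 0x123456789ABCDEF0  # toy private key
k = 0x0FEDCBA987654321  # reused nonce
r = 0x42  # stand-in for x([k]G) mod n; arbitrary here since we skip curve math

def sign(z):
    return (pow(k, -1, n) * (z + r * d)) % n

z1, z2 = 111, 222  # two message hashes signed with the same nonce
s1, s2 = sign(z1), sign(z2)

k_rec = ((z1 - z2) * pow(s1 - s2, -1, n)) % n
d_rec = ((s1 * k_rec - z1) * pow(r, -1, n)) % n
assert (k_rec, d_rec) == (k, d)
print("recovered private key:", hex(d_rec))
```

The lattice attacks in the paper recover d even when only a few bits of each nonce are biased, by framing the leak as a hidden number problem.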
REMME – A blockchain-based protocol for issuing X.509 client certificates
pity it's blockchain based
It's unclear to me why you would want a distributed PKI to authenticate a centralized app. Or maybe it's only for dapps?
In no particular order, there are a number of blockchain PKI (and DNS (!)) proposals and proofs of concept.
"CertLedger: A New PKI Model with Certificate Transparency Based on Blockchain" (2018) https://arxiv.org/pdf/1806.03914 https://scholar.google.com/scholar?q=related:LF9PMeqNOLsJ:sc...
"TABLE 1: Security comparison of Log Based Approaches to Certificate Management" (p.12) lists a number of criteria for blockchain-based PKI implementations:
- Resilient to split-world/MITM attack
- Provides revocation transparency
- Eliminates client certificate validation process
- Eliminates trusted key management
- Preserves client privacy
- Require external auditing
- Monitoring promptness
... These papers also clarify why a highly-replicated decentralized trustless datastore — such as a blockchain — is advantageous for PKI. WoT is not mentioned.
"Blockchain-based Certificate Transparency and Revocation Transparency" (2018) https://fc18.ifca.ai/bitcoin/papers/bitcoin18-final29.pdf
https://scholar.google.com/scholar?q=related:oEsKmJvdn-MJ:sc...
Who can update and revoke which records in a permissioned blockchain (or a plain old database, for that matter)?
Letsencrypt has a model for proving domain control with ACME; which AFAIU depends upon DNS, too.
California grid data is live – solar developers take note
> It looks like California is at least two generations of technology ahead of other states. Let’s hope the rest of us catch up, so that we have a grid that can make an asset out of every building, every battery, and every solar system.
+1. Are there any other states with similar grid data available for optimization; or any plans to require or voluntarily offer such a useful capability?
Why attend predatory colleges in the US?
> Why would people attend predatory colleges?
Why would people make an investment with insufficient ROI (Return on Investment)?
Insufficient information.
College Scorecard [1] is a database with a web interface for finding and comparing schools according to a number of objective criteria. CollegeScorecard launched in 2015. It lists "Average Annual Cost", "Graduation Rate", and "Salary After Attending" on the search results pages. When you review a detail page for an institution, there are many additional statistics; things like: "Typical Total Debt After Graduation" and "Typical Monthly Loan Payment".
The raw data behind CollegeScorecard can be downloaded from [2]. The "data_dictionary" tab of the "Data Dictionary" spreadsheet describes the data schema.
[1] https://collegescorecard.ed.gov
[2] https://collegescorecard.ed.gov/data/
Khan Academy > "College, careers, and more" [3] may be a helpful supplement to a full-time college admissions counselor in a secondary education institution.
[3] https://www.khanacademy.org/college-careers-more
(I don't have the time to earn 10 academia.stackexchange points in order to gain the prestigious opportunity to contribute this answer to such a forum with threaded comments. In the academic journal system, journals sell academics' work (i.e. schema.org/ScholarlyArticle PDFs, mobile-compatible responsive HTML5, RDFa, JSON-LD structured data) and keep all of the revenue.)
"Because I need money for school! Next question. CPU: College Textbook costs and CPI: All over time t?!"
Ask HN: Data analysis workflow?
What kind of workflow do you employ when designing a data-flow or analyzing data?
Let me give a concrete example. For the past year, I have been selling stuff on the interwebs through two payment processors one of them being PayPal.
The selling process was put together with a bunch of SaaS hooking everything together through webhooks and notifications.
Now I need to step up that control and produce a proper flow to handle sign-up, subscription, and payment.
Before doing that, I'm analyzing and trying to reconcile all transactions to make sure the books are OK and nothing went unseen. There lies the problem. I have data coming from different sources such as databases, Excel files, CSV exports, and some JSON files.
At first, I started dealing with it by having all the data in CSV files and trying to make sense of them using code and running queries within the code.
As I found holes in the data I had to dig up more data from different sources and it became a pain to continue with code. I now imported everything into Postgres and have been "debugging" with SQL.
As I advanced through the process I had to generate a lot of routines to collect and match data. I also have to keep all the data files around and organized which is very hard to do because I'm all over the place trying to find where the problem is.
How do you handle it? What kind of workflow? Any best practices or recommendations from people who do this for a living?
Pachyderm may be basically what you're looking for. It does data version control with/for language-agnostic pipelines that don't need to always redo the ETL phase. https://www.pachyderm.io
Dask-ML works with {scikit-learn, xgboost, tensorflow, TPOT,}. ETL is your responsibility. Loading things into parquet format affords a lot of flexibility in terms of (non-SQL) datastores or just efficiently packed files on disk that need to be paged into/over in RAM. http://ml.dask.org/examples/scale-scikit-learn.html
Sklearn.pipeline.Pipeline API: {fit(), transform(), predict(), score(),} https://scikit-learn.org/stable/modules/generated/sklearn.pi...
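A minimal stdlib sketch of the chaining pattern that Pipeline formalizes (toy steps; the real API also supports named steps, fit params, scoring, etc.):

```python
# Toy fit/transform/predict chaining, mimicking the shape of
# sklearn.pipeline.Pipeline (stdlib only; steps are placeholders).
class Scale:
    def fit(self, X):
        self.mean = sum(X) / len(X)
        return self
    def transform(self, X):
        return [x - self.mean for x in X]

class ThresholdClassifier:
    def fit(self, X, y):
        return self
    def predict(self, X):
        return [1 if x > 0 else 0 for x in X]

class Pipeline:
    def __init__(self, steps):
        self.steps = steps
    def fit(self, X, y):
        for step in self.steps[:-1]:      # transformers
            X = step.fit(X).transform(X)
        self.steps[-1].fit(X, y)          # final estimator
        return self
    def predict(self, X):
        for step in self.steps[:-1]:
            X = step.transform(X)
        return self.steps[-1].predict(X)

pipe = Pipeline([Scale(), ThresholdClassifier()])
pipe.fit([1, 2, 3, 4], [0, 0, 1, 1])
print(pipe.predict([0, 5]))
```

The payoff is that ETL/feature steps and the model travel together, so train-time and predict-time preprocessing can't drift apart.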
https://docs.featuretools.com can also minimize ad-hoc boilerplate ETL / feature engineering :
> Featuretools is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning.
The PLoS 10 Simple Rules papers distill a number of best practices:
"Ten Simple Rules for Reproducible Computational Research" http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fj...
“Ten Simple Rules for Creating a Good Data Management Plan” http://journals.plos.org/ploscompbiol/article?id=10.1371/jou...
In terms of the scientific method, fishing for relations without first stating a null hypothesis like "there is no significant relation between the [independent and dependent] variables" may be dangerously unprofessional p-hacking and data dredging, and may result in an overfit model that only seems to predict or classify the training and test data (when split with e.g. sklearn.model_selection.train_test_split and a given random seed).
One of these days (in the happy new year!) I'll get around to updating these notes with the aforementioned tools and docs: https://wrdrd.github.io/docs/consulting/data-science#scienti...
IDK what https://kaggle.com/learn has specifically in terms of analysis workflow? Their docker containers have very many tools configured in a reproducible way: https://github.com/Kaggle/docker-python/blob/master/Dockerfi...
Ask HN: What is your favorite open-source job scheduler
Too many business scripts rely on cron(8) to run. Classic cron cannot handle task duration, fail (only with email), same-task piling, linting, ...
So what is your favorite open-source, easy-to-bundle/deploy job scheduler that is easy to use, has logging capability and config-file linting, and can handle common use cases: kill if longer than a limit, limit resources, prevent launching when the previous run is not finished, ...
systemd-crontab-generator may be usable for something like linting classic crontabs? https://github.com/systemd-cron/systemd-cron
Systemd/Timers as a cron replacement: https://wiki.archlinux.org/index.php/Systemd/Timers#As_a_cro...
Celery supports periodic tasks:
> Like with cron, the tasks may overlap if the first task doesn’t complete before the next. If that’s a concern you should use a locking strategy to ensure only one instance can run at a time (see for example Ensuring a task is only executed one at a time).
http://docs.celeryproject.org/en/latest/userguide/periodic-t...
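One simple (if crude) locking strategy for "prevent launching while the previous run is unfinished" is an atomically created lock file; a stdlib sketch (the lock path is hypothetical, and a crashed job would leave a stale lock, which holding an fcntl.flock on an open descriptor avoids):

```python
import os
import sys

LOCKFILE = "/tmp/myjob.lock"  # hypothetical path

def run_exclusively(job):
    # O_CREAT | O_EXCL fails atomically if the lock file already exists,
    # preventing a second instance from piling up behind a slow run.
    try:
        fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        print("previous run still active; skipping", file=sys.stderr)
        return
    try:
        os.write(fd, str(os.getpid()).encode())
        job()
    finally:
        os.close(fd)
        os.unlink(LOCKFILE)

run_exclusively(lambda: print("job ran"))
```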
How to Version-Control Jupyter Notebooks
Mentioned in the article: manual nbconvert, nbdime, ReviewNB (currently GitHub only), jupytext.
Jupytext includes a bit of YAML in the e.g. Python/R/Julia/Markdown header. https://github.com/mwouts/jupytext
Huge +1s for both nbdime and jupytext. Excellent tools both.
Really enjoying jupytext. I do a bunch of my training from Jupyter and it has made my workflow better.
Teaching and Learning with Jupyter (A book by Jupyter for Education)
Still under heavy development, but already useful. And we accept pull requests to add items or fix issues. Join us!
A small suggestion.
Since Jupyter Notebook can be used for both programming and documentation, why don't you use Jupyter Notebook itself as the source of your document?
It is actually very easy to set up a Jupyter-Notebook-driven .ipynb -> .html publishing pipeline with GitHub + a local Jupyter instance
Here is a toy example (for my own github page)
https://ontouchstart.github.io/
https://github.com/ontouchstart/ontouchstart.github.io
The convert script is here (also a Jupyter Notebook)
https://github.com/ontouchstart/ontouchstart.github.io/blob/...
You get the idea.
BTW, to make the system fully replicable, I use docker for the local Jupyter instance, which can be launched via the Makefile
https://github.com/ontouchstart/ontouchstart.github.io/blob/...
Here is the custom Dockerfile:
https://github.com/ontouchstart/ontouchstart.github.io/blob/...
Margin Notes: Automatic code documentation with recorded examples from runtime
Slightly related are Go examples: they're tests and documentation at the same time. It'd be nice if someone hooked in a listener to automatically collect examples, though.
And Elixir's doctests https://elixir-lang.org/getting-started/mix-otp/docs-tests-a...
And Python's Axe!
I mean doctest: https://docs.python.org/3/library/doctest.html
1. sys.settrace() for {call, return, exception, c_call, c_return, and c_exception}
2. Serialize as/to doctests. Is there a good way to serialize Python objects as Python code?
3. Add doctests to callables' docstrings with AST
Mutation testing tools may have already implemented serialization to doctests but IDK about docstring modification.
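A rough sketch of steps 1-2 above: record a call and its return value with sys.settrace, then emit the captured pair as a doctest-style example via repr() (toy function; a real tool would also handle step 3's AST/docstring rewriting):

```python
import sys

records = []

def tracer(frame, event, arg):
    # Record calls to the target function and their return values.
    if frame.f_code.co_name == "add":
        if event == "call":
            records.append(dict(frame.f_locals))   # captured arguments
        elif event == "return":
            records[-1]["__return__"] = arg        # captured return value
    return tracer

def add(a, b):
    return a + b

sys.settrace(tracer)
add(2, 3)
sys.settrace(None)

# Step 2: serialize the captured call as a doctest line via repr().
rec = records[0]
doctest_line = f">>> add({rec['a']!r}, {rec['b']!r})\n{rec['__return__']!r}"
print(doctest_line)
```

repr() only round-trips for simple values; serializing arbitrary Python objects back to source is the hard part the question above asks about.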
... MOSES is an evolutionary algorithm that mutates and simplifies a combo tree until it has built a function with less error for the given input/output pairs.
Time to break academic publishing's stranglehold on research
Add in that the quality of the system is massively broken... peer review is about as accurate as the flip of a coin. It does not promote groundbreaking or novel research; it barely (arguably doesn't) even contribute to quality research. I had a colleague recently be told by a journal editor, "we don't publish critiques from junior scholars." So much for the nature of peer review being entirely driven by the quality of the work.
As one of those academics... I keep getting requests to peer review; I respectfully make clear I don't review for non-open-access journals anymore. Same with publishing. I'm not tenure-track, so I am not primarily evaluated based on output.
Publishing is broken, but it is really just part of the broader and even more broken nature of academic research.
What would be a good alternative to peer-review though? Genuinely interested.
Open publishing and commenting would be a good start: having a dialog, as is done at conferences. Older academic journal articles (pre-1900) read much more like discussions than like the hundred-dollar-word vomits of modern academic publishing. The broken incentives are at the core of this rotten fruit, though. Just making journals open isn't enough.
We have (almost) open publishing and open commenting. Did that improve anything?
There's open commenting? I've never seen the back and forth of the review process be published. It should be published.
https://hypothes.is supports threaded comments on anything with a URI; including PDFs and specific sentences or figures thereof. All you have to do is register an account and install the browser extension or include the JS in the HTML.
It's based on open standards and an open platform.
W3C Web Annotations: http://w3.org/annotation
About Hypothesis: https://web.hypothes.is/about/
Ask HN: How can I learn to read mathematical notation?
There are a lot of fields I'm interested in, such as machine learning, but I struggle to understand how they work as most resources I come across are full of complex mathematical notation that I never learned how to read in school or University.
How do you learn to read this stuff? I'm frequently stumped by an academic paper or book that I just can't understand due to mathematical notation that I simply cannot read.
https://en.wikipedia.org/wiki/List_of_logic_symbols
https://en.wikipedia.org/wiki/Table_of_mathematical_symbols
These might help a bit.
But as someone with similar problems, I'm beginning to think there's no real solution other than thousands of hours of studying.
There are a number of Wikipedia pages which catalog various uses of symbols for various disciplines:
Outline_of_mathematics#Mathematical_notation https://en.wikipedia.org/wiki/Outline_of_mathematics#Mathema...
List_of_mathematical_symbols https://en.wikipedia.org/wiki/List_of_mathematical_symbols
List_of_mathematical_symbols_by_subject https://en.wikipedia.org/wiki/List_of_mathematical_symbols_b...
Greek_letters_used_in_mathematics,_science,_and_engineering https://en.wikipedia.org/wiki/Greek_letters_used_in_mathemat...
Latin_letters_used_in_mathematics https://en.wikipedia.org/wiki/Latin_letters_used_in_mathemat...
For learning the names of symbols (and maybe also their meaning as conventionally used in a particular field at a particular time in history), spaced repetition with flashcards in a tool like Anki may be helpful.
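For what it's worth, the scheduling idea behind such flashcard tools can be sketched as Leitner boxes (Anki itself uses a more elaborate SM-2-derived algorithm; the intervals below are made up):

```python
from collections import defaultdict

# box number -> review interval in days (hypothetical intervals)
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

boxes = defaultdict(lambda: 1)  # every card starts in box 1 (daily review)

def review(card, correct):
    if correct:
        boxes[card] = min(boxes[card] + 1, 5)  # promote: review less often
    else:
        boxes[card] = 1                        # demote: back to daily review
    return INTERVALS[boxes[card]]

print(review("∀ : for all", True))            # promoted to box 2 -> 3 days
print(review("∀ : for all", True))            # promoted to box 3 -> 7 days
print(review("∮ : contour integral", False))  # stays in box 1 -> 1 day
```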
For typesetting, e.g. Jupyter Notebook uses MathJax to render LaTeX with JS.
latex2sympy may also be helpful for learning notation.
… data-science#mathematical-notation https://wrdrd.github.io/docs/consulting/data-science#mathema...
New law lets you defer capital gains taxes by investing in opportunity zones
I believe the way the program works is that you can defer taxes on your original capital gains (and the cost basis gets increased, so your deferred taxes are less than you would pay otherwise), reinvest them in an "opportunity zone", and not pay capital gains on your investment in the opportunity zone if you hold it long enough.
e.g. Bob bought Apple stock for $50 a share back in the day, and sells it for $200/share. He defers his taxes until 2026. Instead of paying capital gains tax on $150/share, the cost basis is adjusted by 15% so in addition to benefiting from the time value of money, future Bob will only be taxed on $142.50 of capital gains. Bob can buy a house in an "opportunity zone" (from scrolling around the embedded map, there's plenty of million dollar+ houses in these areas. There's also lots of sports teams and stadiums in these areas, so maybe Bob buys an NFL team or a parking lot next to their stadium), rent it out for 10 years, sell it, and not have to pay any capital gains tax on the appreciation. Definitely not a bad deal for him!
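The arithmetic in that example, as stated (using the commenter's hypothetical figures and their 15% basis step-up assumption):

```python
# The Bob example above: $50 cost basis stepped up by 15%,
# shares sold at $200 (all figures are the commenter's hypothetical).
basis, sale, step_up = 50.0, 200.0, 0.15
adjusted_basis = basis * (1 + step_up)  # 57.50
taxable_gain = sale - adjusted_basis    # 142.50 instead of 150.00
print(f"taxable gain per share: ${taxable_gain:.2f}")
```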
Yes, it's more akin to a 1031 exchange, but opened up so that non-real-estate capital gains are eligible.
Is it just capital gains? Wondering if it applies to any other forms of active or passive income.
How are profits from these investments treated?
Can you "swap til you drop" like with a 1031 exchange?
> Is it just capital gains? Wondering if it applies to any other forms of active or passive income.
I would also like some information about this.
+1 for investing in distressed areas; self-nominated with intent or otherwise.
If it's capital gains only, -1 on requiring the sale of capital assets in order to be sufficiently incentivized. (Because then the opportunity to invest tax-advantaged in Opportunity Zones is denied to persons without assets to liquidate; i.e. unequal opportunity.)
How to Write a Technical Paper [pdf]
A few years ago I found a great version of a similar piece that proposed something like a 'sandwich' model for each section, and for the work as a whole. A sandwich layer was a hook (something interesting), the meat (what you actually did), and a summary.
I failed to save it and I haven't been able to dig it up again, but I liked the idea. The paper was written in the style it described, as well.
5 paragraph essay? https://en.wikipedia.org/wiki/Five-paragraph_essay
> The five-paragraph essay is a format of essay having five paragraphs: one introductory paragraph, three body paragraphs with support and development, and one concluding paragraph. Because of this structure, it is also known as a hamburger essay, one three one, or a three-tier essay.
The digraph presented in the OP really is a great approach, IMHO:
## Introduction
## Related Work, System Model, Problem Statement
## Your Solution
## Analysis
## Simulation, Experimentation
## Conclusion
... "Elements of the scientific method" https://en.wikipedia.org/wiki/Scientific_method#Elements_of_...
No, it was on arXiv somewhere. It was written as a journal article.
edit: Ah! It was linked upthread, on bioRxiv.
JSON-LD 1.0: A JSON-Based Serialization for Linked Data
JSON-LD 1.1 https://www.w3.org/TR/json-ld11/
"Changes since 1.0 Recommendation of 16 January 2014" https://www.w3.org/TR/json-ld11/#changes-from-10
Jeff Hawkins Is Finally Ready to Explain His Brain Research
Cortical column: https://en.wikipedia.org/wiki/Cortical_column
> In the neocortex 6 layers can be recognized although many regions lack one or more layers, fewer layers are present in the archipallium and the paleopallium.
What this means in terms of optimal artificial neural network architecture and parameters will be interesting to learn about; in regards to logic, reasoning, and inference.
According to "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function" https://www.frontiersin.org/articles/10.3389/fncom.2017.0004... , the human brain appears to be [at most] 11-dimensional (11D); in terms of algebraic topology https://en.wikipedia.org/wiki/Algebraic_topology
Relatedly,
"Study shows how memories ripple through the brain" https://www.ninds.nih.gov/News-Events/News-and-Press-Release...
> The [NeuroGrid] team was also surprised to find that the ripples in the association neocortex and hippocampus occurred at the same time, suggesting the two regions were communicating as the rats slept. Because the association neocortex is thought to be a storage location for memories, the researchers theorized that this neural dialogue could help the brain retain information.
Re: Topological graph theory [1], is it possible to embed a graph on a space filling curve [2] (such as a Hilbert R-tree [3])?
[1] https://en.wikipedia.org/wiki/Topological_graph_theory
[2] https://en.wikipedia.org/wiki/Space-filling_curve
[3] https://en.wikipedia.org/wiki/Hilbert_R-tree
[4] https://github.com/bup/bup (git packfiles)
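One concrete way to connect [2] and [3]: the classic xy-to-d mapping from the Hilbert curve literature assigns every grid point a position along the curve, so graph nodes laid out on a grid get a one-dimensional, locality-preserving key (the basis of the Hilbert R-tree). A minimal Python sketch of that mapping (the function name is mine; the algorithm is the standard one described in the Wikipedia article linked as [2]):

```python
def hilbert_xy_to_d(n, x, y):
    """Map grid point (x, y), with 0 <= x, y < n and n a power of two,
    to its distance d along the Hilbert curve filling the n x n grid."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the sub-curve lines up.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Nearby (x, y) points tend to get nearby d values, which is why such keys work well for spatial indexing and on-disk layout.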
Interstellar Visitor Found to Be Unlike a Comet or an Asteroid
It's too bad we're not ready to launch probes to visit and explore such transient objects at a moment's notice.
I think it would make sense to plan a probe mission where the probe would be put into storage to stand by for these kind of events. It would still be a challenge and might be impossible to find a launch window and reach the passing object in time, but it would be worth trying.
Just imagine how tragic it would be, if an ancient artifact of an alien civilization drifts by earth and we don't manage to have at least a look at it.
Wouldn’t such an object be emitting radio signals that earth could receive?
Not if it's something like another civilization's Tesla Roadster.
> Not if it's something like another civilization's Tesla Roadster.
'Oumuamua is red and headed toward Pegasus (the winged horse) after a very long journey that started a long time ago in spacetime. It is wildly tumbling off-kilter and potentially creating a magnetic field that would be useful for interplanetary space travel.
They're probably pointing us to somewhere else from somewhere else.
If this is any indication of the state of another civilization's advanced physics, and it missed us by a wide margin, they're probably laughing at our energy and water markets; and indicating that we should be focused on asteroid impact avoidance (and then we will really laugh about rockets and red electromagnetic kinetic energy machines and asteroid mining). https://en.wikipedia.org/wiki/Asteroid_impact_avoidance
"Amateurs"
[We watch it fly by, heads all turning]
Maybe it would've been better to have put a lone starman in the passenger seat, or two star-people total?
Given the skull shape of 2015 TB145 [1] (discovered in October 2015; due to return in November 2018), maybe 'Oumuamua [2] is a pathology of Mars and an acknowledgement of our spacefaring intentions? Red, subsurface water, disrupted magnetic field.
[1] https://en.wikipedia.org/wiki/2015_TB145
[2] https://en.wikipedia.org/wiki/%CA%BBOumuamua
In regards to a red, unshielded, earth vehicle floating in solar orbit with a suited anthropomorphic creature whose head is too big for the windshield:
"What happened here?"
Publishing more data behind our reporting
Publishing raw data itself is definitely a good start, but there also needs to be a push towards a standardized way of sharing data along with its lineage (dependent sources, experimental design/generation process, metadata, graph relationship of other uses, etc.).
> Publishing raw data itself is definitely a good start, but there also needs to be a push towards a standardized way of sharing data along with its lineage (dependent sources, experimental design/generation process, metadata, graph relationship of other uses, etc.).
Linked Data based on URIs is reusable. ( https://5stardata.info )
The Schema.org Health and Life Sciences extension is ahead of the game here, IMHO. MedicalObservationalStudy and MedicalTrial are subclasses of https://schema.org/MedicalStudy . {DoubleBlindedTrial, InternationalTrial, MultiCenterTrial, OpenTrial, PlaceboControlledTrial, RandomizedTrial, SingleBlindedTrial, SingleCenterTrial, and TripleBlindedTrial} are subclasses of schema.org/MedicalTrial.
A schema.org/MedicalScholarlyArticle (a subclass of https://schema.org/ScholarlyArticle ) can have a https://schema.org/Dataset. https://schema.org/hasPart is the inverse of https://schema.org/isPartOf .
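A minimal sketch of how those types compose, serialized as JSON-LD with Python's json module (the @type and property names are actual schema.org terms; the @id URLs are placeholders):

```python
import json

# A MedicalScholarlyArticle linked to its Dataset via hasPart/isPartOf.
article = {
    "@context": "https://schema.org",
    "@type": "MedicalScholarlyArticle",
    "@id": "https://example.org/article/1",      # placeholder URL
    "hasPart": {
        "@type": "Dataset",
        "@id": "https://example.org/dataset/1",  # placeholder URL
        "isPartOf": {"@id": "https://example.org/article/1"},
    },
}
doc = json.dumps(article, indent=2)
```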
More structured predicates which indicate the degree to which evidence supports/confirms or disproves current and other hypotheses (according to a particular Person or Persons on a given date and time; given a level of scrutiny of the given information) are needed.
In regards to epistemology, there was some work on Fact Checking ( e.g. https://schema.org/ClaimReview ) in recent times. To quote myself here, from https://news.ycombinator.com/item?id=15528824 :
> In terms of verifying (or validating) subjective opinions, correlational observations, and inferences of causal relations; #LinkedMetaAnalyses of documents (notebooks) containing structured links to their data as premises would be ideal. Unfortunately, PDF is not very helpful in accomplishing that objective (in addition to being a terrible format for review with screen reader and mobile devices): I think HTML with RDFa (and/or CSVW JSONLD) is our best hope of making at least partially automated verification of meta analyses a reality.
"#LinkedReproducibility"; "#LinkedMetaAnalyses", "#StudyGraph"
CSV 1.1 – CSV Evolved (for Humans)
Well, if you want to improve tabular data formats:
1. Add a version identifier / content-type on the first line!
2. Create a formal grammar for this CSV format
3. Specify preferred character-encoding
4. Provide some tooling (validation, CSV 1.1 => HTML, CSV => Excel)
5. Add the option to specify column type (string, int, date)
6. Specify ISO-8601 as the preferred date format
7. Allow 'reheading' the columns in the file itself. This is useful in streaming data.
8. Specify the format of the newlines.
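Items 1, 5, and 6 above can be prototyped in a few lines; a sketch against a hypothetical format (the `#csv/1.1` version line and the `name:type` header syntax are invented here for illustration, not part of any spec):

```python
import csv
from datetime import date

# Hypothetical input illustrating a version line (item 1), typed
# columns (item 5), and ISO-8601 dates (item 6).
raw = """\
#csv/1.1
name:string,count:int,when:date
widget,3,2018-09-30
gadget,12,2018-10-01
"""

lines = raw.splitlines()
assert lines[0] == "#csv/1.1"  # item 1: version identifier first

# Parse "column:type" headers, then cast each cell accordingly.
header = [col.split(":") for col in lines[1].split(",")]
casts = {"string": str, "int": int, "date": date.fromisoformat}

rows = [
    {name: casts[typ](value)
     for (name, typ), value in zip(header, record)}
    for record in csv.reader(lines[2:])
]
```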
CSVW: CSV on the Web https://w3c.github.io/csvw/
"CSV on the Web: A Primer" http://www.w3.org/TR/tabular-data-primer/
"Model for Tabular Data and Metadata on the Web" http://www.w3.org/TR/tabular-data-model/
"Generating JSON from Tabular Data on the Web" (csv2json) http://www.w3.org/TR/csv2json/
"Generating RDF from Tabular Data on the Web" (csv2rdf) http://www.w3.org/TR/csv2rdf/
...
N. Allow authors to (1) specify how many header rows are metadata and (2) what each row is. For example: 7 metadata header rows: {column label, property URI [path], datatype URI, unit URI, accuracy, precision, significant figures}
With URIs, we can merge, join, and concatenate data (when e.g. study control URIs for e.g. single/double/triple blinding/masking indicate that the https://schema.org/Dataset meets meta-analysis inclusion criteria).
"#LinkedReproducibility"; "#LinkedMetaAnalyses"
Ask HN: Which plants can be planted indoors and easily maintained?
Chlorophytum comosum (spider plants) are good air-filtering houseplants that are also easy to take starts of: https://en.wikipedia.org/wiki/Chlorophytum_comosum
Houseplant: https://en.wikipedia.org/wiki/Houseplant
Graduate Student Solves Quantum Verification Problem
"Classical Verification of Quantum Computations" Mahadev. (2018)
https://arxiv.org/abs/1804.01082
https://www.arxiv-vanity.com/papers/1804.01082/
https://scholar.google.com/scholar?cluster=10138991277567750...
The down side to wind power
>To estimate the impacts of wind power, Keith and Miller established a baseline for the 2012‒2014 U.S. climate using a standard weather-forecasting model. Then, they covered one-third of the continental U.S. with enough wind turbines to meet present-day U.S. electricity demand. The researchers found this scenario would warm the surface temperature of the continental U.S. by 0.24 degrees Celsius, with the largest changes occurring at night when surface temperatures increased by up to 1.5 degrees. This warming is the result of wind turbines actively mixing the atmosphere near the ground and aloft while simultaneously extracting [kinetic energy] from the atmosphere’s motion.
I am confused: How does the warming work exactly and is this actually a global climate effect? Because this part of the article makes it sound to me as if it's just a very localised change of temperature caused by the exchange of different air layers, which can't be right? Because you couldn't really compare that to climate change on a global scale.
The example is clearly hypothetical only. We're never going to cover one third of the continental US with wind turbines.
The more important information to me is that neither wind nor solar have the power density that has been claimed.
> For wind, we found that the average power density — meaning the rate of energy generation divided by the encompassing area of the wind plant — was up to 100 times lower than estimates by some leading energy experts
...
> For solar energy, the average power density (measured in watts per meter squared) is 10 times higher than wind power, but also much lower than estimates by leading energy experts.
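To put those densities in perspective, Miller and Keith's wind figure is roughly 0.5 W per square meter of wind-plant footprint. A back-of-the-envelope sketch (the ~450 GW average U.S. electric load is a round-number assumption of mine, and the study's own one-third-coverage scenario uses its particular turbine layout, so this is only an order-of-magnitude cross-check):

```python
avg_us_demand_w = 450e9   # ~450 GW average U.S. electric load (assumption)
wind_density_w_m2 = 0.5   # ~0.5 W/m^2, the Miller & Keith wind estimate

area_km2 = avg_us_demand_w / wind_density_w_m2 / 1e6
# ~900,000 km^2 of wind plant, versus ~8.1e6 km^2 of continental U.S.
```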
Then you have the separate problem that the wind doesn't always blow and the sun doesn't always shine, so you need a huge storage infrastructure (batteries, presumably) alongside the wind and solar generating infrastructure.
IMO nuclear is the only realistic alternative to coal to provide reliable, zero-emission "base load" power generation. Wind and solar could make sense in some use cases but not in general.
> IMO nuclear is the only realistic alternative to coal to provide reliable, zero-emission "base load" power generation. Wind and solar could make sense in some use cases but not in general.
How much heat energy does a reactor with n meters of concrete around it, located on a water supply in order to use water in an open or closed loop, protected with national security resources, waste into the environment?
I'd be interested to see which power sources the authors of this study would choose as a control for these otherwise merely sensational stats.
From https://news.ycombinator.com/item?id=17806589 :
> Canada (2030), France (2021), and the UK (2025) are all working to entirely phase out coal-fired power plants for very good reasons (such as neonatal health).
Would you burn a charcoal grill in an enclosed space like a garage? No.
Thermodynamics of Computation Wiki
"Quantum knowledge cools computers: New understanding of entropy" (2011) https://www.sciencedaily.com/releases/2011/06/110601134300.h...
> The new study revisits Landauer's principle for cases when the values of the bits to be deleted may be known. When the memory content is known, it should be possible to delete the bits in such a manner that it is theoretically possible to re-create them. It has previously been shown that such reversible deletion would generate no heat. In the new paper, the researchers go a step further. They show that when the bits to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer's state to that of the computer in such a way that they know more about the memory than is possible in classical physics.
"The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123
Landauer's principle: https://en.wikipedia.org/wiki/Landauer%27s_principle
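For scale, the Landauer bound from the linked article works out to about 3 zeptojoules per irreversibly erased bit at room temperature; a quick sketch:

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Minimum heat dissipated per irreversibly erased bit: k_B * T * ln(2)
landauer_j_per_bit = k_B * T * log(2)  # ~2.9e-21 J per bit
```

Erasing a gigabit at this theoretical limit dissipates only a few picojoules, many orders of magnitude below what present hardware actually spends.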
"Thin film converts heat from electronics into energy" (2018) http://news.berkeley.edu/2018/04/16/thin-film-converts-heat-...
> This study reports new records for pyroelectric energy conversion energy density (1.06 Joules per cubic centimeter), power density (526 Watts per cubic centimeter) and efficiency (19 percent of Carnot efficiency, which is the standard unit of measurement for the efficiency of a heat engine).
"Pyroelectric energy conversion with large energy and power density in relaxor ferroelectric thin films" (2018) https://www.nature.com/articles/s41563-018-0059-8
Carnot heat engine > Carnot cycle, Carnot's theorem, "Real heat engines": https://en.wikipedia.org/wiki/Carnot_heat_engine
Carnot's theorem > Applicability to fuel cells and batteries: https://en.wikipedia.org/wiki/Carnot%27s_theorem_(thermodyna...
> Since fuel cells and batteries can generate useful power when all components of the system are at the same temperature [...], they are clearly not limited by Carnot's theorem, which states that no power can be generated when [...]. This is because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells and batteries instead convert chemical energy to work.[6] Nevertheless, the second law of thermodynamics still provides restrictions on fuel cell and battery energy conversion
[deleted]
Is there enough heat energy from a datacenter to -- rather than heating oceans (which can result in tropical storms) -- turn a turbine (to convert heat energy back into electrical energy)?
Is there a statistic which captures the amount of heat energy discharged into ocean/river/lake water? "100% clean energy with PPAs (Power Purchase Agreements)" while bleeding energy into the oceans isn't quite representative of the total system.
"How to Reuse Waste Heat from Data Centers Intelligently" (2016) https://www.datacenterknowledge.com/archives/2016/05/10/how-...
> There are two big issues with data center waste heat reuse: the relatively low temperatures involved and the difficulty of transporting heat. Many of the reuse applications to date have used the low-grade server exhaust heat in an application physically adjacent to the data center, such as a greenhouse or swimming pool in the building next door. This is reasonable given the relatively low temperatures of data center return air, usually between 28° and 35°C (80-95°F), and the difficulty in moving heat around. Moving heat energy frequently requires insulated ducting or plumbing instead of cheap, convenient electrical cables. Trenching and installation to run a hot water pipe from a data center to a heat user may cost as much as $600 per linear foot. Just the piping to share heat with a facility one-quarter mile away might add $750,000 or more to a data center construction project. There’s currently not much that can be done to reduce this cost.
> To address the low-temperature issue, some data center operators have started using heat pumps to increase the temperature of waste heat, making the thermal energy much more valuable, and marketable. Waste heat coming out of heat pumps at temperatures in the range of 55° to 70°C (130-160°F) can be transferred to a liquid medium for easier transport and can be used in district heating, commercial laundry, industrial process heat, and many more. There are even High Temperature (HT) and Very High Temperature (VHT) heat pumps capable of moving low-grade data center heat up to 140°C.
Heat Pump: https://en.wikipedia.org/wiki/Heat_pump
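The temperature lift quoted above (roughly 35°C exhaust boosted to 70°C supply) implies an ideal heating COP near 10; real heat pumps achieve some fraction of this Carnot limit. A quick sketch:

```python
T_cold = 35 + 273.15  # data-center exhaust temperature, K
T_hot = 70 + 273.15   # district-heating supply temperature, K

# Ideal (Carnot) heating coefficient of performance: T_hot / (T_hot - T_cold)
cop_carnot = T_hot / (T_hot - T_cold)  # ~9.8 units of heat per unit of work
```

The smaller the lift, the higher the COP, which is why reusing low-grade heat locally (greenhouse, pool) is so much easier than upgrading it for district heating.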
"Data Centers That Recycle Waste Heat" https://www.datacenterknowledge.com/data-centers-that-recycl...
Why Do Computers Use So Much Energy?
> Also, to foster research on this topic we have built a wiki, combining lists of papers, websites, events pages, etc. We highly encourage people to visit it, sign up, and start improving it; the more scientists get involved, from the more fields, the better!
Thermodynamics of Computation Wiki https://centre.santafe.edu/thermocomp/Santa_Fe_Institute_Col...
Justice Department Sues to Stop California Net Neutrality Law
Sorry if I get some basic understanding of the law wrong but...
Isn't this the same thing as regulating car emissions? Doesn't SB 822 only apply to providers in the state itself? Wouldn't the telecoms be welcome to engage in another method of end-customer billing in other states?
What am I missing?
> Like California’s auto emissions laws that forced automakers to adopt the standards for all production, the state’s new net neutrality rules could push broadband providers to apply the same rules to other states.
I think you're right, the motives are very similar.
> Attorney General Jeff Sessions said that California’s net neutrality law was illegal because Congress granted the federal government, through the F.C.C., the sole authority to create rules for broadband internet providers. “States do not regulate interstate commerce — the federal government does,” Mr. Sessions said in a statement.
I thought Republicans were pro-states' rights and limited government? How does their position on this jibe with their ideology?
Expansion of federal jurisdiction under the Commerce Clause is an egregious violation of Constitutional law.
Does the federal government have the enumerated right under the Commerce Clause to, for example, ban football for anyone that doesn't have a disability? No!
Was the Commerce Clause sufficient authorization for Federal prohibition of alcohol? No! An Amendment to the Constitution was necessary. And Federal alcohol prohibition, along with the necessarily unequal State prohibitions, miserably failed to achieve the intended outcomes.
Where is the limit? How can they claim to support a states' rights, limited government position while expanding jurisdiction under the Interstate Commerce Clause? "Substantially affecting" interstate commerce is a very slippery slope.
Furthermore, de-classification from Title II did effectively relieve the FCC of authority to regulate ISPs, as the current administration's FCC very clearly argued (in favor of special interests over those of the majority): they claimed that it's the FTC's job, and now they're claiming it's their job.
Without Title II classification, FCC has no authority to preempt state net neutrality regulation. California and Washington have the right to regulate ISPs within their respective states.
Outrageous!
Limited government: https://en.wikipedia.org/wiki/Limited_government
States' rights: https://en.wikipedia.org/wiki/States%27_rights
[Interstate] Commerce Clause: https://en.wikipedia.org/wiki/Commerce_Clause
Net neutrality in the United States > Repeal of net neutrality policy: https://en.m.wikipedia.org/wiki/Net_neutrality_in_the_United...
The limit is established, and constantly reevaluated, by the Supreme Court. For example, it was held that gun control cannot be done through the Commerce Clause.
Car emissions and ISPs are different. As ISPs are very much perfect examples of truly local things (they need to reach your devices with EM signals either via cables or air radio), the Federal government might try to argue that the net neutrality regulation of California affects the whole economy substantially, because it allows too much interstate competition due to the lack of bundling/throttling by ISPs.
Similarly, the problem with car emissions might be that requiring things at the time of sale affects what kinds of cars are sold to CA.
Is the Commerce Clause too vague? Yes. Is there a quick and sane way to fix it? I see none. Is it at least applied consistently? Well, sort of. But we shall see.
ISPs are the very opposite of local, as the only reason I have an ISP is to deliver bits from the rest of the world. Of course, the FCC doesn't seem to understand that...
To summarize the points made in [1]: products can be sold across state lines, internet service sold in one state cannot be sold across state lines.
[1] https://news.ycombinator.com/item?id=18111651
In my opinion, the court has significantly erred in redefining interstate commerce to include (1) intrastate-only commerce; and (2) non-commerce (i.e. locally grown and unsold wheat).
Furthermore - and this is a bit off topic - unalienable natural rights (Equality, Life, Liberty, and pursuit of Happiness) are of higher precedence. I mention this because this is yet another case where the court will be interpreting the boundary between State and Federal rights; and it's very clear that the founders intended for the powers of the federal government to be limited -- certainly not something that the Commerce Clause should be interpreted to supersede.
What penalties and civil fines are appropriate for States or executive branch departments that violate the Constitution; for failure to uphold Oaths to uphold the Constitution?
The problem is, someone has to interpret what kind of economy the Founders intended.
Is it okay if a State opts to withdraw from the interstate market for wheat? Because without power to meddle with intra-state production, consumption and transactions, it's entirely possible.
White House Drafts Order to Probe Google, Facebook Practices
This sounds like they want the equal-representation policies that the Republican Party got rolled back in the '80s (ruled unconstitutional, IIRC). That's what allowed the rise of partisan "news". It seems like any "equal exposure" policy would hit the same issues.
That said, the primary "imbalanced exposure" seems to be due to evicting people who simply spend their time attacking minorities, attacking equal rights, and promoting violence towards anyone that they dislike. For whatever reason, the Republican Party seems to have decided that those people represent "conservative" views that private companies should have to support.
I.e.: people who are exercising their right to free speech.
'Right to free speech' does not exist outside the government. It never has, unless there's an amendment to the First Amendment that no one is telling us about.
Repeating what I said in an earlier comment: they were able to grow to the size they have become because they are exempted from libel laws under safe harbor. The argument for that was that they were neutral platforms. They no longer are, so they either need to lose the protections or be subject to the first amendment; they should not be able to have it both ways. And I don't think there is case law to back this one way or the other yet...
> they were able to grow to the size they have become because they are exempted from libel laws under safe harbor
This was not a selective protection. When the government grants limited resources like electromagnetic spectrum and right of way, they're not directly making a monopoly, but the FCC does then claim right to regulate speech.
In the interest of fairness, the FCC classed telecommunication service providers as common carriers; thus authorizing FCC to pass net neutrality protections which require equal prioritization of internet traffic. (No blocking, No throttling, No paid prioritization). The current administration doesn't feel that that's fair, and so they've moved to dismantle said "burdensome regulations".
The current administration is now apparently attempting to argue that information service providers - which are all equally granted safe harbor and obligated to comply with DMCA - have no right to take down abuse and harassment because anti-trust monopoly therefore Freedom of Speech doesn't apply to these corporation persons.
Selective bias, indeed! Broadcast TV and Radio are subject to different rules than Cable (non-broadcast) TV.
Other regimes have attempted to argue that the government has the right to dictate the media as well.
Taking down abuse and harassment is necessary and well within the rights of a person and a corporation in the United States. Taking down certain content is now legally required within 24 hours of notice from the government in the EU.
Where is the line between a media conglomerate that produces news entertainment and an information service provider? If there is none, and the government has the right to regulate "equal time" on non-granted-spectrum media outlets, future administrations could force ConservativeNewsOutletZ and LiberalNewsOutletZ to carry specific non-emergency content, to host abusive and offensive rhetoric, and to be sued for being forced to do so because there's no safe harbor.
Can anyone find the story of how the GOP strongarmed and intimidated Facebook into "equal time" (and then we were all shoved full of apparently Russian conservative "fake news" propaganda) before the most recent election where the GOP won older radio, TV, and print voters and young people didn't vote because it appeared to be unnecessary?
Meanwhile, the current administration rolled back the "burdensome regulation" that was to prevent ISPs from selling complete internet usage history; regardless of age.
Maybe there's an exercise that would be helpful for understanding the "corporate media filter" and the "social media filter"?
You, having no money -- while watching corporate profits soar and income inequality grow to unprecedented heights -- will choose to take a job that requires you to judge whether thousands of reported pieces of content a day are abusive, harassing, making specific threats, inciting specific destructive acts, recruiting for hate groups, depicting abuse; or just good 'ol political disagreement over issues, values, and the appropriate role of the punishing and/or nurturing state. You will do this for weeks or months, because that's your best option, because nobody else is standing in the mirror behind these people who haven't learned to respectfully disagree over facts and data (evidence).
Next, you will plan segments of content time interspersed with ads paid for by people who are trying to sell their products, grow their businesses, and reach people. You will use a limited amount of our limited electromagnetic spectrum which the government has sold your corporate overlords for a limited period of time, contingent upon your adherence to specific and subjective standards of decency as codified in the stated regulations.
In both cases, your objective is to maximize profit for shareholders.
Your target audiences may vary from undefined (everyone watching), to people who only want to review fun things that they agree with in their safe little microcosm of the world, to people who know how to find statistics like corporate profits, personal savings rate, infant mortality, healthcare costs per capita, and other Indicators identified as relevant to the Targets and Goals found in the UN Sustainable Development Goals (Global Goals Indicators).
Do you control what the audience shares?
Ask HN: Books about applying the open source model to society
I've been thinking for some time now that as productivity keeps growing, not all people will need to work any more. Society will eventually start to resemble an open source project where a few core contributors do the real work (and get to decide the direction), some others help around, and the majority of people just benefit without having to do anything. I'm wondering if any books have been written to explore this concept further?
> I've been thinking for some time now that as productivity keeps growing, not all people will need to work any more.
How much energy do autotrophs and heterotrophs need to thrive?
"But then we'll be rewarding laziness!"
Some people do enjoy the work they've chosen to do. We enjoy the benefits of upward mobility here in the US; the land of opportunity.
Why would I fully retire at 65 (especially if lifespan extension really is in reach)?
> Society will eventually start to resemble an open source project where a few core contributors do the real work (and get to decide the direction), some others help around, and the majority of people just benefit without having to do anything.
Open-source governance https://en.wikipedia.org/wiki/Open-source_governance
Free-rider problem https://en.wikipedia.org/wiki/Free-rider_problem
As we continue to reward work, the people who are investing in the means of production (energy, labor, automation, raw materials) and science (research and development; education) continue to amass wealth and influence.
This concentration of wealth -- wealth inequality -- has historically presaged and portended unrest.
How contributions to open source projects are reinforced, what motivates people who choose to contribute (altruism, enlightened self-interest, compassion, acceptance), and what makes a competitive and thus sustainable open source project is an interesting study.
... Business models for open-source software: https://en.wikipedia.org/wiki/Business_models_for_open-sourc...
... Political Science: https://en.wikipedia.org/wiki/Political_science
... National currencies are valued in FOREX markets: https://en.wikipedia.org/wiki/Foreign_exchange_market
> I'm wondering if any books have been written to explore this concept further?
"The Singularity is Near: When Humans Transcend Biology" (2005) contains a number of extrapolated predictions; chief among these is that there will continue to be exponential growth in technological change https://en.wikipedia.org/wiki/The_Singularity_Is_Near
... Until we reach limits; e.g. the carrying capacity of our ecosystem, the edge of the universe.
"The Limits to Growth" (1972, 2004) https://en.wikipedia.org/wiki/The_Limits_to_Growth
"Leverage Points: Places to Intervene in a System" (2010) https://news.ycombinator.com/item?id=17781927
Who owns what and who 'gets to' just chill while the solar robots brush their teeth? Heady questions. "Tired yet?"
The Aragon Project has a really interesting take on open source governance:
""" IMAGINE A NATION WITHOUT LAND AND BORDERS
A digital jurisdiction
> Aragon Network will be the first community governed decentralized organization whose goal is to act as a digital jurisdiction, an online decentralized court system that isn’t bound by traditional artificial barriers such as national jurisdictions or the borders of a single country.
Aragon organizations can be upgraded seamlessly using our aragonOS architecture. They can solve disputes between two parties by using the decentralized court system, a digital jurisdiction that operates only online and utilizes your peers to resolve issues.
The Aragon Network Token, ANT, puts the power into the hands of the people participating in the operation of the Network. Every single aspect of the Network will be governed by those willing to make an effort for a better future. """
Today, Europe Lost The Internet. Now, We Fight Back
Here's a quote from this excellent article:
> An error rate of even one percent will still mean tens of millions of acts of arbitrary censorship, every day.
And a redundant -- positively defiant -- link and page title:
"Today, Europe Lost The Internet. Now, We Fight Back." https://www.eff.org/deeplinks/2018/09/today-europe-lost-inte...
Firms with 50 or less employees should stay that small, really.
VPN providers in North and South America FTW.
> VPN providers in North and South America FTW.
Article 13 will affect the entire Internet, not just people in Europe. Most people on the Internet use large, multinational platforms. Those platforms will set rules according to the lowest common denominator, because it's the easiest to implement.
This means that people all over the world are going to have a much more difficult time with any user-generated content. That's true even for user-created content, even if there is no other copyright holder involved (look at how badly Content ID has played out).
Or they just stop doing business in Europe.
Since Europe is quite a big market, that may not be an option; it would be easier to geolocate and restrict just EU traffic.
Or just not, if there's no nexus to Europe. I suspect for a small business, the right thing to do would be to simply ignore EU directives. I would not be surprised if the US passed a law making judgments against US companies without a European nexus (users in Europe would not count) unenforceable in the US. That was done with the SPEECH Act to stop libel tourism.
Technically, the phrase "Useful Arts and Sciences" in the Copyright Clause of the US Constitution applies to just that; the definitions of which have coincidentally changed over the years.
The harms to freedom of speech -- even an impossible 99% accuracy in content filtering still results in far too much censorship -- so significantly outweigh the benefits for a limited number of special interests intent on thwarting inferior American information services (which also currently host "art" and content pertaining to the "useful arts") that it's hard to believe this new policy will have its intended effects.
Haven't there been multiple studies showing that free marketing from e.g. content piracy -- people who experience and recommend said goods at $0 -- is actually a net positive for the large corporate entertainment industry? That, unimpeded, content spreads like the common cold through word of mouth, resulting in a greater number of artful impressions?
How can they not anticipate de-listing of EU content from news and academic article aggregators as an outcome of these new policies? (Resulting in an even greater outsized impact on the one possible front page that consumers choose to consume.)
For countries in the EU with less than 300 million voters, if you want:
- time for your headline: $
- time for your snippet: $$
- time for your og:description: $$
- free video hosting: $$$
- video revenue: $$$$
- < 30% American content: $$$$$
Pay your bill.
And what of academic article aggregators? Can they still index schema:ScholarlyArticle titles and provide a value-added information service for science?
Consumer science (a.k.a. home economics) as a college major
> That's why we need to bring back the old home economics class. Call it "Skills for Life" and make it mandatory in high schools. Teach basic economics along with budgeting, comparison shopping, basic cooking skills and time management.
Some Jupyter notebooks for these topics that work with https://mybinder.org could be super helpful. A self-paced edX course could also be a great intro to teaching oneself though online learning.
* Personal Finance (budgets, interest, growth, inflation, retirement)
* Food Science (nutrition, meal planning for n people, food prep safety, how long certain things can safely be left out on the counter)
* Productivity Skills (GTD, context switching overhead, calendar, email labels, memo app / shared task lists)
There were FACS (Family and Consumer Studies/Sciences) courses in our middle and high school curricula: nutrition, cooking, sewing; family planning, carrying a digital baby for a while.
Home economics https://en.wikipedia.org/wiki/Home_economics
* Family planning
https://en.wikipedia.org/wiki/Family_planning
> * Personal Finance (budgets, interest, growth, inflation, retirement)
Personal Finance https://en.wikipedia.org/wiki/Personal_finance
Khan Academy > College, careers, and more > Personal finance https://www.khanacademy.org/college-careers-more/personal-fi...
"CS 007: Personal Finance For Engineers" https://cs007.blog
https://reddit.com/r/personalfinance/wiki
> * Food Science (nutrition, meal planning for n people, food prep safety, how long certain things can safely be left out on the counter)
Food Science https://en.wikipedia.org/wiki/Food_science
Dietary management https://en.wikipedia.org/wiki/Dietary_management
Nutrition Education: https://en.wikipedia.org/wiki/Nutrition_Education
MyPlate https://en.wikipedia.org/wiki/MyPlate
Healthy Eating Plate https://www.hsph.harvard.edu/nutritionsource/healthy-eating-...
How to make salads, smoothies, sandwiches
How to compost and avoid unnecessary packaging
* School, College, Testing, "How Children Learn"
GED, SAT, ACT, MCAT, LSAT, GRE, GMAT, ASVAB
Defending a Thesis, Bar Exam, Boards
Khan Academy > College, careers, and more https://www.khanacademy.org/college-careers-more
Educational Testing https://wrdrd.github.io/docs/consulting/educational-testing
529 Plans (can be used for qualifying educational expenses for any person) https://en.wikipedia.org/wiki/529_plan
Middle School "Glimpse" project: Past, Present, Future. Present, Future: plan your 4-year high school course plan, pick 3 careers, pick 3 colleges (and how much they cost)
High school literature: write a narrative essay for college admissions
* Health and Medicine
How to add emergency contact and health information to your phone, carseat (ICE: In Case of Emergency)
How to get health insurance ( https://healthcare.gov/ )
"What's your blood type?" (?!)
Khan Academy > Science > Health and Medicine https://www.khanacademy.org/science/health-and-medicine
Facebook vows to run on 100 percent renewable energy by 2020
Miami Will Be Underwater Soon. Its Drinking Water Could Go First
Now, now, let's focus on the positives here:
- more pollution from shipping routes through the Arctic circle (and yucky-looking icebergs that tourists don't like)
- less beachfront property
- more desalinatable water
- hotter heat
- more repulsive, detestable significant others (displaced global unrest)
- costs of responding to natural disasters occurring with greater frequency due to elevated ocean temperatures
- fewer parking spaces (!)
What are the other costs and benefits here?
I've received a number of downvotes for this comment. I think it's misunderstood, and that's my fault: I should have included [sarcasm] around the whole comment [/sarcasm].
I've written about our need to address climate change here in past comments. I think the administration's climate change denials (see: "climate change politifact") and regulatory rollbacks are beyond despicable: they're sabotaging the United States by allowing more toxic chemicals into the environment that we all share, and allowing more sites that must be protected with tax dollars that aren't there because these industries pay far less than benchmarks in terms of effective tax rate. We know that vehicle emissions, mercury, and coal ash are toxic: why would we allow people to violate the rights of others in that way?
A person could voluntarily consume said toxic byproducts and not have violated their own rights or the rights of others, you understand. There's no medical value and low potential for abuse, so we just sit idly by while they're violating the rights of other people by dumping toxic chemicals into the environment that are both poisonous and strongly linked to climate change.
What would help us care about this? A sarcastic list of additional reasons that we should care? No! Miami underwater during tourist season is enough! I've had enough!
So, my mistake here - my downvote-earning mistake - was dropping my generally helpful, hopeful tone for cynicism and sarcasm that wasn't motivating enough.
We need people to regulate pollution in order to prevent further costs of climate change. Water in the streets holds up commerce and travel, hampers national security, and destroys roads.
We must stop rewarding pollution if we want it - and definitely resultant climate change - to stop. What motivates other people to care?
Free hosting VPS for NGO project?
The Burden: Fossil Fuel, the Military and National Security
Here's a link to the video: https://vimeo.com/194560636
Scientists Warn the UN of Capitalism's Imminent Demise
The actual document title: "Global Sustainable Development Report 2019 drafted by the Group of independent scientists: Invited background document on economic transformation, to chapter: Transformation: The Economy" (2018) https://bios.fi/bios-governance_of_economic_transition.pdf [PDF]
Why I distrust command economies (beyond just because of our experiences with violent fascism and defense overspending and the subsequent failures of various communist regimes):
We have elections today. We don't choose to elect people that regard the environment (our air, water, land, and other natural resources) as our most important focus. A command economy driven by these folks for longer than a term limit would be even more disastrous.
The market does not solve for 'externalities': things that aren't costed in. We must have regulation to counteract the blind optimization for profit (and efficiency) which capitalism rewards most.
Environmental regulation is currently insufficient; worldwide. That is the consensus from the Paris Agreement which 195 countries signed in 2015. https://en.wikipedia.org/wiki/Paris_Agreement
Maybe incentives?
We could sell tokens for how much pollutants we're allowed to f### everyone else over with and penalize exceeding the amount we've purchased. That would incentivize firms to pollute less so that they can save money by having to buy fewer tokens. (Europe does this already; and it's still not going to save the planet from industrial production externalities)
So, while I'm wary of any suggestion that a command economy would somehow bring forth talent in governance, I look to this article for actionable suggestions that penalize and/or incentivize sustainable business and living practices.
Sustainable reporting really is a must: how can I design an investment portfolio that excludes reckless, irresponsible, indifferent, and careless investments and highly values sustainability?
No one likes to be driven by harsh penalties; everyone likes to be rewarded (even with carrots as incentives).
Markets do not solve for long-term outcomes. Case in point: the market has not chosen the most energy-efficient cryptocurrencies. Is this an information asymmetry issue: do people just not know, or not care because the incentives are so alluring, the brand is so strong, or the perceived security assurances of the network outweigh the energy use (and environmental impact) in comparison to dry cleaning and fossil fuel transport?
How would a command economy respond to this? It really is denial and delusion to think that the market will cast aside less energy efficient solutions in order to save the environment all on its own.
So, what do we do?
Do we incentivize getting inefficient vehicles off of the road and into a recycling plant where they belong?
Do we shut down major sources of pollution (coal plants, vehicle emissions)?
Do we create tokens to account for pollution allowances (for carbon and other toxic f###ing chemicals)?
Do we cut irrational subsidies for industries that don't pay their taxes (even when they make money); so that we're aware of the actual costs of our behavior?
Do we grow hemp to absorb carbon, clean up the soil, replace emissions, and store energy?
Who's in the mood to dom these greedy shortsighted idiots into saving themselves and preventing the violation of our right to health (life)? No, you can't because you're busy violating your own rights and finding drugs/druggies and that's not allowed? Is that a lifetime position?
"Go burn a charcoal grill and your gas vehicle in your closed garage for awhile and come talk to me." That's really what we're dealing with here.
Anyways, this paper raises some good points; although I have my doubts about command economies.
[strikethrough] You can't do that to yourself. [/strikethrough] You can't do that to others (even if you pay for their healthcare afterwards).
Where's Captain Planet when you need 'em, anyways?
The problem may actually be solved by a free market... But it has to have different optimizations.
"Value" or "Quality" of a company has to stop being measured in unconstrained growth and instead measured in minimized externalities, while achieving your stated goal.
Accounting has to become a hell of a lot more complicated. We're talking "keep track of your industrial waste product", possibly even having to take responsibility in some manner for dealing with it directly.
There also needs to be some societal change as well. We need to look at how we've stretched our supply and industry chains worldwide and start to figure out ways to minimize transportation and production costs.
Competition may need to fundamentally change its nature as well. Trade secrets need to be ripped out into the light of day. It's not a contest of outdoing the other guy, but of seeing who can find a way to make the entire industry more efficient.
Definitely a lot of change needed. That is for sure.
Firefox Nightly Secure DNS Experimental Results
> The experiment generated over a billion DoH transactions and is now closed. You can continue to manually enable DoH on your copy of Firefox Nightly if you like.
...
> Using HTTPS with a cloud service provider had only a minor performance impact on the majority of non-cached DNS queries as compared to traditional DNS. Most queries were around 6 milliseconds slower, which is an acceptable cost for the benefits of securing the data. However, the slowest DNS transactions performed much better with the new DoH based system than the traditional one – sometimes hundreds of milliseconds better.
Long-sought decay of Higgs boson observed at CERN
Furthermore, both teams measured a rate for the decay that is consistent with the Standard Model prediction, within the current precision of the measurement.
And now everyone: Noooo, not again.
(Explanation: it's well-known that the Standard Model can't be completely correct but again and again physicists fail to find an experiment contradicting its predictions, see https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo... for example)
Well, the standard model can be correct. It is correct until some experiment proves otherwise.
It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM.
> It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM.
https://en.wikipedia.org/wiki/Magic_number_(programming)#Unn...
> The term magic number or magic constant refers to the anti-pattern of using numbers directly in source code
Who is going to request changes when doing a code review on a pull request from God?
But which branch do you pull from? God has been forked so many times, it's hard to keep track. There are several distros available, so I guess you just get to pick the one that checks the most of your needs at the time.
Sen. Wyden Confirms Cell-Site Simulators Disrupt Emergency Calls
Building a Model for Retirement Savings in Python
re: pulling historical data with pandas-datareader, backtesting, algorithmic trading: https://www.reddit.com/r/Python/comments/7zxptg/pulling_stoc...
re: historical returns
- [The article uses a constant 7% annual return rate]
- "The current average annual return from 1923 (the year of the S&P’s inception) through 2016 is 12.25%." https://www.daveramsey.com/blog/the-12-reality (but that doesn't account for inflation)
- https://www.quantopian.com/posts/56b62019a4a36a79da000059 (300%+ over n years (from a down market))
Is there a Jupyter notebook with this code (with a requirements.txt for https://mybinder.org (repo2docker))?
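In lieu of a notebook, here's a minimal sketch of such a model (a constant nominal return mirrors the article's 7% assumption; the contribution amount, horizon, and inflation rate below are purely illustrative):

```python
# Minimal retirement-savings projection sketch (not the article's exact model):
# constant nominal return, fixed annual contribution, and an inflation
# adjustment to express the ending balance in today's dollars.

def project_savings(years, annual_contribution, nominal_return=0.07,
                    inflation=0.03, initial_balance=0.0):
    """Return (nominal_balance, real_balance) after `years` of contributions."""
    balance = initial_balance
    for _ in range(years):
        # Grow the existing balance, then add this year's contribution.
        balance = balance * (1 + nominal_return) + annual_contribution
    # Deflate back to today's purchasing power.
    real_balance = balance / (1 + inflation) ** years
    return balance, real_balance

nominal, real = project_savings(years=30, annual_contribution=10_000)
print(f"nominal: ${nominal:,.0f}  real (today's dollars): ${real:,.0f}")
```

A constant return is of course the weakest assumption here; swapping in sampled historical returns (e.g. via pandas-datareader, as in the linked thread) would make it a simple Monte Carlo model.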
New E.P.A. Rollback of Coal Pollution Regulations Takes a Major Step Forward
Would you move your family downwind from a coal plant? Why or why not?
Coal ash pollutes air, water, rain (acid rain), crops (our food), and soil. Which rights of victims does coal pollution infringe? Who is liable for the health effects?
Canada (2030), France (2021), and the UK (2025) are all working to entirely phase out coal-fired power plants for very good reasons (such as neonatal health).
~"They're just picking on coal": No, we're choosing renewables that are lower cost AND don't make workers and citizens sick.
If you can mine for coal, you can set up solar panels and wind turbines.
If you can run a coal mine; you can buy some cheap land, put up solar panels and wind turbines, and connect it to the grid.
Researchers Build Room-Temp Quantum Transistor Using a Single Atom
"Quasi-Solid-State Single-Atom Transistors" (2018) https://onlinelibrary.wiley.com/doi/full/10.1002/adma.201801...
New “Turning Tables” Technique Bypasses All Windows Kernel Mitigations
Um – Create your own man pages so you can remember how to do stuff
Interesting project. I believe these two shell functions in the shell's initialization file (~/.profile, ~/.bashrc, etc.) can serve as a poor man's um:
umedit() { mkdir -p ~/notes; vim ~/notes/"$1.txt"; }
um() { less ~/notes/"$1.txt"; }
If you write these in .rst, you can generate actual manpages with Sphinx: http://www.sphinx-doc.org/en/master/usage/configuration.html...
sphinx.builders.manpage: http://www.sphinx-doc.org/en/master/_modules/sphinx/builders...
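A minimal `conf.py` sketch for the Sphinx manpage builder (the project, file, and author names here are illustrative, not from the linked docs):

```python
# conf.py -- minimal Sphinx config for building manpages from .rst notes.
# Build with: sphinx-build -b man . _build/man
project = "um-notes"      # illustrative project name
master_doc = "index"      # the .rst file Sphinx starts from

# Each tuple: (source start file, manpage name, description, authors, section)
man_pages = [
    ("index", "um", "personal notes as a manpage", ["me"], 1),
]
```

The resulting `_build/man/um.1` can then be viewed with `man -l um.1` or dropped onto your MANPATH.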
[deleted]
Leverage Points: Places to Intervene in a System
Extremely interesting take, with a lot of good stuff to think about.
I'm unclear about the claim that less economic growth would be better, though, and the author seems very committed to it. I wasn't able to find the article they reference as explaining how less growth is what we really need (J.W. Forrester, World Dynamics, Portland OR, Productivity Press, 1971), and it comes from almost 50 years ago, which might as well be another economic era altogether.
Does anyone know what the arguments are, what assumptions they require, and whether they still apply today? My understanding is that "less growth is better" is a distinctly minority take amongst modern economists, but the rest of this article seems very intelligently laid out, so I'd like to dig deeper.
I've always thought that for any dial we have, there's always an optimal setting, whether it's tax rates, growth rates, birth rates, etc., and blindly pushing one way or the other (like both political parties tend to do) is not helpful, or at the very least merely indicates different value systems.
The book is called "Limits to Growth", and it's a work of the Club of Rome. You can watch this video to get an idea of their work [1]. At first it's a very strange idea; after all, we work and consume every day in order to grow the economy. But every healthy system has a homeostatic point: a point where it doesn't need to grow, only to be maintained.
We are now a fat society and we need to get our health back; we need to degrow. We need to work much fewer hours and consume much less. Reducing working hours is a great leverage point. People will start to have time to care about the community and to take care of their own health. Maybe they'll have more time to take a walk instead of using the car. This can improve health and the environment but will not contribute to economic growth; healthy people that don't use cars are not good friends of economic growth as measured by GDP.
English is not my mother language; Ursula Le Guin had an eloquent post about this on her blog, but it was removed to appear in her last book. The post is called "Clinging to a Metaphor" (the metaphor is economic growth) and the book is called "No Time to Spare".
[1] https://youtu.be/kz9wjJjmkmc
"The Limits to Growth" (1972) https://en.wikipedia.org/wiki/The_Limits_to_Growth
"Thinking in Systems: a Primer" (2008) https://g.co/kgs/B71ebC
Glossary of systems theory https://en.wikipedia.org/wiki/Glossary_of_systems_theory
Systems Theory https://en.wikipedia.org/wiki/Systems_theory
...
Computational Thinking https://en.wikipedia.org/wiki/Computational_thinking
Which of the #GlobalGoals (UN Sustainable Development Goals) Targets and Indicators are primary leverage points for ensuring - if not growth - prosperity? https://en.wikipedia.org/wiki/Sustainable_Development_Goals
SQLite Release 3.25.0 adds support for window functions
https://www.windowfunctions.com is a good introduction to window functions.
Besides that, the comprehensive testing and evaluation of SQLite never ceases to amaze me. I'm usually hesitant to call software development "engineering", but SQLite is definitely well-engineered.
This looks great, but I couldn't get through the first question on aggregate functions. Are there any SQL books/tutorials that go over things like this?
A lot of material I've seen has been like the classic image of "How to draw an owl. First draw two circles, then draw the rest of the owl", where they tell you the super basic stuff, then assume you know everything.
Ibis uses window functions for aggregations if the database supports them. IDK when support for SQLite's new window functions will be implemented? http://docs.ibis-project.org/sql.html#window-functions
[EDIT]
I created an issue for this here: https://github.com/ibis-project/ibis/issues/1597
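For readers asking for a gentler introduction above: here's a minimal, self-contained window function example using Python's bundled sqlite3 module (this requires the linked library be SQLite >= 3.25):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (day INTEGER, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 30)])

# A running total: SUM() OVER aggregates per-row without collapsing the rows,
# which is exactly what a plain GROUP BY cannot do.
rows = con.execute("""
    SELECT day,
           amount,
           SUM(amount) OVER (ORDER BY day) AS running_total
    FROM sales
    ORDER BY day
""").fetchall()
for row in rows:
    print(row)  # (1, 10, 10), (2, 20, 30), (3, 30, 60)
```

The default frame for `ORDER BY` inside `OVER` is "start of partition through the current row", which is what makes this a cumulative sum.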
Update on the Distrust of Symantec TLS Certificates
Is the certifi bundle (2018.8.13) on PyPI also updated? https://pypi.org/project/certifi/
https://github.com/certifi/certifi.io/issues/18
> Are these still in the bundle?
> Should projects like requests which depend on certifi also implement this logic?
The Transport Layer Security (TLS) Protocol Version 1.3
Academic Torrents – Making 27TB of research data available
All the .torrent files are served over HTTP, so with a simple MITM attack a bad actor could swap in their own custom-tweaked version of any data set here in order to achieve whatever goals might serve the bad actor's interests.
I really wish we could get basic security concepts added to the default curriculum for grade schoolers. You shouldn't need a PhD in computer security to know this stuff. These site creators have PhDs in other fields, but obviously no concept of security. This stuff should be basic literacy for everyone.
> This stuff should be basic literacy for everyone.
Arguably, one compromised PKI x.509 CA jeopardizes all SSL/TLS channel sec if there's no certificate pinning and an alternate channel for distributing signed cert fingerprints (cryptographically signed hashes).
We could teach blockchain and cryptocurrency principles: private/secret key, public key, hash verification; then there's money on the table.
GPG presumes secure key distribution (`gpg --verify .asc`).
TUF is designed to survive certain role key compromises. https://theupdateframework.github.io
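A minimal sketch of the "alternate channel" idea: pin a fingerprint obtained out-of-band (e.g. from an HTTPS page or a signed release announcement) and verify the downloaded artifact against it. The dataset bytes and names below are made up for illustration:

```python
import hashlib
import hmac

def verify_download(payload: bytes, pinned_sha256_hex: str) -> bool:
    """Check a downloaded artifact against a SHA-256 fingerprint distributed
    over a separate, trusted channel. compare_digest avoids timing leaks."""
    actual = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(actual, pinned_sha256_hex)

dataset = b"example dataset contents"                 # hypothetical artifact
pinned = hashlib.sha256(dataset).hexdigest()          # in practice: from the alternate channel

print(verify_download(dataset, pinned))               # True
print(verify_download(b"tampered contents", pinned))  # False
```

This only authenticates integrity relative to the pinned value; it's the job of the alternate channel (TLS, GPG signatures, TUF metadata) to make the pinned value itself trustworthy.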
1/0 = 0
1/0 = 1(±∞)
https://twitter.com/westurner/status/960508624849244160
> How many times does zero go into any number? Infinity. [...]
> How many times does zero go into zero? infinity^2?
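For context on the convention being debated: some systems define 1/0 = 0 to keep division total, while IEEE 754 floating point defines 1/0 as ±∞ and 0/0 as NaN. A small Python sketch of the IEEE side:

```python
import math

# Python raises ZeroDivisionError for both 1/0 and 1.0/0.0, but IEEE 754
# floats define the limiting values: 1/0 -> +inf, -1/0 -> -inf, 0/0 -> NaN.
try:
    1 / 0
except ZeroDivisionError as exc:
    print("int division by zero:", exc)

# The IEEE convention is still observable through math.inf:
print(1.0 / math.inf)                        # 0.0 (the inverse of 1/0 = inf)
print(math.inf - math.inf)                   # nan (indeterminate forms become NaN)
print(math.copysign(1.0, 1.0 / -math.inf))   # -1.0: IEEE even keeps a signed zero
```

So "1/0 = infinity" and "infinity * 0 is indeterminate" are both baked into the float standard, which is one reason 0/0 is NaN rather than "infinity squared".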
Power Worth Less Than Zero Spreads as Green Energy Floods the Grid
I don't truly understand this "problem". I understand storing the energy in batteries is currently very expensive economically and materially.
However, I believe there are plenty of "goods" (irrespective of whether they are bulk materials or partially processed products) which have a high processing-energy-per-volume ratio (this does not need to be recoverable stored energy).
Allow me to give an example: currently we have a drought in Belgium (or at least Flanders). We are not landlocked; there is plenty of water in the sea. Desalination is energy intensive. Instead of only looking at energy storage, why can't we increase the processing capacity (more desalination sites capable of working in parallel) and desalinate sea water during the energy flood? I don't expect this to be an ideal real-world example, only a pattern for identifying such examples: any product (composite parts, or bulk material) which is relatively compact and has some high energy-per-product-volume processing step. Just do the process (desalination, welding some part to another part...) when the sun shines, and store the products for later.
Products with very high step energy density are good candidates for storing, and could help flatten daily variations, and perhaps even seasonal variations!
Now some companies would prefer avoiding risk if they don't have guaranteed orders far enough into the future, then perhaps there should be a market for insurance or loans, so that the company is encouraged to take the risk, instead of wasting the cheap energy...
Capital costs vs. marginal costs.
You're going to build a $100M desalination plant and run it for three hours a day? That's a ton of money sitting idle most of the day, far more than what is recovered with zero operating costs.
(This is called the utilization factor -- how long a piece of equipment is used vs. staying idle)
Ideally you want useful processes with low capital costs and expensive marginal/energy costs. Desalination is not one of those.
A desal plant is a useful thing and can be run around the clock off traditional energy sources. The "free energy" hours would help lower the costs.
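The utilization-factor argument can be made concrete with a toy levelized-cost sketch (all numbers below are hypothetical): capital cost is spread over whatever output the plant actually produces, so low utilization inflates the cost per unit even when the energy itself is free.

```python
# Toy levelized-cost sketch: amortize capital over total lifetime output.

def cost_per_unit(capital_cost, lifetime_years, hours_per_day,
                  output_per_hour, energy_cost_per_hour=0.0):
    """Total cost (capital + energy) divided by total lifetime output."""
    hours = lifetime_years * 365 * hours_per_day
    total_cost = capital_cost + hours * energy_cost_per_hour
    return total_cost / (hours * output_per_hour)

# Same hypothetical $100M plant: 3 h/day on free surplus power
# vs. around the clock on paid power.
surplus_only = cost_per_unit(100e6, 25, 3, output_per_hour=1000)
around_clock = cost_per_unit(100e6, 25, 24, output_per_hour=1000,
                             energy_cost_per_hour=500)
print(f"3 h/day, free energy:  ${surplus_only:.3f}/unit")
print(f"24 h/day, paid energy: ${around_clock:.3f}/unit")
```

With these made-up numbers, the around-the-clock plant is several times cheaper per unit of water despite paying for its energy, which is the parent comment's point about idle capital.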
I can even imagine a SETI-like application where people who over-generate power are able to donate it to causes of their preference...
But if you are using traditional energy sources to run it all the time, it is no longer 'solving' the problem of burning off excess energy during peak renewable times.
Someone is having to build a lot of highly wasteful, redundant infrastructure.
Rational cryptocurrency mining firms can use the excess (unstorable) energy by converting it back to money (while the sun shines and the wind blows).
Money > Energy > Money
> Someone is having to build a lot of highly wasteful, redundant infrastructure.
We're nowhere near having the energy infrastructure necessary to support everyone having an electric vehicle yet.
Energy storage is key to maximizing returns from renewables and minimizing irreversible environmental damage.
Kernels, a free hosted Jupyter notebook environment with GPUs
Hey Ben, are these going to support arbitrary CUDA?
At the moment, we're focused on providing great support for the Python and R analytics/machine learning ecosystems. We'll likely expand this in the future, and in the meantime it's possible to hack through many other use cases we don't formally support well.
How do you handle custom environment requirements, whether it’s Python version, library version, or more complex things in the environment that some code might run on?
Basically, suppose I wanted everything that I could define in a Docker container to be available “as the environment” in which the notebook is running. How do I do that?
I ask because I’ve started to see an alarming proliferation of “notebook as a service” platforms that don’t offer that type of full environment spec, if they offer any configuration of the run time environment at all.
I’ve taught probability and data science at university level and worked in machine learning in a variety of businesses too, and I’d say for literally all use cases, from the quickest little pure-pedagogy prototype of a canned Keras model to a heavily customized use case with custom-compiled TensorFlow, different data assets for testing vs ad hoc exploration vs deployment, etc., the absolutely minimum thing needed before anything can be said to offer “reproducibility” is complete specification of the run time environment and artifacts.
The trend to convince people that a little "poke around with scripts in a managed environment" offering is value-additive is dangerous. It's very similar to MATLAB's approach of entwining all data exploration with the atrocious development habits that the console environment facilitates (and of specifically targeting university students with free licenses -- a drug-dealer model to get engineers hooked on MATLAB's workflow and then leverage employers into buying and standardizing on abjectly bad MATLAB products).
Any time I meet young data scientists I always try to encourage them to avoid junk like that. It’s vital to begin experiments with fully reproducible artifacts like thick archive files or containers, and to structure code into meaningful reproducible units even for your first ad hoc explorations, and to absolutely always avoid linear scripting as an exploratory technique (it is terrible and ineffective for such a task).
Kaggle Kernels seems like a cool idea, so long as the programmer must fully define artifacts that describe the complete entirety of the run time environment, and nobody is sold on the Kool Aid of just linear scripting in some other managed environment.
Each kernel for example could have a link back to a GitHub repo containing a Dockerfile and build scripts for what defined the precise environment the notebook is running in. Now that’s reproducible.
Here are the Kaggle Kernels Dockerfiles:
- Python: https://github.com/Kaggle/docker-python/blob/master/Dockerfi...
- R: https://github.com/Kaggle/docker-rstats/blob/master/Dockerfi...
https://mybinder.org builds containers (and launches free cloud instances) on demand with repo2docker from a (commit hash, branch, or tag) repo URL: https://repo2docker.readthedocs.io/en/latest/config_files.ht...
That’s a great first step! Adding the ability to customize on a per-notebook basis would be impressive.
Solar and wind are coming. And the power sector isn’t ready
I don't know that fatalism and hopelessness are motivating for decision makers (who are seeking greater margins regardless of policy and lobbies).
Is our transformation to 100% clean energy ASAP a certain eventuality? On a long enough timescale, it would be irrational for utilities to not choose both lower cost and more sustainable environmental impact ('price-rational', 'environment-rational').
We should expect storage and generation costs to continue to fall as we realize even just the current pipeline of capitalizable [storage] research.
Solar energy is free.
Solar Just Hit a Record Low Price in the U.S
Relevant bits:
> “On their face, they’re less than a third the price of building a new coal or natural gas power plant,” Ramez Naam, an energy expert and lecturer at Singularity University, told Earther in an email. “In fact, building these plants is cheaper than just operating an existing coal or natural gas plant.”
> There’s a 30 percent federal investment tax credit for solar projects that helps drive down the cost of this and other solar projects. But Naam said even if you take away that credit, “these bids, un-subsidized, are still cheaper than any new coal or gas plants, and possibly cheaper than operating existing plants.”
(emphasis mine)
>> Relevant bits:
>> “On their face, they’re less than a third the price of building a new coal or natural gas power plant,” Ramez Naam, an energy expert and lecturer at Singularity University, told Earther in an email. “In fact, building these plants is cheaper than just operating an existing coal or natural gas plant.”
>> There’s a 30 percent federal investment tax credit for solar projects that helps drive down the cost of this and other solar projects. But Naam said even if you take away that credit, “these bids, un-subsidized, are still cheaper than any new coal or gas plants, and possibly cheaper than operating existing plants.”
I'm assuming that's without factoring in the health cost externalities.
Yes, this is solely for cost of power. The healthcare savings and quality of life improvements are an additional bonus on top of very cheap power.
https://www.sciencenews.org/article/air-pollution-triggering... (Air pollution is triggering diabetes in 3.2 million people each year)
https://www.scientificamerican.com/article/the-other-reason-... (The Other Reason to Shift away from Coal: Air Pollution That Kills Thousands Every Year)
Causal Inference Book
Causal inference (Causal reasoning) https://en.wikipedia.org/wiki/Causal_inference ( https://en.wikipedia.org/wiki/Causal_reasoning )
Tim Berners-Lee is working on a platform designed to re-decentralize the web
Does anyone have some links on Solid that aren't media articles? I can't find anything, not even in Tim's homepage.
Home page: https://solid.mit.edu/
Spec: https://github.com/solid/solid-spec
Source: https://github.com/solid/solid
...
From https://news.ycombinator.com/item?id=16615679 ( https://westurner.github.io/hnlog/#comment-16615679 )
> ActivityPub (and OStatus, and ActivityStreams/Salmon, and OpenSocial) are all great specs and great ideas. Hosting and moderation cost real money (which spammers/scammers are wasting).
> Know what's also great? Learning. For learning, we have the xAPI/TinCan spec and also schema.org/Action.
Mastodon has now supplanted GNU StatusNet.
More States Opting to 'Robo-Grade' Student Essays by Computer
edX can automate short essay grading with edx/edx-ora2 "Open Response Assessment Suite" [1] and edx/ease "Enhanced AI scoring engine" [2].
1: https://github.com/edx/edx-ora2 2: https://github.com/edx/ease
... I believe there's also a tool for peer feedback.
Peer feedback/grading on MOOCs is pretty bad in my experience. There’s too much diversity of skills, language ability, etc. And too many people who bring their own biases and mostly ignore any grading instructions.
Peer discussion and feedback are useful in things like college classes. Much less so with MOOCs.
Ask HN: Looking for a simple solution for building an online course
I want to build an online course on graph algorithms for my university. I've tried to find a solution that lets students submit their code and then executes and tests it (an online judge), but have had no success. There are a lot of complex LMSes and none of them seem to have this feature as basic functionality.
Are there any good out-of-box solutions? I'm sure I can build a course using Moodle or another popular LMS with some plugin, but I don't want to spend my time customizing things.
I'm interested both in platforms and self-hosted solutions. Thanks!
Maybe look at Jupyter Notebook? It does much of this out of the box, but may not be exactly what you are looking for.
nbgrader is a "A system for assigning and grading Jupyter notebooks." https://github.com/jupyter/nbgrader
jupyter-edx-grader-xblock https://github.com/ibleducation/jupyter-edx-grader-xblock
> Auto-grade a student assignment created as a Jupyter notebook, using the nbgrader Jupyter extension, and write the score in the Open edX gradebook
... networkx is a graph library written in Python which has pretty good docs: https://networkx.github.io/documentation/stable/reference/
There are a few books which feature networkx.
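As a concrete sketch of what a graded graph-algorithms exercise could check with networkx (the graph and weights here are made up for illustration):

```python
import networkx as nx

# Build a small weighted graph
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1.0),
    ("b", "c", 2.0),
    ("a", "c", 4.0),
])

# Dijkstra shortest path: a -> b -> c (total weight 3.0, beating the
# direct a -> c edge of weight 4.0)
path = nx.shortest_path(G, "a", "c", weight="weight")
length = nx.shortest_path_length(G, "a", "c", weight="weight")
print(path, length)  # ['a', 'b', 'c'] 3.0
```

In an nbgrader setup, hidden instructor tests could compare a student's result against output like this.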
There is now a backprop principle for deep learning on quantum computers
"A Universal Training Algorithm for Quantum Deep Learning" https://www.arxiv-vanity.com/papers/1806.09729/
New research a ‘breakthrough for large-scale discrete optimization’
Looks like it may be this paper:
"An Exponential Speedup in Parallel Running Time for Submodular Maximization without Loss in Approximation" https://www.arxiv-vanity.com/papers/1804.06355/
The ACM STOC 2018 conference links to "The Adaptive Complexity of Maximizing a Submodular Function" http://dl.acm.org/authorize?N651970 https://scholar.harvard.edu/files/ericbalkanski/files/the-ad...
A DOI URI would be great, thanks.
Wind, solar farms produce 10% of US power in the first four months of 2018
Take note: 10% PRODUCED, not 10% CONSUMED.
This is counting all output by wind and solar regardless if it is needed and usable when the power is being produced. This is quite important because wind and solar are not on-demand sources of power.
> This is counting all output by wind and solar regardless if it is needed and usable when the power is being produced. This is quite important because wind and solar are not on-demand sources of power.
I think you have that backwards: in the US, we lack the ability to scale down coal and nuclear plants. Solar and Wind are generally the first to get pulled offline when generated capacity exceeds demand and storage.
TIL this is called "curtailment", and it's an argument utilities have used to justify not investing in renewables, even though renewables reduce the global warming that will itself drive more electricity demand for air conditioning.
Solar energy production peaks around noon. Demand for electricity peaks in the evening. We need storage (batteries with supercapacitors out front) in order to store the difference between peak generation and peak use. Because they're unable to store this extra energy, they temporarily shut down solar and wind and leave the polluting plants online.
Consumers aren't exposed to daily price fluctuations: they get a flat rate that makes it easy to check their bill; so there's no price incentive to e.g. charge an EV at midday when energy is cheapest.
The 'Duck curve' shows this relation between peak supply and demand in electricity markets: https://en.wikipedia.org/wiki/Duck_curve
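A rough sketch of the net-load ("duck curve") arithmetic, with entirely made-up hourly demand and solar figures: demand peaks in the evening, solar peaks at noon, and the gap between the midday trough and the evening peak is what storage would have to cover.

```python
# Illustrative (made-up) hourly values in MW, hours 0-23.
demand = [30, 28, 27, 27, 28, 32, 38, 44, 46, 47, 48, 48,
          48, 47, 47, 48, 52, 58, 62, 60, 55, 48, 40, 34]
solar  = [0, 0, 0, 0, 0, 1, 4, 9, 14, 18, 21, 22,
          22, 21, 18, 14, 9, 4, 1, 0, 0, 0, 0, 0]

# Net load = demand minus solar: the "belly" and "neck" of the duck.
net_load = [d - s for d, s in zip(demand, solar)]

peak_hour = max(range(24), key=net_load.__getitem__)       # evening peak
trough_hour = min(range(6, 20), key=net_load.__getitem__)  # midday dip
print(peak_hour, trough_hour)  # 18 11
```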
Developing energy storage capabilities (through infrastructure and open access basic research that can be capitalized by all) is likely the best solution. According to a fairly recent report, we could go 100% renewable with the energy storage tech that exists today.
But there's no money for it. There's money for subsidizing oil production (regardless of harms (!)), but not so much for wind and solar. There's money for responding to natural disasters caused by global warming, but not so much for non-carbon-based energy sources that don't cause global warming. A film called "The Burden: Fossil Fuel, the Military, and National Security" quotes the actual unsubsidized price of a gallon of gasoline.
Wouldn't it be great if there were some kind of computer workload that could be run whenever energy is cheapest ('energy spot instances'), so that we can accelerate our migration to renewable energy sources that are saving the environment for future generations? If only there were people with strong incentives to create demand for power-efficient chips and inexpensive clean energy.
Where would we be if we had continued with Jimmy Carter's solar panels on the roof of the White House (instead of constant war and meddling in competing oil-producing regions of the world)?
It's good to see wind and solar growing this fast this year. A chart with cost per kWh or MWh would be enlightening.
FDA approves first marijuana-derived drug and it may spark DEA rescheduling
Perhaps someone from the US could help me understand the federal vs state legality of cannabis in the USA?
Can a state override any federal law?
Could federal-level law enforcement theoretically charge someone in a cannabis-legal state for drug offenses?
states cannot override any federal law.
the federal government merely tolerates the semi-autonomous nature of states in order to maintain order. but the legality of the federal government's supremacy is well evolved, its power and might are disproportionately greater than any collective of states', and at this point the semi-autonomous nature of states is pure fiction, as the federal government can assume jurisdiction over any intra-state matter if it wanted to through its interstate commerce power.
yes, federal level law enforcement can theoretically charge someone in a cannabis-legal state for drug offenses. this still happens. the discretion of the words of the President, the heads of the DEA and DOJ prevent the MAJORITY of it from happening and also guide the discretion of the courts and public sympathy. so right now, for the last two administrations it has not been a priority to upset the social order in cannabis-legal states. but it can still happen.
> states cannot override any federal law.
Not actually true: https://en.wikipedia.org/wiki/Nullification_(U.S._Constituti...
> not actually true
a concept which requires agreement from the federal courts themselves, who have never upheld a single argument related to this concept.
not actually what?
Selective incorporation: https://en.wikipedia.org/wiki/Incorporation_of_the_Bill_of_R...
10th Amendment: https://en.wikipedia.org/wiki/Tenth_Amendment_to_the_United_...
> The Tenth Amendment, which makes explicit the idea that the federal government is limited to only the powers granted in the Constitution, has been declared to be a truism by the Supreme Court.
Supremacy clause: https://en.wikipedia.org/wiki/Supremacy_Clause
> The Supremacy Clause of the United States Constitution (Article VI, Clause 2) establishes that the Constitution, federal laws made pursuant to it, and treaties made under its authority, constitute the supreme law of the land.[1] It provides that state courts are bound by the supreme law; in case of conflict between federal and state law, the federal law must be applied. Even state constitutions are subordinate to federal law.[2] In essence, it is a conflict-of-laws rule specifying that certain federal acts take priority over any state acts that conflict with federal law
Natural rights ('inalienable rights': Equal rights, Life, Liberty, pursuit of Happiness): https://en.wikipedia.org/wiki/Natural_and_legal_rights
9th Amendment: https://en.wikipedia.org/wiki/Ninth_Amendment_to_the_United_...
> The Ninth Amendment (Amendment IX) to the United States Constitution addresses rights, retained by the people, that are not specifically enumerated in the Constitution. It is part of the Bill of Rights.
If the 9th Amendment recognizes any unenumerated rights of the people (with Supremacy, regardless of selective incorporation), it certainly recognizes those of the Declaration of Independence (secession from the king ('CSA'), Equality, Life, Liberty, the pursuit of Happiness): our non-binding charter, which framed the entire Constitutional Convention.
All of these things have been interpreted by the courts and their conclusion was not like yours
It is nice that you are interested in these things, but they simply cannot be read verbatim and then extrapolated to other things.
This isn't educational for anybody, this is a view that lacks all consensus and all avenues to ever garner consensus in this country.
Again, I ask you to explain how the current law grants equal rights.
https://news.ycombinator.com/item?id=17401906
> We tend to have issues with Equal rights/protections: slavery, voting rights, [school] segregation. Please help us understand how to do this Equally:
>> Furthermore, (1) write a function to determine whether a given Person has a (natural inalienable) right: what information may you require? (2) write a function to determine whether any two Persons have equal rights.
Abolitionists faced similar criticism from on high.
States Can Require Internet Tax Collection, Supreme Court Rules
I have a hunch that this will, in the end, be a massive win for large retailers vs. small ones. The task of figuring out how to calculate tax for all states is more or less the same amount of work regardless of size, which means for someone like Amazon it's more or less trivial, but for a mom-and-pop store it's a major hassle.
Thankfully with Shopify it is extremely easy and straightforward to manage for my wife's small online store. Their platform does a great job properly charging taxes by state, county and city in certain situations. Then using an inexpensive plan from https://www.taxjar.com/ the entire filing and paying process is 100% automated.
In 10 minutes I was able to file and pay all the sales taxes to several states, dozens of California counties, and a handful of cities that charge additional taxes on top.
So you assume. As a small business you are unlikely to be audited, but that software could easily be wrong creating a huge minefield and potential liability.
So that would be the software company's liability. And such a business practice can differentiate the good ones from the bad ones. I'm even seeing a new business market here.
https://en.wikipedia.org/wiki/Parable_of_the_broken_window
There is zero economic gain from more complex tax rules. Further, the software does not absolve you of liability. At best they may agree to cover it, but that's unlikely and they can also go broke if they get it wrong.
Actually, current sales tax software provided by South Dakota and other states does absolve a merchant of liability if it is used to calculate the sales tax due.
I agree with this approach!
Actually, the federal government should oblige each member state to provide the algorithm, sign it cryptographically, and have it expire after a fixed time interval, with signed algorithms published for both the current and next interval, so that software can automatically fetch them and stay up to date.
Then the "business opportunity" of navigating FUD evaporates. Currently any such enterprise charging for such a service can spend a fraction of their budget lobbying against harmonization...
Since it would be an obligation of the states to the federal government, these algorithms (provided by each member state) should be hosted on a fixed federal government site.
Time to start a petition?
This would reduce costs of tax collection for all parties.
What is the most convenient format for this layered geographic data? Are the tax district boundary polygons already otherwise available as open data? What do localities call these? Sales tax tables, sales tax database, machine-readable flat files in an open format with a common schema?
How much tax revenue should it cost to provide such a service on a national level?
States, Counties, Cities, 'Tax Zones'(?) could be required to host tax.state.us.gov or similar with something like Project Open Data JSONLD /data.json that could be aggregated and shared by a server with a URL registry, a task queue service, and a CDN service.
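As a sketch of what consuming such a registry might look like on the merchant side (the schema, jurisdiction keys, and rates below are hypothetical, not any state's real published format):

```python
# Hypothetical machine-readable rate table: in practice each jurisdiction
# would publish signed records like these at a well-known URL.
RATES = {
    ("CA", None, None): 0.0725,            # state base rate (illustrative)
    ("CA", "Alameda", None): 0.0050,       # county add-on (illustrative)
    ("CA", "Alameda", "Oakland"): 0.0025,  # city add-on (illustrative)
}

def sales_tax_rate(state, county=None, city=None):
    """Sum the applicable state, county, and city rates.

    A set is used so omitted county/city arguments don't double-count
    the state-level key.
    """
    keys = {(state, None, None), (state, county, None), (state, county, city)}
    return sum(RATES.get(k, 0.0) for k in keys)

rate = sales_tax_rate("CA", "Alameda", "Oakland")
print(round(rate, 4))  # 0.08
```

The real problem, as noted above, is resolving an address to its tax district polygons; the lookup itself is the easy part.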
While the Bitcoin tax payments bill passed the Senate and House in Arizona, it was vetoed in May 2018. Seminole County in Florida now allows tax payment with cryptocurrencies such as Bitcoin:
https://cointelegraph.com/news/us-seminole-county-florida-to...
> According to a press release, the county will begin accepting Bitcoin (BTC) and Bitcoin Cash (BCH) to pay for services, including property taxes, driver license and ID card fees, as well as tags and titles. The Seminole County Tax Collector will reportedly employ blockchain payments company BitPay, which will allow the county to receive settlement the next business day directly to its bank account in US dollars.
This could also help reduce the costs of tax collection and possibly increase the likelihood of compliance with the forthcoming tax bills!
These are all very good questions, and only a community discussion among people with the right skills and interests can draft a petition. If enough people contribute to the discussion, we can make the proposal more reasonable and robust against valid criticisms; I believe we can make this happen just by starting the discussion. We can bitch on Hacker News, or we can draft a proposal for the different levels of government. The more reasonably we draft it, the higher the probability the petition will be a success. I think it wouldn't be hard to argue, in such a proposal, that a legally enforced computation should be open source: not just the algorithm itself, but also all the data lists and boundary polygons the algorithm uses.
Ask HN: Do you consider yourself to be a good programmer?
if not, why? how do you validate your achievements?
> For identifying strengths and weaknesses: "Programmer Competency Matrix":
> - http://sijinjoseph.com/programmer-competency-matrix/
> - https://competency-checklist.appspot.com/
> - https://github.com/hltbra/programmer-competency-checklist
Don’t read too much into that. TDD, for example, is not "leveling up"; it's an opinionated approach to development.
Automated testing is not a choice in many industries.
If you're not familiar with TDD, you haven't yet achieved that level of mastery.
There's a productivity boost to being able to change quickly without breaking things.
Is all unit/functional/integration testing and continuous integrating TDD? Is it still TDD if you write the tests after you write the function (and before you commit/merge)?
I think this competency matrix is a helpful resource. And I think that learning TDD is an important thing for a good programmer.
There is absolutely no need to follow TDD to be good at testing.
This is all unfounded conjecture: it seems easier to remember which parameter combinations may exist and need to be tested while writing the function; so "let's all write tests later" becomes a black-box exercise, which is indeed a helpful perspective for review, but isn't the most effective use of resources.
IMHO, being convinced that there's only one true and correct methodology (TDD, Scrum, etc.) or paradigm (functional, object-oriented, reactive programming, etc.) is a sign of being a bad programmer.
A good programmer finds common attributes and behaviors and organizes them into namespaced structs/arrays/objects with functions/methods and tests. Abstractly, which terms should we use to describe hierarchical clusters of things with information and behaviors if not those from a known software development or project management methodology?
And a good programmer asks why people might have spent so much time formalizing project development methodologies. "What sorts of product (team) failures are we dealing with here?" is an expensive question to answer as a team.
By applying the tenets of named agile software development methodologies, teams and managers can feel like they're discussing past and current experiences/successes/failures with comparable implementations of approaches that were, or are, appropriate for different contexts.
To argue the other side, just cherry picking from different methodologies is creating a new methodology, which requires time to justify basically what we already have terms for on the wall over here.
"We just pop tasks off the queue however" is really convenient for devs, but it can be kept cohesive by defining sensible queues: [kanban] board columns can indicate task/issue/card states and primacy, and [sprint] milestone planning meetings can yield complexity 'points' estimates for completable tasks and their subtasks. With team velocity (points/time), a manager can try to schedule an optimal path through tasks that meet the SMART criteria (Specific, Measurable, Achievable, Relevant, Time-bound); instead of fretting with the team over adjusting dates on a Gantt chart (task dependency graph), the team can just keep working the queues.
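A minimal sketch of that velocity arithmetic (all numbers are illustrative):

```python
import math

# Recent sprint history: story points completed per sprint.
completed_per_sprint = [21, 18, 24]
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Remaining estimated work on the backlog, in points.
backlog_points = 105

# Forecast: how many more sprints at the current velocity?
sprints_needed = math.ceil(backlog_points / velocity)
print(velocity, sprints_needed)  # 21.0 5
```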
What about your testing approach makes it 'NOT TDD'?
How long should the pre-release static analysis and dynamic analyses take in my fancy DevOps CI TDD with optional CD? Can we release or deploy right now? Why or why not?
'We can't release today because we spent too much time arguing about quotes like "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines." ("Self Reliance" 1841. Emerson) and we didn't spec out the roof trusses ahead of time because we're continually developing a new meeting format, so we didn't get to that, or testing the new thing, yet.'
A good programmer can answer the three questions in a regular meeting at any time, really:
> 1. What have you completed since the last meeting?
> 2. What do you plan to complete by the next meeting?
> 3. What is getting in your way?
And:
Can we justify refactoring right now for greater efficiency or additional functionality?
The simple solution there is to simply not use specific parameters (outside obvious edge cases, i.e., supplying -1 and 2^63 to your memory allocator). Writing a simple reproducible fuzzer is easy for most contained functions.
I find black-box testing itself fairly useful as well. The part where you forget which parameter combinations may occur can be useful, since you now (a) rely on the documentation you wrote and (b) can write your test independently of how you implemented the function, just as if you had written it beforehand. (Just don't fall into the 'write the test to pass the function' trap.)
IMHO, it's so much easier to write good, comprehensive tests while writing the function (FUT: function under test) because that information is already in working memory.
It's also easier to adversarially write tests with a fresh perspective.
I shouldn't need to fuzz every parameter for every commit. Certainly for releases.
"Building an AppSec Pipeline: Keeping your program, and your life, sane" https://www.owasp.org/index.php/OWASP_AppSec_Pipeline
I mean, in general I don't think you should write fuzzers for absolutely everything ("most contained functions" => doesn't touch a lot of other stuff, and has few parameters with a known parameter space).
The general solution is to use whatever testing methodology you are comfortable with that is effective, efficient, and covers a lot of the problem space. Of course, no testing method does all of that, so you'll have to constantly balance whatever works best (which is why I think pure TDD is overrated).
> Is all unit/functional/integration testing and continuous integrating TDD?
No. They differentiate in the matrix.
> If you're not familiar with TDD, you haven't yet achieved that level of mastery.
That's not true - I've worked on teams with far lower defect rates than the typical TDD team.
TDD can help keep a developer focused - and this can help overall productivity rates - but it doesn't directly help lower defect rates.
> TDD can help keep a developer focused - and this can help overall productivity rates - but it doesn't directly help lower defect rates.
We would need to reference some data with statistical power; though randomization and control are infeasible: no two teams are the same, no two projects are the same, no two objective evaluations of different apps' teams' defect rates are an apples to apples comparison.
Maybe it's the coverage expectation: do not add code that is not run by at least one test.
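That expectation can be made concrete with a minimal test-first sketch (the `slugify` function here is a hypothetical example, not from the matrix): the test is written before the implementation, and every line of the function exists only to satisfy it.

```python
# Written FIRST: the test states the contract before any code exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("") == ""

# Written SECOND: the implementation, grown until the test passes.
import re

def slugify(text):
    """Lowercase the text and join its alphanumeric runs with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()
print("ok")
```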
Handles are the better pointers
A few years ago I wanted to make a shoot-em-up game using C# and XNA I could play with my kids on an Xbox. It worked fine except for slight pauses for GC every once in a while which ruined the experience.
The best solution I found was to get rid of objects and allocation entirely and instead store the state of the game in per-type tables like in this article.
Then I realized that I didn't need per-type tables at all, and if I used what amounted to a union of all types I could store all the game state in a single table, excluding graphical data and textures.
Next I realized that since most objects had properties like position and velocity, I could update those with a single method by iterating through the table.
That led to each "object" being a collection of various properties acted on by independent methods which were things like gravity and collision detection. I could specify that a particular game entity was subject to gravity or not, etc.
The resulting performance was fantastic, and I found I could have many thousands of on-screen objects at a time with no hiccups. The design also made it really easy to save and replay game state: just save or serialize the single state table and the input for each frame.
The final optimization would have been to write a language that would define game entities in terms of the game components they were subject to and automatically generate the single class that was union of all possible types and would be a "row" in the table. I didn't get around to that because it just wasn't necessary for the simple game I made.
I was inspired to do this from various articles about "component based game design" published at the time that were variations on this technique, and the common thread was that a hierarchical OO structure ended up adding a lot of unneeded complexity for games that hindered flexibility as requirements changed or different behaviors for in-game entities were added.
Edit: This is a good article on the approach. http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy...
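A Python sketch of the single-table idea described above (the original was C#/XNA; the entity layout, constants, and numbers here are illustrative):

```python
# Sketch of the "single table of unioned properties" idea: every entity
# is a row; component flags decide which systems touch it.
GRAVITY = 9.8
DT = 1.0 / 60.0

# One row per entity: [x, y, vx, vy, has_gravity]
table = [
    [0.0, 100.0, 5.0, 0.0, True],   # a falling projectile
    [0.0, 0.0, 1.0, 0.0, False],    # a ground unit, no gravity
]

def update(table):
    for row in table:
        x, y, vx, vy, has_gravity = row
        if has_gravity:
            vy -= GRAVITY * DT                      # gravity "system"
        row[0], row[1] = x + vx * DT, y + vy * DT   # movement "system"
        row[2], row[3] = vx, vy

for _ in range(60):  # simulate one second at 60 fps
    update(table)

print(round(table[0][1], 2))  # projectile y has fallen under gravity
```

Replay then falls out for free: serialize `table` plus the per-frame inputs, and re-running `update` reproduces the game deterministically.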
> The final optimization would have been to write a language that would define game entities in terms of the game components they were subject to and automatically generate the single class that was union of all possible types and would be a "row" in the table
django-typed-models https://github.com/craigds/django-typed-models
> polymorphic django models using automatic type-field downcasting
> The actual type of each object is stored in the database, and when the object is retrieved it is automatically cast to the correct model class
...
> the common thread was that a hierarchical OO structure ended up adding a lot of unneeded complexity for games that hindered flexibility as requirements changed or different behaviors for in-game entities were added.
So, in order to draw a bounding box for an ensemble of hierarchically/tree/graph-linked objects (possibly modified in supersteps for reproducibility), is an array-based adjacency matrix still fastest?
Are sparse arrays any faster for this data architecture?
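For the bounding-box case specifically, the array layout makes the whole-ensemble reduction a single vectorized pass rather than a graph traversal; a NumPy sketch (positions are made up):

```python
import numpy as np

# Positions of an "ensemble" of linked objects, stored as one (N, 2)
# array rather than as a pointer-chasing object graph.
positions = np.array([
    [0.0, 1.0],
    [3.0, -2.0],
    [1.5, 4.0],
])

# Axis-aligned bounding box of the whole ensemble: two vectorized
# reductions, no per-object dispatch.
lo = positions.min(axis=0)
hi = positions.max(axis=0)
print(lo, hi)  # [ 0. -2.] [3. 4.]
```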
> django-typed-models https://github.com/craigds/django-typed-models
Interesting, never came across it. But I've got a Django GitHub library (not yet open source, because it's in a project I've been working with on and off) that does the same for managing GitHub's accounts. An Organization and User inherit the same properties and I downcast them based on their polymorphic type field.
ContentType.model_class(), models.Model with Meta.abstract = True, django-reversion, django-guardian
IDK how to do partial indexes with the Django ORM? A simple filter(bool, rows) could probably significantly shrink the indexes for such a wide table.
Arrays are fast if the features/dimensions are known at compile time (if the TBox/schema is static). There's probably an intersection between object reference overhead and array copy costs.
Arrow (with e.g. parquet on disk) can help minimize data serialization/deserialization costs and maximize copy-free data interoperability (with columnar arrays that may have different performance characteristics for whole-scene transformation operations than regular arrays).
Many implementations of SQL ALTER TABLE don't have to create a full copy in order to add a column, but do require a permission that probably shouldn't be GRANTed to the application user and so online schema changes are scheduled downtime operations.
If you're not discovering new features at runtime and your access pattern is generally linear, arrays probably are the fastest data structure.
Hacker News also has a type attribute that you might say is used polymorphically: https://github.com/HackerNews/API/blob/master/README.md#item...
Types in RDF are additive: a thing may have zero or more rdf:type property instances. RDF quads can be stored in one SQL table like:
_id,g,s,p,o,xsd:datatype,xml:lang
... with a few compound indexes that are combinations of (s,p,o) so that triple pattern graph queries like (?s,?p,1) are fast. Partial indexes (SQLite, PostgreSQL,) would be faster than full-table indexes for RDF in SQL, too.
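A sketch of that layout in SQLite (column names as above; the example data, graph names, and index choices are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE quads (
    _id INTEGER PRIMARY KEY,
    g TEXT, s TEXT, p TEXT, o TEXT,
    datatype TEXT, lang TEXT
);
-- Compound indexes so each triple pattern can be served from an index.
CREATE INDEX idx_spo ON quads (s, p, o);
CREATE INDEX idx_pos ON quads (p, o, s);
CREATE INDEX idx_osp ON quads (o, s, p);
-- Partial index (SQLite >= 3.8.0): only rows with a language tag.
CREATE INDEX idx_lang ON quads (lang) WHERE lang IS NOT NULL;
""")

con.executemany(
    "INSERT INTO quads (g, s, p, o, datatype, lang) VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("g1", "ex:alice", "ex:age", "1", "xsd:integer", None),
        ("g1", "ex:bob", "ex:age", "2", "xsd:integer", None),
        ("g1", "ex:alice", "rdfs:label", "Alice", None, "en"),
    ],
)

# Triple pattern (?s, ?p, 1): answered via an o-leading index.
rows = con.execute("SELECT s, p FROM quads WHERE o = ?", ("1",)).fetchall()
print(rows)  # [('ex:alice', 'ex:age')]
```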
Neural scene representation and rendering
This work is a natural progression from a lot of other prior work in the literature... but that doesn't make the results any less impressive. The examples shown are amazingly, unbelievably good! Really GREAT WORK.
Based on a quick skim of the paper, here is my oversimplified description of how this works:
During training, an agent navigates an artificial 3D scene, observing multiple 2D snapshots of the scene, each snapshot from a different vantage point. The agent passes these snapshots to a deep net composed of two main parts: a representation-learning net and a scene-generation net. The representation-learning net takes as input the agent's observations and produces a scene representation (i.e., a lower-dimensional embedding which encodes information about the underlying scene). The scene-generation network then predicts the scene from three inputs: (1) an arbitrary query viewpoint, (2) the scene representation, and (3) stochastic latent variables. The two networks are trained jointly, end-to-end, to maximize the likelihood of generating the ground-truth image that would be observed from the query viewpoint. See Figure 1 on Page 15 of the Open Access version of the paper. Obviously I'm playing loose with language and leaving out numerous important details, but this is essentially how training works, as I understand it based on a first skim.
EDIT: I replaced "somewhat obvious" with "natural," which better conveys what I actually meant to write the first time around.
I, literally just 15 minutes ago, had a chat with a friend of mine about exactly this: how what we are doing right now with computer vision is based on a flawed premise (supervised 2D training sets). The human brain works in 3D space (or 3D + time) and then projects all this knowledge onto a 2D image.
Here I was, thinking I finally had thought of a nice PhD project and then Deepmind comes along and gets the scoop! Haha.
"Spatial memory" https://en.wikipedia.org/wiki/Spatial_memory
It may be splitting hairs, but I think the mammalian brain, at least, can simulate/remember/imagine additional 'dimensions' like X/Y/Z spin, derivatives of velocity like acceleration/jerk/jounce.
Is space 11-dimensional (M-theory) or 2-dimensional (the holographic principle)? What 'dimensions' does the human brain process? Is this capacity innate or learned; should we expect pilots and astronauts to have learned to simulate gravity more intuitively in their minds?
New US Solar Record – 2.155 Cents per KWh
"Cost of electricity by source" https://en.wikipedia.org/wiki/Cost_of_electricity_by_source
"Electricity pricing" https://en.wikipedia.org/wiki/Electricity_pricing
> United States 8 to 17 ; 37[c] 43[c] [cents USD/kWh]
Ask HN: Is there a taxonomy of machine learning types?
Besides classification and regression, and the unsupervised methods for principal components, clustering, and frequent itemsets, what tools are there in the ML toolkit and what kinds of problems are amenable to their use?
Outline of Machine Learning https://en.wikipedia.org/wiki/Outline_of_machine_learning
Machine learning # Applications https://en.wikipedia.org/wiki/Machine_learning#Applications
"machine learning map" image search: https://www.google.com/search?q=machine+learning+map&tbm=isc...
Senator requests better https compliance at US Department of Defense [pdf]
The "Mozilla SSL Configuration Generator" has a checkbox for 'HSTS enabled?' and can generate SSL/TLS configs for Apache, Nginx, Lighttpd, HAProxy, AWS, ELB. https://mozilla.github.io/server-side-tls/ssl-config-generat...
You can select 'nginx', then 'modern', and then 'apache' for a modern Apache configuration.
Are the 'modern' configs FIPS compliant?
What browsers/tools does requiring TLS 1.3 break?
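For reference, a "modern" nginx server block from the generator looks roughly like the following. This is a sketch, not the generator's verbatim output: the exact directives change over time, and the certificate paths below are placeholders.

```nginx
server {
    listen 443 ssl http2;

    # Placeholder paths; substitute your actual certificate files.
    ssl_certificate     /path/to/signed_cert_plus_intermediates;
    ssl_certificate_key /path/to/private_key;

    # The 'modern' profile drops TLS 1.2 and older entirely.
    ssl_protocols TLSv1.3;
    ssl_prefer_server_ciphers off;

    # Added when the 'HSTS enabled?' checkbox is ticked (~2 years).
    add_header Strict-Transport-Security "max-age=63072000" always;
}
```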
Banks Adopt Military-Style Tactics to Fight Cybercrime
> In a windowless bunker here, a wall of monitors tracked incoming attacks — 267,322 in the last 24 hours, according to one hovering dial, or about three every second — as a dozen analysts stared at screens filled with snippets of computer code.
> Cybercrime is one of the world’s fastest-growing and most lucrative industries. At least $445 billion was lost last year, up around 30 percent from just three years earlier, a global economic study found, and the Treasury Department recently designated cyberattacks as one of the greatest risks to the American financial sector.
Is this type of monitoring possible (necessary, even) with blockchains? Blockchains generally silently disregard bad/invalid transactions. Where could discarded/disregarded transactions and forks be reported to in a decentralized blockchain system? Who would pay for log storage? How redundantly replicated should which data be?
How DDOS resistant are centralized and decentralized blockchains?
Exchanges have risk. In terms of credit fraud: some crypto asset exchanges do allow margin trading, many credit card companies either refuse transactions with known exchanges or charge cash advance interest rates, and all transactions are final.
Exchanges hold private keys for customers' accounts, move a lot to offline cold storage, and maybe don't do a great job of explaining that YOU SHOULD NOT LEAVE MONEY ON AN EXCHANGE. One should transfer funds to a different account; such as a hardware or paper wallet or a custody service.
Do/can crypto asset exchanges participate in these exercises? To what extent do/can blockchains help solve for aspects of our unfortunately growing cybercrime losses?
Premined blockchains could reportedly handle card/chip/PIN transaction volumes today.
No, Section 230 Does Not Require Platforms to Be “Neutral”
> It’s foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies to an imaginary standard of political neutrality. Trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would be unconstitutional under the First Amendment.
... https://en.wikipedia.org/wiki/Section_230_of_the_Communicati...
Ask HN: Do battery costs justify “buy all sell all” over “net metering”?
Are batteries the primary justification for "buy all sell all" over "net metering"?
Are next-gen supercapacitors the solution?
> Ask HN: Do battery costs justify "buy all sell all" over "net metering"?
> Are batteries the primary justification for "buy all sell all" over "net metering"?
> Are next-gen supercapacitors the solution?
With "Net Metering", electric utilities buy consumers' excess generated energy at retail or wholesale rates. https://en.wikipedia.org/wiki/Net_metering
With "Buy All, Sell All", electric utilities require consumers to sell all of the energy they generate from e.g. solar panels (usually at wholesale prices, AFAIU) and buy all of the energy they consume at retail rates. They can't place the meter after any local batteries.
Do I have this right?
Net metering:
(used-generated) x (retail || wholesale)
Buy all, sell all:
(used x retail) - (generated x wholesale)
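The two formulas above can be sketched in Python. The rates are made-up illustrative numbers, not real tariffs:

```python
# Sketch of the two billing schemes described above.

def net_metering_bill(used_kwh, generated_kwh, rate):
    """Meter runs both ways: pay (or get credited) only for net consumption."""
    return (used_kwh - generated_kwh) * rate

def buy_all_sell_all_bill(used_kwh, generated_kwh, retail, wholesale):
    """Buy everything consumed at retail; sell everything generated at wholesale."""
    return used_kwh * retail - generated_kwh * wholesale

retail, wholesale = 0.12, 0.04   # assumed $/kWh, purely illustrative
used, generated = 1000, 600      # kWh over a billing period

print(net_metering_bill(used, generated, retail))                 # ~48.0
print(buy_all_sell_all_bill(used, generated, retail, wholesale))  # ~96.0
```

With these numbers the same household pays roughly twice as much under buy all, sell all, which is the consumer-side complaint in a nutshell.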
For the energy generating consumer, net metering is a better deal: they have power when the grid is down, and they keep or earn more for the energy generation capability they choose to invest in.
Break-even on solar panels happens sooner with net metering.
Utilities argue that maintaining grid storage and transmission costs money, which justifies paying energy-generating consumers less than they pay more constant sources of energy like dams, wind farms, and commercial solar plants.
Building a two-way power transfer grid costs money. Batteries require replacement after a limited number of cycles. Spiky or bursting power generation is not good for batteries because they don't get a full cycle. [Hemp] supercapacitors can smooth out that load and handle many more partial charge and discharge cycles.
Is energy storage the primary justifying cost driver for "buy all, sell all"?
What investments are needed in order to more strongly incentivize clean energy generation? Do we need low cost supercapacitors to handle the spiky load?
Are these utilities granted a monopoly? Are they price fixing?
Energy demand from blockchain mining has not been enough to keep overall demand constant, so utilities lack the profits to invest in clean energy generation and a two-way smart grid that accommodates spiky consumer energy generation. Demand for electricity is falling as we become less wasteful and more energy-efficient. As the cost of renewable energy continues to fall (and becomes less expensive than nonrenewables), there should be more margin for energy utilities that cost-rationally and environmentally rationally choose to buy renewable energy and sell it to consumers.
Please correct me with the appropriate terminology.
How can we more strongly incentivize consumer solar panel investments?
Here's a discussion about the lower costs of hemp supercapacitors as compared with graphene super capacitors: https://news.ycombinator.com/item?id=16800693
""" Hemp supercapacitors might be a good solution to the energy grid storage problem. Hemp absorbs carbon, doesn't leave unplowable roots in the fields, returns up to 70% of nutrients to the soil, and grows quickly just about anywhere. Hemp bast fiber is normally waste. Hemp anodes for supercapacitors are made from the bast fiber that is normally waste.
Graphene is very useful; but industrial production of graphene is dangerous because lungs and blood-brain barrier.
Hemp is an alternative to graphene for modern supercapacitors (which now have much greater [energy density] in Wh/kg)
"Hemp Carbon Makes Supercapacitors Superfast” https://www.asme.org/engineering-topics/articles/energy/hemp...
> “Our device’s electrochemical performance is on par with or better than graphene-based devices,” Mitlin says. “The key advantage is that our electrodes are made from biowaste using a simple process, and therefore, are much cheaper than graphene.”
> Graphene is, however, expensive to manufacture, costing as much as $2,000 per gram. [...] developed a process for converting fibrous hemp waste into a unique graphene-like nanomaterial that outperforms graphene. What’s more, it can be manufactured for less than $500 per ton.
> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.
https://scholar.google.com/scholar?hl=en&q=hemp+supercapacit....
https://en.wikipedia.org/wiki/Supercapacitor
I feel like a broken record mentioning this again and again. """
Portugal electricity generation temporarily reaches 100% renewable
Currently we are passing through an atypical wind and rain period. Our dams are full and pouring out water, as we are producing more energy than we consume.
Despite all of this, energy prices don't drop, as they are bound to one of the few legal monopolies in Europe: the grid operation cartel (REN in the article; only one is allowed per EU country).
Also, this feels like fake news, since we have a few very old (and historically insecure, like the Sines and Carregado power plants) fossil fuel plants that are still operating with no signs of slowing down, while making profits through several scams, chemical-engineering their way into the zero-emissions lot.
are batteries (or some sort of storage) in the infrastructure plan for the future?
Hemp supercapacitors might be a good solution to the energy grid storage problem. Hemp absorbs carbon, doesn't leave unplowable roots in the fields, returns up to 70% of nutrients to the soil, and grows quickly just about anywhere.
Hemp bast fiber is normally waste. Hemp anodes for supercapacitors are made from the bast fiber that is normally waste.
Graphene is very useful; but industrial production of graphene is dangerous because lungs and blood-brain barrier.
Hemp is an alternative to graphene for modern supercapacitors (which now have much greater power density in wH/kg)
"Hemp Carbon Makes Supercapacitors Superfast” https://www.asme.org/engineering-topics/articles/energy/hemp...
> “Our device’s electrochemical performance is on par with or better than graphene-based devices,” Mitlin says. “The key advantage is that our electrodes are made from biowaste using a simple process, and therefore, are much cheaper than graphene.”
> Graphene is, however, expensive to manufacture, costing as much as $2,000 per gram. [...] developed a process for converting fibrous hemp waste into a unique graphene-like nanomaterial that outperforms graphene. What’s more, it can be manufactured for less than $500 per ton.
> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.
https://scholar.google.com/scholar?hl=en&q=hemp+supercapacit...
https://en.wikipedia.org/wiki/Supercapacitor
I feel like a broken record mentioning this again and again.
If you're going to be mentioning this again in the future please correct your usage of power/energy density. Power density is measured in W/kg, energy density is measured in Wh/kg. Supercapacitors tend to excel in the former but be poor in the latter. You mentioned power density but used units for energy density. This happens so often in media that I feel the need to correct it even in a comment.
> please correct your usage of power/energy density. Power density is measured in W/kg, energy density is measured in Wh/kg. Supercapacitors tend to excel in the former but be poor in the latter.
I'd update the units; good call. You may have that confused? Traditional supercapacitors have had lower power density and faster charging/discharging. Graphene and hemp somewhat change the game, AFAIU.
It makes sense to put supercapacitors in front of the battery banks because they last so many cycles and because they charge and discharge so quickly (a very helpful capability for handling spiky wind and solar loads).
I think you may still be a little confused. Power density is the rate at which energy can be added to or drawn from the cell per unit mass. So faster charging and discharging means high power density. Energy density is the total amount of energy that can be stored per unit mass. Supercapacitors are typically higher in power density and lower in energy density than batteries[1].
You're right that it makes sense to put supercapacitors in front of the battery banks for the reasons you said.
[1] http://berc.berkeley.edu/storage-wars-batteries-vs-supercapa...
I must have logically assumed that rate of charge and discharge include time (hours) in the unit: Wh/kg.
My understanding is that there's usually a curve over time t that represents the charging rate from empty through full.
[edit]
"C rate"
Battery_(electricity)#C_rate https://en.wikipedia.org/wiki/Battery_(electricity)#C_rate
Battery_charger#C-rates https://en.wikipedia.org/wiki/Battery_charger#C-rates
> Charge and discharge rates are often denoted as C or C-rate, which is a measure of the rate at which a battery is charged or discharged relative to its capacity. As such the C-rate is defined as the charge or discharge current divided by the battery's capacity to store an electrical charge. While rarely stated explicitly, the unit of the C-rate is [h^−1], equivalent to stating the battery's capacity to store an electrical charge in unit hour times current in the same unit as the charge or discharge current.
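The quoted definition can be sketched numerically; the battery values below are illustrative, not from any datasheet:

```python
# C-rate = charge/discharge current divided by capacity, so its unit is h^-1.

def c_rate(current_a, capacity_ah):
    """Charge/discharge rate relative to capacity, in h^-1."""
    return current_a / capacity_ah

rate = c_rate(current_a=1.0, capacity_ah=2.0)
print(rate)        # 0.5 C
print(1.0 / rate)  # hours for an ideal constant-current full charge: 2.0
```

Note that the hours only appear when you invert the C-rate; the rate itself carries no Wh.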
It does sound amazing and economical, like almost too good to be true, but I very much hope it is true. What are the downsides? Is there a degradation problem or something similar? Other than the stoner connection in people's minds, what kind of resistance is there to this? Why isn't it widely known?
You know, I'm not sure. This article is from a few years ago now and there's not much uptake.
It may be that most people dismiss supercapacitors based on the stats for legacy (pre-graphene/pre-hemp) supercapacitors: large but quick and long-lasting.
It may be that hemp is taxed at up to 90% because it's a controlled substance in the US (unlike in Europe, Canada, and China, from which we must import shelled hemp seeds). A historical accident?
GPU Prices Drop ~25% in March as Supply Normalizes
How do these new GPUs compare to those from 10 years ago in terms of FLOPs per Watt? https://en.wikipedia.org/wiki/Performance_per_watt
The new ASICs for Ethereum mining can't be solely responsible for this percent of the market.
(Note that NVIDIA's stock price is up over 1700% over the past 10 years, and that Bitcoin mining on CPUs and GPUs hasn't been profitable for quite a while. In 2007, I don't think we knew that hashing could be done on GPUs; though there were SSL accelerator cards, which were mighty expensive.)
Apple says it’s now powered by renewable energy worldwide
As this article [1] explains, Apple does not (and cannot) actually run on 100% renewable energy globally, as any of its stores/premises/facilities that are connected to local municipal power grids will use whatever power generation method is used on that grid, and that is still likely to be fossil fuel in most locations.
But they purchase Renewable Energy Certificates to offset their use of non-renewable energy, so they can make the claim that their net consumption of non-renewable electricity is negative.
[1] https://www.theverge.com/2018/4/9/17216656/apple-renewable-e...
This is weird. So if I run a wind farm that generates 1 MW(...h? why is this an energy unit and not power? but whatever...) and I use all of that electricity myself, I can also sell 1 REC to someone else so that they claim they run on green electricity. Which means that now either (a) I have to legally claim I run on dirty electricity (which is a lie on its face and make no sense???) or (b) we both claim we run on green power, double-dipping and screwing up the accounting of greenness.
Am I misunderstanding something? How does this work?
No that's how it works. That's also the reason why Norway, despite only using hydro, has only 40-50% renewable energy in some statistics. They sell green energy certificates to consumers abroad (e.g. in Germany). Officially, Norwegians then use coal power whereas in reality it's all hydro power. There isn't even enough transmission capacity to the south to get that kind of exchange physically.
Wow! And I just realized there seems to be another loophole: that means (say) a company like Apple could start a separate power company in Norway based on hydro power, have that company completely waste 100% of the energy it produces there, and yet "buy" the equivalent REC in another jurisdiction where they run on coal and suddenly get to 100% "green" power... potentially even making more money in tax credits, if there are any, all while consuming more and more dirty power without actually helping anybody shift to renewable energy. Right?
I think it comes down to giving them credit for funding the construction of massive clean energy projects, even if they don't exclusively use the electricity from that project themselves.
Look at Microsoft. They just funded a deal to build out an absolutely massive solar farm in Virginia.
https://blogs.microsoft.com/on-the-issues/2018/03/21/new-sol...
They do have facilities in state that will only need a fraction of that power to be 100% green, and the excess will be pumped into the local Virginia power grid and used by consumers there.
Basically, Microsoft is saying that they funded the project to generate excess green energy in one place to offset the dirty energy they consume in areas where there is no local green power option available.
Reaching 100% renewable energy by purchasing and funding renewable energy is an outstanding achievement.
Is there another statistic for measuring how many kWh or MWh are sourced directly from renewable energy sources (or, more logically, 'directly' from batteries + hemp supercapacitors between use and generation)?
Hackers Are So Fed Up with Twitter Bots They’re Hunting Them Down Themselves
This is an interesting approach. Maybe Twitter shouldn't solve the fake accounts problem directly; maybe they should come up with evaluation criteria and then create a market for identifying fake accounts.
If their evaluation criteria are good, they could get away with zero cost to build the best possible system (motivated by competition on a market).
There's an open call for papers/proposals for handling the deluge. "Funding will be provided as an unrestricted gift to the proposer's organization(s)" ... "Twitter Health Metrics Proposal Submission" https://blog.twitter.com/official/en_us/topics/company/2018/...
A real hacker move would be to just leave Twitter and go to Mastodon https://joinmastodon.org
Are you suggesting that Mastodon has a better system for identifying harassment, spam, and spam accounts? Or that, given that they're mostly friendly early adopters, they haven't yet encountered the problem?
It seems to me like you don't understand the crucial difference between Twitter and Mastodon.
There's no such thing as Mastodon, a singular social network. Mastodon is a series of instances that talk to each other. A sysadmin running the instance can do whatever he pleases in his instance, including closing the registration, banning entire instances from communicating with his instance, and enforcing whichever rules he wants to enforce.
Mastodon doesn't deal with such issues at all. It's sysadmins running Mastodon instances that are supposed to deal with such issues.
It's more like reddit, where mods of subreddits have nearly complete authority over their own space on the social network, than it is like Twitter, in which a single entity is in charge.
Mastodon is a federated system like StatusNet/GNU Social.
So, in your opinion, Mastodon nodes - by virtue of being federated - would be better equipped to handle the spam and harassment volume that Twitter is subject to?
I find that hard to believe.
ActivityPub (and OStatus, and ActivityStreams/Salmon, and OpenSocial) are all great specs and great ideas. Hosting and moderation cost real money (which spammers/scammers are wasting).
Know what's also great? Learning. For learning, we have the xAPI/TinCan spec and also schema.org/Action.
“We’re committing Twitter to increase the health and civility of conversation”
First Amendment protections apply to suits brought by the government. Civil suits are required to prove damages ("quantum of loss").
There are many open platforms. (I've contributed to those as well). Some are built on open standards. None of said open platforms have procedures or resources for handling the onslaught of disrespectful trash that the people we've raised eventually use these platforms for communicating at other people who have feelings and understand the Golden Rule.
https://en.wikipedia.org/wiki/Golden_Rule
The initial early adopters (who have other better things to do) are fine: helpful, caring, critical, respectful; healthy. And then everyone else comes surging in with hate, disrespect, and vitriol; unhealthy. They don't even realize that being hateful and disrespectful is making them more depressed. They think that complaining and talking smack to people is changing the world. And then they turn off the phone or log out of the computer, and carry on with their lives.
No-one taught them to be the positive, helpful energy they want to attract from the world. No-one properly conditioned them to either respectfully disagree according to the data or sit down and listen. No-one explained to them that a well-founded argument doesn't fit in 140 or 280 characters, but a link and a headline do. No-one explained to them that what they write on the internet lasts forever and will be found by their future interviewers, investors, jurors, and voters. No-one taught them that being respectful and helpful in service of other people - of the group's success, of peaceful coexistence - is the way to get ahead AND be happy. "No-one told me that."
Shareholders of public corporations want to see growth in meaningless numbers, foreign authoritarian governments see free expression as a threat to their ever-so-fragile self-perceptions, political groups seek to frame and smear and malign and discredit (because they are so in need of group acceptance; because money still isn't making them happy), and there are children with too much free time reading all of these.
No-one is holding these people accountable: we need transparency and accountability. We need to focus on more important goals and feel good about helping; about volunteering our time to help others be happier.
Instead, now that these haters and scam artists have all self-identified, we must spend our time conditioning their communications until they learn to respectfully disagree on facts and data or go somewhere else. "That's how you feel? Great. How does that make your victim feel?" is the confrontation that some people are seeking from companies that set out to serve free speech and provide a forum for citizens to share the actual news.
Who's going to pay for that? Can they sue for their costs and losses? Advertisers do not want a spot next to hateful and disrespectful.
"How dare you speak of censorship in such veiled terms!?" Really? They're talking about taking down phrases like "kill" and "should die"; not phrases like "I disagree because:"
So, now, because there are so many hateful economically disadvantaged people in the world with nothing better to do and no idea how to run a business or keep a job with benefits, these companies need to staff 24 hour a day censors to take down the hate and terror and gang recruiting within one hour. What a distorted mirror of our divisively fractured wealth inequality, indeed.
"Ban gangs ASAP, please: they'll just go away"
How much does it cost to pay prison labor to redundantly respond to this trash? Are those the skills they need to choose a different career with benefits and savings that meet or exceed inflation when they get out?
What is the procedure for referring threats of violence to justice in your jurisdiction? Are there wealthy individuals in your community who would love to contribute resources to this effort? Maybe they have some region-specific pointers for helping the have-nots out here trolling like it's going to get them somewhere they want to be in life?
Let me share a little story with you:
A person walks into a bar/restaurant, flicks off the bartender/waiter, orders 5 glasses of free water, starts plastering ads to the walls and other peoples' tables, starts making threats to groups of people cordially conversing, and walks out.
Gitflow – Animated in React
Thanks! A command log would be really helpful too.
The HubFlow docs contain GitFlow docs and some really helpful diagrams: https://datasift.github.io/gitflow/IntroducingGitFlow.html
I change the release prefix to 'v' so that the git tags for the release look like 'v0.0.1' and 'v0.1.0':
git config --replace-all gitflow.prefix.versiontag v
git config --replace-all hubflow.prefix.versiontag v
I usually use HubFlow instead of GitFlow because it requires there to be a Pull Request; though GitFlow does work when offline / without access to GitHub.
Sure.. will add the command log
Ask HN: How feasible is it to become proficient in several disciplines?
For example to become a professional in:
- back-end api development
- DevOps
- Data Engineer (big data, data science, ML, etc)
It is feasible, though without specializing you risk being a "jack of all trades, master of none". Maybe a title like "Full Stack Data Engineer" would be descriptive.
You could write an OAuth API for accepting and performing analysis of datasets (model fitting / parameter estimation; classification or prediction), write a test suite, write Kubernetes YAML for a load-balanced geodistributed dev/test/prod architecture, and continuously deploy said application (from branch merges, optionally with a manual confirmation step; e.g. with GitLab CI) and still not be an actual Data Engineer.
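The continuous-deployment piece of that description can be sketched as a minimal GitLab CI pipeline. This is a hypothetical `.gitlab-ci.yml`; the manifest path and test commands are placeholders, not from any real project:

```yaml
# Hypothetical pipeline: run the test suite on every push, deploy merges
# to master, with a manual confirmation step before production.
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest

deploy_prod:
  stage: deploy
  script:
    - kubectl apply -f k8s/prod.yaml   # placeholder manifest path
  only:
    - master
  when: manual   # the optional manual confirmation step
```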
After rising for 100 years, electricity demand is flat
Cryptocurrency mining is about to consume more electricity than home usage in Iceland.[1] I assume it's similar in other places that have cheaper electricity.
Seems that power companies should encourage consumers to mine Bitcoin. Problem solved.
[1] https://qz.com/1204840/iceland-will-use-more-electricity-min...
> Seems that power companies should encourage consumers to mine Bitcoin. Problem solved.
Blockchains will likely continue to generate considerable demand for electricity for the foreseeable future.
Blockchain firms can locate where energy is cheapest. Currently that's in countries where energy prices go negative due to excess capacity and insufficient energy storage resources (batteries, [hemp/graphene] supercapacitors, water towers).
With continued demand, energy companies can continue to invest in new clean energy generation alternatives.
Unfortunately, in the current administration's proposed budget, funding for ARPA-E is cancelled and allocated to clean coal; which Canada, France, and the UK are committed to phasing out entirely by ~2030.
A framework for evaluating data scientist competency
Something HTML with local state like "Programmer Competency Matrix" would be great.
http://sijinjoseph.com/programmer-competency-matrix/
Levi Strauss to use lasers instead of people to finish jeans
Chaos Engineering: the history, principles, and practice
awesome-chaos-engineering lists a bunch of chaos engineering resources and tools such as Gremlin: https://github.com/dastergon/awesome-chaos-engineering
Scientists use an atomic clock to measure the height of a mountain
Quantum_clock#More_accurate_experimental_clocks: https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_ex...
> In 2015 JILA evaluated the absolute frequency uncertainty of their latest strontium-87 optical lattice clock at 2.1 × 10−18, which corresponds to a measurable gravitational time dilation for an elevation change of 2 cm (0.79 in) on planet Earth that according to JILA/NIST Fellow Jun Ye is "getting really close to being useful for relativistic geodesy".
AFAIU, this type of geodesy isn't possible with 'normal' time structs. Are nanoseconds enough?
"[Python-Dev] PEP 564: Add new time functions with nanosecond resolution" https://mail.python.org/pipermail/python-dev/2017-October/14...
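The 2 cm figure follows from the weak-field gravitational time-dilation relation Δf/f ≈ gΔh/c². A quick sanity check (assuming nominal g = 9.81 m/s², not a site-specific value):

```python
# Fractional frequency shift for a small height change near Earth's surface.
g = 9.81            # m/s^2, assumed nominal surface gravity
c = 299_792_458.0   # m/s, speed of light
delta_h = 0.02      # m (the 2 cm elevation change quoted above)

shift = g * delta_h / c**2
print(shift)        # ~2.2e-18, comparable to the 2.1e-18 clock uncertainty
```

So a clock with 2.1e-18 fractional uncertainty is indeed at the threshold of resolving a 2 cm height difference.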
The thread you linked makes a good point: there isn't really any reason to care about the actual time, only the relative time. These sorts of clocks just use 256- or 512-bit counters; it's not like they're having overflow issues.
Resources to learn project management best practices?
My side project is beginning to attract interest from a few people who would like to hop on board. At this point I am just doing what feels familiar and sensible, but the project manager perspective is new to me. Are there any sort of articles/books/podcasts/etc that could clue me into how to become better at it?
Project Management: https://wrdrd.github.io/docs/consulting/software-development... ... #requirements-traceability, #work-breakdown-structure (Mission, Project, Goal/Objective #n; Issue #n, - [ ] Task)
"Ask HN: How do you, as a developer, set measurable and actionable goals?" https://westurner.github.io/hnlog/#story-15119635
- Burndown Chart, User Stories
... GitHub and GitLab have milestones and reorderable issue boards. I still like https://waffle.io for complexity points; though you can also just create labels for e.g. complexity (Complexity-5) and priority (Priority-5).
Ask HN: Thoughts on a website-embeddable, credential validating service?
Reading Troy Hunt's password release V2 blog post [0], I came across the NIST recommendation to prevent users from creating accounts with passwords discovered in data breaches. This got me thinking: would a website admin (ex. small business owner with a custom website) benefit from a service that validates user passwords? The idea is to create a registration iframe with forms for email, password, etc., which would check hashed credentials against a database of data from breaches. Additionally, client-side validation would enforce rules recommended by the NIST's Digital Identity Guidelines [1], which would relieve admins from implementing their own rules. I'm sure there are additional security features that can be added.
1. Have you seen a need for this type of service, and could you see this being adopted at all?
2. Do you know of a service like this? I've looked, no hits so far.
3. Does the architecture seem sound?
[0]: https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/
[1]: https://www.nist.gov/itl/tig/projects/special-publication-800-63
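For reference, Pwned Passwords V2 exposes a k-anonymity range API: the client sends only the first 5 hex characters of the password's SHA-1 and matches the returned suffixes locally, so the server never sees the full hash. A minimal sketch of the client-side split (no network call here; the endpoint in the comment is from Troy Hunt's post):

```python
import hashlib

def pwned_range_query(password):
    """Split a password's SHA-1 into the 5-char prefix sent to the server
    and the suffix matched locally against the server's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # A real client would then fetch
    #   https://api.pwnedpasswords.com/range/<prefix>
    # and check whether <suffix> appears in the returned list.
    return prefix, suffix

prefix, suffix = pwned_range_query("password")
print(prefix)   # "password" is a well-known breached credential
```

A validating service built this way never needs to receive the user's plaintext password at all.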
blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates/cert-verifier-js
> A library to enable parsing and verifying a Blockcert. This can be used as a node package or in a browser. The browserified script is available as verifier.js.
https://github.com/blockchain-certificates/cert-issuer
> The cert-issuer project issues blockchain certificates by creating a transaction from the issuing institution to the recipient on the Bitcoin blockchain that includes the hash of the certificate itself.
... We could/should also store X.509 cert hashes in a blockchain.
Exactly what part of such a service would benefit from anything related to a blockchain?
Are you asking me why blockcerts stores certs in a blockchain?
Or whether using certs (really long passwords) is a better option than submitting unhashed passwords on a given datetime to a third-party in order to make sure they're not in the pwned passwords tables?
I was just reading about a company trying to make self-sovereign identity including actual certs (like degrees and such) an accessible and widely applicable/acceptable technology using the Ethereum blockchain. I thought it showed some real practicality and promise. I believe it begins with U- forgot the name.
Perhaps UPort? Anyhow, I'd be interested in hearing from anyone here about why that might be a bad or good idea. I don't personally have the skill in that tech to know.
Known Traveler Digital Identity system is a "new model for airport screening and security that uses biometrics, cryptography and distributed ledger technologies."
Blockcerts are for academic credentials, AFAIU.
[EDIT]
Existing blockchains have a limited TPS (transactions per second) for writes; but not for reads. Sharding and layer-2 (sidechains) do not have the same assurances. I'm sure we all remember how cryptokitties congested the txpool during the Bitcoin futures launch.
Thank you. I looked into "clear", one of the airport known traveler ID systems. (I'm assuming there are others) It's pretty cool/concerning. Takes ~ 8 min to load a traveler into its system. Thanks for reminding me of TPS---the info on read vs write.
Ask HN: What's the best algorithms and data structures online course?
These aren't courses, but from answers to "Ask HN: Recommended course/website/book to learn data structure and algorithms" :
Data Structure: https://en.wikipedia.org/wiki/Data_structure
Algorithm: https://en.wikipedia.org/wiki/Algorithm
Big O notation: https://en.wikipedia.org/wiki/Big_O_notation
Big-O Cheatsheet: http://bigocheatsheet.com
Coding Interview University > Data Structures: https://github.com/jwasham/coding-interview-university/blob/...
OSSU: Open Source Society University > Core CS > Core Theory > "Algorithms: Design and Analysis, Part I" [&2] https://github.com/ossu/computer-science/blob/master/README....
"Algorithms, 4th Edition" (2011; Sedgewick, Wayne): https://algs4.cs.princeton.edu/
Complexity Zoo > Petting Zoo (P, NP,): https://complexityzoo.uwaterloo.ca/Petting_Zoo
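As a concrete example of what the asymptotic classes in these resources describe, here's linear vs. binary search over a sorted list (binary search via the stdlib bisect module):

```python
import bisect

def linear_search(sorted_list, target):
    # O(n): inspect items one by one
    for i, value in enumerate(sorted_list):
        if value == target:
            return i
    return -1

def binary_search(sorted_list, target):
    # O(log n): halve the search space each step
    i = bisect.bisect_left(sorted_list, target)
    if i < len(sorted_list) and sorted_list[i] == target:
        return i
    return -1

data = list(range(0, 1000, 2))  # sorted even numbers
assert linear_search(data, 500) == binary_search(data, 500) == 250
assert linear_search(data, 501) == binary_search(data, 501) == -1
```

Both return the same answers; the difference only shows up in step counts as the list grows, which is exactly what Big-O notation captures.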
While perusing awesome-awesomeness [1], I found awesome-algorithms [2] , algovis [3], and awesome-big-o [4].
[1] https://github.com/bayandin/awesome-awesomeness
[2] https://github.com/tayllan/awesome-algorithms
Using Go as a scripting language in Linux
I, too, didn't realize that shebang parsing is implemented in the `binfmt_script` kernel module.
Does this persist across reboots?
echo ':golang:E::go::/usr/local/bin/gorun:OC' | sudo tee /proc/sys/fs/binfmt_misc/register
No, but different init systems may autoload formats based on some configuration files. Systemd, for instance: https://www.freedesktop.org/software/systemd/man/systemd-bin...
Guidelines for enquiries regarding the regulatory framework for ICOs [pdf]
This is a helpful table indicating, for each of Payment, Utility, Asset, and Hybrid coins/tokens, whether it is a security and whether it qualifies under Swiss AML payment law.
The "Minimum information requirements for ICO enquiries" appendix seems like a good set of questions for evaluating ICOs. Are there other good questions to ask when considering whether to invest in a Payment, Utility, Asset, or Hybrid ICO?
Are US regulations different from these clear and helpful regulatory guidelines for ICOs in Switzerland?
> Are there other good questions to ask when considering whether to invest in a Payment, Utility, Asset, or Hybrid ICO?
This paper doesn't seem to cover that; it only addresses how regulators should treat investments.
On investing in ICOs, the questions are the same as for any other IPO. And, in most cases, it's just speculation.
The Benjamin Franklin method for learning more from programming books
> Read your programming book as normal. When you get to a code sample, read it over
> Then close the book.
> Then try to type it up.
This method matches a passage in "The Autobiography of Benjamin Franklin" (1791) about reconstructing essays from "The Spectator":
https://en.wikipedia.org/wiki/The_Autobiography_of_Benjamin_...
Avoiding blackouts with 100% renewable energy
I notice that cases A and C require batteries for storage.
Should there be a separate entry for new gen supercapacitors? Supercapacitors built with both graphene and hemp have different Max Charge Rate (GW), Max Discharge Rate (GW), and Storage (TWh) capacities than even future-extrapolated batteries and current supercapacitors.
https://en.wikipedia.org/wiki/Supercapacitor
The cost and capabilities stats in this article look very promising:
"Hemp Carbon Makes Supercapacitors Superfast" https://www.asme.org/engineering-topics/articles/energy/hemp...
> “Our device’s electrochemical performance is on par with or better than graphene-based devices,” Mitlin says. “The key advantage is that our electrodes are made from biowaste using a simple process, and therefore, are much cheaper than graphene.”
> Graphene is, however, expensive to manufacture, costing as much as $2,000 per gram. [...] developed a process for converting fibrous hemp waste into a unique graphene-like nanomaterial that outperforms graphene. What’s more, it can be manufactured for less than $500 per ton.
> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.
https://scholar.google.com/scholar?hl=en&q=hemp+supercapacit...
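A quick check of the quoted power-density figures:

```python
# Back-of-envelope check of the quoted power densities:
hemp_kw_per_kg = 49       # hemp-derived carbon electrodes (quoted)
standard_kw_per_kg = 17   # standard commercial electrodes (quoted)

ratio = hemp_kw_per_kg / standard_kw_per_kg
print(f"{ratio:.2f}x")    # ~2.88x, i.e. "nearly triple"
assert 2.8 < ratio < 3.0
```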
To be clear, supercapacitors are an alternative to li-ion batteries.
"Matching demand with supply at low cost in 139 countries among 20 world regions with 100% intermittent wind, water, and sunlight (WWS) for all purposes" (Renewable Energy, 2018) https://web.stanford.edu/group/efmh/jacobson/Articles/I/Comb...
Ask HN: What are some common abbreviations you use as a developer?
These are called 'codelabels'. They're great for prefix-tagging commit messages, pull requests, and todo lists:
BLD: build
BUG: bug
CLN: cleanup
DOC: documentation
ENH: enhancement
ETC: config
PRF: performance
REF: refactor
RLS: release
SEC: security
TST: test
UBY: usability
DAT: data
SCH: schema
REQ: requirement
REQ: request
ANN: announcement
STORY: user story
EPIC: grouping of user stories
There's a table of these codelabels here: https://wrdrd.github.io/docs/consulting/software-development...
Someday TODO FIXME XXX I'll get around to:
- [ ] DOC: create a separate site/organization for codelabels
- [ ] ENH: a tool for creating/renaming GitHub labels with unique foreground and background colors
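A hypothetical sketch of enforcing these prefixes on commit messages (e.g. in a commit-msg hook); the label set mirrors the list above, and the function name is my own invention:

```python
# Codelabels from the list above:
CODELABELS = {
    "BLD", "BUG", "CLN", "DOC", "ENH", "ETC", "PRF", "REF", "RLS",
    "SEC", "TST", "UBY", "DAT", "SCH", "REQ", "ANN", "STORY", "EPIC",
}

def codelabel(message):
    """Return the codelabel prefix of a commit message, or None."""
    label, sep, _rest = message.partition(":")
    return label if sep and label in CODELABELS else None

assert codelabel("DOC: add install notes") == "DOC"
assert codelabel("fix typo") is None
```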
YAGNI: Ya' ain't gonna need it
LOL, lulz
DRY: Don't Repeat Yourself
KISS: Keep It Super Simple
MVC: Model-View-Controller
MVT: Model-View-Template
MVVM: Model-View-View-Model
UI: User Interface
UX: User Experience
GUI: Graphical User Interface
CLI: Command Line Interface
CAP: Consistency, Availability, Partition tolerance
DHT: Distributed Hash Table
ETL: Extract, Transform, and Load
ESB: Enterprise Service Bus
MQ: Message Queue
VM: Virtual Machine
LXC: Linux Containers
[D]VCS, RCS: [Distributed] Version/Revision Control System
XP: Extreme Programming
CI: Continuous Integration
CD: Continuous Deployment
TDD: Test-Driven Development
BDD: Behavior-Driven Development
DFS, BFS: Depth/Breadth First Search
CRM: Customer Relationship Management
CMS: Content Management System
LMS: Learning Management System
ERP: Enterprise Resource Planning system
HTTP: Hypertext Transfer Protocol
HTTP STS: HTTP Strict Transport Security
REST: Representational State Transfer
API: Application Programming Interface
HTML: Hypertext Markup Language
DOM: Document Object Model
LD: Linked Data
LOD: Linked Open Data
URI: Uniform Resource Identifier
URN: Uniform Resource Name
URL: Uniform Resource Locator
UUID: Universally Unique Identifier
RDF: Resource Description Framework
RDFS: RDF Schema
OWL: Web Ontology Language
JSON-LD: JSON Linked Data
JSON: JavaScript Object Notation
CSVW: CSV on the Web
CSV: Comma Separated Values
CIA: Confidentiality, Integrity, Availability
ACL: Access Control List
RBAC: Role-Based Access Control
MAC: Mandatory Access Control
CWE: Common Weakness Enumeration
CVE: Common Vulnerabilities and Exposures
XSS: Cross-Site Scripting
CSRF: Cross-Site Request Forgery
SQLi: SQL Injection
ORM: Object-Relational Mapping
AUC: Area Under Curve
ROC: Receiver Operating Characteristic
DL: Description Logic
RL: Reinforcement Learning
CNN: Convolutional Neural Network
DNN: Deep Neural Network
IS: Information Systems
ROI: Return on Investment
RPU: Revenue per User
MAU: Monthly Active Users
DAU: Daily Active Users
STEM: Science, Technology, Engineering, Mathematics/Medicine
STEAM: STEM + Arts
W3C: World-Wide-Web Consortium
GNU: GNU's not Unix
WRDRD: WRD R&D
... The Sphinx ``.. index::`` directive makes it easy to include index entries for acronym forms, too https://wrdrd.github.io/docs/genindex
There Might Be No Way to Live Comfortably Without Also Ruining the Planet
"A good life for all within planetary boundaries" (2018) https://www.nature.com/articles/s41893-018-0021-4
> Abstract: Humanity faces the challenge of how to achieve a high quality of life for over 7 billion people without destabilizing critical planetary processes. Using indicators designed to measure a ‘safe and just’ development space, we quantify the resource use associated with meeting basic human needs, and compare this to downscaled planetary boundaries for over 150 nations. We find that no country meets basic needs for its citizens at a globally sustainable level of resource use. Physical needs such as nutrition, sanitation, access to electricity and the elimination of extreme poverty could likely be met for all people without transgressing planetary boundaries. However, the universal achievement of more qualitative goals (for example, high life satisfaction) would require a level of resource use that is 2–6 times the sustainable level, based on current relationships. Strategies to improve physical and social provisioning systems, with a focus on sufficiency and equity, have the potential to move nations towards sustainability, but the challenge remains substantial.
> "Radical changes are needed if all people are to live well within the limits of the planet," [...]
> "These include moving beyond the pursuit of economic growth in wealthy nations, shifting rapidly from fossil fuels to renewable energy, and significantly reducing inequality.
> "Our physical infrastructure and the way we distribute resources are both part of what we call provisioning systems. If all people are to lead a good life within the planet's limits then these provisioning systems need to be fundamentally restructured to allow for basic needs to be met at a much lower level of resource use."
Perhaps ironically, our developments in service of sustainability (resource efficiency) needs for a civilization on Mars are directly relevant to solving these problems on Earth.
Recycle everything.
Survive without soil, steel, hydrocarbons, animals, oxygen.
Convert CO2, sunlight, H2O, and geothermal energy into forms necessary for life.
https://en.wikipedia.org/wiki/Colonization_of_Mars
Algae, carbon capture, carbon sequestration, lab grown plants, water purification, solar power, [...]
Mars requires a geomagnetic field in order to sustain an atmosphere in order to [...].
"The Limits to Growth" (1972, 2004) [1] very clearly forecasts these same unsustainable patterns of resource consumption: 'needs' which exceed and transgress our planetary biophysical boundaries.
The 17 UN Sustainable Development Goals (#GlobalGoals) [2] outline our worthwhile international objectives (Goals, Targets, and Indicators). The Paris Agreement [3] sets targets and asks for commitments from nation states (and businesses) to help achieve these goals most efficiently and most sustainably.
In the US, the Clean Power Plan [4] was intended to redirect our national resources toward renewable energy with far less external costs. Direct and indirect subsidies for nonrenewables are irrational. Are subsidies helpful or necessary to reach production volumes of renewable energy products and services?
There are certainly financial incentives for anyone who chooses to invest in solving for the Global Goals; and everyone can!
[1] https://en.wikipedia.org/wiki/The_Limits_to_Growth
[2] http://www.un.org/sustainabledevelopment/sustainable-develop...
Multiple GWAS finds 187 intelligence genes and role for neurogenesis/myelination
> We found evidence that neurogenesis and myelination—as well as genes expressed in the synapse, and those involved in the regulation of the nervous system—may explain some of the biological differences in intelligence.
re: nurture, hippocampal plasticity and hippocampal neurogenesis also appear to be affected by dancing and omega-3,6 (which are transformed into endocannabinoids by the body): https://news.ycombinator.com/item?id=15109698
Could we solve blockchain scaling with terabyte-sized blocks?
These numbers in a computational model (or even Jupyter notebooks) would be useful.
We may indeed need fractional satoshis ('naks').
With terabyte blocks, lightning network would be unnecessary: at least for TPS.
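A back-of-envelope model of that claim, with assumed parameters (the average transaction size in particular is a guess, not from the article):

```python
# Rough throughput model for terabyte blocks (assumed numbers):
block_size_bytes = 1e12      # 1 TB block
avg_tx_bytes = 250           # assumed average transaction size
block_interval_s = 600       # Bitcoin's ~10-minute block target

tx_per_block = block_size_bytes / avg_tx_bytes
tps = tx_per_block / block_interval_s
print(f"{tps:,.0f} TPS")     # ~6.7 million TPS under these assumptions
assert 6e6 < tps < 7e6
```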
There will need to be changes to account for quantum computing capabilities somewhere in the future timeline of Bitcoin (and everything else in banking and value-producing industry): probably a different hash function rather than just a routine difficulty increase (and definitely something other than ECDSA, though signing isn't a primary cost). $1.3m/400k a year to operate a terabyte mining rig with 50Gbps bandwidth would affect decentralization; though maybe not any more than it is already affected now.
https://en.bitcoin.it/wiki/Weaknesses#Attacker_has_a_lot_of_... (51%)
Confidence intervals for these numbers would be useful.
Casper PoS and beyond may also affect future Bitcoin volume estimates.
Ask HN: Do you have ADD/ADHD? How do you manage it?
Also, how has it affected your CS career? I feel that transitioning to management would help, as it does not require lengthy periods of concentration, but rather distributed attention for shorter periods.
Music. Headphones. Chillstep, progressive, chillout etc. from di.fm. Long mixes from SoundCloud with and without vocals. "Instrumental"
Breathe in through the nose and out through the mouth.
Less sugar and processed foods. Though everyone has a different resting glucose level.
Apparently it's called alpha-pinene.
Fidget things. Rubberband, paperclip.
The Pomodoro Technique: work 25 minutes, chill for 5 (and look at something at least 20 feet away (20-20-20 rule))
Lists. GTD. WBS.
Exercise. Short walks.
Ask HN: How to understand the large codebase of an open-source project?
Hello All!
What techniques have you used to learn and understand a large codebase? What tools do you use?
What is the best way to learn to code from absolute scratch?
We have been hosting a Ugandan refugee in our home in Oakland for the past 9 months and he wants to learn how to code.
Where is the best place for him to start from absolute scratch? What resources can we point him to? Who can help?
Here's an answer to a similar question: "Ask HN: How to introduce someone to programming concepts during 12-hour drive?" https://news.ycombinator.com/item?id=15454421
https://learnxinyminutes.com/docs/python3/ (Python3)
https://learnxinyminutes.com/docs/javascript/ (Javascript)
https://learnxinyminutes.com/docs/git/ (Git)
https://learnxinyminutes.com/docs/markdown/ (Markdown)
Read the docs. Read the source. Write docstrings. Write automated tests: that's the other half of the code.
Keep a journal of your knowledge as e.g. Markdown or ReStructuredText; regularly pull the good ones from bookmarks and history into an outline.
I keep a tools reference doc with links to Wikipedia, Homepage, Source, Docs: https://wrdrd.github.io/docs/tools/
And a single-page log of my comments: https://westurner.github.io/hnlog/
> To get a job, "Coding Interview University": https://github.com/jwasham/coding-interview-university
Wow, thank you so much. This is incredibly useful.
Tesla racing series: Electric cars get the green light – Roadshow
Tesla Racing Circuit ideas for increasing power discharge rate, reducing heat, and reducing build weight:
Hemp supercapacitors (similar power density as graphene supercapacitors and li-ion, lower cost than graphene)
Active cooling. Modified passive cooling.
Biocomposite frame and panels (stronger and lighter than steel and aluminum (George Washington Carver))
> Biocomposite frame and panels (stronger and lighter than steel and aluminum (George Washington Carver))
"Soybean Car" (1941) https://en.wikipedia.org/wiki/Soybean_car
What happens if you have too many jupyter notebooks?
These days there is a tendency in data analysis to use Jupyter Notebooks. But what happens if you have too many jupyter notebooks? For example, there are more than a hundred.
Actually, you start creating some modules. However, it is less convenient to work with them compared to what was before. It happens that you should code in web interface, somewhere in similar to the notepad++ form or you should change your IDLE.
Personally, I work in Pycharm and so far I couldn't assess remote interpreter or VCS. It is because pickle files or word2vec weighs too much (3gb+) and so I don't want to download/upload them. Also Jupyter isn't cool in pycharm.
Do you have better practices in your companies? How to correctly adjust IDLE? Do you know about any possible substitution for the IPython notebook in the world of data analysis?
> what happens if you have too many jupyter notebooks? For example, there are more than a hundred.
Like anything else, Jupyter Notebook is limited by the CPU and RAM of the system hosting the Tornado server and Jupyter kernels.
At 100 notebooks (or even just one), it may be a good time to factor common routines into a packaged module with tests and documentation.
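A minimal sketch of that refactoring (module and function names are hypothetical): a routine moved out of notebooks into a package becomes importable and testable everywhere:

```python
# mypkg/cleaning.py -- hypothetical module factored out of notebooks
def drop_blank_rows(rows):
    """Drop rows whose cells are all empty or whitespace-only strings."""
    return [r for r in rows if any(str(c).strip() for c in r)]

# tests/test_cleaning.py -- the routine is now testable outside Jupyter
def test_drop_blank_rows():
    assert drop_blank_rows([("a", 1), ("", ""), ("  ",)]) == [("a", 1)]

test_drop_blank_rows()
```

Notebooks then just `from mypkg.cleaning import drop_blank_rows`, instead of copy-pasting the cell a hundred times.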
It's actually possible (though inefficient) to import code from Jupyter notebooks with ipython/ipynb (pypi:ipynb): https://github.com/ipython/ipynb ( https://jupyter-notebook.readthedocs.io/en/stable/examples/N... )
> Actually, you start creating some modules. However, it is less convenient to work with them compared to what was before. It happens that you should code in web interface, somewhere in similar to the notepad++ form or you should change your IDLE.
The Spyder IDE has support for .ipynb notebooks converted to .py (which have the IPython prompt markers in them). Spyder can connect an interpreter prompt to a running IPython/Jupyter kernel. There's also a Spyder plugin for Jupyter Notebook: https://github.com/spyder-ide/spyder-notebook
> Personally, I work in Pycharm and so far I couldn't assess remote interpreter or VCS. It is because pickle files or word2vec weighs too much (3gb+) and so I don't want to download/upload them.
Remote data access times can be made faster by increasing the space efficiency of the storage format, increasing the bandwidth of the connection, moving the data to the code, or moving the code to the data.
> Do you have better practices in your companies?
There are a number of [Reproducible] Data Science cookiecutter templates which have a directory for notebooks, module packaging, and Sphinx docs: https://cookiecutter.readthedocs.io/en/latest/readme.html#da...
Refactoring increases testability and code reuse.
> How to correctly adjust IDLE?
I don't think I understand the question?
"Configuring IPython" https://ipython.readthedocs.io/en/stable/config/index.html
Jupyter > "Installation, Configuration, and Usage" https://jupyter.readthedocs.io/en/latest/projects/content-pr...
> Do you know about any possible substitution for the IPython notebook in the world of data analysis?
From https://en.wikipedia.org/wiki/Notebook_interface :
> > "Examples of the notebook interface include the Mathematica notebook, Maple worksheet, MATLAB notebook, IPython/Jupyter, R Markdown, Apache Zeppelin, Apache Spark Notebook, and the Databricks cloud."
There are lots of Jupyter kernels for different tools and languages (over 100; including for other 'notebook interfaces'): https://github.com/jupyter/jupyter/wiki/Jupyter-kernels
And there are lots of Jupyter integrations and extensions: https://github.com/quobit/awesome-python-in-education/blob/m...
Cancer ‘vaccine’ eliminates tumors in mice
The article is about this study:
"Eradication of spontaneous malignancy by local immunotherapy" http://stm.sciencemag.org/content/10/426/eaan4488
> In situ vaccination with low doses of TLR ligands and anti-OX40 antibodies can cure widespread cancers in preclinical models.
Boosting teeth’s healing ability by mobilizing stem cells in dental pulp
Tideglusib
https://en.wikipedia.org/wiki/Tideglusib
> "Promotion of natural tooth repair by small molecule GSK3 antagonists" https://www.nature.com/articles/srep39654
> [...] Here we describe a novel, biological approach to dentine restoration that stimulates the natural formation of reparative dentine via the mobilisation of resident stem cells in the tooth pulp.
This Biodegradable Paper Donut Could Let Us Reforest the Planet
"These drones can plant 100,000 trees a day" https://news.ycombinator.com/item?id=16260892
Drones that can plant 100k trees a day
> It’s simple maths. We are chopping down about 15 billion trees a year and planting about 9 billion. So there’s a net loss of 6 billion trees a year.
This is a regional thing [0] though. We need to plant the trees in Latin America, Caribbean, and Sub-Saharan Africa. The rest of the world is gaining forests.
[0] http://www.telegraph.co.uk/news/2016/03/23/deforestation-whe...
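Extending the quoted arithmetic (assuming continuous, year-round operation, which is optimistic):

```python
# How many such drones would offset the quoted net loss?
net_loss_per_year = 6e9          # trees/year, from the quote
trees_per_drone_day = 100_000    # from the headline

drones_needed = net_loss_per_year / (365 * trees_per_drone_day)
print(f"~{drones_needed:.0f} drones running year-round")  # ~164
assert 164 <= drones_needed < 165
```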
Now, the question is which industry sector cuts down the largest share of trees. And, having answered that, is there a way to do the same thing without cutting trees?
Answering that question and then executing a business plan on it is probably worth billions.
Planting trees to offset the 6 billion trees lost every year is a pure expense, with no profit to be made at all; at least not unless you cut them down a few decades later.
A very good way to absorb atmospheric carbon is to plant new trees and cut down and use old ones for anything else but burning it.
"This Biodegradable Paper Donut Could Let Us Reforest The Planet" https://news.ycombinator.com/item?id=16261101
I think this kind of thing is funny for us westerners. I recently found this tech has been used in arid countries for possibly hundreds or thousands of years.
https://duckduckgo.com/?q=ollas+gardening&t=lm&iax=images&ia...
I do realize the ollas are for more permanent gardens, but it's the same concept. I do hope the paper version gets used, there are plenty of places that could benefit from it.
What are some YouTube channels to progress into advanced levels of programming?
There are some cool YouTube channel suggestions on https://news.ycombinator.com/item?id=16224165 But I wanted to know which of those are great to progress into advanced level of programming? Which of the channels teach advanced techniques?
These aren't channels, but a number of the links are marked with "(video)":
https://github.com/jwasham/coding-interview-university
https://github.com/jwasham/coding-interview-university/blob/...
Thanks
Multiple issue and pull request templates
+1
Default: /ISSUE_TEMPLATE.md
/ISSUE_TEMPLATE/<name>.md
Default: /PULL_REQUEST_TEMPLATE.md
/PULL_REQUEST_TEMPLATE/<name>.md
You can leave the last S for off (for savings) if you want /ISSUE_TEMPLATE(S)/ or /PULL_REQUEST_TEMPLATE(S)/
Five myths about Bitcoin’s energy use
Regarding the proof-of-stake part: Ethereum devs are also working on sharding, which will make scaling on the Ethereum blockchain much easier. I really think that Ethereum will be the No. 1 cryptocurrency in the near future. Bitcoin devs have shown plenty of times that they are not capable of keeping pace with demand (they couldn't even get the block size thing right). Bitcoin is virtually dead; no one can really use it for real-world transactions. Plus, Bitcoin is in reality a very centralized coin, completely in the hands of the miners. Even if the Bitcoin devs decided to go with PoS, the miners wouldn't agree.
Proof of Work (Bitcoin*, ...), Proof of Stake (Ethereum Casper), Proof of Space, Proof of Research (GridCoin, CureCoin,)
Plasma (Ethereum) and Lightning Network (BitCoin (SHA256), Litecoin (scrypt),) will likely offload a significant amount of transaction volume and thereby reduce the kWh/transaction metrics.
> But electricity costs matter even more to a Bitcoin miner than typical heavy industry. Electricity costs can be 30-70% of their total costs of operation.
> [...] If Bitcoin mining really does begin to consume vast quantities of the global electricity supply it will, it follows, spur massive growth in efficient electricity production—i.e. the green energy revolution. Moore’s Law was partially a story about incredible advances in materials science, but it was also a story about incredible demand for computing that drove those advances and made semiconductor research and development profitable. If you want to see a Moore’s-Law-like revolution in energy, then you should be rooting for, and not against, Bitcoin. The fact is that the Bitcoin network, right now, is providing a $200,000 bounty every 10 minutes (the mining reward) to the person who can find the cheapest energy on the planet.
This is ridiculous. The economy is already incentivized to find cheaper electricity by forces far more powerful than Bitcoin. By this logic, you would defend a craze for trying to boil the sea.
If the market had internalized the external health, environmental, and defense costs of nonrenewable energy, we would already have cheap, plentiful renewable energy. But we don't: the market is failing to optimize for factors other than margin. (New Keynesian economics admits market failure, but not non-rationality.)
So, (speculative_valuation - cost) is the margin. Whereas with a stock in a leveraged high-frequency market with shorting, (shareholder_equity - market_cap) is explainable in terms of the market information that is shared.
So, it's actually (~$200K-(n_kwhrs*cost_kwhr)) for whoever wins the block mining lottery (which is about every 10 minutes and can be anyone who's mining).
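Plugging illustrative numbers into that expression (all values assumed; a real miner pays only for their own rigs' energy, and prices vary widely):

```python
# Illustrative per-block margin for the winning miner (assumed numbers):
block_reward_usd = 200_000   # ~12.5 BTC reward, per the article
kwh_per_block = 1_000_000    # assumed kWh spent per 10-minute block
cost_per_kwh = 0.05          # assumed $/kWh for a cheap-power miner

margin = block_reward_usd - kwh_per_block * cost_per_kwh
print(margin)  # 150000.0 under these assumptions
```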
But the point about Bitcoin maintaining demand for energy while we move to competitive, lower-cost renewable energy and greater efficiency is a good one.
What we should hope to see is the blockchain industry directly investing in clean energy capacity development in order to rationally minimize their primary costs and maximize environmental sustainability.
It is kind of missing the point though. If everyone was competing to make electrical single-person planes to help people commute to work it would also increase demand for renewable/electrical energy - but can we agree, that just using electrical cars instead of planes is going to take a lot less electricity overall?
Yes, and then energy prices would decrease due to less demand. Blockchain energy usage maintains demand for energy; which keeps prices high enough that production of renewables can profitably compete with nonrenewables while we reach production volumes of solar, wind, and hemp supercapacitors for grid storage.
> Throughout the first half of 2008, oil regularly reached record high prices.[2][3][4][5] Prices on June 27, 2008, touched $141.71/barrel, for August delivery in the New York Mercantile Exchange [...] The highest recorded price per barrel maximum of $147.02 was reached on July 11, 2008.
At that price, there's more demand for renewables (such as electric vehicles and solar panels)
> Since late 2013 the oil price has fallen below the $100 mark, plummeting below the $50 mark one year later.
https://en.wikipedia.org/wiki/World_oil_market_chronology_fr...
... Energy costs and inflation are highly covariate. (Trouble is, CPI All rarely ever goes back down)
Bitcoin's energy use will tend to rise to match mining profits. As long as both BTC's price and transaction fees keep rising, the energy miners spend will continue to rise. BTC's price is going up due to speculation, and transaction fees due to blocks being full.
Even at current levels, BTC's built-in block rewards dominate tx fees (12.50 BTC built-in + 1.38 BTC in tx fees). The 12.5 BTC built-in reward is essentially a subsidy for mining.
Ask HN: Which programming language has the best documentation?
Ask HN: Recommended course/website/book to learn data structure and algorithms
I am a full-time Android developer who does most of his programming work in Java. I am a non CS graduate so didn't study Data structure and algorithms course in university so I am not familiar with this subject which is hindering my prospect of getting better programming jobs. There are so many resources out there on this subject that I am unable to decide which one is the best for my case. Could someone please point me out in the right direction. Thanks.
Data Structure: https://en.wikipedia.org/wiki/Data_structure
Algorithm: https://en.wikipedia.org/wiki/Algorithm
Big O notation: https://en.wikipedia.org/wiki/Big_O_notation
Big-O Cheatsheet: http://bigocheatsheet.com
Coding Interview University > Data Structures: https://github.com/jwasham/coding-interview-university/blob/...
OSSU: Open Source Society University > Core CS > Core Theory > "Algorithms: Design and Analysis, Part I" [&2] https://github.com/ossu/computer-science/blob/master/README....
"Algorithms, 4th Edition" (2011; Sedgewick, Wayne): https://algs4.cs.princeton.edu/
Complexity Zoo > Petting Zoo (P, NP,): https://complexityzoo.uwaterloo.ca/Petting_Zoo
While perusing awesome-awesomeness [1], I found awesome-algorithms [2] , algovis [3], and awesome-big-o [4].
[1] https://github.com/bayandin/awesome-awesomeness
[2] https://github.com/tayllan/awesome-algorithms
Why is quicksort better than other sorting algorithms in practice?
The top-voted response is helpful: throwing away constants, as Big-O notation does, can be misleading, and worst-case bounds aren't the typical case in practice; see Sedgewick's Algorithms book.
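To make that concrete, here's a minimal (not in-place) quicksort sketch; average O(n log n), worst case O(n^2) on adversarial pivots. Real library sorts use tuned hybrids (introsort, Timsort) instead:

```python
import random

def quicksort(xs):
    """Minimal quicksort: partition around a pivot, recurse on each side."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left) + mid + quicksort(right)

data = [random.randint(0, 99) for _ in range(50)]
assert quicksort(data) == sorted(data)
```

Quicksort's small constant factors and cache-friendly partitioning are exactly the details that Big-O hides and that make it fast in practice.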
ORDO: a modern alternative to X.509
There are a number of W3C specs for this type of thing.
Linked Data Signatures (ld-signatures) relies upon a graph canonicalization algorithm that works with any RDF format (RDF/XML, JSON-LD, Turtle,)
> The signature mechanism can be used across a variety of RDF data syntaxes such as JSON-LD, N-Quads, and TURTLE, without the need to regenerate the signature
https://w3c-dvcg.github.io/ld-signatures/
A defined way to transform ORDO to RDF would be useful for WoT graph applications.
WebID can express X509 certs with the cert ontology. {cert:X509Certificate, cert:PGPCertificate,} rdfs:subClassOf cert:Certificate
https://www.w3.org/ns/auth/cert
https://www.w3.org/2005/Incubator/webid/spec/
ld-signatures is newer than WebID.
(Also, we should put certificates in a blockchain; just like Blockcerts (JSON-LD))
Wine 3.0 Released
Kimbal Musk is leading a $25M mission to fix food in US schools
Spinzero – A Minimal Jupyter Notebook Theme
+1. The Computer Modern serif fonts look legit. Like LaTeX legit.
Now, if we could make the fonts unscalable and put things in two columns (in order to require extra scrolling and 36 character wide almost-compiling copy-and-pasted code samples without syntax highlighting) we'd be almost there!
I searched for Computer Modern fonts and they're all available here: http://canopus.iacp.dvo.ru/~panov/cm-unicode/
I am surprised that these beauties are not widely adopted on websites and such. I agree; they just look very disciplined and professional.
I'd hope someday these relics are hosted on Google Fonts.
What does the publishing industry bring to the Web?
Q: What does the publishing industry bring to the Web?
A: PDF hosting, comments, a community of experts
FWIU, Publishing@W3C proposes WPUB [1] instead of PDF or MHTML for 'publishing' http://schema.org/ScholarlyArticle .
How do WPUB canonical identifiers (which reference/redirect(?) to the latest version of the resource) work with W3C Web Annotations attached to e.g. sentences within a resource identified with a URI? When the document changes, what happens to the attached comments? This is also a problem with PDFs: with a filename like document-20180111-v01.pdf and a stable(!) URL like http://example.org/document-20180111-v01.pdf, we can add Web Annotations to that URI; but with a new URI, those annotations are lost.
Git is a blockchain
Bitcoin is very much inspired by git; though in terms of immutability, git is closer to mercurial and subversion (consider git push -f).
Git accepts whatever timestamp a node chooses to add to a commit. This can cause interesting discrepancies between chronological and topological sort orders.
Without an agreed-upon central git server there is not a canonical graph.
You can use GPG signatures with Git, but you need to provide your own keyserver and then there's still no way to enforce permissions (e.g. who can ALTER, UPDATE, or DELETE which files).
Git is a directed acyclic graph (DAG). Not a chain. Blockchains are chains to prevent double-spending (e.g. on a different fork).
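A toy sketch of the property git and blockchains do share, content addressing that chains parent ids (real git hashes structured commit objects, not raw strings):

```python
import hashlib

def commit_id(parent_id, content):
    """Content-addressed id: hashing the parent id into each commit
    links the history, so rewriting any ancestor changes every
    descendant's id."""
    return hashlib.sha1(parent_id.encode() + content.encode()).hexdigest()

c1 = commit_id("", "initial commit")
c2 = commit_id(c1, "add README")
# Tampering with c1's content yields a different c2:
assert commit_id(commit_id("", "rewritten!"), "add README") != c2
```

What the sketch does not provide is consensus: nothing stops a node from building a different chain, which is the problem proof-of-work ordering solves.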
Bitcoin was accepted by The Linux Foundation (Linus Torvalds wrote Git): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/
It's always been a mix of inspiration in my mind:
- Torrent P2P file sharing.
- Git like data structure and protocol
- Immutability from functional programming
- Public key cryptography
Show HN: Convert Matlab/NumPy matrices to LaTeX tables
LaTeX must be escaped in order to prevent LaTeX injection.
AFAIU, numpy.savetxt does not escape LaTeX characters?
Jupyter Notebook rich object display protocol checks for obj._repr_latex_() when converting a Jupyter notebook from .ipynb to LaTeX.
The Pandas _repr_latex_() function calls to_latex(escape=True†). https://github.com/pandas-dev/pandas/blob/master/pandas/core...
† The default value of escape (and a few other presentational parameters) is determined from the display.latex.escape option: https://pandas.pydata.org/pandas-docs/stable/options.html?hi...
df = pd.read_csv('filename.csv', ); df.to_latex(escape=True)
Or, with a Jupyter notebook:
df = pd.read_csv('filename.csv', ); df
# $ jupyter nbconvert --to latex filename.ipynb
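For a sense of what escaping has to handle, here is a minimal hand-rolled escaper. This is illustrative only; pandas' real implementation covers more cases, and the character table below is a partial list:

```python
# Map each LaTeX special character to its escaped form (partial list;
# production escapers handle more edge cases).
LATEX_SPECIAL = {
    '&': r'\&', '%': r'\%', '$': r'\$', '#': r'\#', '_': r'\_',
    '{': r'\{', '}': r'\}',
    '~': r'\textasciitilde{}', '^': r'\textasciicircum{}',
    '\\': r'\textbackslash{}',
}

def latex_escape(text: str) -> str:
    """Escape LaTeX special characters in a cell value."""
    return ''.join(LATEX_SPECIAL.get(ch, ch) for ch in text)

print(latex_escape('50% of $100'))  # 50\% of \$100
```

Without this, a stray % in a cell silently comments out the rest of the table row.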
Wouldn't it be great if there was a LaTeX incantation that allowed for specifying that the referenced dataset URI (maybe optionally displayed also as a table) is a premise of the analysis; with RDFa and/ or JSONLD in addition to LaTeX PDF? That way, an automated analysis tool could identify and at least retrieve the data for rigorous unbiased analyses.
http://schema.org/ScholarlyArticle
#StructuredPremises
A Year of Spaced Repetition Software in the Classroom
What a great article about using Anki during class in a Language Arts curriculum.
NIST Post-Quantum Cryptography Round 1 Submissions
Are there any blogs that talk about what’s required in a crypto algorithm to withstand quantum computing? Is straightforwardly increasing the key size enough? Or are new paradigms being explored? Are these algorithms symmetric or asymmetric?
This paper lists a few of the practical concerns for quantum-resistant algos (and proposes an algo that wasn't submitted to NIST Post-Quantum Cryptography Round 1):
"Quantum attacks on Bitcoin, and how to protect against them" https://arxiv.org/abs/1710.10377 (~2027?)
A few Quantum Computing and Quantum Algorithm resources: https://news.ycombinator.com/item?id=16052193
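On the key-size question above: for symmetric primitives, doubling the key length is roughly the standard answer, because Grover's algorithm only gives a quadratic speedup; public-key schemes face Shor's algorithm instead and need new designs. Back-of-envelope:

```python
import math

# Grover's algorithm searches N candidates in ~sqrt(N) oracle queries,
# so a 128-bit key's 2^128 classical search space shrinks to ~2^64
# quantum queries; doubling to 256 bits restores a 2^128 margin.
def grover_queries(key_bits: int) -> int:
    return math.isqrt(2 ** key_bits)

print(grover_queries(128) == 2 ** 64)   # True
print(grover_queries(256) == 2 ** 128)  # True
```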
Responsive HTML (arxiv-vanity/engrafo, PLoS,) or Markdown in a Jupyter notebook (stored in a Git repo with a tag and maybe a DOI from figshare or Zenodo) really would be far more useful than comparing LaTeX equations rendered into PDFs.
What are some good resources to learn about Quantum Computing?
Quantum computing: https://en.wikipedia.org/wiki/Quantum_computing
Quantum algorithm: https://en.wikipedia.org/wiki/Quantum_algorithm
Quantum Algorithm Zoo: http://math.nist.gov/quantum/zoo/
Jupyter notebooks:
* QISKit/qiskit-tutorial > "Exploring Quantum Information Concepts" https://nbviewer.jupyter.org/github/QISKit/qiskit-tutorial/b...
* jrjohansson/qutip-lectures > "Lecture 0 - Introduction to QuTiP - The Quantum Toolbox in Python" https://nbviewer.jupyter.org/github/jrjohansson/qutip-lectur...
http://qutip.org/tutorials.html
* sympy/quantum_notebooks https://nbviewer.jupyter.org/github/sympy/quantum_notebooks/...
https://github.com/topics/quantum
krishnakumarsekar/awesome-quantum-machine-learning: https://github.com/krishnakumarsekar/awesome-quantum-machine...
arxiv quant-ph: https://arxiv.org/list/quant-ph/recent
Gridcoin: Rewarding Scientific Distributed Computing
In all honesty this is what I've been waiting for in terms of a useful cryptocurrency. Now if we could only decentralize the control of what projects the processing goes towards with smart contracts then we could have a coin with more actual utility. Imagine the hash rate of the BTC network going towards some useful calculations.
> Imagine the hash rate of the BTC network going towards some useful calculations.
""" CureCoin Reaches #1 Ranking on Folding@home
As of the afternoon of August 29, 2017 (Eastern Time), the Curecoin Team 224497 earned the world's #1 rank on Stanford's Folding@home - a protein folding simulation Distributed Computing Network (DCN). In a little over 3 years, the team (including our merge-folding partners at Foldingcoin) collectively produced 160 billion points worth of molecular computations to support research in the areas of cancer, Alzheimer's, Huntington's, Parkinson's, Infectious Disease as well as helping scientists uncover new molecular dynamics through groundbreaking computational techniques. """
> Imagine the hash rate of the BTC network going towards some useful calculations.
Unfortunately, BTC mining now runs almost entirely on ASICs that can't be used to compute anything but SHA-256.
Imagine an equivalent amount of computing power in an alternative future. I mean, you're right, and this limitation is a flaw in Bitcoin, but you've also missed the point.
It's also hard to call it a flaw in Bitcoin, considering it was an intentional design decision.
Even intentional design decisions can be flawed.
There's a pretty hard limit bounding the optimizability of SHA256. That's why hashcash uses a cryptographic hash function.
There may be - or, very likely are - shortcuts for proof of research better than Grover's; which, when found, will also be very useful for science and medicine. However, that advantage is theoretically destabilizing for a distributed consensus network; which is also a strange conflict in incentives.
Sort of like buying "buy gold" commercials when the market was heading into the worst recession since the Great Depression.
SSL accelerators may benefit from the SHA256 ASIC optimizations incentivized by the bitcoin design.
"""The accelerator provides the RSA public-key algorithm, several widely used symmetric-key algorithms, cryptographic hash functions, and a cryptographically secure pseudo-random number generator"""
GPU prices are also lower now; probably due to demand pulling volume. The TPS (transactions per second) rate is doing much better these days.
How would you solve the local datetime ordering problem with Git and signatures?
Power Prices Go Negative in Germany
The article doesn’t actually answer the questions it purports to answer. Better source: https://www.cleanenergywire.org/factsheets/why-power-prices-....
Energy markets are artificial markets designed to create various price signals that result in certain incentives on both generation and demand, subject to numerous constraints. One constraint is that demand and supply must balance. The grid can’t store much energy. Oversupply can cause grid frequency to go above 50/60 Hz, threatening grid stability: https://www.e-education.psu.edu/ebf483/node/705. Power prices go negative when there is too much generation capacity online at a given instant, relative to demand. That creates incentives for generators that can shut down (like natural gas) to do so.
Negative power prices are not a good thing for consumers. A negative price in the wholesale electric markets does not mean the electricity is "less than free." Obviously, even wind power or solar always costs positive money to generate in real terms. Instead, it signals a mismatch between generation capacity, storage capacity, and demand. In a grid with adequate storage capacity, negative prices would be extremely rare.
I think you're missing the critical piece of your explanation of why negative prices aren't good for consumers:
The reason negative power prices look like they would be good for consumers is that it seems they should lower monthly bills. But in practice the bill stays roughly the same, even with significant periods of negative prices. Why? Negative prices don't mean the cost of production is negative; the power company still pays to generate, so it loses money during those periods. It has to recoup those losses somehow, and it does so by charging more when prices are positive.
In order to take advantage of negative power prices, a power consumer would have to dynamically increase their power consumption in response to the negative prices. If they have any significant power storage capacity, maybe they could store power and sell it back to the grid when prices go positive, or maybe turn on their mining rig while prices are negative, if they go negative frequently.
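As a toy illustration of that storage play (all numbers made up; efficiency losses and grid fees ignored):

```python
# Charge a hypothetical 10 MWh battery in the cheapest (negative-price)
# hour and discharge in the most expensive one.
hourly_prices = [-5, -2, 3, 8]  # EUR/MWh, illustrative only
capacity_mwh = 10

buy_price = min(hourly_prices)   # -5: you are PAID to charge
sell_price = max(hourly_prices)  # 8
profit = (sell_price - buy_price) * capacity_mwh
print(profit)  # 130
```

The spread only pays off if negative hours occur often enough to amortize the storage hardware.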
I'll just add in some empirical results. German power has risen to be some of the most expensive in Europe since they started the Energiewende. They have very low wholesale prices (driving the coal producers out of business) combined with very high retail prices (might be top 3 in Europe?).
The high retail price appears to be driven by the mechanisms driving the Energiewende ie transition to wind and solar.
https://www.cleanenergywire.org/factsheets/what-german-house...
https://www.ovoenergy.com/guides/energy-guides/average-elect...
The NYT article reads as though Germany is benefiting from the switch to renewables, and that the only problems are that sometimes there's so much power around that they have to pay people to use it.
It mentions high cost of electricity, and that it's due to fees and "renewable investment costs" but then immediately hand-waves that away because "household energy bills have been rising over all anyway."
When combined with your information above (those hand-wavy fees actually account for fully 50% of the costs, with 24% being the renewables surcharge) it would seem that the NYT is being misleading. It seems they want to convey the idea that the renewables are an immediate good thing for everyone (which I do not take issue with politically) while downplaying the significant costs to consumers. Am I missing something?
If I'm not, then this does nothing but contribute to the current view of American media as being intentionally misleading when it suits their interests.
No, I don't think you're missing something here.
The price for electricity is indeed very high in Germany, rising to new heights with the new year, again.
The problem is, unsurprisingly, regulation and policy, i.e. the fees you already mentioned, and the subsidies for industrial usage, etc.
So, yes, it actually seems that the NYT is misleading here.
Let's leave this as an exercise for the reader, to judge if anyone should really be surprised here.
I always think of this old "Trust, but verify" quibble, and that it is actually based on an old Russian proverb. The irony.. :)
> The price for electricity is indeed very high in Germany
It's not only very high, but the second highest in the world, and we are about to take over the first spot [1]
> The problem is, unsurprisingly, regulation and policy, i.e. the fees you already mentioned, and the subsidies for industrial usage, etc.
The problem is the subsidies for all the green energy. It's not only the direct costs but also the costs for grid interventions (turning capacity on/off), paying for renewables even when they are not producing because there is too much energy available, the huge requirement for new transmission lines from north to south, getting rid of nuclear power, etc.
Subsidies for industrial usage are often cited by interested parties as increasing costs, but they are also a direct need of the Energiewende, because otherwise the economic damage would be gigantic. Unless you want the energy-intensive factories to leave the country, you have to factor in those costs.
> So, yes, it actually seems that the NYT is misleading here.
Yes, as does most media, especially in Germany. Those negative prices are no win for any German. Why else is it that we will soon pay the highest prices for electricity in the world?
[1] http://www.sueddeutsche.de/wirtschaft/energiekosten-in-oecd-...
"Several countries in Europe have experienced negative power prices, including Belgium, Britain, France, the Netherlands and Switzerland."
> Yes, as does most media - especially in Germany. Those negative prices are no win for any german. Why else is it, that we will soon pay the highest prices for electricity in the world?
AFAIU, it's because you're aggressively shaping the energy market in order to reduce health and environmental costs now.
The technical issue here is that batteries are not good enough yet; and [hemp] supercapacitors are not yet produced at the volume needed to lower costs. So maintaining a high price for energy keeps the market competitive for renewables, which have fewer negative externalities.
Bitcoin is an energy arbitrage
In addition to relocating to where energy is the least expensive, Bitcoin creates incentive for miners to lower the local cost of energy: invest in renewable energy.
Renewable Energy / Clean Energy is now less expensive than alternatives; with continued demand, the margins are at least maintained.
> In addition to relocating to where energy is the least expensive, Bitcoin creates incentive for miners to lower the local cost of energy: invest in renewable energy.
We have lots of direct and effective subsides for nonrenewable energy in the United States. And some for renewables, as well. For example [1] average effective tax rate over all money making companies: 26%
"Coal & Related Energy": 0.69%
"Oil/Gas (integrated)": 8.01%
"Power": 29.22%
"Green and Renewable Energy": 26.42%
[1] "Tax Rates by Sector (US)" (January 2017) http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/...
X-posting here from the article's comments:
The price reflects the confidence investors have in the security's ability to meet or exceed inflation and in the information security of the network.
Volatility adds value for algo traders: say the prices are [1, 101, 51, 101, 51, 201]:
(101-1)+(101-51)+(201-51)=300
(201-1)=200
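The same arithmetic as a quick script: summing every upswing (buy each local low, sell each local high) versus simple buy-and-hold on the same series.

```python
# The price series from above.
prices = [1, 101, 51, 101, 51, 201]

# Perfectly timed swing trading: collect every positive step.
swing_gain = sum(max(b - a, 0) for a, b in zip(prices, prices[1:]))
buy_and_hold = prices[-1] - prices[0]

print(swing_gain)    # 300
print(buy_and_hold)  # 200
```

Perfect timing is of course unattainable; the point is only that volatility creates the extra 100 of capturable spread.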
For the average Joe looking at the vested options they're hodling, though, volatility is unfriendly.
When e.g. algo-traders are willing to buy in when the price starts to fall, they're making liquidity; which some exchanges charge less for.
Enigma Catalyst (Zipline) is one way to backtest and live-trade cryptocurrencies algorithmically.
There are now more than 200k pending Bitcoin transactions
At 20 transactions per second it's a delay of 3 hours. (200000/20/60/60)
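Spelling out the same arithmetic for a few throughput figures (3.3 and 7 tx/s are commonly cited as the network's theoretical bounds; 20 is a round number):

```python
# Hours needed to clear a 200k-transaction backlog at a given rate.
backlog = 200_000
for tps in (3.3, 7, 20):
    print(f"{tps} tx/s -> {backlog / tps / 3600:.1f} hours")
```

At the low end of the theoretical range the backlog represents the better part of a day, not 3 hours.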
"The bitcoin network's theoretical maximum capacity sits between 3.3 to 7 transactions per second."
The OP link does say "Transactions Per Second 22.54".
The solutions for this 3 hour backlog of unconfirmed transactions include: implementing SegWit, increasing the blocksize, and Lightning Network.
I think the link counts incoming transactions. I.e. you can always schedule more, but it doesn't mean they're going to be acted on in a reasonable timeframe.
What ORMs have taught me: just learn SQL (2014)
ORMs:
- Are maintainable by a team. "Oh, because that seemed faster at the time."
- Are unit tested: eventually we end up creating at least structs or objects anyway; then that model needs to be the same everywhere, and the "everything should just be functional like SQL" abstraction breaks down as soon as we need to figure out what you called "the_initializer2".
- Can make it very easy to create maintainable test fixtures which raise exceptions when the schema has changed but the test data hasn't.
- Prevent SQL injection errors by consistently parametrizing queries and appropriately quoting for the target SQL dialect. (One of the Top 25 most frequent vulnerabilities). This is especially important because most apps GRANT both UPDATE and DELETE; if not CREATE TABLE and DROP TABLE to the sole app account.
- Make it much easier to port to a new database; or run tests with SQLite. With raw SQL, you need the table schema in your head and either comprehensive test coverage or to review every single query (and the whole function preceding db.execute(str, *params))
- May be the performance bottleneck for certain queries; which you can identify with code profiling and selectively rewrite by hand if adding an index and hinting a join or lazifying a relation aren't feasible with the non-SQLAlchemy ORM that you must use.
- Should provide a way to generate the query at dev or compile-time.
- Should make it easy to DESCRIBE the query plans that code profiling indicates are worth hand-optimizing (learning SQL is sometimes not the same as learning how a particular database plans a query over tables without indexes)
- Make managing db migrations pretty easy.
- SQLAlchemy really is great. SQLAlchemy has eager loading to solve the N+1 query problem. Django is often more than adequate; and has had prefetch_related() to solve the N+1 query problem since 1.4. Both have an easy way to execute raw queries (that all need to be reviewed for migrations). Both are much better at paging without allocating a ton of RAM for objects and object attributes that are irrelevant now.
- Make denormalizing things from a transactional database with referential integrity into JSON really easy; which webapps and APIs very often need to do.
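On the SQL-injection point specifically, a minimal sqlite3 sketch of the parametrization ORMs do for you (toy schema, hostile input made up):

```python
import sqlite3

# With a "?" placeholder the classic injection payload is stored as
# plain data, never executed as SQL.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT)')

hostile = "Robert'); DROP TABLE users;--"
conn.execute('INSERT INTO users VALUES (?)', (hostile,))

rows = conn.execute('SELECT name FROM users WHERE name = ?',
                    (hostile,)).fetchall()
print(rows)  # the payload comes back as an ordinary string
```

Building the same statement by string concatenation would hand the payload to the SQL parser instead.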
Is there a good JS ORM? Maybe in TypeScript?
I've used http://bookshelfjs.org/ but I'd stay away from it; it only felt cumbersome. No productivity gain.
It's built on top of the Knex query builder (http://knexjs.org/), which is decent.
Objection.js and the Knex query builder are excellent. Think ORM light with full access to SQL.
Show HN: An educational blockchain implementation in Python
> It is NOT secure neither a real blockchain and you should NOT use this for anything else than educational purposes.
It would be nice if the non-secure parts of the implementation or design were clearly marked.
What's the point of an educational article if the bad examples aren't clearly marked as bad? If MD5 usage is the only issue, the author could easily replace it with SHA and drop the warning at the start. If there are other issues, how can a reader know which parts to trust?
Even if fixing the bad/insecure parts is "left as an exercise for the reader", the learning value of the article would be much greater if those parts were at least pointed out.
OP here.
erikb is spot on in the sibling comment. This hasn't been expert-reviewed or audited, so I'm pretty confident there is a bug somewhere that I don't know about.
It's educational in the sense that I tried as best as I could to implement the various algorithmic parts (mining, validating blocks & transactions, etc...).
I originally used MD5 because I thought I would do more exploration regarding difficulty and MD5 is faster to compute than SHA. In the end, I didn't do that exploration, so I could easily replace MD5 with SHA. I'll update the notebook to use SHA, but I'm still not gonna remove the warning :)
I'll also try to point out more explicitly which parts I think are not secure.
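For the difficulty exploration angle, a toy proof-of-work loop (illustrative only, not the notebook's actual code; names are made up):

```python
import hashlib

# Search for a nonce whose SHA-256 digest starts with `difficulty`
# zero hex digits. Each extra digit multiplies expected work by 16,
# which is why hash speed (the MD5-vs-SHA tradeoff) matters when
# exploring difficulty.
def mine(payload: bytes, difficulty: int = 2) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith('0' * difficulty):
            return nonce
        nonce += 1

nonce = mine(b'block payload')
digest = hashlib.sha256(b'block payload' + str(nonce).encode()).hexdigest()
print(digest[:4])  # first two hex digits are '00' by construction
```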
> I'll also try to point out more explicitly which parts I think are not secure.
Things I've noticed:
* Use of floating point arithmetic.
* Non-reproducible serialization in verify_transaction can produce slightly different, but equivalent JSON, which leads to rejecting transactions if produced JSON is platform-dependent (e.g. CRLFs, spaces vs tabs).
* Miners can perform DoS by creating a pair of blocks referencing each other (recursive call in verify_block is made before any sanity checks or hash checks, so they can modify block's ancestor without worrying about changing its hash).
* mine method can loop forever due to integer overflow.
* Miners can put in block a transaction with output sum greater than input sum - only place where it is checked is in compute_fee and no path from verify_block leads there.
Those are all very good points I didn't think about, thanks for these.
I'll fix the two bugs with verify_block and the possibility for a miner to inject an invalid output > input transaction.
I'll add a note for the 3 others.
For deterministic serialization (~canonicalization), you can use sort_keys=True or serialize OrderedDicts. For deserialization, you'd need object_pairs_hook=collections.OrderedDict.
Most current blockchains sign a binary representation with fixed length fields. In terms of JSON, JSON-LD is for graphs and it can be canonicalized. Blockcerts and Chainpoint are JSON-LD specs:
> Blockcerts uses the Verifiable Claims MerkleProof2017 signature format, which is based on Chainpoint 2.0.
https://github.com/blockchain-certificates/cert-verifier-js/...
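A minimal sketch of the sort_keys approach (fixed separators also remove whitespace variation, so the byte string, and hence its hash, is reproducible across platforms):

```python
import json

# Sorted keys plus fixed separators give a byte-for-byte reproducible
# encoding, so two nodes hashing "the same" transaction agree.
def canonical_dumps(obj) -> str:
    return json.dumps(obj, sort_keys=True, separators=(',', ':'))

tx_a = {'to': 'alice', 'amount': 5}
tx_b = {'amount': 5, 'to': 'alice'}
print(canonical_dumps(tx_a))                        # {"amount":5,"to":"alice"}
print(canonical_dumps(tx_a) == canonical_dumps(tx_b))  # True
```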
FYI, dicts are now ordered by default as of Python 3.6.
That's an implementation detail, and shouldn't be relied upon. If you want an ordered dictionary, you should use collections.OrderedDict.
It's now the spec for 3.6+.
> #python news: @gvanrossum just pronounced that dicts are now guaranteed to retain insertion order. This is the end of a long journey.
https://twitter.com/raymondh/status/941709626545864704
More here: https://www.reddit.com/r/Python/comments/7jyluw/dict_knownor...
OrderedDicts are backwards-compatible and are guaranteed to maintain order after deletion.
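One behavioral difference that does remain between the two: OrderedDict equality is order-sensitive, plain dict equality is not.

```python
from collections import OrderedDict

# dict comparison ignores insertion order; OrderedDict does not.
d1, d2 = {'a': 1, 'b': 2}, {'b': 2, 'a': 1}
od1 = OrderedDict([('a', 1), ('b', 2)])
od2 = OrderedDict([('b', 2), ('a', 1)])

print(d1 == d2)    # True
print(od1 == od2)  # False
```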
Thanks! Simplest explanation I've seen.
Here's an nbviewer link (which, like base58, works on/over a phone): https://nbviewer.jupyter.org/github/julienr/ipynb_playground...
Note that Bitcoin does two rounds of SHA256 rather than one round of MD5. There's also a "P2P DHT" (peer-to-peer distributed hash table) for storing and retrieving blocks from the blockchain; instead of traditional database multi-master replication and secured offline backups.
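The double hashing is easy to sketch with Python's hashlib:

```python
import hashlib

# Bitcoin-style "hash256": SHA-256 applied twice.
def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

print(double_sha256(b'hello').hex())  # 32 bytes / 64 hex chars
```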
> ERROR:root:Invalid transaction signature, trying to spend someone else's money ?
This could be more specific. Where would these types of error messages log to?
My mistake, it's BitTorrent that has a DHT. Instead of finding the most network local peer with the block identified by a (prev_hash, hash) hash table key, the Bitcoin blockchain broadcasts all messages to all nodes; which must each maintain a complete backup of the entire blockchain.
"Protocol documentation" https://en.bitcoin.it/wiki/Protocol_documentation
MSU Scholars Find $21T in Unauthorized Government Spending
Unauthorized federal spending (in these two departments) 1998-2015: $21T
Federal debt (2017): $20T
$ 20,000,000,000,000 USD
Would a blockchain for government expenditures help avoid this type of error?
We already have https://usaspending.gov ( https://beta.usaspending.gov ) and expenditure line-item metadata.
Would having traceable money in a distributed ledger help us keep track of money collected from taxpayers?
Obviously, the volatility of most cryptocurrencies would be disadvantageous for purposes of transferring and accounting for government spending. Isn't there a way to peg a cryptocurrency to the USD; even with Quantitative Easing? How is Quantitative Easing different from just deciding to print trillions more 'coins' in order to counter debt or inflation or deflation; why is the government in debt at all?
re: Quantitative Easing
https://en.wikipedia.org/wiki/Quantitative_easing
Say I have $100 in my Social Security Fund (in very non-aggressive investments which need to meet or exceed inflation), and the total supply of money (including paper notes and numbers in debit and credit columns of various public and private databases) is $1T, with $1T in debt; if $1T is printed to pay for that debt, is my $100 in retirement savings then worth $50? Or is it more complex than that?
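Under the most naive quantity-theory model, yes, roughly $50; in reality, velocity, interest rates, and where the new money enters the economy all complicate it:

```python
# Naive dilution arithmetic: doubling the money supply halves each
# dollar's purchasing power, everything else held equal.
savings = 100
supply = 1_000_000_000_000   # $1T in circulation
printed = 1_000_000_000_000  # another $1T newly printed
real_value = savings * supply / (supply + printed)
print(real_value)  # 50.0
```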
[deleted]
Universities spend millions on accessing results of publicly funded research
Are there good open source solutions for journal publishing? (HTML abstract, PDFs, comments, ...)?
Yes, quite a few beyond what's already been listed, in fact:
http://www.theoj.org/
https://plos.github.io/ambraproject/index.html
https://v4.pubpub.org/
Ambra is being discontinued! http://blogs.plos.org/plos/2017/12/ceo-letter-to-the-communi...
Edit: And theoj doesn't really appear to be maintained anymore either...
> Ambra is being discontinued!
The article mentions the discontinuation of Aperta but nothing about Ambra?
https://plos.github.io/ambraproject/Developer-Overview.html
I'm sorry, I just realised that as well - my mistake.
An Interactive Introduction to Quantum Computing
Part 2 mentions two quantum algorithms that could be used to break Bitcoin (and SSH and SSL/TLS; and most modern cryptographic security systems): Shor's algorithm for factorization and Grover's search algorithm.
Part 2: http://davidbkemp.github.io/QuantumComputingArticle/part2.ht...
Shor's algorithm: https://en.wikipedia.org/wiki/Shor%27s_algorithm
Grover's algorithm: https://en.wikipedia.org/wiki/Grover%27s_algorithm
I don't know what heading I'd suggest for something about how concentration of quantum capabilities will create dangerous asymmetry. (That is why we need post-quantum ("quantum resistant") hash, signature, and encryption algorithms in the near future.)
Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)
"Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)" https://www.arxiv-vanity.com/papers/1710.10377/
> […] On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates.
From https://csrc.nist.gov/Projects/Post-Quantum-Cryptography :
> NIST has initiated a process to solicit, evaluate, and standardize one or more quantum-resistant public-key cryptographic algorithms. Nominations for post-quantum candidate algorithms may now be submitted, up until the final deadline of November 30, 2017.
Project Euler
After hearing about it for years, I decided to start working through Project Euler about two weeks ago. It really is much more about math than programming, although it's a lot of fun to take on the problems with a language that has tail call optimization because so many of the problems involve recurrence relations.
I like that the problems are constructed in a way that usually punishes you for trying to use brute force. Sometimes there's a problem that doesn't have a more elegant solution, though, as if to remind us that brute force often works remarkably well.
I agree with Project Euler being mostly about math. I prefer Codewars for practicing programming or learning a new language.
Euler was a mathematician, after all. Now thinking about it, I wonder what a Project Dijkstra would look like, or maybe a Project Stallman.
There's a Project Rosalind (named after Rosalind Franklin) which is sort of like Project Euler for bioinformatics: http://rosalind.info/about/
I like https://rosalind.info bioinformatics problems because:
- There are problem explanations and an accompanying textbook.
- You can structure the solutions with unit tests that test for known good values.
- There's a graph of problems.
Who’s Afraid of Bitcoin? The Futures Traders Going Short
Wall Street being able to buy Bitcoin might increase demand, but they might also do things that would at least at times amplify sell pressure like panic selling, shorting it, leveraged shorting, margin calls
Statement on Cryptocurrencies and Initial Coin Offerings
A little emphasis, focusing on the aftermath from the SEC's July report on the DAO:
> "Following the issuance of the 21(a) Report, certain market professionals have attempted to highlight utility characteristics of their proposed initial coin offerings in an effort to claim that their proposed tokens or coins are not securities. Many of these assertions appear to elevate form over substance. Merely calling a token a “utility” token or structuring it to provide some utility does not prevent the token from being a security."
I think people are viewing this as an attack on crypto, when it's actually just common sense. People put too much faith in the 'Contract' half of 'Ethereum/Smart Contract'.
Basically. Today ICOs are selling tokens as shares of equity in their company, or similar. Which you can then sell on.
The problem is these companies essentially reserve the right to disregard that contract and could then sell their company, domestically or overseas, for cash, without compensating any token holders.
Securities regulation and law stops that. But the tokens do need to be lawful securities in order for the court to recognize them.
Otherwise how can the court help you, and who else is going to help you?
> I think people are viewing this as an attack on crypto, when it's actually just common sense.
> […] The problem is these companies essentially reserve the right to disregard that contract and could then sell their company, domestically or overseas, for cash, without compensating any token holders.
> Securities regulation and law stops that. But the tokens do need to be lawful securities in order for the court to recognize them.
This. The IRS regards coins and tokens as capital-gains-taxable assets regardless of whether they qualify as securities. The SEC exists to protect investors from scams and unfair dealing; in order to protect investors, it regulates the issuance of securities.
Ask HN: How do you stay focused while programming/working?
I often find myself "needing" to take a mini-break after just a few minutes of concerted effort while coding. In particular, this often occurs after I've made a tiny breakthrough, prompting me to reward myself by checking Twitter or HN. This bad habit quickly derails any momentum. What are some tips to increase focus stamina and avoid distraction?
It's not exactly new and exciting, but I found that listening to calm, instrumental music helps me focus. Mostly Ambient. If you do not like electronic music, Stars Of The Lid or Bohren & Der Club Of Gore are very much worth checking out.
Also, https://mynoise.net/ has worked wonders for me.
In both cases, it seems that unstructured audio input, like, occupies the parts of my mind that would otherwise distract me.
> It's not exactly new and exciting, but I found that listening to calm, instrumental music helps me focus. Mostly Ambient.
Same. Lounge, Ambient, Chillout, Chillstep (https://di.fm has a bunch of great streams. SoundCloud and MixCloud have complete replayable sets, too.)
I've heard that videogame soundtracks are designed to not be distracting; to help focus.
A Hacker Writes a Children's Book
The rhymes and illustrations look great! Is there a board book edition?
Other great STEM and computers books for kids:
"A is for Array"
"Lift-the-Flap Computers and Coding"
"Computational Fairy Tales"
"Hello Ruby: Adventures in Coding"
"Python for Kids: A Playful Introduction To Programming"
"Lauren Ipsum: A Story About Computer Science and Other Improbable Things"
"Rosie Revere, Engineer"
"Ada Byron Lovelace and the Thinking Machine"
"HTML for Babies: Volume 1 of Web Design for Babies"
"What Do You Do With a Problem?"
"What Do You Do With an Idea?"
"ABCs of Mathematics", "The Pythagorean Theorem for Babies", "Non-Euclidean Geometry for Babies", "Introductory Calculus for Infants", "ABCs of Physics", "Statistical Physics for Babies", "Newtonian Physics for Babies", "Optical Physics for Babies", "General Relativity for Babies", "Quantum Physics for Babies", "Quantum Information for Babies", "Quantum Entanglement for Babies"
"ELI5": "Explain like I'm five"
Someone should really make a list of these.
Ask HN: Do ISPs have a legal obligation to not sell minors' web history anymore?
I guess COPPA is still in place and in theory it applies to ISPs, although they may be allowed to assume that all traffic from a household comes from the bill payer who is presumably over 13.
http://kellywarnerlaw.com/childrens-online-privacy-protectio...
https://www.theverge.com/2017/3/31/15138526/isp-privacy-bill...
Tech luminaries call net neutrality vote an 'imminent threat'
> “The current technically-incorrect order discards decades of careful work by FCC chairs from both parties, who understood the threats that Internet access providers could pose to open markets on the Internet.”
Paid prioritization is that threat.
Again, streaming video content for all ages is not more important than online courses.
Ask HN: Can hashes be replaced with optimization problems in blockchain?
CureCoin.
From https://curecoin.net/knowledge-base/about-curecoin/what-is-c... :
> Curecoin allows owners of both ASIC and GPU/CPU hardware to earn. Curecoin puts ASICs to work at what they are good at–securing a blockchain, while it puts GPUs and CPUs to work with work items that can only be done on them–protein folding. While still having a secure blockchain, it supports, and thus is supported by, scientific research.
...
From "CureCoin Reaches #1 Ranking on Folding@home" https://www.newswire.com/news/bio-research-loves-curecoin-ga... :
> As of the afternoon of August 29, 2017 (Eastern Time), the Curecoin Team 224497 earned the world's #1 rank on Stanford's Folding@home - a protein folding simulation Distributed Computing Network (DCN). In a little over 3 years, the team (including our merge-folding partners at Foldingcoin) collectively produced 160 billion points worth of molecular computations to support research in the areas of cancer, Alzheimer's, Huntington's, Parkinson's, Infectious Disease as well as helping scientists uncover new molecular dynamics through groundbreaking computational techniques.
From https://news.ycombinator.com/item?id=15843795 :
> Gridcoin (Berkeley 2013) is built on Proof-of-Stake and Proof-of-Research. Gridcoin is used as payment for computing resources contributed to BOINC.
> I doubt that volatility would be welcome on the Gridcoin blockchain: Wikipedia lists "6.5% Inflation. 1.5% Interest + 5% Research Payments APR" under the Supply Growth infobox attribute.
Ask HN: What could we do with all the mining power of Bitcoin? Fold Protein?
Instead of buzzing SHA-512 in circles like busy bees ad infinitum, is there any way we can use these calculations productively?
Instead of algo-trading the stock markets?!
There are a number of distributed computing projects (e.g. SETI@home): https://en.wikipedia.org/wiki/List_of_distributed_computing_...
The Ethereum White Paper lists a number of applications for blockchains: https://github.com/ethereum/wiki/wiki/White-Paper
(BitCoin is built on SHA-256, Ethereum is built on Keccak-256 (~SHA-3))
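To make the "buzzing in circles" concrete: Bitcoin's proof-of-work is a double SHA-256 over a block header, repeated with different nonces until the digest falls below a target. A toy sketch in Python (the header bytes and difficulty here are made up; note also that Ethereum's Keccak-256 is not the same as hashlib's sha3_256, which implements the finalized SHA-3 padding):

```python
import hashlib

def btc_hash(data: bytes) -> bytes:
    """Bitcoin-style hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, difficulty_bits: int) -> bool:
    """Toy proof-of-work check: does the double-SHA-256 digest
    start with `difficulty_bits` zero bits?"""
    digest = int.from_bytes(btc_hash(header), "big")
    return digest < (1 << (256 - difficulty_bits))

# Toy mining loop: vary a nonce until the hash meets a (very easy) target.
nonce = 0
while not meets_target(b"block-header" + nonce.to_bytes(8, "big"), 8):
    nonce += 1
print(nonce)
```

The real network does nothing with these digests besides comparing them to the target, which is why "useful work" alternatives like protein folding keep coming up.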
Proof-of-Stake is a lower energy alternative to Proof-of-Work with tradeoffs: https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ
Unfortunately, IDK of another way to find secure consensus (blockchains are consensus protocols) in a DDOS-resistant way with unsolved problems?
> Unfortunately, IDK of another way to find secure consensus (blockchains are consensus protocols) in a DDOS-resistant way with unsolved problems?
Gridcoin (Berkeley 2013) is built on Proof-of-Stake and Proof-of-Research. Gridcoin is used as payment for computing resources contributed to BOINC.
I doubt that volatility would be welcome on the Gridcoin blockchain: Wikipedia lists "Supply growth 6.5% Inflation. 1.5% Interest + 5% Research Payments APR" under the Supply Growth infobox attribute.
No CEO needed: These blockchain platforms will let ‘the crowd’ run startups
How much energy does Bitcoin mining really use?
The Actual FCC Net Neutrality Repeal Document. TLDR: Read Pages 82-87 [pdf]
Here are some links to the relevant antitrust laws:
Sherman Antitrust Act (1890) https://en.wikipedia.org/wiki/Sherman_Antitrust_Act
Aspen Skiing Co. v. Aspen Highlands Skiing Corp. (1985) https://en.wikipedia.org/wiki/Aspen_Skiing_Co._v._Aspen_High....
Transparency in network management and paid prioritization practices and agreements will be relevant.
"We find that antitrust law, in combination with the transparency rule we adopt, is particularly well-suited to addressing any potential or actual anticompetitive harms that may arise from paid prioritization arrangements." (p.147)
If antitrust law is sufficient, as you've found, there would be no need for Title II Common Carrier regulation in any industry.
We can call phone numbers provided by any company at the same rate because phone companies are regulated as Title II Common Carriers. ISPs are also common carriers.
"Public airlines, railroads, bus lines, taxicab companies, phone companies, internet service providers,[3] cruise ships, motor carriers (i.e., canal operating companies, trucking companies), and other freight companies generally operate as common carriers."
The 5 most ridiculous things the FCC says in its new net neutrality propaganda
> The Federal Communications Commission put out a final proposal last week to end net neutrality. The proposal opens the door for internet service providers to create fast and slow lanes, to block websites, and to prioritize their own content. This isn’t speculation. It’s all there in the text.
Great. Payola. Thanks Verizon!
Does the FTC have the agreement information needed to hear the anti-trust cases that are sure to result from what are now complaints to the FCC (an organization with network management expertise) being redirected to the FTC?
Title II is the appropriate policy set for ISPs; regardless of how lucrative horizontal integration with content producers seems.
FCC's Pai, addressing net neutrality rules, calls Twitter biased
No. Censoring hate speech by banning people who are verbally assaulting others (in violation of Terms of Service that they agreed to) is a very different concern than requiring common carriers to equally prioritize bits.
If we extend "you must allow people to verbally assault others (because free speech applies to the government)" to TV and radio, what do we end up with?
Note that the FCC fines non-cable TV (broadcast radio and TV) for cursing on air. See "Obscene, Indecent and Profane Broadcasts" https://www.fcc.gov/consumers/guides/obscene-indecent-and-pr...
How can you ask social media companies to do something about fake news (the vast majority of which served to elect the current administration (which nominated this FCC chairman)) while also lambasting them for upholding their commitment to providing a hate-free experience for net citizens and paying advertisers?
"Open Internet": No blocking. No throttling. No paid prioritization.
It would be easier for us to understand the "Open Internet" rules if the proposed "Restoring Internet Freedom" page wasn't crudely pasted over (redirected to from) the page describing the current Open Internet rules. www.fcc.gov/general/open-internet (current policy) now redirects to www.fcc.gov/restoring-internet-freedom (proposed policy).
ISPs blocking, throttling, or paid-prioritizing Twitter, Netflix, Fox, or CNN for everyone is a different concern than responding to individuals who are threatening others with hate speech.
The current policy ("Open Internet") means that you can use the bandwidth cap you pay for on whatever legal content you please.
The proposed policy ("Restoring Internet Freedom") means that internet businesses will need to pay every ISP in order to not be slower than the big guys who can afford to pay-to-play (~"payola"). https://en.wikipedia.org/wiki/Payola
A curated list of Chaos Engineering resources
Never having heard of 'Chaos Engineering' before, this seems like a bad case of 'Cargo Cult Engineering'.
That starts with the term 'chaos', which has a well-defined meaning in Chaos Theory, from which it is quite obviously borrowed: small changes in input lead to large changes in output. Neither distributed systems in general, nor especially the sort of system this engineering strives to build, fits that definition. In fact, they are the exact opposite: every part of a typical web stack is already built to mitigate changing demands such as traffic peaks or attacks.
The mumbo jumbo around "defining a steady state" and "disproving the null hypothesis" seems like a veneer of science over a rather well-known concept: testing.
A supreme court justice once said: "Good writing is a $10 thought in a 5 cent sentence". This is the opposite.
"Resilience Engineering" would be a good alternative term for these failure scenario simulations and analyses.
Glossary of Systems Theory > A > Adaptive capacity:
> Adaptive capacity: An important part of the resilience of systems in the face of a perturbation, helping to minimise loss of function in individual human, and collective social and biological systems
Technology behind Bitcoin could aid science, report says
Bloom is working on non-academic credit building and scoring.
Hyperledger brings together many great projects and tools which have numerous applications in science and industry.
Is a blockchain necessary? Could we instead just sign JSONLD records with ld-signatures and store them in an eventually or strongly consistent database we all contribute resources to synchronizing and securing?
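A minimal sketch of that sign-and-verify alternative, using deterministic JSON serialization as a stand-in for real JSON-LD canonicalization and HMAC-SHA256 as a stand-in for an ld-signatures public-key suite (the key and record here are made up for illustration):

```python
import hashlib
import hmac
import json

SECRET = b"shared-demo-key"  # hypothetical key; a real suite would use public keys

def canonicalize(record: dict) -> bytes:
    """Deterministic serialization: same record -> same bytes."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def sign(record: dict) -> str:
    return hmac.new(SECRET, canonicalize(record), hashlib.sha256).hexdigest()

def verify(record: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(record), signature)

rec = {"@context": "https://schema.org", "@type": "Dataset", "name": "trial-42"}
sig = sign(rec)
print(verify(rec, sig))                          # True
print(verify({**rec, "name": "tampered"}, sig))  # False
```

This gives tamper-evidence on individual records without a blockchain; what it doesn't give you, by itself, is a shared, ordered history that no single party controls.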
That's just the centralization or decentralization question.
We can do it all centralized already but we would also all need to trust whoever is hosting this data and trust every single person who has the ability to enter the data.
The fewer nodes you need to trust, the better; in a centralized solution where everyone can contribute, you need to be able to trust everyone.
In a decentralized system where everyone can contribute, you don't need to trust anyone but give up benefits of centralization such as speed, performance and usability.
> We can do it all centralized already but we would also all need to trust whoever is hosting this data and trust every single person who has the ability to enter the data.
There are plenty of ways to minimize trust required with traditional cryptography though, this is not all or nothing, we have been doing this since PGP. You can get the overwhelming majority of the benefits with none of the drawbacks.
But how else are you going to hype an ICO with claims about the size of a market and get people who don't understand how blockchains work to give you their BTC/ETH?
Git hash function transition plan
> Some hashes under consideration are SHA-256, SHA-512/256, SHA-256x16, K12, and BLAKE2bp-256.
Not sure what K12 is (KangarooTwelve, a reduced-round Keccak derivative?), but BLAKE2 is a very attractive option.
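BLAKE2 is conveniently already in Python's hashlib, so the candidates are easy to play with. A sketch of git-style object hashing with a pluggable hash function (the `git_object_id` helper is mine, not git's; git hashes a blob over a `'blob <size>\0'` header plus the content):

```python
import hashlib

def git_object_id(content: bytes, algo: str = "sha1") -> str:
    """Hash a blob the way git does: 'blob <size>\\0' + content,
    with a pluggable hash function."""
    data = b"blob %d\x00" % len(content) + content
    if algo == "blake2b-256":
        # BLAKE2b truncated to a 256-bit digest, one of the candidates
        return hashlib.blake2b(data, digest_size=32).hexdigest()
    return hashlib.new(algo, data).hexdigest()

print(git_object_id(b"hello\n"))                 # SHA-1: what git uses today
print(git_object_id(b"hello\n", "sha256"))       # one candidate
print(git_object_id(b"hello\n", "blake2b-256"))  # another candidate
```

The transition plan's hard part isn't the hash function itself but interoperability: every object name embedded in trees, commits, and tags has to be translatable between the old and new object IDs.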
Vintage Cray Supercomputer Rolls Up to Auction
Google is officially 100% sun and wind powered – 3.0 gigawatts worth
+1000.
TIL this is called "Corporate Renewable Energy Procurement". https://www.google.com/search?q=Corporate+Renewable+Energy+P...
PPA: Power Purchase Agreement https://en.wikipedia.org/wiki/Power_purchase_agreement
Interactive workflows for C++ with Jupyter
QuantStack/xeus-cling https://github.com/QuantStack/xeus-cling
QuantStack/xwidgets https://github.com/QuantStack/xwidgets
QuantStack/xplot (bqplot) https://github.com/QuantStack/xplot
Vanguard Founder Jack Bogle Says ‘Avoid Bitcoin Like the Plague’
Over the past 7 years, Bitcoin has outperformed every security and portfolio that Jack Bogle has recommended.
This is pretty disrespectful to Jack Bogle.
Vanguard is almost singlehandedly responsible for returning trillions of dollars of costs, in the form of fees and underperformance by active managers, back to investors. Millions of investors have benefited.
Bitcoin has been a bubble since $1 and $100 to these people.
What evidence is there that it isn't a bubble? People buy Bitcoin only because they think they can sell it higher. Eventually, you will run out of greater fools.
Nasdaq Plans to Introduce Bitcoin Futures
My guess is that this is probably pretty meaningless.
There are a few things going against them.
- The CBOE and CME are both much larger futures exchanges and are going to be offering futures first
- since you can't net out futures contracts from different exchanges this means they tend to become winner take all
> One way Nasdaq seeks to differentiate itself seems to be in the amount of data it uses for pricing the digital currency contracts. VanEck Associates Corp., which recently withdrew plans for a bitcoin exchange-traded fund, will supply the data used to price the contracts, pulling figures from more than 50 sources, according to the person.
This might be interesting as one of the things that everyone is worried about is price manipulation.
If you haven't thought about how futures work with respect to margin and marking at the end of the trading day, you need to know that you can be required to deposit more money into your margin account if the futures trade moves against you on any given day.
This means the marking price is very important, and lots of institutional money is worried that the exchanges are easy to manipulate.
see: http://openmarkets.cmegroup.com/3785/understanding-margin-ch...
> Nasdaq’s product will reinvest proceeds from the spin-off back into the original bitcoin in a way meant to make the process more seamless for traders, the person said.
This is awesome; right now the CBOE and CME have both punted on the question of forks, saying they'll make a best effort to figure it out.
> One way Nasdaq seeks to differentiate itself seems to be in the amount of data it uses for pricing the digital currency contracts. VanEck Associates Corp., which recently withdrew plans for a bitcoin exchange-traded fund, will supply the data used to price the contracts, pulling figures from more than 50 sources, according to the person. That appears to exceed CME’s plan to use four sources, and Cboe’s one. Nasdaq’s contracts will be cleared by Options Clearing Corp., the person said.
BitMEX bitcoin futures are already online. IDK how many price sources they pull?
Aren't there a few other companies already selling Bitcoin futures?
In general, when the CME enters a market for futures, they take all of the air out of the room. I don't think it's realistic to believe NASDAQ can compete with them.
https://twitter.com/officialmcafee/status/935900326007328768
Well John McAfee thinks bitcoin will hit 1 million by 2020.
Pump and dumpers gonna pump and dump. https://medium.com/@DEFCON_2015/why-mgt-was-delisted-from-ny...
Or, large investment banking houses will step in and create naked shorting opportunities to inflate sell pressure, creating 'death spirals' to drive prices down and scoop them up at extreme discounts. This happens in the traditional public markets every day.
> Or, large investment banking houses will step in and create naked shorting opportunities to inflate sell pressure, creating 'death spirals' to drive prices down and scoop them up at extreme discounts. This happens in the traditional public markets every day.
Is there a term for this?
Yes, this can happen in a few different ways, and it's the reason why Y Combinator created SAFEs. When you have a public company, you will get offers for what are called "credit lines", "debt financing", or "convertible notes". They are traditionally used to create death spirals https://www.investopedia.com/ask/answers/06/deathspiralbond.... as the size of your float increases when you, as executive director (CEO/CFO) of a public company, "issue" more stock to cover the loan. The more you issue, the less you're worth, until somebody comes along, scoops you up, and re-engineers the cap table in a restructuring. However, manipulation can occur within institutions as well: https://news.ycombinator.com/threads?id=KasianFranks&next=14...
Ask HN: Where do you think Bitcoin will be by 2020?
I have a friend who believes it will be $100,000 per BitCoin and his reasoning is 'supply and demand'.
There will be around 18M bitcoins in 2020. [1][2]
[1] https://en.bitcoin.it/wiki/Controlled_supply
This paper [3] suggests we'll be needing to upgrade to quantum-secure hash functions instead of ECDSA before 2027.
[3] "Quantum attacks on Bitcoin, and how to protect against them" https://arxiv.org/abs/1710.10377
Hopefully, Ethereum will have figured out a Proof of Stake [4] solution for distributed consensus which is as resistant to DDoS as Proof of Work, but with less energy consumption (thereby, unfortunately or fortunately, removing the incentive to pursue clean energy as a primary business goal).
[4] https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ
Ask HN: Why would anyone share trading algorithms and compare by performance?
I was speaking with a person years my senior awhile back, and sharing information about the Quantopian platform (which allows users to backtest and share trading algorithms); and he asked me "why would anyone share their trading algorithms [if they're making any money]?"
I tried "to help each other improve their performance". Is there a better way to explain to someone who spends their time reading forums with no objective performance comparisons over historical data why people would help each other improve their algorithmic trading algorithms?
Catalyst, like Quantopian, is also built on top of Zipline; but for cryptocurrencies. https://enigmampc.github.io/catalyst/example-algos.html
Zipline (backtesting and live trading of algorithms with initialize(context) and handle_data(context, data) functions; with the SPY S&P 500 ETF as a benchmark) https://github.com/quantopian/zipline
Pyfolio (for objectively comparing the performance of trading strategies over time) https://github.com/quantopian/pyfolio
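For a feel of the initialize(context)/handle_data(context, data) contract, here is a minimal self-contained sketch of the same event-loop shape, with made-up prices and a made-up trading rule (no Zipline dependency; real Zipline supplies the context and data objects, ordering APIs, slippage/commission models, and the benchmark):

```python
# Minimal backtest loop in the shape Zipline formalizes.
class Context:
    """Stands in for Zipline's context object."""
    pass

def initialize(context):
    context.cash = 1000.0
    context.shares = 0

def handle_data(context, price):
    # Toy rule: buy one share under 10, sell everything over 12.
    if price < 10 and context.cash >= price:
        context.shares += 1
        context.cash -= price
    elif price > 12 and context.shares:
        context.cash += context.shares * price
        context.shares = 0

prices = [9.0, 9.5, 11.0, 13.0, 8.0, 14.0]  # made-up bar closes
ctx = Context()
initialize(ctx)
for p in prices:
    handle_data(ctx, p)
print(round(ctx.cash + ctx.shares * prices[-1], 2))  # final portfolio value: 1013.5
```

The point of Zipline/Pyfolio is everything this sketch omits: realistic fills, transaction costs, and objective performance statistics against a benchmark.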
...
"Community Algorithms Migrated to Quantopian 2" https://www.quantopian.com/posts/community-algorithms-migrat...
- "Reply to minimum variance w/ contrast" seems to far outperform the S&P 500.
Ask HN: CS papers for software architecture and design?
Can you please point me to some papers that you consider very influential for your work, or that you believe played a significant role in how we structure our software nowadays?
"The Architecture of Open Source Applications" Volumes I & II http://aosabook.org/en/
"Manifesto for Agile Software Development" https://en.wikipedia.org/wiki/Agile_software_development#The...
"Catalog of Patterns of Enterprise Application Architecture" https://martinfowler.com/eaaCatalog/
Fowler > Publications ("Refactoring",) https://en.wikipedia.org/wiki/Martin_Fowler#Publications
"Design Patterns: Elements of Reusable Object-Oriented Software" (GoF book) https://en.wikipedia.org/wiki/Design_Patterns
UNIX Philosophy https://en.wikipedia.org/wiki/Unix_philosophy
Plan 9 https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
## Distributed Systems
CORBA > Problems and Criticism (monolithic standards, oversimplification,): https://en.wikipedia.org/wiki/Common_Object_Request_Broker_A...
Bulk Synchronous Parallel: https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
Raft: https://en.wikipedia.org/wiki/Raft_(computer_science)#Safety
CAP theorem: https://en.wikipedia.org/wiki/CAP_theorem
Keeping a Lab Notebook [pdf]
I'd love to hear some thoughts about keeping a "lab notebook" for ML experiments. I use Jupyter Notebooks when playing around with different ML models, and I find that it really helps to document my thought process with notes and comments. It also seems that the ML workflow is very 'experiment' driven. I'm always thinking "Hm, I think if I tweak this hyperparameter this way, or adjust this layer this way, then I'll get a better result because X". Thus, I have a bit of a hypothesis and proposed experiment. I run that model, and see if it improved or not.
Then, I run into an issue where I can either: 1. overwrite the original model with my new hyperparameters/design and re-run and analyze or 2. keep adding to the same notebook "page" with a new hypothesis/test/analysis loop, thus making the notebook pretty large. With number 1, I often want to backtrack and re-reference how a previous experiment went, but I lose that history. With number 2, it seems to get big pretty quickly, and coming back to the same notebook requires more setup, and "searching" the history gets more cumbersome.
Does anyone try using a separate notebook page for each experiment, maybe with a timestamp or "version"? Or is there a better way to do this in a single notebook? I am thinking that something like "chapters" could help me here, and it seems like this extension might help me: https://github.com/minrk/ipython_extensions#table-of-content...
These are ASCII-sortable:
0001_Introduction.ipynb
0010_Chapter-1.ipynb
ISO8601 w/ UTC is also ASCII sortable.
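A quick check that ISO 8601 UTC timestamps sort lexically in chronological order, which is why they work as notebook filename prefixes just like zero-padded chapter numbers (the filenames below are made up):

```python
# Lexical sort == chronological sort for fixed-width ISO 8601 UTC prefixes.
names = [
    "2017-12-01T090000Z_tuning.ipynb",
    "2017-03-15T140000Z_baseline.ipynb",
    "2017-12-01T083000Z_ablation.ipynb",
]
print(sorted(names))
```

This holds because every field is zero-padded and ordered most-significant first (year, month, day, time), so plain byte comparison agrees with time order.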
# Jupyter notebooks as lab notebooks
## Disadvantages
### Mutability
With a lab notebook, you can cross things out but they're still there.
- [ ] ENH: Copy cell and mark as don't execute (or wrap with ```language\n``` and change the cell type to markdown)
- [ ] ENH: add a 'Save and {git,} Commit' shortcut
CoCalc (was: SageMathCloud) has (somewhat?) complete notebook replay with a time slider; and multi-user collaborative editing. ("Time-travel is a detailed history of all your edits and everything is backed up in consistent snapshots.")
### Timestamps
You must add timestamps by hand; i.e. as #comments or markdown cells.
- [ ] ENH: add a markdown cell with a timestamp (from a configurable template) (with a keyboard shortcut)
### Project files
You must manage the non-.ipynb sources separately. (You can create a new file or folder, drag and drop to upload, or open a shell tab to `git status`, `git diff`, `git commit`, and `git push`, if the Jupyter/JupyterHub/CoCalc instance has network access to e.g. GitLab or GitHub.)
## Advantages
### Reproducibility
Executable I/O cells
The version_information and/or watermark extensions will inline the software versions that were installed when the notebook was last run
Dockerfile for OS config
Conda environment.yml (and/or pip requirements.txt and/or pipenv Pipfile) for further software dependencies
BinderHub can rebuild a docker image on receipt of a webhook from a git repo, push the built image to a docker image repository, and then host prepared Jupyter instances (with Kubernetes) which contain (and reproducibly archive) all of the preinstalled prerequisites.
Diff: `git diff`, `nbdime`
### Publishing
You can generate static HTML, HTML slides with RevealJS, interactive HTML slides with RISE, executable source with comments (e.g. a .py file), LaTeX, and PDF with 'Save as' or `jupyter nbconvert --to`. You can also create slides with nbpresent.
MyBinder.org and Azure Notebooks have badges for e.g. a README.md or README.rst which launch a project executably in a docker instance hosted in a cloud. CoCalc and Anaconda Cloud also provide hosted Jupyter Notebook projects.
You can template a gradable notebook with nbgrader.
GitHub renders .ipynb notebooks as HTML. Nbviewer renders .ipynb notebooks as HTML.
There are more than 90 Jupyter Kernels for languages other than Python.
https://github.com/quobit/awesome-python-in-education#jupyte...
How to teach technical concepts with cartoons
There's not a Wikipedia page for "visual metaphor", but there are pages for "visual rhetoric" https://en.wikipedia.org/wiki/Visual_rhetoric and "visual thinking" https://en.wikipedia.org/wiki/Visual_thinking
Negative space can be both meaningful and useful later on.
I learned about visual thinking and visual metaphor in application to business communications from "The Back of the Napkin: Solving Problems and Selling Ideas with Pictures" http://www.danroam.com/the-back-of-the-napkin/
Fact Checks
Indeed, fact checking systems are only as good as the link between identity credentialing services and a person.
http://schema.org/ClaimReview (as mentioned in this article) is a good start.
A few other approaches to be aware of:
"Reality Check is a crowd-sourced on-chain smart contract oracle system" [built on the Ethereum smart contracts and blockchain]. https://realitykeys.github.io/realitycheck/docs/html/
And standards-based approaches are not far behind:
W3C Credentials Community Group https://w3c-ccg.github.io/
W3C Verifiable Claims Working Group https://www.w3.org/2017/vc/WG/
W3C Verifiable News https://github.com/w3c-ccg/verifiable-news
In terms of verifying (or validating) subjective opinions, correlational observations, and inferences of causal relations; #LinkedMetaAnalyses of documents (notebooks) containing structured links to their data as premises would be ideal. Unfortunately, PDF is not very helpful in accomplishing that objective (in addition to being a terrible format for review with screen reader and mobile devices): I think HTML with RDFa (and/or CSVW JSONLD) is our best hope of making at least partially automated verification of meta analyses a reality.
DHS orders agencies to adopt DMARC email security
From https://www.cyberscoop.com/dhs-dmarc-mandate/ :
> By Jan. 2018, all federal agencies will be required to implement DMARC across all government email domains.
> Additionally, by Feb. 2018, those same agencies will have to employ Hypertext Transfer Protocol Secure (HTTPS) for all .gov websites, which ensures enhanced website certifications.
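For reference, a DMARC policy is published as a DNS TXT record under `_dmarc.` on the sending domain; a minimal example (the domain and report address are placeholders):

```
_dmarc.example.gov.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov"
```

`p=` sets the policy for mail that fails SPF/DKIM alignment (none, quarantine, or reject), and `rua=` is where receivers send aggregate reports.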
The electricity for 1BTC trade could power a house for a month
The article seems to imply that a 1BTC transaction requires 200kWh of energy.
First, what is the source for that number?
Second, what is the business interest of the quoted individual? Are they promoting competing services?
Third, how much energy does the supposed alternative really take, by comparison?
How much energy do these aspects of said business operations require:
- Travel to and from the office for n employees
- Dry cleaning for n employees' work clothes
- Lights for an office of how many square feet
- Fraud investigations: hours worked, postal costs, wait times, and the CPU time and bandwidth spent trying to reconcile data silos' ledgers' transaction IDs and time skew, with a full-table JOIN on data that no one party can have all of at once, from over here and over there
- Desktop machines' idle hours
- Server machines' idle hours
With low cost clean energy, these businesses are profitable; with a very different cost structure than traditional banking and trading.
PAC Fundraising with Ethereum Contracts?
I'll cc this here with formatting changes (extra \n and ---) for Hacker News:
---
### Background
- PAC: Political Action Committee https://en.wikipedia.org/wiki/Political_action_committee
- https://github.com/holographicio/awesome-token-sale
### Questions
- Is Civic PAC fundraising similar to e.g. a Crowdsale or a CappedCrowdsale or something else entirely, in terms of ERC20 OpenZeppelin solidity contracts?
- Would it be worth maintaining an additional contract for [PAC] "fundraising" with terminology that campaigns can understand; or a terminology map?
- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the risks of a token sale for a PAC?
--- Is there any way to check for donors' citizenship? (When/Where is it necessary to check donors' citizenship (with credit/debit cards or cryptocoins/cryptotokens?))
- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the costs of a token sale for a PAC?
--- How much gas would such a contract require?
- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the benefits of a token sale for a PAC?
---- Lower transaction fees than credit/debit cards?
---- Time limit (practicality, marketing)
---- Cap ("we only need this much")
---- Refunds in the event of […]
### Objectives
- Comply with all local campaign finance laws
--- Collect citizenship information for a Person
--- Collect citizenship information for an Organization 'person'
- Ensure that donations hold value
- Raise funds
- Raise funds up to a cap
- (Optionally?) collect names and contact information ( https://schema.org/Person https://schema.org/Organization )
- Optionally refund if the cap is not met
- Optionally change the cap midstream
- Optionally cancel for a specified string and/or URL reason
Here’s what you can do to protect yourself from the KRACK WiFi vulnerability
> But first, let’s clarify what an attacker can and cannot do using the KRACK vulnerability. The attacker can intercept some of the traffic between your device and your router. Attackers can’t obtain your Wi-Fi password using this vulnerability. They can just look at your traffic. It’s like sharing the same WiFi network in a coffee shop or airport.
From reading the articles:
( https://github.com/vanhoefm/krackattacks ; which is watch-able )
> Against these encryption protocols, nonce reuse enables an adversary to not only decrypt, but also to forge and inject packets.
https://www.kb.cert.org/vuls/id/228519
> Key reuse facilitates arbitrary packet decryption and injection, TCP connection hijacking, HTTP content injection, or the replay of unicast, broadcast, and multicast frames.
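Why nonce reuse is so damaging can be shown in a few lines of Python: when two messages are XORed with the same keystream, XORing the ciphertexts cancels the keystream, so one known plaintext immediately reveals the other (a toy stand-in for WPA2's actual cipher; the messages are made up):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two byte strings (truncated to the shorter one)."""
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)          # stands in for the per-nonce keystream
p1 = b"attack at dawn.."
p2 = b"PASSWORD=hunter2"
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# c1 XOR c2 == p1 XOR p2: the keystream cancels out entirely.
leaked = xor(xor(c1, c2), p1)       # an eavesdropper who guesses p1 recovers p2
print(leaked)
```

This is exactly why "nonce reuse enables an adversary to decrypt": the attacker never needs the key, only two frames encrypted under the same nonce.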
The Solar Garage Door – A Possible Alternative to the Emergency Generator
Is it possible to apply the https://solarwindow.com ($WNDW) glass coating to a non-glass garage door?
Using the Web Audio API to Make a Modem
Ask HN: How to introduce someone to programming concepts during 12-hour drive?
I won't go into details to keep this brief, but I'm going to spend a week with this client of mine's kid, and I'm supposed to teach him enough about programming for him to figure out if it's something he might be interested in pursuing.
He's about 20, and still struggling to finish high school, but he's smart (although perhaps a little weird).
I thought about introducing him to touch typing just to get a useful skill out of this regardless of the outcome. Then, I thought that during this week I'd teach him HTML and enough CSS to see what it's used for. I'm thinking that if he gets excited about typing code and seeing things happening, he'll want to study more and learn more advanced stuff in the future, and perhaps even make it his profession (this is what my client hopes will happen).
Now, part of this trip is a 12-hour drive. I thought I could use this time to introduce him to simple programming concepts. For instance, if asked to list all steps involved in starting a car, most people would say:
- turn key
- start car
That could turn into an infinite loop, though. A better way would be:
- turn key
- start car
- if it starts, exit
- if it doesn't start, repeat 3 more times
- if it still won't start, call a mechanic
Stuff like this: things anyone can understand, that can be explained without looking at a computer, but that are still useful.
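The car-starting steps above can be sketched as a bounded retry loop (the function names are made up for illustration):

```python
def start_car(turn_key):
    """Bounded retry: try the key up to 4 times total, then escalate."""
    for attempt in range(4):
        if turn_key():
            return "started"
    return "call a mechanic"

# A car that starts on the third try:
tries = iter([False, False, True])
print(start_car(lambda: next(tries)))   # started
```

The cap on attempts is exactly what turns the naive "turn key until it starts" infinite loop into a terminating procedure.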
Any idea what I could talk about? Examples, anecdotes, anything.
Computational Thinking:
https://en.wikipedia.org/wiki/Computational_thinking
> 1. Problem formulation (abstraction);
> 2. Solution expression (automation);
> 3. Solution execution and evaluation (analyses).
This is a good skills matrix to start with:
http://sijinjoseph.com/programmer-competency-matrix/
https://competency-checklist.appspot.com
"Think Python: How to Think Like a Computer Scientist"
http://www.greenteapress.com/thinkpython/html/index.html
K12CS Framework is good for all ages:
For syntax, learnxinyminutes:
https://learnxinyminutes.com/docs/python3/
https://learnxinyminutes.com/docs/javascript/
Good one, but that's kind of hard to do while driving.
To get a job, "Coding Interview University":
https://github.com/jwasham/coding-interview-university
This is actually pretty awesome :-)
Start simple, start small and start with something he's interested in.
There's the part about helping him discover whether he likes to create things through computers and whether he actually believes he can create things through computers. You're spot on that he might be interested about typing code but you'll have to figure out whether he's a visual person, a logical person, etc. For example, I got started learning to code once I understood what code can do to help automate things. A friend of mine got interested after seeing what websites he can build. Everyone is unique so you'll have to learn about him as you're trying to teach.
Good advice. Do you have any tip of how to figure out if he's a visual or logical person, etc.?
You can learn about a person's internal representation by asking Clean Questions and listening to the metaphors that they share; in order to avoid transferring and inferring your own biased internal representation (MAPS: metaphors, assumptions, paradigms or sensations).
It's worth reading this whole article (and e.g. "Clean Language: Revealing Metaphors and Opening Minds")
https://en.wikipedia.org/wiki/Clean_Language
"Metaphors We Live By" explains conceptual metaphor ("internal representation" w/ Clean Language / Symbolic Modeling) and lists quite a few examples: https://en.wikipedia.org/wiki/Conceptual_metaphor
Our human brains tend to infer Given, When, Then "rules" which we only later reason about in terms of causal relations: https://en.wikipedia.org/wiki/Given-When-Then
It's generally accepted that software is more correct when we start with tests:
Given : When : Then :: Precondition : Command : Postcondition https://wrdrd.github.io/docs/consulting/software-development...
... "Criteria for Success and Test-Driven-Development" https://westurner.github.io/2016/10/18/criteria-for-success-...
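The Given : When : Then :: Precondition : Command : Postcondition mapping can be sketched as a plain test (a minimal illustration, not from the linked docs; the `deposit` function is made up):

```python
# Given/When/Then mapped onto precondition / command / postcondition.

def deposit(balance, amount):
    """Command under test: add `amount` to `balance`."""
    return balance + amount

def test_deposit():
    # Given: a precondition (an account with a known balance)
    balance = 100
    # When: the command runs
    new_balance = deposit(balance, 25)
    # Then: the postcondition holds
    assert new_balance == 125

test_deposit()
```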
I believe it was Feynman who introduced the analogy:
desktop : filing cabinet :: RAM : hard drive
Here's a video: "Richard Feynman Computer Heuristics Lecture" (1985) https://youtu.be/EKWGGDXe5MA
Somewhere in my comments here, I talk about topologically sorting CS concepts; in what little time I spent, I think I suggested "Constructor Theory" (Deutsch 201?) as a first physical principle. https://en.wikipedia.org/wiki/Constructor_theory
> Constructor Theory
https://en.wikipedia.org/wiki/Constructor_theory#Outline
Task, Constructor, Computation Set, Computation Medium, Information Medium, Superinformation Medium (quantum states)
The filing cabinet and disk storage are information mediums / media.
How is the desktop / filing cabinet metaphor mismatched or limiting?
There may be multiple desktops (RAM/Cache/CPU; Computation mediums): is the problem parallelizable?
Consider a resource scheduling problem: there are multiple rooms, multiple projectors, and multiple speakers. Rooms and projectors each have a cost. Presenters could use all of an allotted period of time; or they could take more or less time. Some presentations are logically sequenceable (SHOULD/MUST be topologically sorted). Some presentations have a limited amount of time for questions afterward.
Solution: put talks online with an infinite or limited amount of time for asynchronous questions/comments
Solution: in between attending a presentation, also research and share information online (concurrent / asynchronous)
And, like a hash map, make the lookup time for a given typed resource ~O(1) with URLs (URIs) that don't change. (Big-O notation for computational complexity)
Resource scheduling (SLURM,): https://news.ycombinator.com/item?id=15267146
American Red Cross Asks for Ham Radio Operators for Puerto Rico Relief Effort
Zello trended up during hurricane Harvey:
> Push the button for instant, radio-style talk on any Wi-Fi or data plan.
> Access public and private channels.
> Choose button for push-to-talk.
> [...] available for Android, BlackBerry, iPhone, Windows PC and Windows Phone 8
...
> Connects to existing LMR radio systems
> All Radio Technologies
> Interconnect conventional and trunked analog FM, ETSI DMR, ETSI TETRA, MotoTRBO, APCO P25 FDMA, and NXDN.
They probably need some batteries, turbines, and solar cell chargers to get WiFi online?
> This phone needs no battery
http://www.techradar.com/news/this-phone-needs-no-battery
> [...] “We’ve built what we believe is the first functioning cellphone that consumes almost zero power,” said Shyam Gollakota, an associate professor in the Paul G. Allen School of Computer Science & Engineering at the UW and co-author on a paper describing the technology.
> Instead, the phone pulls power from its environment - either from ambient radio signals harvested by an antenna, or ambient light collected by a solar cell the size of a grain of rice. The device consumes just 3.5 microwatts of power during use.
> [...] “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere."
(Also trending on HackerNews right now: https://news.ycombinator.com/item?id=15350799 )
That's not at all useful, nor is it relevant.
Probably also worth mentioning Shelterpods and Responsepods for disaster relief deployments to this crowd; they're designed to take a lot of wind and rain:
https://store.advancedsheltersystemsinc.com/?___store=shelte...
https://store.advancedsheltersystemsinc.com/responsepod/vip/...
Technical and non-technical tips for rocking your coding interview
This is also a great resource, if you're into studying yourself:
"Coding Interview University" https://github.com/jwasham/coding-interview-university
Django 2.0 alpha
Can someone who uses Django, review this release? What are the most anticipated changes etc?
There's a new `path()` as an alternative to `url()` that allows easier-to-remember syntax for URL params, like `<int:id>` instead of the old `(?P<year>[0-9]{4})`. But if you need it, `url()` is still there so you can use the full power of regex in your URLs.
And if your shop still runs on Py2k, you'll need to move to Py3k to use Django 2.0. This also means that websockets support will not land in a Django version that works with Py2k.
How is `<int:id>` identical to `(?P<year>[0-9]{4})`? I'm talking about the {4}. Just an error in their blog post?
Does it support negative long integers?
EDIT: I am without actual internet or mobile tethering and am unable to `git clone https://github.com/django/django -b stable/2.0.x` and check out this convenient new feature.
It's still regex under the hood, the 'int' is just syntactic sugar for "[0-9]+":
https://github.com/django/django/blob/dda37675db0295baefef37...
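Per the linked source, the int converter compiles down to `[0-9]+`, so the `{4}` length constraint is indeed not preserved, and negative integers don't match either. A stdlib-only sketch of the difference (the regexes below are what the converter reduces to, checked without Django):

```python
import re

# path("<int:year>/") matches any run of digits -- unlike the old
# url() pattern (?P<year>[0-9]{4}), which required exactly four digits.
int_converter = re.compile(r"^(?P<year>[0-9]+)$")
four_digit = re.compile(r"^(?P<year>[0-9]{4})$")

assert int_converter.match("2017")
assert four_digit.match("2017")
assert int_converter.match("20170")        # int: accepts any length
assert four_digit.match("20170") is None   # {4} does not
# Neither matches a leading minus sign, so no negative integers:
assert int_converter.match("-3") is None
```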
Ask HN: What is the best way to spend my time as a 17-year-old who can code?
I'm 17 and I can code at a relatively high level. I'm not really sure what I should be doing. I would like to make some money, but is it more useful to me to contribute to open-source software to add to my portfolio or to find people who will hire me? Even most internships require you to be enrolled as a CS major at a college. I've also tried things like Upwork, but generally people aren't willing to hire a 17-year-old and the pay is very bad. Thanks for any advice!
My GitHub is: https://github.com/meyer9
Pick a #GlobalGoal or three that you find interesting and want to help solve.
Apply Computational Thinking to solving a given problem. Break it down into completeable tasks.
You can work on multiple canvasses at once: sometimes it's helpful to let things simmer on the back burner while you're taking care of business. Just don't spread yourself too thin: not everyone deserves your time.
Remember ERG theory (and Maslow's Hierarchy). Health and food and shelter are obviously important.
Keep lists of good ideas. Notecards, git, a nice fresh blank sheet of paper for the #someday folder. What to call it isn't important yet. "Thing1" and "Thing2".
You can spend time developing a portfolio, building up your skills, and continuing education. You can also solve a problem now.
You don't need a co-founder at first. You do need to plan to be part of a team: other people are good at other things; and that's the part they most enjoy doing.
Democrats fight FCC's plans to redefine “broadband” from 25+ to 10+ Mbps
The FCC redefined broadband as 25Mbps down and 3Mbps up as reported in the 2015 Broadband Progress Report (from 4Mbps/1Mbps in 2010).
https://www.fcc.gov/reports-research/reports/broadband-progr...
Ask HN: Any detailed explanation of computer science
Any detailed easily understandable explanation of computer science from bottom-up like Feynman's lectures explanation of physics.
Bits
Boolean algebra
Boolean logic gates / (set theory)
CPU / cache
Memory / storage
Data types (signed integers, floats, decimals, strings), encoding
...
A bottom-up (topologically sorted) computer science curriculum ontology (a depth-first traversal of a Thing graph) would be a great teaching resource.
One could start with e.g. "Outline of Computer Science", add concept dependency edges, and then topologically (and alphabetically or chronologically) sort.
https://en.wikipedia.org/wiki/Outline_of_computer_science
There are many potential starting points and traversals toward specialization for such a curriculum graph of schema:Things/skos:Concepts with URIs.
How to handle classical computation as a "collapsed" subset of quantum computation? Maybe Constructor Theory?
https://en.wikipedia.org/wiki/Constructor_theory
From "Resources to get better at theoretical CS?" https://news.ycombinator.com/item?id=15281776 :
- "Open Source Society University: Path to a self-taught education in Computer Science!" https://github.com/ossu/computer-science
This is also great:
- "Coding Interview University" https://github.com/jwasham/coding-interview-university
Neither these nor the ACM Curriculum are specifically topologically sorted.
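A toy sketch of what topologically sorting such a curriculum graph could look like, using the stdlib `graphlib` (the concept names and prerequisite edges here are illustrative, not from any official curriculum):

```python
from graphlib import TopologicalSorter

# node -> set of prerequisite concepts (must come earlier)
prereqs = {
    "boolean algebra": {"bits"},
    "logic gates": {"boolean algebra"},
    "CPU / cache": {"logic gates"},
    "data types": {"bits"},
    "memory / storage": {"bits"},
}
order = list(TopologicalSorter(prereqs).static_order())
# Prerequisites always precede the concepts that depend on them:
assert order.index("bits") < order.index("boolean algebra")
assert order.index("boolean algebra") < order.index("logic gates")
print(order)
```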
Ask HN: What algorithms should I research to code a conference scheduling app
I'm interested in writing a utility to assist with scheduling un-conferences. Lets take the following situation for an example:
* 4 conference rooms across 4 time slots, for a total of 16 talks.
* 30 proposed talks
* 60 total participants
Each user would be given 4 (?) votes, unranked. (Collection of the votes is a separate topic.) Voting is not secret, and we don't need mathematically precise results. The goal is just to minimize conflicts.
The algorithm would have the following data to work with:
* List of talks with the following properties:
* presenter participant ID
* the participant ID for each user that voted for the talk
I'd like to come up with an algorithm that does the following:
* fills all time slots with the highest-voted topics
* attempts to avoid overlapping votes for any given user in a given time slot
* attempts not to schedule a presenter's talk during a talk they are interested in
* Sugar on top: implement ranked preferences
My question: where do I start to research the algorithms that will be helpful? I know this is a huge project, but I have a year to work on it. I'm also not overly concerned with performance, but would like to keep it from being exponential.
Thank you for any references you can provide!
Resource scheduling, CSP (Constraint Satisfaction programming)
CSP: https://en.wikipedia.org/wiki/Constraint_satisfaction_proble...
Scheduling (production processes):
https://en.wikipedia.org/wiki/Scheduling_(production_process...
Scheduling (computing):
https://en.wikipedia.org/wiki/Scheduling_(computing)
... To an OS, a process thread has a priority and sometimes a CPU affinity.
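For intuition, here's a tiny brute-force sketch of the un-conference problem (numbers shrunk from the 4x4 example; all talks and votes are made up): pick the top-voted talks, then try every assignment of talks to slots and keep the one with the fewest voter conflicts. A real CSP solver scales far better, but the objective is the same.

```python
from itertools import permutations

ROOMS, SLOTS = 2, 2
votes = {  # talk -> set of participant IDs who voted for it
    "A": {1, 2, 3}, "B": {1, 4}, "C": {2, 5}, "D": {3, 4, 5}, "E": {1},
}
# Fill all slots with the highest-voted talks:
talks = sorted(votes, key=lambda t: len(votes[t]), reverse=True)[:ROOMS * SLOTS]

def conflicts(assignment):
    # Talks assignment[s*ROOMS:(s+1)*ROOMS] run concurrently in slot s;
    # count voters who wanted two talks that ended up in the same slot.
    total = 0
    for s in range(SLOTS):
        concurrent = assignment[s * ROOMS:(s + 1) * ROOMS]
        for i, a in enumerate(concurrent):
            for b in concurrent[i + 1:]:
                total += len(votes[a] & votes[b])
    return total

best = min(permutations(talks), key=conflicts)
print(best, conflicts(best))
```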
What have been the greatest intellectual achievements?
- The internet (TCP/IP) and world wide web (HTML, HTTP).
History of the Internet:
https://en.wikipedia.org/wiki/History_of_the_Internet
History of the World Wide Web:
https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web
- Relational algebra, databases, Linked Data (RDF,).
Relational algebra:
https://en.wikipedia.org/wiki/Relational_algebra
Relational database:
https://en.wikipedia.org/wiki/Relational_database
Linked Data:
https://en.wikipedia.org/wiki/Linked_data
RDF:
https://en.wikipedia.org/wiki/Resource_Description_Framework
- CRISPR/Cas9, CRISPR/Cpf1
https://en.wikipedia.org/wiki/CRISPR
Cas9:
https://en.wikipedia.org/wiki/Cas9
CRISPR/Cpf1:
https://en.wikipedia.org/wiki/CRISPR/Cpf1
- Tissue Nanotransfection
- Time, Calendars
Time > History of the calendar: https://en.wikipedia.org/wiki/Time#History_of_the_calendar
- Standard units of measure (QUDT URIs)
How about Tesla?!
Nikola Tesla > AC (alternating current) and the induction motor:
https://en.wikipedia.org/wiki/Nikola_Tesla#AC_and_the_induct...
Induction motor:
Ask HN: What can't you do in Excel? (2017)
Was just Googling around for whether Excel (sans VBA scripting, of course) is Turing-complete, in order to decide whether to tell a layperson that Excel (or spreadsheeting in general) can be considered very much like programming. Came across this 2009 HN thread, "Ask HN: What can't you do in Excel?" from pg:
> One of the startups in the current YC cycle is making a new, more powerful spreadsheet. If there are any Excel power users here, could you please describe anything you'd like to be able to do that you can't currently? Your reward could be to have some very smart programmers working to solve your problem.
https://news.ycombinator.com/item?id=429477
What significant advances -- in Excel/spreadsheets, not the Turing-complete thing -- have been made in the 8 years since? What's the YC startup from that cycle that "is making a new, more powerful spreadsheet", and what is it doing today? I remember Grid [0], but that was from 2012. Any other companies make innovations that would overturn the spreadsheet paradigm, or at least be copied by Excel/OO/GSheets?
A commenter mentioned "Queries", since many spreadsheet users use spreadsheets like a database. I just recently noticed that GSheets has a QUERY function [1] that "uses principles of Structured Query Language (SQL) to do searches". The function has been around since 2015 (according to Internet Archive [2]), so perhaps I ignored it because its description then was simply, "Runs a Google Visualization API Query Language query across data."
It appears that "Visualization API Query Language" has a lot of SQL-type features with the immediately obvious exception of joins [3].
edit: Multiple people said they would like Excel to have online functionality, i.e. like Google Sheets, but being able to accept VBA and any other features of legacy Excel spreadsheets. There's now Excel Online but I haven't used it (still sticking to Office 2011 for Mac if I ever need to use Excel instead of GS). How seamless is the transition from offline, legacy Excel files to online Excel?
[0] http://blog.ycombinator.com/grid-yc-s12-reinvents-the-spreadsheet-for-the/
[1] https://support.google.com/docs/answer/3093343?hl=en
[2] http://web.archive.org/web/20150319144449/https://support.google.com/docs/answer/3093343?hl=en
[3] https://developers.google.com/chart/interactive/docs/querylanguage
Topological sort on cells based on formula references.
Excel sheets are often highly convoluted in cell cross-references in formulas. It would help to have a clean-up mechanism that performs a topological sort on all the cells with formulas and puts them in a more natural order. It would help to be able to identify the backward references even if the cells are not automatically rearranged.
+1 for Topological sort of formulas (e.g. into a Jupyter notebook; as e.g. Python)
https://www.reddit.com/r/statistics/comments/2arevn/how_to_u...
https://www.reddit.com/r/personalfinance/comments/1739oc/imp...
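A minimal sketch of topologically sorting cells by the references in their formulas, again with the stdlib `graphlib` (cell names and formulas below are made up; a real parser would need to handle ranges, sheets, etc.):

```python
import re
from graphlib import TopologicalSorter

formulas = {
    "C1": "=A1+B1",
    "D1": "=C1*2",
    "A1": "=5",
    "B1": "=A1+1",
}
# cell -> set of other cells its formula references
refs = {cell: set(re.findall(r"[A-Z]+[0-9]+", f)) & set(formulas)
        for cell, f in formulas.items()}
order = list(TopologicalSorter(refs).static_order())
# Evaluation order: referenced cells come before the formulas using them.
assert order.index("A1") < order.index("C1") < order.index("D1")
print(order)
```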
[deleted]
The real data structure for financial reporting and analysis is hyperdimensional, like it or not:
https://en.wikipedia.org/wiki/XBRL
After a 15-year struggle, digital financial reports are a success in the U.S., and the Europeans are following suit.
---
Also, many people use a spreadsheet when they really want a database. In the office world that would be Access instead of Excel; I like the idea of Access but the implementation is an uncomfortable place of having a complex GUI and having to know some SQL.
---
Finally, decimal arithmetic. Financial calculations should not have the artifacts that come from trying to represent (1/100)th in base 2.
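The base-2 artifact is easy to demonstrate with the stdlib `decimal` module (a minimal sketch; the amounts are illustrative):

```python
from decimal import Decimal

# Binary floats cannot represent 1/100 exactly:
assert 0.1 + 0.2 != 0.3                                  # float artifact
# Decimal arithmetic can:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
assert sum([Decimal("0.01")] * 100) == Decimal("1.00")   # 100 cents = $1.00
```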
W3C RDF Data Cubes (qb:)
https://wrdrd.github.io/docs/consulting/knowledge-engineerin...
> RDF Data Cubes vocabulary is an RDF standard vocabulary for expressing linked multi-dimensional statistical data and aggregations.
> Data Cubes have dimensions, attributes, and measures
> Pivot tables and crosstabulations can be expressed with RDF Data Cubes vocabulary
And then SDMX is widely used internationally:
https://github.com/pandas-dev/pandas/issues/3402#issuecommen...
Linked Data.
> [...] 7 metadata header rows (column label, property URI path, DataType, unit, accuracy, precision, significant figures)
https://wrdrd.github.io/docs/consulting/linkedreproducibilit...
Specifically, CSVW JSONLD as a lossless output format.
CSVW supports physical units.
https://twitter.com/westurner/status/901990866704900096
> "Model for Tabular Data and Metadata on the Web" (#JSONLD, #RDFa HTML) is for Data on the Web #dwbp #linkeddata https://www.w3.org/TR/tabular-data-model/
> #CSVW defaults to xsd:string if unspecified. "How do you support units of measure?" #qudt https://www.w3.org/TR/tabular-data-primer/#units-of-measure
Elon Musk Describes What Great Communication Looks Like
"The world is flat!"*
https://en.wikipedia.org/wiki/The_World_Is_Flat
Check out Thomas L. Friedman (@tomfriedman): https://twitter.com/tomfriedman
Great Ideas in Theoretical Computer Science
List of important publications in computer science https://en.wikipedia.org/wiki/List_of_important_publications...
https://github.com/papers-we-love/papers-we-love#other-good-...
Ask HN: How do you, as a developer, set measurable and actionable goals?
I see a lot of people from other industries, say designers or salespeople, who can set for themselves actionable and measurable goals such as "Make one illustration a day", "Make a logo a day", "Sell X units of Y product a day", "Make X amount of dollars selling product Z by date X", etc.
How do you, as a developer, set measurable goals for yourself, be it at work or in your side hobby?
Some companies use some kind of agile methodology to manage the work:
https://en.wikipedia.org/wiki/Agile_software_development
Two of the most widely used are Scrum and Kanban,
which could be used to track how you're doing against yourself, although they're recommended not to be used for evaluating employees, so take this with a grain of salt.
Burn down chart (each story has complexity points; making it possible to estimate velocity and sprint deadlines):
https://en.wikipedia.org/wiki/Burn_down_chart
User stories in a "story map" (Kanban board) with labels and/or milestones for epics, flights, themes:
https://en.wikipedia.org/wiki/User_story#Story_map
Software Development > Requirements Management > Agile Modeling > User Story: https://wrdrd.github.io/docs/consulting/software-development...
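The velocity estimate from a burn-down chart is simple arithmetic; a back-of-the-envelope sketch (all numbers are illustrative):

```python
# Velocity = average story points completed per sprint;
# remaining sprints = remaining points / velocity.
completed_points_per_sprint = [21, 18, 24]  # past sprints
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
remaining_points = 84
sprints_left = remaining_points / velocity
print(f"velocity: {velocity:.1f} pts/sprint, ~{sprints_left:.1f} sprints left")
```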
Bitcoin Energy Consumption Index
... Speaking of environmental externalities,
In the US, "Class C" fire extinguishers work on electrical fires:
From Fire_class#Electrical:
https://en.wikipedia.org/wiki/Fire_class#Electrical
> Carbon dioxide CO2, NOVEC 1230, FM-200 and dry chemical powder extinguishers such as PKP and even baking soda are especially suited to extinguishing this sort of fire. PKP should be a last resort solution to extinguishing the fire due to its corrosive tendencies. Once electricity is shut off to the equipment involved, it will generally become an ordinary combustible fire.
> In Europe, "electrical fires" are no longer recognized as a separate class of fire as electricity itself cannot burn. The items around the electrical sources may burn. By turning the electrical source off, the fire can be fought by one of the other class of fire extinguishers [citation needed].
How does this compare to carbon-intensive resource extraction operations like gold mining?
(Gold is industrially and medically useful, IIUC)
See also:
"So, clean energy incentives" https://news.ycombinator.com/item?id=15070430
Dancing can reverse the signs of aging in the brain
"Dancing or Fitness Sport? The Effects of Two Training Programs on Hippocampal Plasticity and Balance Abilities in Healthy Seniors"
Front. Hum. Neurosci., 15 June 2017 | https://doi.org/10.3389/fnhum.2017.00305
Adult neurogenesis:
https://en.wikipedia.org/wiki/Adult_neurogenesis
IIUC:
{Omega 3/6, Cardiovascular exercise,} -> Endocannabinoids -> [Hippocampal,] neurogenesis
"Neurobiological effects of physical exercise" (Hippocampal plasticity, neurogenesis,)
https://en.wikipedia.org/wiki/Neurobiological_effects_of_phy...
"Study: Omega-3 fatty acids fight inflammation via cannabinoids" https://news.illinois.edu/blog/view/6367/532158 (Omega 6: Omega 3 ratio)
scholar.google q=cannabinoid+neurogenesis https://scholar.google.com/scholar?q=cannabinoid+neurogenesi...
Functions of the ECS (Endocannabinoid System):
https://en.wikipedia.org/wiki/Endocannabinoid_system#Functio...
- #Role-in-hippocampal-neurogenesis, "runners high"
Rumours swell over new kind of gravitational-wave sighting
This might be a naive question, but I'd like to clear it out.
Do gravitational waves travel at the speed of light?
I know the theory says nothing can travel faster than light. I also know that photons can be seen as quanta or as waves. So my guess is that gravitational waves travel at most, at the speed of light.
But do they? or do they travel slower? faster? Is there a doppler effect for GWs?
I ask because I would think ripples in the space-time fabric itself might be a bit different than light waves or other more studied phenomena.
Can anyone point me in the right direction?
These PBS Spacetime episodes should help:
The Speed of Light is NOT About Light - https://www.youtube.com/watch?v=msVuCEs8Ydo
Is Quantum Tunneling Faster than Light? - https://www.youtube.com/watch?v=-IfmgyXs7z8
The Quantum Experiment that Broke Reality - https://www.youtube.com/watch?v=RlXdsyctD50
Pilot Wave Theory and Quantum Realism - https://www.youtube.com/watch?v=RlXdsyctD50
The Future of Gravitational Waves - https://www.youtube.com/watch?v=eJ2RNBAFLj0
Thanks!
"Neutron star": https://en.wikipedia.org/wiki/Neutron_star
New Discovery Simplifies Quantum Physics
OpenAI has developed new baseline tool for improving deep reinforcement learning
https://blog.openai.com/openai-baselines-dqn/ (May 2017)
Deep Learning RL (Reinforcement Learning) algos in this batch of OpenAI RL baselines: DQN, Double Q Learning, Prioritized Replay, Dueling DQN
Src: https://github.com/openai/baselines
[deleted]
https://blog.openai.com/baselines-acktr-a2c/ (August 2017)
ACKTR & A2C (~=A3C)
(The GitHub readme lists: A2C, ACKTR, DDPG, DQN, PPO, TRPO)
... openai/baselines/commits/master: https://github.com/openai/baselines/commits/master
The prior can generally only be understood in the context of the likelihood
Bayes assumes/requires conditional independence of observations; which is sometimes the case.
For example:
- Are the positions of the Earth and the Moon conditionally independent? No.
- In the phrase "the dog and the cat", are "and" and "the" independent? No.
- In a biological system, are we to assume conditional independence? We should not.
https://en.wikipedia.org/wiki/Conditional_independence
...
"Efficient test for nonlinear dependence of two continuous variables" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4539721/
- In no particular sequence: CANOVA, ANOVA, Pearson, Spearman, Kendall, MIC, Hoeffding
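A pure-Python sketch of why a linear test like Pearson can miss nonlinear dependence (the classic y = x**2 example; not from the cited paper): y is fully determined by x, yet their linear correlation is ~0.

```python
from math import sqrt

def pearson(xs, ys):
    # Sample Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [x / 10 for x in range(-50, 51)]  # symmetric around 0
ys = [x * x for x in xs]               # perfectly dependent on xs
r = pearson(xs, ys)
assert abs(r) < 1e-6  # ...yet linearly "uncorrelated"
```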
From https://plato.stanford.edu/entries/logic-inductive/ :
> It is now generally held that the core idea of Bayesian logicism is fatally flawed—that syntactic logical structure cannot be the sole determiner of the degree to which premises inductively support conclusions. A crucial facet of the problem faced by Bayesian logicism involves how the logic is supposed to apply to scientific contexts where the conclusion sentence is some hypothesis or theory, and the premises are evidence claims. The difficulty is that in any probabilistic logic that satisfies the usual axioms for probabilities, the inductive support for a hypothesis must depend in part on its prior probability. This prior probability represents how plausible the hypothesis is supposed to be based on considerations other than the observational and experimental evidence (e.g., perhaps due to relevant plausibility arguments). A Bayesian logicist must tell us how to assign values to these pre-evidential prior probabilities of hypotheses, for each of the hypotheses or theories under consideration. Furthermore, this kind of Bayesian logicist must determine these prior probability values in a way that relies only on the syntactic logical structure of these hypotheses, perhaps based on some measure of their syntactic simplicities. There are severe technical problems with getting this idea to work. Moreover, various kinds of examples seem to show that such an approach must assign intuitively quite unreasonable prior probabilities to hypotheses in specific cases (see the footnote cited near the end of section 3.2 for details). Furthermore, for this idea to apply to the evidential support of real scientific theories, scientists would have to formalize theories in a way that makes their relevant syntactic structures apparent, and then evaluate theories solely on that syntactic basis (together with their syntactic relationships to evidence statements). Are we to evaluate alternative theories of gravitation (and alternative quantum theories) this way?
>"This prior probability represents how plausible the hypothesis is supposed to be based on considerations other than the observational and experimental evidence (e.g., perhaps due to relevant plausibility arguments)."
I guess I don't know how "Bayesian logicism" differs from "Bayesian probability", but this is totally false in the latter case. The prior is just supposed to be independent of the current data (e.g. devised before it was collected). In practice, info almost always leaks into the prior + model via tinkering. That is why a priori predictions are so important to proving you are onto something.
Bayesian logicism is the logic derived from Bayesian probability.
Magic numbers are an anti-pattern: which constants are what and why should be justified OR it should be shown that a non-expert-biased form converges regardless.
The use of the term prior probability in that paragraph is not consistent with its use in bayesian probability, so something is wrong.
Also, I am not sure what magic numbers you are referring to.
Ask HN: How to find/compare trading algorithms with Quantopian?
I found this, which links to a number of quantitative trading algorithms that significantly outperform as compared with SPY (an S&P 500 ETF):
"Community Algorithms Migrated to Quantopian 2"
https://www.quantopian.com/posts/community-algorithms-migrat...
Why even build a business, create jobs, and solve the world's problems?
... "Impact investing"
https://en.wikipedia.org/wiki/Impact_investing
"Is this a good way to invest in solving for the #GlobalGoals for Sustainable Development ( https://GlobalGoals.org )?"
Ask HN: How do IPOs and ICOs help a business raise capital?
Ask HN: How do IPOs and ICOs help a business raise capital?
IPO: "Initial Public Offering"
https://en.wikipedia.org/wiki/Initial_public_offering
ICO: "Initial Coin Offering"
https://en.wikipedia.org/wiki/Initial_coin_offering
"""
IPO: "Initial Public Offering"
https://en.wikipedia.org/wiki/Initial_public_offering
ICO: "Initial Coin Offering"
https://en.wikipedia.org/wiki/Initial_coin_offering
"""
Solar Window coatings “outperform rooftop solar by 50-fold”
MS: Bitcoin mining uses as much electricity as 1M US homes
So, clean energy incentives.
> That means 1.2% of the Sahara desert is sufficient to cover all of the energy needs of the world in solar energy.
https://www.forbes.com/sites/quora/2016/09/22/we-could-power...
Nearly all other animals on the planet survive entirely on solar energy.
Ask HN: What are your favorite entrepreneurship resources
Hey everyone, I'm teaching an undergraduate class in the fall at a local university here in Miami (FIU) and would love your recommendations on what books or articles or frameworks you think the students should read. My goal for the class is to teach them how to identify problems and prototype solutions for those problems. Hopefully, they make some money from them to help pay for books, etc.
I put these notes together:
Entrepreneurship: https://wrdrd.github.io/docs/consulting/entrepreneurship
- #plan-for-failure
- #plan-for-success
Investing > Capitalization Table: https://wrdrd.github.io/docs/consulting/investing#capitaliza...
- I'll add something about Initial Coin Offerings (which are now legal in at least Delaware).
AngelList ( https://angel.co for VC jobs and funding ) asks "What's the most useful business-related book you've ever read?" ... Getting Things Done (David Allen), 43Folders = 12 months + 31 days (Merlin Mann), The Art of the Start (Guy Kawasaki), The Personal MBA (Josh Kaufman)
Lever ( https://www.lever.co ) makes recruiting and hiring (some parts of HR) really easy.
LinkedIn ( https://www.linkedin.com ) also has a large selection of qualified talent: https://smallbusiness.linkedin.com/hiring
... How much can you tell about a candidate from what they decide to write on themselves on the internet?
USA Small Business Administration: "10 steps to start your business." https://www.sba.gov/starting-business/how-start-business/10-...
"Startup Incorporation Checklist: How to bootstrap a Delaware C-corp (or S-corp) with employee(s) in California" https://github.com/leonar15/startup-checklist
Jupyter Notebook (was: IPython Notebook) notebooks are diff'able and executable. Spreadsheets can be hard to review. https://github.com/jupyter/notebook
It's now installable with one conda command: ``conda install -y notebook pandas qgrid``
CPU Utilization is Wrong
Instructions per cycle: https://en.wikipedia.org/wiki/Instructions_per_cycle
What does IPC tell me about where my code could/should be async so that it's not stalled waiting for IO? Is combined IO rate a useful metric for this?
There's an interesting "Cost per GFLOPs" table here: https://en.wikipedia.org/wiki/FLOPS
Btw these are great, thanks: http://www.brendangregg.com/linuxperf.html
( I still couldn't fill this out if I tried: http://www.brendangregg.com/blog/2014-08-23/linux-perf-tools... )
It's not that your code should be async, but that it should be more cache-friendly. It's probably mostly stalled waiting for RAM.
Ask HN: Can I use convolutional neural networks to classify videos on a CPU
Is there any way that I can use conv nets to classify videos on a CPU? I do not have GPUs but I want to classify videos.
There's a table with runtime comparisons for a convnet here: https://github.com/ryanjay0/miles-deep/ (GPU CuDNN: 15s, GPU: 19s, CPU: 159s)
(Also written w/ Caffe: https://github.com/yahoo/open_nsfw)
Esoteric programming paradigms
Re: "Dependent Types"
In Python, PyContracts supports runtime type-checking and value constraints/assertions (as @contract decorators, annotations, and docstrings).
https://andreacensi.github.io/contracts/
Unfortunately, there's not yet a unifying syntax between PyContracts and the newer Python type annotations, which mypy checks at compile time.
https://github.com/python/typeshed
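A stdlib-only sketch of the kind of runtime type-checking PyContracts provides (the `checked` decorator below is hypothetical, built from `typing.get_type_hints`; it is not PyContracts' actual API and only handles positional args with simple types):

```python
import functools
import typing

def checked(func):
    hints = typing.get_type_hints(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Parameter names occupy the front of co_varnames.
        for name, value in zip(func.__code__.co_varnames, args):
            if name in hints and not isinstance(value, hints[name]):
                raise TypeError(f"{name} must be {hints[name].__name__}")
        return func(*args, **kwargs)
    return wrapper

@checked
def scale(x: int, factor: int) -> int:
    return x * factor

assert scale(3, 4) == 12
try:
    scale("3", 4)          # violates the x: int annotation
except TypeError:
    pass
else:
    raise AssertionError("expected a TypeError")
```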
What does it mean for types to be "a first class member of" a programming language?
Reasons blog posts can be of higher scientific quality than journal articles
So, schema.org, has classes (C:) -- subclasses of CreativeWork and Article -- for property (P:) domains (D:) and ranges (R:) which cover this domain:
- CreativeWork: http://schema.org/CreativeWork
- - BlogPosting: http://schema.org/BlogPosting
- - Article: http://schema.org/Article
- - - NewsArticle: http://schema.org/NewsArticle
- - - Report: http://schema.org/Report
- - - ScholarlyArticle: http://schema.org/ScholarlyArticle
- - - SocialMediaPosting: http://schema.org/SocialMediaPosting
- - - TechArticle: http://schema.org/TechArticle
Thing: (name, [url], [identifier], [#about], [description[_gh_markdown_html]])
- C: CreativeWork:
- - P: comment R: Comment
- - C: Comment: https://schema.org/Comment
Fact Check now available in Google Search and News
So, publishers can voluntarily add https://schema.org/ClaimReview markup as RDFa, JSON-LD, or Microdata.
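For the JSON-LD flavor, a minimal sketch of ClaimReview markup built as a plain dict (the claim, rating, and publisher name are made up; schema.org documents the full property list):

```python
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Example claim being checked",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",
    },
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
}
jsonld = json.dumps(claim_review, indent=2)
assert json.loads(jsonld)["@type"] == "ClaimReview"
print(jsonld)  # embed in a <script type="application/ld+json"> tag
```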
Ask HN: Is anyone working on CRISPR for happiness?
"studies have found that genetic influences usually account for 35-50% of the variance in happiness measures"
No doubt there are many reasons why this is extremely complicated
Roadmap to becoming a web developer in 2017
Nice.
- https://github.com/fkling/JSNetworkX would be a cool way to build interactive schema:Thing/CreativeWork curriculum graph visualizations (and BFS/DFS traversal)
- #WebSec: https://wrdrd.com/docs/consulting/web-development#websec
- Web Development Checklist: https://wrdrd.com/docs/consulting/web-development#web-develo...
-- http://webdevchecklist.com/
- | Web Frameworks (GitHub Sphinx wiki (./Makefile)): https://westurner.org/wiki/webframeworks (| Wikipedia, | Homepage, Source, Docs,)
Ask HN: How do you keep track/save your learnings?(so that you can revisit them)
- Vim Voom: `:Voom rest` , ':Voom markdown`
- Jupyter notebooks
- Sphinx docs: https://wrdrd.com/docs/consulting/research#research-tools src: https://github.com/wrdrd/docs/blob/master/docs/consulting/re...
- Sphinx wiki (./Makefile):
-- Src: https://github.com/westurner/wiki
-- Src: https://github.com/westurner/wiki/wiki
-- Web: https://westurner.org/wiki/workflow
interesting stuff, thanks for sharing!
Use a spaced repetition flashcard program like Anki or Mnemosyne or one of the others (https://en.wikipedia.org/wiki/List_of_flashcard_software) and use it on a regular basis, preferably every day.
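The scheduling behind these tools is roughly the SM-2 algorithm; a simplified sketch of one review step (Anki's actual scheduler differs in details):

```python
def sm2_update(quality, reps, interval, ef):
    """One SM-2 review step: quality is self-rated recall 0-5;
    reps/interval/ef are the card's state. Returns new (reps, interval, ef)."""
    if quality < 3:          # failed recall: restart the schedule
        return 0, 1, ef
    # Ease factor drifts with recall quality, floored at 1.3.
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval = 1         # first success: review tomorrow
    elif reps == 1:
        interval = 6         # second success: review in 6 days
    else:
        interval = round(interval * ef)   # then grow geometrically
    return reps + 1, interval, ef

# A card recalled perfectly three times in a row, starting at EF 2.5:
state = (0, 0, 2.5)
for _ in range(3):
    state = sm2_update(5, *state)
print(state)   # intervals went 1 -> 6 -> 17 days
```

The geometric growth of intervals is what makes daily use cheap: mature cards come back rarely.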
[deleted]
Ask HN: Criticisms of Bayesian statistics?
In tech circles, it seems that Bayesian statistics is often favored over classical frequentist statistics. In my study of both Bayesian and frequentist statistics, it seems that the results of a Bayesian analysis are generally more intuitive, such as when comparing Bayesian credible intervals to frequentist confidence intervals. It also seems like Bayesian analysis avoids what I think is one of the most serious problems in analysis, the multiple comparisons problem. It's been easy for me to find any number of Bayesian critiques of frequentist stats, but I have rarely seen frequentist defenses against Bayesian stats. This may simply be because I mostly read technology related sites as opposed to more general statistics oriented sites. As such, I would really appreciate hearing some frequentist critiques of Bayesian stats. I feel like the situation can't be as cut and dry as one being better than the other in all things, so I would like to acquire a more balanced perspective by hearing about the other side. Thanks!
~bayesian logicism
https://plato.stanford.edu/entries/logic-inductive/ :
> It is now generally held that the core idea of Bayesian logicism is fatally flawed—that syntactic logical structure cannot be the sole determiner of the degree to which premises inductively support conclusions. [...]
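On the credible-vs-confidence-interval point in the question: a Bayesian credible interval can be computed directly from the posterior. A minimal grid-approximation sketch for a Beta-Binomial coin model (the counts are invented for illustration; with a flat prior the posterior is Beta(heads+1, tails+1)):

```python
# 95% credible interval for a coin's heads probability p, via a grid
# approximation of the posterior (flat Beta(1,1) prior, 7 heads / 3 tails).
heads, tails = 7, 3
n = 10001
grid = [i / (n - 1) for i in range(n)]
# Unnormalised posterior density on the grid: p^heads * (1-p)^tails
weights = [p**heads * (1 - p)**tails for p in grid]
total = sum(weights)
cdf, acc = [], 0.0
for w in weights:
    acc += w / total
    cdf.append(acc)
lo = next(p for p, c in zip(grid, cdf) if c >= 0.025)
hi = next(p for p, c in zip(grid, cdf) if c >= 0.975)
print(f"95% credible interval for p: [{lo:.3f}, {hi:.3f}]")
```

The interpretation is the intuitive one the question alludes to: given the prior and the data, p lies in this interval with 95% probability; a frequentist confidence interval makes a statement about the procedure, not the parameter.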
80,000 Hours career plan worksheet
> What are your best medium-term options (3-15 years)?
> 1. What global problems do you think are most pressing?
The 17 UN Sustainable Development Goals (SDGs) and 169 targets w/ statistical indicators, AKA GlobalGoals, are for the whole world through 2030.
https://en.wikipedia.org/wiki/Sustainable_Development_Goals
"Schema.org: Mission, Project, Goal, Objective, Task" https://news.ycombinator.com/item?id=12525141 could make it easy to connect our local, regional, national, and global goals; and find people with similar objectives and solutions.
World's first smartphone with a molecular sensor is coming in 2017
> Looking at the back of the phone, you'd be forgiven for thinking the sensor is just the phone's camera. But that odd-looking dual lens is the scanner, basically the embedded version of the SCiO. It uses spectrometry to shine near-infrared light on objects — fruit, liquids, medicine, even your body — to analyze them.
> Say you're at the supermarket and you want to check how fresh the tomatoes are. Instead of squeezing them, you'd just launch the SCiO app, hold the scanner up to the skin of the tomato, and it will tell you how fresh it is on a visual scale. Do the same thing to your body and you can check your body mass index (BMI). You need to specify the thing you're scanning at the outset, and the actual analysis is performed in the cloud, but the whole process is a matter of seconds, not minutes.
https://en.wikipedia.org/wiki/Spectroscopy
... Tricorder X PRIZE: https://en.wikipedia.org/wiki/Tricorder_X_Prize
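The physics these scanners rely on is absorption spectroscopy; the Beer-Lambert law relates measured absorbance to concentration. A toy sketch with made-up numbers (real molar absorptivities depend on the substance and wavelength):

```python
# Beer-Lambert law: A = epsilon * c * l, transmittance T = 10**(-A).
def absorbance(epsilon, concentration, path_length):
    """A = e*c*l  (e in L/(mol*cm), c in mol/L, l in cm)."""
    return epsilon * concentration * path_length

# Illustrative values only, not a real calibration:
A = absorbance(epsilon=120.0, concentration=0.01, path_length=1.0)
T = 10 ** (-A)
print(f"A = {A:.2f}, transmittance = {T:.4f}")
```

Devices like the SCiO measure transmittance/reflectance at many near-infrared wavelengths and invert a calibrated model of this relation to estimate composition.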
Ask HN: How would one build a business that only develops free software?
So I was reading Richard Stallman's blog on why you should not use google/uber/apple/twitter etc and I understand his reasoning. But what I don't understand is how would one go about building a startup or business that develops and distributes free software only and make good money doing so?
For example, would it be possible to build a free software version of uber/twitter/facebook etc? How would that work?
By removing all restrictions on the software, what is the incentive to not pirate the software? The GPL can be enforced, but that is clearly not practical especially outside the US.
A business is more than just the source code. The source for Reddit, for example, is OSS. So anybody should be able to put Reddit out of business in a few days, right? But yet... nobody has. Hmmm....
So yeah, you can distribute source code and still make money. Lots of companies do it. Red Hat, SugarCRM, Alfresco, etc.
In some ways it's probably even easier for an online service to both be OSS and be successful, exactly because it takes all the other "stuff" (hosting, devops, marketing, network effects, etc.) to be successful. And, in fact, there are OSS replacements for things like Facebook and Twitter. The problem is, very few people use them for whatever reason (probably mostly network effects).
So at least in regards to the Facebooks, Twitters, etc. of the world, the first question you'd have to answer, is how to get people to switch to your service, whether it's free software or otherwise.
> The source for Reddit [...]
Src: https://github.com/reddit/reddit/blob/master/r2/setup.py
Docs: https://github.com/reddit/reddit/wiki/Install-guide
"Reddit Enhancement Suite (RES)" is donationware: https://github.com/honestbleeps/Reddit-Enhancement-Suite
"List of Independent GNU social Instances" http://skilledtests.com/wiki/List_of_Independent_GNU_social_...
> [...] the first question you'd have to answer, is how to get people to switch to your service, whether it's free software or otherwise.
"Growth hacking": https://en.wikipedia.org/wiki/Growth_hacking
"Business models for open-source software" https://en.wikipedia.org/wiki/Business_models_for_open-sourc...
...
- https://github.com/GoogleCloudPlatform
- https://github.com/kubernetes/kubernetes (Apache 2.0)
- https://github.com/apple (Swift is Apache 2.0)
- https://github.com/microsoft
- https://github.com/twitter/innovators-patent-agreement
...
- "GNU Social" (GNU AGPL v3) https://en.wikipedia.org/wiki/GNU_social
... http://choosealicense.com/appendix/ has a table for comparison of open source software licenses.
http://tinyurl.com/p6mka3k describes Open Source Governance in a chart with two axes (Cathedral / Bazaar , Benevolent Dictator / Formal Meritocracy) ... as distinct from https://en.wikipedia.org/wiki/Open-source_governance , which is the application of open source software principles to government. USDS Playbook advises "Default to open" https://playbook.cio.gov/#play13
Anarchy / Budgeting: https://github.com/WhiteHouse/budgetdata
Ask HN: If your job involves continually importing CSVs, what industry is it?
I was wondering if people still use CSVs for data exchange now, or if we've mostly moved to JSON and XML.
Arguing for the CSVW (CSV on the Web) W3C Standards:
- "CSV on the Web: A Primer" http://w3c.github.io/csvw/primer/
- Src: https://github.com/w3c/csvw
- Columns have URIs (ideally from a shared RDFS/OWL vocabulary)
- Columns have XSD datatype URIs
- CSVW can be represented as RDF, JSON, JSONLD
With CSV, which extra metadata file describes how many rows at the top are for columnar metadata? (I.e. column labels, property URI, XSD datatype URI, units URI, precision, accuracy, significant figures) ... https://wrdrd.com/docs/consulting/linkedreproducibility#csv-...
... CSVW: https://wrdrd.com/docs/consulting/knowledge-engineering#csvw
@prefix csvw: <http://www.w3.org/ns/csvw#> .
target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" target="_blank" rel="nofollow noopener">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" 
target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow 
noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" 
rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" 
target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow 
noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" 
rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" 
target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow 
noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" 
rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" 
target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow 
noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" 
rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" 
target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">www.w3.org=""> .
</http:>
Ask HN: Maybe I kind of suck as a programmer – how do I supercharge my work?
I'm in my late twenties and I'm having a bit of a tough time dealing with my level of programming skill.
Over the past 3 years, I've released a few apps on iOS: not bad, nothing that would amaze anyone here. The code is generally messy and horrible, rife with race conditions and barely holding together in parts. (Biggest: 30k LOC.) While I'm proud of my work — especially design-wise — I feel most of my time was spent on battling stupid bugs. I haven't gained any specialist knowledge — just bloggable API experience. There's nothing I could write a book about.
Meanwhile, when I compulsively dig through one-man frameworks like YapDatabase, Audiobus, or AudioKit, I am left in awe! They're brimming with specialist knowledge. They're incredibly documented and organized. Major features were added over the course of weeks! People have written books about these frameworks, and they were created by my peers — probably alongside other work. Same with one-man apps like Editorial, Ulysses, or GoodNotes.
I am utterly baffled by how knowledgeable and productive these programmers are. If I'm dealing with a new topic, it can take weeks to get a lay of the land, figure out codebase interactions, consider all the edge cases, etc. etc. But the commits for these frameworks show that the devs basically worked through their problems over mere days — to say nothing of getting the overall architecture right from the start. An object cache layer for SQL? Automatic code gen via YAML? MIDI over Wi-Fi? Audio destuttering? Pff, it took me like a month to add copy/paste to my app!
I'm in need of some recalibration. Am I missing something? Is this quality of work the norm, or are these just exceptional programmers? And even if they are, how can I get closer to where they're standing? I don't want to wallow in my mediocrity, but the mountain looks almost insurmountable from here! No matter the financial cost or effort, I want to make amazing things that sustain me financially; but I can't do that if it takes me ten times as long to make a polished product as another dev. How do I get good enough to consistently do work worth writing books about?
For identifying strengths and weaknesses: "Programmer Competency Matrix":
- http://sijinjoseph.com/programmer-competency-matrix/
- https://competency-checklist.appspot.com/
- https://github.com/hltbra/programmer-competency-checklist
... from: https://wrdrd.com/docs/consulting/software-development#compu...
> How do I get good enough to consistently do work worth writing books about?
- These are great reads: "The Architecture of Open Source Applications" http://aosabook.org/en/
- TDD.
[deleted]
Ask HN: Anything Like Carl Sagan's Cosmos for Computer Science?
Is there anything like Carl Sagan's Cosmos that talks about the history of computing in an accessible way? Pondering Christmas gifts for my niece.
Computer #History: https://en.wikipedia.org/wiki/Computer
Outline of Computer Engineering #History of: https://en.wikipedia.org/wiki/Outline_of_computer_engineerin...
History of Computer Science: https://en.wikipedia.org/wiki/History_of_computer_science
Outline of Computer Science: https://en.wikipedia.org/wiki/Outline_of_computer_science
History of the Internet: https://en.wikipedia.org/wiki/History_of_the_Internet
History of the World Wide Web: https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web
... maybe a bit OT; but, interestingly, IDK if any of these include a history section:
#K12CSFramework (Practices, Concepts): https://k12cs.org
- "Impacts of Computing" (Culture; Social Interactions; Safety, Law, and Ethics): https://k12cs.org/framework-statements-by-progression/#jump-...
"Competencies and Tasks on the Path to Human-Level AI" (Perception, Actuation, Memory, Learning, Reasoning, Planning, Attention, Motivation, Emotion, Modeling Self and Other, Social Interaction, Communication, Quantitative, Building/Creation): http://wiki.opencog.org/w/CogPrime_Overview#Competencies_and...
Code.org (#HourOfCode): https://code.org/learn
Not really in keeping with the idea of a gift suitable for a niece.
No, but they're more comprehensive and informative than any one video. These links (to #OER) would be useful for anyone intending to replicate the form and style of the "Cosmos" video series with Computer Science content.
Cosmos was also a dead tree book. [1] It was not uncommonly given as a gift.
The original TV series was broadcast the same year as its publication, 1980, but I don't think it was readily available on consumer tape until several years later and then not at normal holiday gift prices. Back in those days, most video libraries were built by the librarian directly recording broadcasts. But most people would just wait for a rebroadcast.
Learn X in Y minutes
this has been shared/posted a hundred times before... nothing new here.
Eh, I honestly can't tell whether more X's have been added since the last time it was posted, and I don't feel like putting time into the Wayback Machine to find out.
The source is hosted on GitHub; there's a commit log (for each file and directory): https://github.com/adambard/learnxinyminutes-docs/commits/ma...
Org mode 9.0 released
I love Org mostly for its ability to link to stuff. In my mind it's the big feature that sets it apart from using, say, a separate application to do my task management. For instance:
- When I'm reading email (in emacs), I can quickly create a TODO that links back to the current email I'm reading (most of my TODOs, in fact, link to an email, so this is very useful),
- For my org file where I keep notes about the servers I'm running, I might link to a specific line on a remote apache config,
- For a bug report I might link to a specific git commit in a project to look at later
Any TODO that is important gets scheduled, so that it is linked to from the agenda view. In this way it's very hard for me to lose track of anything, despite most of my work communication happening through email. In fact I now prefer email over using something like Basecamp, because org makes it easier for me to manage!
Because Emacs is an OS the way it should be — an extensible system with a unified UI instead of a pile of separate and hardly interoperable applications.
"b-but the UNIX philosophy!"
The more time I spend with UNIX and hear the "UNIX philosophy" used in place of argument when someone doesn't like the feature set of some software, the more I realize that we need to move on from UNIX. We should take more ideas from things like Emacs and the Lisp machines when designing our software, and less from UNIX.
Emacs has very few self-contained mega packages; in its own way, Emacs software is built by chaining together smaller packages. Many packages are little more than extensions to other packages.
But their interop is better. Besides Emacs packages having much greater "contact surface" (packages can augment other packages, if needed), the biggest problem with the UNIX philosophy is using unstructured text as the exchange medium. It leads to huge waste and proliferation of bugs, as each tool or script you use has to have its own custom (and bug-ridden) parser, and outputs its own unstructured and usually undocumented data format. This is one of those moments when the history of computing took a huge step backwards. All that recent push for passing JSON around instead of plaintext? That's just rediscovering what we lost when UNIX came along.
Filenames may contain newlines. JSON strings may contain newlines.
The modular aspects of the UNIX philosophy are pretty cool; the data interchange format (un-typed \n-delimited strings) is irrational (and
dangerous).
JSON w/ a JSON-LD @context and XSD type URIs may also contain newlines (which should be escaped).
Note that, with OSX bash, tab \t must be specified as $'\t'.
And, sometimes, it's \r\n instead of just \n (which is extra-format metadata).
And then Unicode. Oh yeah, unicodë.
You don't have to use $'', you can also use literal tabs (ctrl-v to insert literals). The main difference between macOS and Linux that people notice is that macOS sed doesn't itself interpret \t (so you have to use literal tabs or $'' there).
\r\n is Windows, not Unix.
What about Unicode? (Btw, UTF-8 was created by unixers Rob Pike and Ken Thompson https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt )
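The fragility being discussed can be shown in a few lines of Python (the filenames here are made up for illustration): a legal POSIX filename containing a newline breaks naive line-delimited exchange, while JSON escapes the newline and round-trips cleanly.

```python
import json

# A perfectly legal POSIX filename may contain a newline.
names = ["report.txt", "evil\nname.txt"]

# UNIX-style exchange: join records with newlines, split them back out.
plaintext = "\n".join(names)
parsed_plain = plaintext.split("\n")
print(len(parsed_plain))  # 3 entries recovered from 2 filenames: data corrupted

# Structured exchange: JSON escapes the embedded newline, so it round-trips.
parsed_json = json.loads(json.dumps(names))
print(parsed_json == names)  # True
```

The same failure mode applies to tabs, carriage returns, and any other in-band delimiter; a structured format moves the delimiter out of band.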
Ask HN: Best Git workflow for small teams
I have been building up a small team of programmers that are coming from CVS. I am looking for some ideas on ideal workflows.
What do you currently use for teams of 5-10 people?
We've been using the Github flow (or some variation of it) in teams of 2-10 people for a few years now. We work on feature branches, merge after code review using the github web UI. Here's a few things that help us:
- Make a rule that anything that's in master can be deployed by anyone at any time. This will enforce discipline for code review, but also for introducing changes that e.g. require migrations. You'll use feature flags more, split DB schema changes into several deployments, make code work with both schema versions, etc. All good practices that you'll need anyway when you reach larger scale and run into issues with different versions of code running concurrently during deployments.
- If you're using GitHub, check out the recently added options for controlling merging. Enable branch protection for master, so that only pull requests that are green on CI and have been reviewed can be merged. Enable the different dropdown options for the merge button (e.g. rebase and merge); these are useful for small changes.
- Those constraints take some time to get used to. Make sure that everyone is on board, and review the workflow regularly, just as you would review your code. I think it's worth it in the long run.
+1 for HubFlow (GitFlow > HubFlow).
- https://westurner.org/tools/#hubflow
- Src: https://github.com/datasift/gitflow
- Docs: https://datasift.github.io/gitflow/
-- The git branch diagrams (originally from GitFlow) are extremely helpful: https://datasift.github.io/gitflow/IntroducingGitFlow.html
TDD Doesn't Work
tl;dr = someone did a study that used a methodology that confirmed that working in small chunks and writing tests as you go is good, but that it's not very important if you write the tests before the small chunk of code or after the small chunk of code.
Aren't tests supposed to be a tool to help design an API? From that perspective, a test should be written first. The problem IMHO is the choice of methodology, as there are several kinds of tests; some may be more time-consuming when it comes to the setup.
Personally, I think unit tests shine best when you're designing an API. I can swing from hate to love and back about TDD in minutes, but when it comes to thinking about how your code will be used, unit tests (did we stop using that term?) are among the most useful tools I have.
I guess if all code written could be seen as an API, TDD would be great, but that's not the world I live in.
> I guess if all code written could be seen as an API, TDD would be great, but that's not the world I live in.
If not an "Application Programming Interface", isn't all code an Interface? There's input and there's output.
With Object Oriented programming, that there is an interface is more explicit (even if all you're doing is implementing objects that are already tested). There are function call argument (type) specifications (interfaces) whether it's functional or OO.
I thought that TDD morphed into ending up with a regression/integration/conformance test suite instead of using tests as specifications written prior to writing products. And even 100,000s of tests won't help you in very advanced applications like cloud/cluster infrastructure: sometimes it's simply too difficult, if not impossible, to come up with tests (imagine the observer effect when your cluster deadlock happens only in certain rare nanosecond windows, and adding a testing framework makes you miss those windows, so the problem never happens), and the people with the mental capacity to write them (e.g. at Google/FB level) are better utilized writing the product itself.
How do the people who write them know that they work?
TDD presents a paradox that requires split-brain thinking: when writing a test, you pretend to forget what branch of code you are introducing, and when writing a branch, you pretend to forget you already knew the solution. It is annoying as hell.
You CAN indeed cover all your branches with tests afterwards. You can even give that a fancier name, like "Exploratory Testing". Of course it may be more boring or tedious, but is a perfectly valid way to ensure coverage when needed.
TDD was great for popularizing writing tests first; however, I much prefer the methodology called CABWT: Cover All Branches With Tests. Let the devs choose the way to do it, because not everyone likes these pretend games.
TDD requires you to write FUNCTIONAL tests first, not the unit tests you are talking about.
I was commenting on the methodology as I heard and watched it explained by the author (Robert C Martin), as well as the way it was presented in his videos.
TDD workflow is fine; it's not thinking about the pink elephant (the source code) idea that bugs me.
Robert Martin is a co-author of the Agile Manifesto.
https://www.quora.com/Why-does-Kent-Beck-refer-to-the-redisc...
The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output.
+1. TDD could be considered a derivation of the Scientific Method (Hypothesis Testing).
https://en.wikipedia.org/wiki/Scientific_method
https://en.wikipedia.org/wiki/Hypothesis
Writing the test first isolates a null hypothesis (that the test already passes), but it doesn't rule out the test passing or failing because of some other chance variation (e.g. hash randomization and unordered maps).
https://en.wikipedia.org/wiki/Null_hypothesis
... https://en.wikipedia.org/wiki/Test-driven_development
+1 Right on the spot.
TDD requires you to draw your target first, then hit or miss it with the code, like in science: hypotheses -> confirmation/declining via experiments -> working theory.
But in practice, a lot of coders hit a point first and then draw the target around that point, like in fake science: we throw a coin 100 times, the distribution is 60/40, our hypothesis is that a random coin flip has a 60-to-40 ratio, our hypothesis is confirmed by the experiment, huge savings, hooray!
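The red-green loop being debated can be sketched in a few lines of Python with the built-in unittest module (the `slugify` function and its tests are illustrative, not from any project mentioned above):

```python
import unittest

# Step 1 (red): write the tests below first, before slugify exists.
# Their initial failure is the "null hypothesis" check that the tests
# can actually fail.

# Step 2 (green): write just enough code to make the tests pass.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Org   Mode "), "org-mode")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Step 3 (refactor) then happens under the safety net of the passing tests; whether the tests came minutes before or minutes after the code is the crux of the disagreement in this thread.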
C for Python programmers (2011)
LearnXinYminutes tuts are pretty cool. I learned C++ before I learned Python (CPython ~2.0) before I learned C:
C++ ("c w/ classes", i thought to myself):
- https://en.wikipedia.org/wiki/C++
- https://learnxinyminutes.com/docs/c++/
Python 2, 3:
- https://en.wikipedia.org/wiki/Python_(programming_language)
- https://learnxinyminutes.com/docs/python/
- https://learnxinyminutes.com/docs/python3/
C:
- https://en.wikipedia.org/wiki/C_(programming_language)
- https://learnxinyminutes.com/docs/c/
And then I learned Java. But before that, I learned (q)BASIC, so
Ask HN: How do you organise/integrate all the information in your life?
Hello fellow HNers,
How do you organise your life/work/side projects/todo lists/etc in an integrated way?
We have:
* To do lists/Reminders
* Bookmark lists
* Kanban boards
* Wikis
* Financial tools
* Calendars/Reminders
* Files on disk
* General notes
* ...
However, there must be a better way to get an 'integrated' view on your life? ToDo list managers suck at attaching relevant information; wikis can't do reminders; bookmarks can't keep track of notes and thoughts; etc. All of the above are typically not crosslinked easily, and exporting data for backup/later consumption is hit and miss across various services. So far, I've found a wiki to be almost the most flexible for keeping all manner of raw information kind of organised, but it lacks useful features like reminders, has only minimal tagging support, no easy way to keep track of finances, etc.
I understand 'best tool for the job', but there's just so...many...
Emacs.
- Todo lists and reminders. org-agenda.
- Bookmark lists. org-capture and org-protocol.
- Kanban boards. I don't use this, but kanban.el.
- Wiki. Org-mode files and grep/ag with helm.
- Financial tools. ledger.
- Calendar/reminders. Org-agenda.
- Files on disk. dired, org-mode.
- General notes. Org-mode.
- Literate programming. org-babel.
- Mail. mu4e.
- rss. elfeed, gnus, or rss2email.
- git. magit.
- irc. erc.
- ...
Org-mode is the one thing that makes me want to move from vim to emacs, but I just don't feel like investing the time in getting proficient with emacs at this stage.
https://en.wikipedia.org/wiki/Org-mode#Integration lists a number of Vim extensions with OrgMode support.
... "What is the best way to avoid getting "Emacs Pinky"?" http://stackoverflow.com/questions/52492/what-is-the-best-wa...
They are all pale imitations of what org-mode offers.
[deleted]
Ask HN: What are the best web tools to build basic web apps as of October 2016?
Questions:
1: Which technologies are popular and what do people like about them? (To help someone deciding between). Seems like React frontend, or perhaps Vue? and Node being the popular backend?
2: Is there a site that keeps track of the various options for frontend and backend frameworks and how their popularity progresses?
1. I work with Django, Django REST Framework, and React; it works well. I've heard that Vue might be an interesting option. Honestly, there's a myriad of tools/frameworks out there, so you have to ask yourself what you're trying to achieve. Is it a one-shot app you'll build over the weekend? Something that you'd like to maintain over time? That may scale? Are you working alone on this or not? Is it an exercise to learn a new stack? I started a startup 4 years ago and asked a friend who's a Ruby/Rails dev for advice about the tech stack. His answer: yes, Rails is awesome, but since you already know Python, go for Django, as my goal was primarily to get stuff done business-wise, not so much to learn cool new tech. So be aware of your options and definitely spend some time getting to know them, but in the end they're just tools; don't forget why you use them in the first place.
2. I stumbled upon http://stackshare.io the other day, I can't vouch for it but seemed nice to have a quick overview of what languages / framework / services are used around.
+1 for http://stackshare.io/
* Backend performance: https://www.techempower.com/benchmarks/
* Frontend examples: https://github.com/tastejs/todomvc
There are tradeoffs between: performance, development speed, trainability (documentation), depth and breadth of developer community, long-term viability (foundation, backers), maintenance (upgrade path), API flexibility (decoupling, cohesion), standards compliance, vulnerability/risk (breadth), out of the box usability, accessibility, ...
So, for example, there are WYSIWYG tools which get like the first 70-80% of the serverside and clientside requirements; and then there's a learning curve (how to do the rest of the app as abstractly as the framework developers). ( If said WYSIWYG tools aren't "round-trip" capable, once you've customized any of the actual code, you have to copy paste (e.g. from a diff) in order to preserve your custom changes and keep using the GUI development tool. )
... Case in point: Django admin covers very many use cases (and is already testable, tested), but often we don't think to look at the source of the admin scaffolding app until we've written one-off models, views, and templates.
- Django class-based views abstract a lot of the work into already-tested components.
- Django REST Framework has OpenAPI (swagger) and a number of 3rd party authentication and authorization integrations available.
- In a frontend framework (MVVM), ARIA (Accessibility standards), the REST adapter and error handling are important (in addition to the aforementioned criteria (long-term viability, upgrade path)) ... and then we want to do realtime updates (with something like COMET, WebSockets, WebRTC)
Similar features in any framework are important for minimizing re-work. "Are there already tests for a majority of these components?"
Harvard and M.I.T. Are Sued Over Lack of Closed Captions
I'm really surprised to see this anti-accessibility sentiment on HN, where you see comments about contrast, scroll hijacking, JavaScript requirements, and numerous other things the WCAG recommends against on nearly every link posted. You'd expect Harvard and M.I.T. to have wheelchair ramps so disabled students could get to class, right? It shouldn't be so controversial that their lectures should be accessible as well. Maybe some are concerned over the possible costs, but there are solutions to that, such as crowdsourcing. Universities are meant to spread knowledge for all, and even if they're providing a public good, they should still be held accountable for it.
Why should lectures given away to the public for free obligate the university to make them accessible? We aren't talking about enrolled students here. If we were, that would likely not even be an argument. But placing an obligation on a university to provide as much accessibility as they would for their students to the masses of the internet as a condition of freely sharing with the world? That's not the same thing. There's nuance here, and people are missing it.
Why is everyone getting so caught up in the free part? If the lectures were a dollar would you say that they're obligated to subtitle them then? Nope, then you'd say that they're forcing a private organization to do things. (Or do you have a specific cost threshold before you expect subtitles? Hint: deaf people don't.) It's great that they're releasing them for free, very altruistic, but they're also depriving disabled people of them which the ADA specifically requires. Besides, they could easily provide a mechanism for crowd-sourcing subtitles which I noted in another comment so the cost wouldn't really be as burdensome as people want to think. Also, are people forgetting that these are Harvard and MIT? The NAD isn't going to be suing your mom and pop website. Harvard and MIT can afford it and should be held to higher standards. I hate to use the word due to the anti-SJW frenzy the internet is in these days, but the ableism in this thread is appalling. No one is trying to see from the side of the NAD, with one user even suggesting that it just wants to line its coffers...
> Why is everyone getting so caught up in the free part?
To restate my final point: there is nuance here, and people are missing it. I'm commenting for the purpose of interrogating the nuance, because I feel somewhat mixed on the issue. For online course material that is offered to enrolled students and required for the completion of a degree, I do not believe anyone is arguing in opposition. However, the collection and publishing of course material provided to enrolled students, then sharing it for free to non-enrolled students is a different matter entirely. The free part is an important detail in this particular matter and its circumstances, and shouldn't be wholly ignored, or treated as if it isn't part of the equation.
> If the lectures were a dollar would you say that they're obligated to subtitle them then?
Possibly. Most likely, only if paying for the material and following the courses was somehow tied to earning enrolled status and credit toward a degree, though. Because at that point, someone is actually a student looking to obtain something in exchange for studying the material, the university is engaged in the activity for which it receives federal funding, and we would rightly expect the institution to treat them as students according to the law and all its glorious regulations that seek to provide all students with a level playing field. Giving the material to the public at large with no fees or strings attached--meaning no strings attached to either party--isn't something I think we should discourage.
> Or do you have a specific cost threshold before you expect subtitles?
There is no cost threshold on my mind, no. There is only the threshold of whether the parties consuming the materials are enrolled students seeking a degree at the institution.
> ... but they're also depriving disabled people of them ...
I'm not convinced this is true. The internet is full of freely available information from a variety of sources, much of it in video and audio form, and we do not have a longstanding debate centering on how much of the freely available information in video and audio form is depriving the hearing impaired of that information and should be made accessible. What's happening here is singling out a particularly easy target and asserting that they should be held to a different standard than all the other parties producing free, inaccessible content, and calling it "depriving disabled people" of the content. This stirs my something-isn't-quite-right detector, because we are attempting to provide a very narrowly scoped requirement onto a narrowly scoped party, on the basis of taking rules that inarguably apply to their services in one particular set of conditions, and applying them to another, quite different set of conditions.
> Also, are people forgetting that these are Harvard and MIT? The NAD isn't going to be suing your mom and pop website. Harvard and MIT can afford it and should be held to higher standards.
This is an argument from a pretty low set of standards, honestly. The ability of the party to afford increased accessibility sets up a rather disingenuous cash-gate on the issue, and completely debases the argument for accessibility into an argument about money. We're either concerned about establishing a proper set of guidelines and cultural expectations for making information accessible, regardless of its cost, or we're targeting entities with cash who are otherwise doing something we applaud, and saying because they have the means to do more, they should do more, and bringing the force of the state against them to compel them to do so. This is the kind of thinking that inexorably leads to crafting laws that target specific parties, leave open loopholes for other parties, and wind up subverting our intended goals by allowing those who wish to avoid a particular set of regulations and obligations by reorganizing under an uncovered entity type. We'd surely want to avoid such an outcome--even if it would help us better identify truly bad actors.
> I hate to use the word due to the anti-SJW frenzy the internet is in these days, but the ableism in this thread is appalling. No one is trying to see from the side of the NAD, with one user even suggesting that it just wants to line its coffers...
I don't think there is an appalling level of ableism in this thread. I am, and I think others are, trying to interrogate the issue from multiple perspectives, but we are coming to different conclusions (or are withholding conclusions) than you seem to expect. Perhaps that's because we're looking at the nuances of the circumstances.
From the NAD's perspective, I'm questioning and considering the affordances one ought to expect from information being made freely available in its original form, and what limitations can sensibly be agreed to exist--because it is insensible to expect there to be no limitations. This includes interrogating the alleged principles involved, and to what extent and to which parties they apply. When they don't apply equally to all parties, especially when they don't apply equally on the basis of one's ability to pay, I find the alleged principles reveal themselves to be suspect. This perspective, in particular, is the one in which the absence of affordances and obligations on information-releasing parties are felt most acutely. My lacking of a particular ability preventing me from equally enjoying informative and enlightening material is a bitter pill--especially if I can reasonably expect otherwise.
From the university and academia perspective, I am questioning and considering what reasonable thresholds one ought to be able to easily identify when releasing information in its original form freely to the public, or withholding it because additional accessibility affordances cannot reasonably be provided in light of the return on the time invested versus simply releasing the information. This perspective, in particular, is the one in which the force and burden of the obligations we levy as a society are felt most acutely. For instance, if a professor is teaching a course in which no persons enrolled have a disability, is the professor obligated to only use information which is accessible to serve the unknown contingent of internet consumers should the university decide to release the course materials freely on the internet? Are the lines only drawn at choosing videos with accurate captions? What about all the millions of people with other learning disabilities of some sort--what is the university obligated to do to ensure they are not "depriving disabled people" of this information? If other kinds of learning disabilities do not merit such affordances, why not? Why only this one or that one?
From the social and cultural perspective, I am questioning and considering what expectations and obligations we ought to hold in such cases for both parties, and how we should reasonably define these expectations for accessibility to as many people as possible and the obligations of implementation. Do we base our expectations and obligational determination on defining thresholds of sheer number of people who may potentially be affected by the lack of affordances? Do we only care about certain accessibility affordances, while ignoring others? Why or why not? We have, I think, passed the point of solving many of society's issues with hammers and saws. We now need scalpels. Much as medical science drastically improves outcomes by isolating bad things and eradicating them with precision, instead of simply removing a whole appendage, we need to pay attention to the nuances and rationally interrogate them to figure out what we think is best socially and culturally. If that's increasing the reach of federal disability law to cover information that is given away for free for the masses of internet consumers, okay. But we better establish some bulletproof and sane principles for doing so, and hold all information producers equally accountable. If we don't, then let's drop the veneer that we are holding all individual and organizational entities equally responsible and accountable, and admit we are instead targeting specific entities based on their perceived ability to pay for the increased obligation to be universally accessible.
Aside from feeling mixed in sum of all the above perspectives and not jumping to immediate and simplistic conclusions that ignore the nuances of circumstance, I continue to feel mixed because I think, as a principle, an accessible web is a better web. I hold firm to the principle that the more information people have access to, the better off they are, and the better a society is for providing this information as accessibly as possible to as many people as possible. However, I think there is also a somewhat disappointing need to include in our rational calculations when information producers, whomever they may be, are publishing that information as they have it, to put it out there, to share it widely with as many people as they can in the form it exists. Perhaps if we had better tools, we could abstract away the burden, then try taking the route of expecting entities to use certain technologies that alleviate the need to take on making things accessible on their own. If we make accessibility a social and cultural good and goal, how might that change how we produce information?
When a student is paying for an education in a federally-funded institution, it's reasonable to expect video-captioning, braille, text-scalable HTML (not PDF) wherever feasible. What about Sign Language interpretation? Simple English?
It would be great if everyone could afford to offer accessible content.
Maybe, instead of paying instructors, all lectures should be typed verbatim - in advance - and delivered by Text-to-Speech software (with gestural scripting and intonation). All in the same voice.
- Ahead-of-time lecture scripts could be used to help improve automated speech recognition accuracy.
- Provide additional support for paid captioning
-- Tools
-- Labor
- Provide support for crowdsourced captioning services
-- Feature: Upvote to prioritize
-- Feature: Flag as garbled
- Develop video-platform-agnostic transcription software (and make it available for free)
-- Desktop offline Speech-to-Text
-- Mobile offline Speech-to-Text
-- Speaker-specific language model training
- Require use of a video-platform with support for automated transcription
-- YouTube
- Companies with research in this space:
-- Speech Recognition, [Automated] Transcription, Autocomplete hinting for [Crowd-sourced] captioning
-- Google
--- YouTube has automated transcription
--- Google Voice supports transcription corrections, but AFAIU it's not speaker-specific
-- IBM
-- Baidu
-- Nuance (Dragon)
... Textual lecture transcriptions are useful for everyone, because they can be searched with Ctrl-F.
- Label (with RDFa structured data) accessible content to make it easy to find
-- Schema.org accessibility structured data (for Places, Events)
--- https://github.com/schemaorg/schemaorg/issues/254
--- http://schema.org/accessibilityFeature
--- http://schema.org/accessibilityPhysicalFeature and/or
--- http://schema.org/amenityFeature
- Challenges
-- Funding
-- CPU Time
-- Error Rate
-- Mitigating spam and vandalism
-- Human-verified crowdsourced corrections can/could be used to train recognizing and generative speaker-specific models
-- In the film A.I. (2001), there's a scene where they're asking questions of Robin Williams and the intonation/inflection inadvertently wastes one of their 3 wishes / requests. https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
> When a student is paying for an education in a federally-funded institution...
That's not what we are talking about.
Jack Dorsey Is Losing Control of Twitter
Well, one would hope so. Fortunately, Twitter doesn't have one of those "president for life" two-class stock setups like Google and Facebook. The stockholders can fire Dorsey when necessary.
The post-growth phase of a social network doesn't have to mean its collapse. Look at IAC, InterActive Corp (iac.com, ticker IAC). They run a lot of sites - Vimeo, Ask, About, Investopedia, Tinder, OKCupid, etc. - have a market cap of about $5 billion, and keep plugging along. They were started by Barry Diller, the creator of the Home Shopping Channel, something else that keeps plugging along. Diller is still CEO. IAC is boring but useful.
Twitter doesn't have to "exit"; they're already publicly held. They just have to trim down to a profitable and stable level, and accept that they're post-growth.
This is a great comment. For most companies, the norm is grinding along and sometimes taking a hit: firing people, slowing things down, then picking it up again and growing. If every company that stopped seeing massive growth shut down, we wouldn't have factories mass-producing our day-to-day products, and the post office wouldn't be delivering us goods anymore. Twitter is a mature organisation. There are jobs. There is cashflow. We should stop letting journalists make a big deal of companies that don't have infinite growth, and applaud the entrepreneurs that built the monster.
There is another angle that we often don't consider: who the media writes for. In general, the financial press writes for investors (this is the same for the tech media), and so their alarm bells and dire predictions are for investors, not necessarily about underlying fundamentals or facts of a business for everyone else.
- Are these journalists working for media companies that are competing for time?
- If they are shareholders, they don't seem to be declaring their conflicts of interest.
- For Twitter to respond would require that Twitter take editorial positions regarding the activities of competing media conglomerates. ("You're down, you should all just cash out now" [while you have far more daily active customers and revenue per user than a number of TV channels combined].)
- Are there competing international interests and biases? Is the market for noiseless citizen media saturated? How much time is there, really?
I'm not sure if I'm understanding you correctly, but I think you're considering Twitter to be a media company, like a newspaper company? If so, I don't think that comparison is accurate so there'd be no conflict of interest really.
The conflict of interest is if the journalists owned shares of Twitter.
The conflict of interest is that one media company is publishing negative articles about another media company amidst acquisition talks.
What is their interest here? How do they intend to affect the perceived value of Twitter? Are there sources cited? Data? Figures?
I'm still a bit confused. Are you implying that Bloomberg and Twitter are in competition? I'm not so sure they are, really, in any sense.
Bloomberg is a private corporation which sells ads and exercises editorial discretion in publishing market-moving information and/or editorials.
Twitter is a public corporation which hosts Tweets, sells ads, and selects trending Twitter Moments.
Both companies are media services. Both companies compete for ad revenue. Both companies compete for readers' time.
(Medium allows journalists to publish information and/or editorials for free (or, now, for subscription revenue from membership programs)).
Schema.org: Mission, Project, Goal, Objective, Task
Can someone explain to me how this fits into Schema.org's core mission (which, as I understand it, is basically to provide a lightweight ontological overlay on pages to assist search engines)? This seems like an attempt to build a formal project-management ontology for systems that will never get indexed by Google -- an effort that might benefit more from the Basic Formal Ontology as a starting point.
Some (search) use cases:
- I want to find an organization with a project with similar goals and objectives.
- I want to find other objectives linked to indicators.
- I want to say "the schema:Project described in this schema:WebPage is relevant to one or more #GlobalGoals" (so that people searching for ways to help can model similar goal-directed projects)
So basically you make your site easier for search engines to read, and they in turn extract the data from your site, stealing your traffic?
For example, if you search for something on Google, you get a preview box with the main info and a link to the original article (e.g. on Wikipedia, IMDb, etc., depending on what you search for); in most cases you no longer need to visit the original link.
There are always two ways to look at things.
In the case of Wikipedia and similar high-traffic websites, it probably takes a bit of the load off. It's probably a considerable saving in terms of serving requests if a user searches for an actor they think they recognise from a movie and see a snippet from Google served from Google's cache rather than a full pageview from Wikipedia.
In the case of physical stores and offices, I would imagine that a check for opening hours or telephone number shouldn't be counted as a pageview -> conversion anyway — they already chose you i.e. converted. Maybe they're a returning customer or somebody who liked the email marketing you sent them.
Is Google Maps stealing traffic from you by showing your business on the map? Maybe, but they're also doing you a favour. Neither Google nor Google Maps are likely to go anywhere anytime soon, and a considerable number of people are only going to check Google Maps and won't bother checking your site for the same information w.r.t. your business's location.
Beneficial in terms of reviews as well — displaying the aggregated score in a rich snippet is arguably preferable to letting the user click through where they can be persuaded against the 4 star average by an impassioned negative review.
Google obviously wants to be the one and only gateway to the web, and they want to keep you on Google pages as much as possible to show as many ads as possible, but unless you're selling ads, counting pageviews/traffic is pointless unless it's new business, and people new to you are unlikely to make a snap judgement based on a rich snippet. If anything, it's more likely to convince the new user to click through to your site.
[IMHO]
> In the case of physical stores and offices, I would imagine that a check for opening hours or telephone number shouldn't be counted as a pageview -> conversion anyway — they already chose you i.e. converted. Maybe they're a returning customer or somebody who liked the email marketing you sent them.
As well, adding structured data to the page (with RDFa, JSONLD, or Microdata) makes it much easier for voice assistant apps to parse out the data people ask for.
> Google [...]
Schema.org started as a collaborative effort between Bing, Google, Yahoo, and then Yandex. Anyone with a parser can read structured data from an HTML page with RDFa or a JSONLD document.
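As a hedged sketch of what that parseable structured data looks like (the business details below are invented; `LocalBusiness` and `openingHours` are real schema.org terms), a JSON-LD island can be built and embedded from any language:

```python
import json

# A hypothetical schema.org LocalBusiness description; all values invented.
doc = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Hardware Store",
    "telephone": "+1-555-0100",
    "openingHours": "Mo-Fr 09:00-17:00",
}

# Serialized into the <script type="application/ld+json"> form that search
# engines and voice assistants parse out of an HTML page.
snippet = '<script type="application/ld+json">%s</script>' % json.dumps(doc)
print(snippet)
```

Any consumer with a JSON parser can then recover the opening hours or phone number without scraping the visible markup.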
The Open Source Data Science Masters
This seems like a nice compilation for introductory material in one place.
I still can't get over the term "data science", though. Not only is it ridiculously meaningless - what sort of science doesn't involve data, and how often would data be useful to something that isn't scientific at some level - its meaninglessness derives from the hyped buzzword trendiness that drove its upswing.
I say this as someone whose expertise is really sitting at the nexus of what would be considered data science. I feel as if I have been doing what might be considered data science for a long time, before there was a label for it, but watching its ascendance in demand and popularity has been troubling. I should be happy, but I feel like it's being driven by fashion rather than fundamentals, which makes me worried about the trajectory going forward, and disturbed by some communities being thrown under the bus.
>I still can't get over the term "data science", though. Not only is it ridiculously meaningless - what sort of science doesn't involve data, and how often would data be useful to something that isn't scientific at some level - its meaninglessness derives from the hyped buzzword trendiness that drove its upswing.
I couldn't disagree more.
There are a number of terms for domain-independent data analysis:
- data analysis
- statistics
- statistical modeling
- machine learning
- big data
- data journalism
- data science
I think it makes perfect sense that the practice of collecting and analyzing data be qualified and identified as a specific field.
I know of no better resource than these venn diagrams which identify the 'danger zones' around data science:
- http://datascienceassn.org/content/fourth-bubble-data-scienc...
Is there such a thing as a statistical model which only applies to a certain domain?
Domain knowledge ("substantive expertise"/"social sciences" in the linked venn diagrams) serves only to logically validate statistical models which may be statistically valid but otherwise illogical in the context of currently-available field knowledge (bias).
Regardless of field, the math is the same.
Regardless of field, the model either fits or it doesn't.
Regardless of field, the controls were either sufficient or they weren't.
We Should Not Accept Scientific Results That Have Not Been Repeated
So, we should have a structured way to represent that one study reproduces another? (e.g. that, with similar controls, the relation between the independent and dependent variables was sufficiently similar)
- RDF is the best way to do this. RDF can be represented as RDFa (RDF in HTML) and as JSON-LD (JSON LinkedData).
... " #LinkedReproducibility "
https://twitter.com/search?q=%23LinkedReproducibility
It isn't/wouldn't be sufficient to, with one triple, say (example.org/studyX, 'reproduces', example.org/studyY); there is a reified relation (an EdgeClass) containing metadata like who asserts that studyX reproduces studyY, when they assert that, and why (similar controls, similar outcome).
Today, we have to compare PDFs of studies and dig through them for links to the actual datasets from which the summary statistics were derived; so specifying who is asserting that studyX reproduces studyY is very relevant.
Ideally, it should be possible to publish a study with structured premises which lead to a conclusion (probably with formats like RDFa and JSON-LD, and a comprehensive schema for logical argumentation which does not yet exist). ("#StructuredPremises")
Most simply, we should be able to say "the study control type URIs match", "the tabular column URIs match", "the samples were representative", and the identified relations were sufficiently within tolerances to say that studyX reproduces studyY.
Doing so in prosaic, parenthetical two-column PDFs is wasteful and shortsighted.
An individual researcher then, builds a set of beliefs about relations between factors in the world from a graph of studies ("#StudyGraph") with various quantitative and qualitative metadata attributes.
As fields, we would then expect our aggregate #StudyGraphs to indicate which relations between dependent and independent variables are relevant to prediction and actionable decision making (e.g. policy, research funding).
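A minimal sketch of the reified studyX-reproduces-studyY edge described above (all field names here are invented for illustration, not an established vocabulary):

```python
# Hypothetical reified "reproduces" assertion: instead of a bare triple,
# the edge is an object (an EdgeClass instance) carrying provenance.
assertion = {
    "@type": "ReproducibilityAssertion",
    "subject": "http://example.org/studyX",
    "predicate": "reproduces",
    "object": "http://example.org/studyY",
    "assertedBy": "http://example.org/researcherA",  # who asserts it
    "assertedOn": "2016-06-01",                      # when
    "evidence": {                                    # why
        "controlsMatch": True,
        "outcomeWithinTolerance": True,
    },
}

def is_supported(a):
    # The claim only stands if both structured evidence checks hold.
    ev = a["evidence"]
    return ev["controlsMatch"] and ev["outcomeWithinTolerance"]

print(is_supported(assertion))
```

A #StudyGraph would then be a collection of such edges, filterable by asserter and evidence, rather than prose buried in PDFs.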
The SQL filter clause: selective aggregates
You can do these with Ibis and various SQL engines:
* http://docs.ibis-project.org/sql.html#aggregates-considering...
* https://github.com/cloudera/ibis/tree/master/ibis/sql (PostgreSQL, Presto, Redshift, SQLite, Vertica)
* https://github.com/cloudera/ibis/blob/master/ibis/sql/alchem... (SQLAlchemy)
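For engines without the FILTER clause, the same selective aggregate can be written portably; a self-contained sqlite3 sketch (table and figures invented), where the CASE WHEN form is equivalent to SUM(amount) FILTER (WHERE status = 'ok'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?)",
    [(10.0, "ok"), (20.0, "refunded"), (30.0, "ok")],
)

# Selective aggregate alongside the plain aggregate in one pass:
# SUM over only the 'ok' rows, and SUM over everything.
row = conn.execute(
    "SELECT SUM(CASE WHEN status = 'ok' THEN amount ELSE 0 END), SUM(amount) "
    "FROM payments"
).fetchone()
print(row)  # (40.0, 60.0)
```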
Ask HN: What do you think about the current education system?
Is it good, bad? What can be done better? What problems do you identify? Is it upsetting? Are you used to it?
A Reboot of the Legendary Physics Site ArXiv Could Shape Open Science
Interesting to see their approach and the reasons why they are not building more features on top of ArXiv. Although comments on papers might be a dangerous area to venture into, there are definitely places on the web where the ability to annotate and comment on papers is helping science move forward. A good example is the Polymath project and Terry Tao's blog. Tao's recent solution to the Erdos discrepancy problem, an 80-year-old number theory problem, was actually triggered by a comment on his blog. Another example is www.fermatslibrary.com. Although the papers in the platform are more historical/foundational, they were able to get consistently good/constructive comments that help people understand papers better.
I've written up a few ideas about PDFs, edges, and reproducibility (in particular); with the Hashtags #LinkedReproducibility (and #MetaResearch)
https://twitter.com/search?q=%23LinkedReproducibility
https://twitter.com/search?q=%23MetaResearch
- schema.org/MedicalTrialDesign enumerations could/should be extended to all of science (and then added to all of these PDFs without structured edge types like e.g. {intendedToReproduce, seemsToReproduce} (which then have specific ensuing discussions))
- http://health-lifesci.schema.org/MedicalTrialDesign
- there should be a way to evaluate controls in a structured, blinded, meta-analytic way
- PDF is pretty, but does not support RDFa (because this is a graph)
... notes here: https://wrdrd.com/docs/consulting/data-science#linked-reprod...
(edit) please feel free to implement any of these ideas (e.g. CC0)
Principles of good data analysis
Helpful; thanks!
"Ten Simple Rules for Reproducible Computational Research" http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fj...
* Rule 1: For Every Result, Keep Track of How It Was Produced
* Rule 2: Avoid Manual Data Manipulation Steps
* Rule 3: Archive the Exact Versions of All External Programs Used
* Rule 4: Version Control All Custom Scripts
* Rule 5: Record All Intermediate Results, When Possible in Standardized Formats
* Rule 6: For Analyses That Include Randomness, Note Underlying Random Seeds
* Rule 7: Always Store Raw Data behind Plots
* Rule 8: Generate Hierarchical Analysis Output, Allowing Layers of Increasing Detail to Be Inspected
* Rule 9: Connect Textual Statements to Underlying Results
* Rule 10: Provide Public Access to Scripts, Runs, and Results
Sandve GK, Nekrutenko A, Taylor J, Hovig E (2013) Ten Simple Rules for Reproducible Computational Research. PLoS Comput Biol 9(10): e1003285. doi:10.1371/journal.pcbi.1003285
This is way more useful than the original post, thanks.
This is great, thank you!
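Rule 6 in practice is often a single line; a minimal sketch (function and values invented) showing that recording the seed makes a randomized analysis step exactly repeatable:

```python
import random

def noisy_sample(seed):
    # A seeded local RNG: noting the seed (Rule 6) means a later rerun
    # reproduces this "random" draw exactly.
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(5)]

# Same seed, same result, on any rerun.
assert noisy_sample(42) == noisy_sample(42)
print(noisy_sample(42))
```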
Why Puppet, Chef, Ansible aren't good enough
So, basically, replace yum, apt, etc. with a 'stateless package management system'. That seems to be the gist of the argument. Puppet, Chef and Ansible (he left out Salt and cfengine!) have little to do with the actual post, and are only mentioned briefly in the intro.
They would all still be relevant with this new packaging system.
For some reason, this came to mind: https://xkcd.com/927/
No.
Well, yes, replace yum, apt, etc. But once you have a functional package management system, you don't need Puppet, Chef or Ansible, because the same stateless configuration language can be used to describe cluster configurations as well as packages. So build a provisioning tool based on that, instead.
That provisioning tool is called NixOps. The article links to it, but doesn't really go into detail about NixOps as a replacement for Puppet et al.
am I going to switch distros just to use a different provisioning tool?
Yes... Eventually.
The Nix model is the only sane model going into a future where complexity will continue to increase. Nobody is forcing you to switch distros now, but I'm hinting that you'll probably want to look into this, because the current models might well be on their way to obsolescence.
* Unique PREFIX; cool. Where do I get signed labels and checksums?
* I fail to see how baking configuration into packages can a) reduce complexity; b) obviate the need for configuration management.
* How do you diff filesystem images when there are unique PREFIXes in the paths?
A salt module would be fun to play around with.
How likely am I to volunteer to troubleshoot your unique statically-linked packages?
[deleted]
Python vs Julia – an example from machine learning
1. Where is the source for this benchmark?
2. http://benchmarksgame.alioth.debian.org could be a bit more representative of broad-based algorithmic performance.
3. There are lots of Python libraries for application features other than handpicked algorithms. I would be interested to see benchmarks of the marshaling code in IJulia (IPython with Julia)
Free static page hosting on Google App Engine in minutes
From years back, there was a tool created called DryDrop[1] that allows you to publish static GAE sites via GitHub push.
https://developers.google.com/appengine/docs/push-to-deploy (Git + .netrc)
Seems nice! How do you set up a custom domain name for it?
“Don’t Reinvent the Wheel, Use a Framework” They All Say
1. WordPress is an application with a plugin API. It is not a framework.
2. Writing a web application without a framework is a good learning experience. For anything but small-scale local learning experiences, the risks and costs of not working with a framework are significant. [It is probable that I, with my ego, would "do it wrong" and that a community of developers has arrived at a far superior solution.]
One of the best explanations for what advantages a framework offers over basically just writing your own framework I've found is in the Symfony2 Book: "Symfony2 versus Flat PHP". [1]
[1] http://symfony.com/doc/current/book/from_flat_php_to_symfony...
PEP 450: Adding A Statistics Module To The Standard Library
I think Pandas is a great candidate for inclusion in the stdlib if this ever happens - and hopefully, numpy/parts of scipy will also be thrown in :)
I think the idea is to include a small, independent stats package, not a full-featured, still-developing third-party library. Any number of people need std dev easily and reliably available. If you need numpy on top of that, you know you do and can afford the effort.
For 99% of my work numpy and the associated compilation overhead is unneeded - fits my brain, fits my needs
So let's amortize the cost of compiling and/or installing fast binaries by only relying on plain Python.
It would be great if there was a natural progression (and/or compat shims) for porting from this new stdlib library to NumPy[Py] (and/or from LibreOffice) (e.g. "Is it called 'cummean'?").
I guess that's the point of the stats-battery - pure python stats with no / minimal cost to migrate to numpy e.g.
from stats import mean
...
from numpy import mean
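(For what it's worth, PEP 450 was eventually accepted as the `statistics` module in Python 3.4.) A minimal sketch of the pure-Python baseline, with invented data:

```python
from statistics import mean, stdev

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Pure-Python descriptive stats: no compiled dependencies needed.
m = mean(data)   # 5.0
s = stdev(data)  # sample standard deviation

print(m, round(s, 3))
```

Migrating later is the one-line import swap the parent describes: `from numpy import mean` in place of the stdlib import.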
Functional Programming with Python
Great deck about functional programming in Python. Also:
* `operator.attrgetter`, `getattr()`, `setattr()`, `object.__getattribute__` [1][2]
* `operator.itemgetter`, `object.__getitem__` [3][4]
* `collections.abc` [5][6]
[1] http://docs.python.org/2/library/operator.html#operator.attr...
[2] http://docs.python.org/2/reference/datamodel.html#object.__g...
[3] http://docs.python.org/2/library/operator.html#operator.item...
[4] http://docs.python.org/2/reference/datamodel.html?#object.__...
[5] http://docs.python.org/2/library/collections.html#collection...
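A small sketch of the two getters above in use (the data is made up):

```python
from collections import namedtuple
from operator import attrgetter, itemgetter

Point = namedtuple("Point", ["x", "y"])
points = [Point(3, 1), Point(1, 2), Point(2, 0)]

# attrgetter builds a callable equivalent to lambda p: p.x
by_x = sorted(points, key=attrgetter("x"))

# itemgetter does the same via __getitem__ (index 1 is the y field)
by_y = sorted(points, key=itemgetter(1))

print(by_x[0], by_y[0])
```

Both return plain callables, which is what makes them compose nicely with `sorted`, `min`, `max`, `map`, and friends.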
wow, i love your presentation design, type and function :)
is this a reveal.js rendering of an IPython notebook ?
ipython nbconvert --to slides notebook.ipynb
http://ipython.org/ipython-doc/dev/interactive/nbconvert.htm...
PEP 8 Modernisation
The default wrapping in most tools disrupts the visual structure of the code,
making it more difficult to understand.
It's 2013. Let's fix the tools.
You'd think we would have gotten past the point of having to manually figure out "where should I break these lines for the best readability." Code is meant to be consumed by machines, and a machine should be capable of parsing the stuff, figuring out the visual structure, and wrapping dynamically to account for window width and readable line-lengths in a way that preserves the visual structure.
I upvoted you for this - I think it's a really interesting idea.
Why can't my editor tokenize my code and then show it in the format I want? Why do I, as the programmer, have to worry about whether one whitespace convention works for you versus me, when you could just come up with whatever scheme you like and view code that way?
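As a toy sketch of tool-side rewrapping (stdlib only; this is word-wrapping, not a real code formatter): a long line can be reflowed to a target width while preserving its indentation, with continuation lines indented further so the structure survives.

```python
import textwrap

long_line = "    result = some_function(argument_one, argument_two, argument_three, argument_four)"

# Recover the line's existing indentation.
indent = long_line[: len(long_line) - len(long_line.lstrip())]

# Reflow to 50 columns; continuation lines get extra indentation.
wrapped = textwrap.fill(
    long_line.strip(),
    width=50,
    initial_indent=indent,
    subsequent_indent=indent + "    ",
)
print(wrapped)
```

A real implementation would tokenize the language and break only at syntactically safe points, but the reflow-to-viewer's-width idea is the same.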
Funny, someone was just talking about 79 characters per line in regards to using soft tabs for editor display consistency, the other day.
http://www.reddit.com/r/java/comments/1j7iv4/would_it_not_be...
These are useful for static code analysis and finding congruence with typesetting conventions:
https://pypi.python.org/pypi/flake8
Useful Unix commands for data science
The "Text Processing" category of this list of unix utilities is also helpful: http://en.wikipedia.org/wiki/List_of_Unix_programs
BashReduce is a pretty cool application of many of these utilities.
The data visualization community needs its own Hacker News
It'd be really nice if there was a more centralized place to submit and discuss recent work. I usually end up posting my stuff to visualizing.org, visual.ly, and sometimes hn.
There generally isn't too much commenting going on though, despite generating a decent number of views. This makes sense - I only give feedback to a small percentage of things I see on the internet. I would like to interact with other designers more, but sending unsolicited comments/criticism via a tweet or an email doesn't seem appropriate most of the time.
I included a small note on the bottom of my last project "Have an idea for another graphic? Think I did something wrong? Hit me up! adam.r.pearce@gmail.com | @adamrpearce" which resulted in a couple of interesting email threads. None of those conversations really need to be private though.
/r/d3js , /r/visualization , /r/Infographics
schema.org/ > Thing > CreativeWork > { Article, Dataset, DataCatalog, MediaObject, and CollectionPage } may also be helpful.
Ask HN: Intermediate Python learning resources?
So I've completed Codecademy's course on Python, I have some experience fiddling with Flask and putting together random Python scripts. Generally, when I want to build something that I've never built before, I look up how to do it on Stackoverflow and manage to understand most of the things.
How can I take my knowledge to the next level?
Free learning resources are preferred. Hopefully ones you have used yourself when in my position.
Thanks!
I'm in a very similar position.
If you really like Codecademy, there are non-track exercises that involve Python in the API section [0] and a couple of Python challenges [1][2] that aren't listed.
What I'm doing now:
* Solving exercises on Project Euler in Python. [3]
* Working through each example in the Python Cookbook[4]. It was just updated to the third edition.
* Watched Guido's Painless Python talks from a few years ago [5]. I found his concise explanations of language features really helpful.
Some things I intend to do:
* Finish working through Collective Intelligence [6]. The examples are written in Python.
* Work through Introduction to Algorithms [7]. The course uses Python.
* Read, understand and give a shot at extending Openstack [8] code.
-----
0: http://www.codecademy.com/tracks/apis
1: http://www.codecademy.com/courses/python-intermediate-en-NYX...
2: http://www.codecademy.com/courses/python-intermediate-en-VWi...
5: http://www.youtube.com/watch?v=bDgD9whDfEY
7: http://ocw.mit.edu/courses/electrical-engineering-and-comput...
Aren't Project Euler's exercises more like maths exercises? It's kind of difficult for those who graduated from the social sciences and are trying to learn programming from scratch.
The Green Tea Press books are great; and free.
Think Python: How To Think Like a Computer Scientist http://www.greenteapress.com/thinkpython/thinkpython.html
Think Complexity: Exploring Complexity Science with Python : http://www.greenteapress.com/compmod/
Think Stats: Probability and Statistics for Programmers : http://www.greenteapress.com/thinkstats/index.html
You can search announced, in progress, future, self-paced, and finished MOOCs (Massive Open Online Courses) with class-central.com : http://www.class-central.com/search?q=python
Here are three resources for learning Python for Science :
http://scipy-lectures.github.io
https://github.com/jrjohansson/scientific-python-lectures
https://github.com/ipython/ipython/wiki/A-gallery-of-interes...
Ansible Simply Kicks Ass
> Doing this with Chef would probably mean chasing down a knife plugin for adding Linode support, and would simply require a full Chef stack (say hello to RabbitMQ, Solr, CouchDB and a gazillion smaller dependencies)
It is throwaway lines like that where you really need to be careful, since, no, you don't need RabbitMQ, Solr, CouchDB, etc. You can just use chef-solo, which can also be installed with a one-liner (albeit by running a remote bash script).
When comparing two products (especially in an obviously biased manner) you need to make sure you are 100% correct. Otherwise you weaken your case and comments like this one turn up.
True, though chef-solo is a local-only solution. Ansible can manage remote machines without that kind of setup, and can start managing them without installing anything on them.
Install chef-solo on server.
scp locally dev'd (with vagrant ideally) cookbook to server
Run cookbook
3 steps that can easily be wrapped in a little script (which I know a large company does because I saw their presentation about it on confreaks. Sorry I cannot remember the name of the company or presentation but it was a chef related one).
Still not exactly killing it in terms of complexity. I would avoid comparing ansible to chef-solo in that respect and focus on bits where ansible has a clear (IMHO) win.
Having said that I should say that I have not used ansible and am basing this on what I have read about it.
You don't even need to do that. You can type `knife solo prepare root@my-server` and it will install chef-solo on that machine. Then type `knife solo cook root@my-server` and you're good to go.
Well damn, I have been doing it wrong. That is awesome to know though.
I wonder if anyone has done a http://todomvc.com/ equivalent for cfengine, puppet, chef, salt, ansible etc.
Something like a simple webserver running Apache with mod_xsendfile, Passenger, Ruby 2.0.0, PostgreSQL and a few firewall tweaks (why yes, I AM mainly a Ruby developer, why do you ask?)
https://github.com/devstructure/blueprint [generates] configuration sets for "Puppet or Chef or CFEngine 3."
https://en.wikipedia.org/wiki/Comparison_of_open-source_conf...
https://ops-school.readthedocs.org/en/latest/config_manageme...
Python-Based Tools for the Space Science Community
"OSX install has become a challenge. With the Enthought transition to Canopy we cannot figure out clean install directions for 3rd party packages and therefore can no longer recommend using EPD for SpacePy."
Um... use Anaconda? http://continuum.io/anaconda
The Python installation tool utilized to install different versions of Anaconda and component packages is called [conda](http://docs.continuum.io/conda/intro.html). [pythonbrew]( https://github.com/utahta/pythonbrew) in combination with [virtualenvwrapper](http://virtualenvwrapper.readthedocs.org/en/latest/) is also great.
Big-O Algorithm Complexity Cheat Sheet
JSON API
In terms of http://en.wikipedia.org/wiki/Linked_data , there are a number of standard (overlapping) URI-based schema for describing data with structured attributes:
* http://schema.org/docs/full.html
* http://schema.rdfs.org/all.json
* http://schema.rdfs.org/all.ttl (Turtle RDF Triples)
* http://json-ld.org/spec/latest/json-ld/
* http://json-ld.org/spec/latest/json-ld-api/
* http://www.w3.org/TR/ldp/ Linked Data Platform TR defines a RESTful API standard
* http://wiki.apache.org/incubator/MarmottaProposal implements LDP 1.0 Draft and SPARQL 1.1
Norton Ghost discontinued
Would love to hear ideas on the best replacement from other HNers.
Open-source utilities running from livecd.
dd if=/dev/sda bs=16777216 | gzip -c9 > /path/to/sda.img.gz
If you have the pv utility installed, you can get progress bars: dd if=/dev/sda bs=16777216 | pv -c -W | gzip -c9 | pv -c -W > /path/to/sda.img.gz
To restore (WARNING: THE FOLLOWING COMMAND IS EXTREMELY DANGEROUS, USE IT ONLY IF YOU KNOW EXACTLY WHAT YOU'RE DOING): pv -c -W /path/to/sda.img.gz | gzip -cd | pv -c -W | dd bs=16777216 of=/dev/sdx
Of course, you can use your favorite compression program instead of gzip; on Debian-like systems, bzip2 or xz should be drop-in replacements available by default.
If you want a compressed image which you can mount read-only without uncompressing (e.g. if you want to be able to reach into the backup and pull out a single file or directory), you can pipe the dump to a FIFO and then use the pseudo-file feature of mksquashfs [1]. E.g. something like this (commands not tested, there may be typos):
mkfifo sda.fifo
echo 'sda.img f 444 root root cat sda.fifo' > sda.pf
dd if=/dev/sda bs=16777216 > sda.fifo &
mkdir empty
mksquashfs empty sda.squashfs -pf sda.pf
mount -o ro,loop sda.squashfs /mnt
You then have an uncompressed image visible. If it's a whole-disk image (/dev/sda instead of /dev/sda1), you need to use a program called kpartx to make device nodes for each partition.
I recommend reading the man pages and playing around with these utilities in a Virtualbox VM with a small disk.
Of course, this approach doesn't pay any attention to filesystems. Which means it works with Windows partitions (unlike LVM or btrfs snapshots). But there are limitations; it reads and stores "empty" space not occupied by files (I recommend making a large file full of zeroes beforehand to make the empty space compress better), restoring to a smaller disk is difficult if not impossible, restoring to a larger disk requires you to resize manually afterwards if you want to use the extra space.
[1] It's a little easier conceptually to make a squashfs directly from an image file, but that approach requires you to store the image file, which requires temp space equal to the size of the disk you're backing up. The pseudo-file approach has a benefit of not needing temporary space equal to the size of the disk being backed up, you just need enough space for the compressed result.
That misses much of what Ghost does. Ghost is filesystem-aware, so it's not just a byte-for-byte cloner: it can copy a smaller volume to a larger one, and vice versa.
You can accomplish the same thing in Linux, of course, but it requires a lot more than just "dd".
http://clonezilla.org supports http://partclone.org/ , http://www.partimage.org/ , dd, and ntfsclone.
As an experimental physicist, I refuse to get excited about a new theory until the proponent gets to an observable phenomenon that can settle the question.
The problem with emergent theories like this is that they _derive_ Newtonian gravity and General Relativity, so it's not clear there's anything to test. If they can predict MOND without needing an additional MOND field, then they become falsifiable only insofar as MOND is.
How is the article related to MOND?
In general, it's not. But if the only thing emergent theories predict is Newtonian dynamics and General Relativity, that's a big problem for falsifiability. If they modify Newtonian dynamics in some way, then we do have something to test.
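For concreteness (this is standard MOND, not something claimed in the thread): Milgrom's MOND departs from Newton only below a tiny acceleration scale $a_0$, which is exactly the kind of testable modification meant above:

```latex
% mu is an interpolating function: mu(x) -> 1 for x >> 1 (Newtonian limit),
% mu(x) -> x for x << 1 (deep-MOND limit, a ~ sqrt(a_N a_0), flat rotation curves)
a \,\mu\!\left(\frac{a}{a_0}\right) = a_N ,
\qquad a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}
```

An emergent theory that fixed the form of $\mu$ or predicted $a_0$ from first principles would be falsifiable beyond MOND itself.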
From https://news.ycombinator.com/item?id=43738580 :
> FWIU this Superfluid Quantum Gravity [SQG, or SQR Superfluid Quantum Relativity] rejects dark matter and/or negative mass in favor of supervacuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?
From https://news.ycombinator.com/item?id=43310933 re: second sound:
> - [ ] Models fluidic attractor systems
> - [ ] Models superfluids [BEC: Bose-Einstein Condensates]
> - [ ] Models n-body gravity in fluidic systems
> - [ ] Models retrocausality
From https://news.ycombinator.com/context?id=38061551 :
> A unified model must: differ from classical mechanics where observational results don't match classical predictions, describe superfluid helium-3 in a beaker, describe gravity in Bose-Einstein condensate superfluids, describe conductivity in superconductors and dielectrics, not introduce unobserved "annihilation", explain how helicopters have lift, describe quantum locking, describe paths through fluids and gravity, predict n-body gravity experiments on Earth in fluids with Bernoulli's principle and in space, [...]
> What else must a unified model of gravity and other forces predict with low error?