Fusion energy is 30 years away and always will be. https://www.youtube.com/watch?v=JurplDfPi3U
This was before ignition was confirmed in a lab setting, though, so now the countdown has actually begun.
"Ignition" was as much a marketing term as anything. It relied on a very specific accounting of energy-in vs energy-out, which no one doubted could be done.
The fact is that they used 50 kWh of energy and produced 0.7 kWh. That some tiny part of the flow diagram achieved > 1:1 energy doesn't change the fact that the overall ratio of energy out to energy in has barely changed in ten years.
The latest experiment produced a Q-total of 0.014, while before it was something like 0.012.
We can't just hand-wave away the energy in.
Watched the Helion Learn Engineering video, too: https://www.youtube.com/watch?v=_bDXXWQxK38
Has that net-positive finding been reproduced yet in any other tokamaks?
How does this compare to Helion's (non-tokamak, non-stellarator fusion plasma confinement reactor) published stats for the Trenta and Polaris products?
Could SYLOS or other CPA (chirped pulse amplification) lasers be useful for this problem, with or without the high heat of a preexisting plasma reaction to pulse next to? https://www.google.com/search?q=nuclear+waste+cpa
Sh1mmer – An exploit capable of unenrolling enterprise-managed Chromebooks
I wouldn't have a career in IT if I hadn't spent many hours at ages 11 to 15 trying to get round my school's network security. My logon was frequently disabled for misuse and I was even suspended for a couple of days once, but I learnt more that way than in any class I've ever taken.
I relate to this. As someone currently in high school, messing around with web proxies, code-deployment sites, and web-based IDEs trying to run Dwarf Fortress in my school browser has taught me more about computers and networks than just about anything else. It is painfully easy to get around school filters these days. I've never really messed with unenrollment because you do need enrollment to access the testing websites, but I've been trying to get into Developer Mode to get Linux apps; the IT guys must have thought ahead on that one.
Chromebooks don't even have a Terminal for the kids. Vim's great, but VScode with Jupyter Notebook support would make the computers we bought for them into great offline calculators, too.
VSCode on a Chromebook requires VMs and Containers, which require "Developer Tools" and "Powerwash"; or the APK repack of VSCodium, which you can't even sideload and manually update sometimes (because it doesn't pay the 15-30% cut and must use their payment solution and their app store, with static analysis and code signing at upload).
AFAIU, Chromebooks with Family Link and Chromebooks for Education do not have a Terminal, bash, git, VMs (KVM), Containers (Docker/Podman/LXC/LXD/gvisor), third-party repos with regular security updates, or even Python; which isn't really Linux (and Windows, Mac, and Linux do already at present support such STEM for Education use cases).
From https://news.ycombinator.com/item?id=30168491 :
> Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"? The current pyodide CPython Jupyter kernel takes like ~25s to start at present, and can load Python packages precompiled to WASM or unmodified Python packages with micropip: https://pyodide.org/en/latest/usage/loading-packages.html#lo...
There's also MambaLite, which is part of the emscripten-forge project; along with BinderLite. https://github.com/emscripten-forge/recipes (Edit: Micropip or Mambalite or picomamba or Zig. : "A 116kb WASM of Blink that lets you run x86_64 Linux binaries in the browser" https://news.ycombinator.com/item?id=34376094 )
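For reference, loading a package with micropip in a Pyodide console or JupyterLite notebook looks roughly like this (a minimal sketch; snowballstemmer is just an example of a pure-Python wheel, and Pyodide supports top-level await):
import micropip
# micropip fetches pure-Python wheels from PyPI and installs them into
# the in-browser (WASM) CPython environment
await micropip.install("snowballstemmer")
import snowballstemmer
print(snowballstemmer.stemmer("english").stemWord("running"))  # "run"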
It looks like there are now tests for VScode in the default Powerwashable 'penguin' Debian VM that you get with Chromebook Developer Tools; but still the kids are denied VMs and Containers or local accounts (with kid-safe DoH/DoT at least), and so they can't run VScode locally on the Chromebooks that we bought for them.
Why do I need "Developer Tools" access to run VScode and containers on a Chromebook; but not on a Windows, Mac or Linux computer? If containers are good enough for our workloads hosted in the cloud, they should be good enough for local coding and calculating in e.g. Python. https://github.com/quobit/awesome-python-in-education#jupyte...
I actually use a Web Assembly port of VIM on my school computer.
Nice. TIL about vim.wasm: https://github.com/rhysd/vim.wasm
Jupyter Notebook and Jupyter Lab have a web terminal that's good enough to do SSH and Vim. Mosh (Mobile Shell) is more resilient to internet connection failure.
Again though, running everything in application-sandboxed WASM all as the current user is a security regression from the workload isolation features built into VMs and Containers (which Windows, Mac, and Linux computers support in the interests of STEM education and portable component reuse).
The Qubit Game (2022)
"World Quantum Day: Meet our researchers and play The Qubit Game" https://blog.google/technology/research/world-quantum-day-me... :
> In celebration of World Quantum Day, the Google Quantum AI team wanted to try a different way to introduce people to the world of quantum computing. So we teamed up with Doublespeak Games to make The Qubit Game – a playful journey to building a quantum computer, one qubit at a time, while solving challenges that quantum engineers face in their daily work. If you succeed, you’ll discover new upgrades for your in-game quantum computer, complete big research projects, and hopefully become a little more curious about how we’re building quantum computers.
Additional Q12 (K12 QIS Quantum Information Science) ideas?:
- Exercise: Port QuantumQ quantum puzzle game exercises to a quantum circuit modeling and simulation library like Cirq (SymPy) or qiskit or tequila: https://github.com/ray-pH/quantumQ
- Exercise: Model fair random coin flips with qubit basis encoding in a quantum circuit simulator in a notebook (see the sketch after this list)
- Exercise: Model fair (uniformly distributed) 6-sided die rolls with basis state embedding or amplitude embedding or better (in a quantum circuit simulator in a notebook)
- QIS K-12 Framework (for K12 STEM, HS Computer Science, HS Physics) https://q12education.org/learning-materials-framework
- tequilahub/tequila-tutorials: https://github.com/tequilahub/tequila-tutorials
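A minimal sketch of the coin-flip exercise above, using Cirq (assuming `pip install cirq`); a Hadamard gate plus measurement yields a fair coin:
import cirq

# One qubit; H puts it in an equal superposition of |0> and |1>,
# so each measurement is a fair coin flip.
q = cirq.LineQubit(0)
circuit = cirq.Circuit(cirq.H(q), cirq.measure(q, key="coin"))
result = cirq.Simulator().run(circuit, repetitions=1000)
print(result.histogram(key="coin"))  # roughly {0: ~500, 1: ~500}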
Calculators now emulated at Internet Archive
MAME: https://en.wikipedia.org/wiki/MAME
"TI-83 Plus Calculator Emulation" https://archive.org/details/ti83p-calculator
TI-83 series: https://en.wikipedia.org/wiki/TI-83_series :
> Symbolic manipulation (differentiation, algebra) is not built into the TI-83 Plus. It can be programmed using a language called TI-BASIC, which is similar to the BASIC computer language. Programming may also be done in TI Assembly, made up of Z80 assembly and a collection of TI provided system calls. Assembly programs run much faster, but are more difficult to write. Thus, the writing of Assembly programs is often done on a computer.
I had a TI-83 Plus in middle school, and then bought a TI-83 Plus Silver edition for high school. The TI-83 Plus was the best calculator allowed for use by the program back then. FWIU these days it's the TI-84 Plus, which has USB but no CAS Computer Algebra System.
The JupyterLite build of JupyterLab - and https://NumPy.org/ - include the SymPy CAS Computer Algebra System and a number of other libraries; and there's an `assert` statement in Python; but you'd need to build your own JupyterLab WASM bundle to host as static HTML if you want to include something controversial like pytest-hypothesis. https://jupyterlite.rtfd.io/
Better than a TI-83 Plus emulator? Install MambaForge in a container to get the `conda` and `mamba` package managers (and LLVM-optimized CPython on Win, Mac, Lin) and then `mamba install -y jupyterlab tabulate pandas matplotlib sympy`; or login to e.g. Google Colab, Cocalc, or https://Kaggle.com/learn ( https://GitHub.com/Kaggle/docker-python ) .
To install packages every time a notebook runs:
!python -m pip install <pkgs>  # or:
%pip install <pkgs>
!conda install -y <pkgs>
!mamba install -y <pkgs>
But NumPy.org, JupyterLite, Colab, and Kaggle Learn all already have a version of SymPy installed (per their reproducible software version dependency files: requirements.txt, environment.yml (Jupyter REES; repo2docker)).
Like MAME, the emulator for the TI-83 Plus and other calculators hosted by this new Internet Archive project, emscripten-forge builds WASM (WebAssembly) that runs in an application-sandboxed browser tab as the same user as other browser tab subprocesses.
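Since the TI-83 Plus has no built-in symbolic manipulation, SymPy in any of those notebook environments covers that gap; a quick sketch:
import sympy as sy

x = sy.symbols("x")
expr = x**3 + sy.sin(x)
print(sy.diff(expr, x))       # 3*x**2 + cos(x): differentiation the TI-83 lacks
print(sy.integrate(expr, x))  # x**4/4 - cos(x)
print(sy.solve(x**2 - 2, x))  # [-sqrt(2), sqrt(2)]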
TI-83 apps:
ACT Math Section app; /? TI-83 ACT app: https://www.google.com/search?q=ti83+act+app
Commodity markets with volatility on your monochrome LCD calculator, with no WiFi. SimCity BuildIt has an online commodity marketplace and sims as part of the simulation game. "Category:TI-83&4 series Zilog Z80 games" https://en.wikipedia.org/wiki/Category:TI-83%264_series_Zilo...
Computer Algebra System > Use in education: https://en.wikipedia.org/wiki/Computer_algebra_system#Use_in... :
> CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms[15] though it may be permitted on all of College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests and the AP Calculus, Chemistry, Physics, and Statistics exams.
Machine Learning for Fluid Dynamics Playlist
[Machine Learning for] "Fluid Dynamics" YouTube playlist. Steve Brunton (UW) https://youtube.com/playlist?list=PLMrJAkhIeNNQWO3ESiccZmPss...
Intercepting t.co links using DNS rewrites
That 9-hop shortening example is disgusting. I wonder if it could be alleviated by introducing some protocol:
1. Make all shortening services append a `This-is-a-shortening-service: true` header to all the responses they send.
2. When a link is added to a shortening service, check if the response from the link has the header above and resolve the destination, recursively.
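A rough sketch of the client side of that proposal (the header name is the hypothetical one from point 1, not any existing standard):
import requests

def resolve_short_url(url: str, max_hops: int = 10) -> str:
    """Follow shortener redirects until a non-shortener answers."""
    for _ in range(max_hops):
        resp = requests.head(url, allow_redirects=False, timeout=5)
        is_shortener = resp.headers.get("This-is-a-shortening-service") == "true"
        location = resp.headers.get("Location")
        if not (is_shortener and location):
            return url
        url = location  # recurse to the next hop
    return url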
awesome-url-shortener: https://github.com/738/awesome-url-shortener
/? shorturl api OpenAPI https://www.google.com/search?q=shorturl+api+openapi
- TinyURL OpenAPI: https://tinyurl.com/app/dev
- GH topic: url-shortener: https://github.com/topics/url-shortener
A https://schema.org/Thing may have zero or more https://schema.org/url and/or https://schema.org/identifier values; and then there's the ?s subject URI that's specified with the `@id` property in JSON-LD RDF.
You can add string, schema:Thing, or URI tags/labels with the https://schema.org/about property.
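For example, a Thing with an `@id` subject URI, `url`, `identifier`, and `about` tags might look like this JSON-LD document (sketched here as a Python dict; all values are made up):
doc = {
    "@context": "https://schema.org",
    "@id": "https://example.org/thing/1",   # the ?s subject URI
    "@type": "Thing",
    "url": "https://example.org/thing/1.html",
    "identifier": "thing-1",
    "about": ["url-shorteners",              # plain string tag
              {"@id": "https://example.org/topics/url-shortening"}],  # URI tag
}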
MusicLM: Generating music from text
I don't really understand why this approach is pushed for music. You can overpaint an image, but you can't do that with a song. Cutting an image to reintroduce coherence is easy too. For a song you need MIDI or another symbolic representation. That was the approach of pop2piano (unfortunately it is limited to covers, not generating from scratch). And even if a generated song is OK, listening to half an hour full of AI mistakes is really tiring. With a symbolic representation you could at least fix the mistakes if there is one good output.
I understand what you're saying, although it could be argued that at least for some types of image tasks one would prefer something like an SVG output with layers to make it easier to edit.
For music, I think it's partly an academic question of "can we do it" rather than trying to maximize immediate practical usefulness. There's already quite a bit of work on symbolic music generation (mostly MIDI), a lot of it quite competent, especially in more constrained domains like NES chiptunes or classical piano, so a full text-to-audio pipeline probably seemed a more interesting research problem.
And for a lot of use cases, where people might truly not care too much about tweaking the output to their liking, the generated audio might be good enough; the examples were pretty plausible to my ear, if somewhat lo-fi sounding (probably because it's operating at 24kHz, compared to the more standard 44-48kHz).
In the future a more hybrid approach probably makes sense for at least some applications, where MIDI is generated along with some way of specifying the timbre for each instrument (hopefully something better than General MIDI, though even that would be fun; not sure if it's been done). I'm sure that in the near future we'll see a lot more work in the DAW and plugin space to have these kinds of things built in, but in a way that they can be edited by the user.
awesome-sheet-music lists a number of sheet music archives https://github.com/ad-si/awesome-sheet-music
Other libraries of (Royalty Free, Public Domain) sheet music:
Explainable artificial intelligence: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
FWIU, Current LLMs can't yet do explainable AI well enough to satisfy the optional Attribution clause of e.g. Creative Commons licenses?
"Sufficiently Transformative" is the current general copyright burden according to precedent; Transformative use and fair use: https://en.wikipedia.org/wiki/Transformative_use
SQLAlchemy 2.0 Released
I would urge people who have had issues with the documentation to give the 2.0 documentation a try. Many aspects of it have been completely rewritten, both to correctly describe things in terms of the new APIs as well as to modernize a lot of old documentation that was written many years ago.
First off, SQLAlchemy's docs are pretty easy to get to, for a direct link just go to:
It's an esoteric URL I know! ;)
from there, docs that are new include:
- the Quickstart, so one can see in one quick page what SQLAlchemy usually looks like: https://docs.sqlalchemy.org/en/20/orm/quickstart.html
- the Unified Tutorial, which is a dive into basically every important concept across Core / ORM : https://docs.sqlalchemy.org/en/20/tutorial/index.html
- the ORM Querying guide, which is a "how to" for a full range of SQL generation: https://docs.sqlalchemy.org/en/20/orm/queryguide/index.html
it's still a lot to read of course, but part of the idea of SQLAlchemy 2.0 was to create a straighter and more consistent narrative, while it continues to take on a very broad-based problem space. If you compare the docs to those of, like, PostgreSQL or MySQL, those docs have a lot of sections and text too (vastly more). It's a big library.
From https://docs.sqlalchemy.org/en/20/dialects/ :
> Currently maintained external dialect projects for SQLAlchemy include: [...]
Is there a list of async [SQLA] DB adapters?
The SQLAlchemy 2.0 Release Docs: https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.htm...
Show HN: A script to test whether a program breaks without network access
"Chaos engineering" https://en.wikipedia.org/wiki/Chaos_engineering
dastergon/awesome-chaos-engineering#notable-tools: https://github.com/dastergon/awesome-chaos-engineering#notab...
IIUC, MVVM apps can handle delayed messages - that sit in the outbox while waiting to reestablish network connectivity - better than apps without such layers.
Which mobile apps work during intermittent connectivity scenarios like disasters and disaster relief (where first priority typically is to get comms back online in order to support essential services (with GIF downloads and endless pull-to-refresh))?
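Back to the submission's theme, an in-process variant of the same test is possible without any script at all (a crude sketch, similar in spirit to what pytest-socket does): monkeypatch `socket.socket.connect` so any hidden network dependency fails fast:
import socket
import urllib.request

_real_connect = socket.socket.connect

def _no_network(self, address):
    raise OSError(f"test harness: network access to {address!r} is disabled")

socket.socket.connect = _no_network  # break the network for this process
try:
    urllib.request.urlopen("https://example.org")  # expected to fail
except OSError as exc:  # urllib.error.URLError subclasses OSError
    print("caught:", exc)
finally:
    socket.socket.connect = _real_connect  # restore afterwards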
Certified 100% AI-free organic content
> Published content will be later used to train subsequent models, and being able to distinguish AI from human input may be very valuable going forward
I find this to be a particularly interesting problem in this whole debacle.
Could we end up having AI quality trend downwards due to AI ingesting its own old outputs and reinforcing bad habits? I think it's a particular risk for text generation.
I've already run into scenarios where ChatGPT generated code that looked perfectly plausible, except that the actual API used didn't really exist.
Now imagine a myriad fake blogs using ChatGPT under the hood to generate blog entries explaining how to solve often wanted problems, and that then being spidered and fed into ChatGPT 2.0. Such things could end up creating a downwards trend in quality, as more and more of such junk gets posted, absorbed into the model and amplified further.
I think image generation should be less vulnerable to this since all images need tagging to be useful, "ai generated" is a common tag that can be used to exclude reingesting old outputs, and also because with artwork precision doesn't matter so much. If people like the results, then it doesn't matter that much that something isn't drawn realistically.
As someone in SEO, I've been pretty disgusted by the desire for site owners to want to use AI-generated content. There are various opinions on this, of course, but I got into SEO out of interest in the "organic web" vs. everything being driven by ads.
Love the idea of having AI-Free declarations of content as it could / should help to differentiate organic content from generated content. It would be very interesting if companies and site owners wished to self-certify their site as organic with something like an /ai-free.txt.
I don't see the point. There's lots of old content out there that won't get tagged, so lacking the tag doesn't mean it's AI generated. Meanwhile people abusing AI for profit (eg, generating AI driven blogs to stick ads on them) wouldn't want to tag their sites in a way that might get them ignored.
And what are the consequences for lying?
Does use of a search engine violate the "No AI" covenant with oneself?
Variation on the Turing Test: prove that it's not a human claiming to be a computer.
Modeling premises and Meta-analysis are again necessary elements for critical reasoning about Sources and Methods and superpositions of Ignorance and Malice.
Maybe this could encourage the recreation of the original Yahoo! (If you don't remember, Yahoo! started out not as a search engine in the Google sense but as a collection of human curated links to websites about various topics)
I consider Wikipedia to be a massive curated set of information. It also includes a lot of references and links to additional good information / source materials. Companies try to get spin added and it's usually very well controlled. I worry that a lot of ai generated dreck will seep into Wikipedia, but I am hopeful the moderation will continue to function well.
List of Web directories: https://en.wikipedia.org/wiki/List_of_web_directories ; DMOZ FTW
Distributed Version Control > Work model > Pull Request: https://en.wikipedia.org/wiki/Distributed_version_control#Pu...
sindresorhus/awesome: https://github.com/sindresorhus/awesome#contents
bayandin/awesome-awesomeness: https://github.com/bayandin/awesome-awesomeness
"Help compare Comment and Annotation services: moderation, spam, notifications, configurability" https://github.com/executablebooks/meta/discussions/102
Re: fact checks, schema.org/ClaimReview, W3C Verifiable Claims, W3C Verifiable News & Epistemology: https://news.ycombinator.com/item?id=15529140
W3C Web Annotations could contain (cryptographically-signed (optionally with a W3C DID)) Verifiable Claims; comments with signed Linked Data
An incomplete guide to stealth addresses
> Basic stealth addresses can be implemented fairly quickly today, and could be a significant boost to practical user privacy on Ethereum. They do require some work on the wallet side to support them
So how easy is it realistically? I hope it's not going to be un-ergonomic like PGP, where novices are sometimes seen pasting their private key into e-mails and sending things in plaintext which should have been ciphertext, or otherwise leaking info.
I imagine you have to be really careful not to mess things up here.
Oh, there's WKD: Web Key Directory https://wiki.gnupg.org/WKD#How_does_an_email_client_use_WKD....
gpg --homedir "$(mktemp -d)" --verbose --locate-keys your.email@example.org
https://example.org/.well-known/openpgpkey/hu/0t5sewh54rxz33fwmr8u6dy4bbz8itz2
Is there a pinned certificate for `gpg --recv-keys` (that isn't possible with WKD)? https://en.wikipedia.org/wiki/Key_server_(cryptographic)#Pro...
WKD and HKP depend upon TLS and preshared CA certs (PKI or pinned certificates) in all forms, AFAIU:
# HKP over HTTPS (takes a key ID or fingerprint, not an email)
gpg --recv-keys <keyid>
# WKD (takes an email address)
gpg --locate-keys your.email@example.org
Who is trusted with read/write to all keys on the HTTP pubkey server?
W3C DIDs are encodable into QR codes, too. And key hierarchy is optional with DIDs.
(Edit)
https://www.w3.org/TR/did-core/#did-controller :
> DID Controller
> A DID controller is an entity that is authorized to make changes to a DID document. The process of authorizing a DID controller is defined by the DID method.
> The controller property is OPTIONAL. If present, the value MUST be a string or a set of strings that conform to the rules in 3.1 DID Syntax. The corresponding DID document(s) SHOULD contain verification relationships that explicitly permit the use of certain verification methods for specific purposes.
> When a controller property is present in a DID document, its value expresses one or more DIDs. Any verification methods contained in the DID documents for those DIDs SHOULD be accepted as authoritative, such that proofs that satisfy those verification methods are to be considered equivalent to proofs provided by the DID subject.
/? "Certificate Transparency" blockchain / dlt ... QKD, ... Web Of Trust and temp keys
What does the Interledger Protocol say about this sort of in-band / in-channel signaling around transactions?
https://westurner.github.io/hnlog/ Ctrl-F "SPSP"
> https://github.com/interledger/rfcs/blob/master/0009-simple-... :
> Relation to Other Protocols: SPSP is used for exchanging connection information before an ILP payment or data transfer is initiated
RFC 8905 specifies "The 'payto:' URI Scheme for Payments" and does support ILP addresses https://www.rfc-editor.org/rfc/rfc8905.html#name-tracking-pa... https://datatracker.ietf.org/doc/rfc8905/ :
> 7. Tracking Payment Target Types
> A registry of "Payto Payment Target Types" is described in Section 10. The registration policy for this registry is "First Come First Served", as described in [RFC8126]. When requesting new entries, careful consideration of the following criteria [...]
DID URIs are probably also already payto: URI-scheme compatible but not yet so registered?
ILP Addresses - v2.0.0 https://interledger.org/rfcs/0015-ilp-addresses/ :
> ILP addresses provide a way to route ILP packets to their intended destination through a series of hops, including any number of ILP Connectors. (This happens after address lookup using a higher-level protocol such as SPSP.) Addresses are not meant to be user-facing, but allow several ASCII characters for easy debugging.
Do Large Language Models learn world models or just surface statistics?
If they don't search for Tensor path integrals, for example, can any NN or symbolic solution ever be universally sufficient?
A generalized solution term expression for complex quantum logarithmic relations:
e**(w*(I**x)*(Pi**z))
What sorts of relation expression term forms do LLMs synthesize from?
Can [LLM XYZ] answer prompts like:
"How far is the straight-line distance from (3red, 2blue, 5green) to (1red, 5blue, 7green)?"
> - What are "Truthiness", Confidence Intervals and Error Propagation?
> - What is Convergence?
> - What does it mean for algorithmic outputs to converge given additional parametric noise?
> - "How certain are you that that is the correct answer?"
> - How does [ChatGPT] handle known-to-be or presumed-to-be unsolved math and physics problems?
> - "How do we create room-temperature superconductivity?"
"A solution for room temperature superconductivity using materials and energy from and on Earth"
> - "How will planetary orbital trajectories change in the n-body gravity problem if another dense probably interstellar mass passes through our local system?"
Where will a tracer ball be after time t in a fluid simulation ((super-)fluid NDEs Non-Differential Equations) of e.g. a vortex turbine next to a stream?
How do General Relativity, Quantum Field Theory, Bernoulli's, Navier Stokes, and the Standard Model explain how to read and write to points in spacetime and how do we solve gravity?
Did I forget to cite myself (without a URL)? Notable enough for a citation it isn't.
"[Edu-sig] ChatGPT for py teaching" (2023) Editing Python Mailing List. (2023)
No, LLMs do not learn a sufficient world model for answering basic physics questions that aren't answered in the training corpus; and, AGI-strength AI is necessary for ethical reasoning given the liability in that application domain.
Hopefully, LLMs can at least fill in with possible terms like '4π' given other uncited training corpus data. LLMs are helpful for Evolutionary Algorithmic methods like mutation and crossover, but then straight-up ethical selection.
Ask [the LLM] to return a confidence estimate when it can't know the correct answer, as with hard and thus valuable e.g. physics problems. What tone of voice did Peabody take in explaining to Sherman, and what does an LLM emulate?
Thoughts on the Python packaging ecosystem
This is a great writeup by a central figure in Python packaging, and gets to the core of one of Python packaging's biggest strengths (and weaknesses): the PyPA is primarily an "open tent," with mostly independent (but somewhat standards-driven) development within it.
Pradyun's point about unnecessary competition rings especially true to me, and points to (IMO) a hard reality about where the ecosystem needs to go: at some point, there need to be some prescriptions about the one good tool to use for 99.9% of use cases, and with that will probably come some hurt feelings and disregarded technical opinions (including possibly mine!). But that's what needs to happen in order to produce a uniform tooling environment and UX.
The primary (probably build and) packaging system for a software application should probably support the maximum level of metadata sufficient for downstream repackaging tools.
Metadata for the ultimate software package should probably include a sufficient number of attributes in its declarative manifest:
- Package namespace and name,
- Per-file paths and checksums, and
- At least one cryptographic signature from the original publisher. Whether the server has signed what was uploaded is irrelevant if it and the files within don't match a publisher signature at upload time?
And then there's the permissions metadata: the ACLs and context labels to support any or all of SELinux, AppArmor, Flatpak, OpenSnitch, etc. Neither Python packages nor conda packages nor RPMs support specifying the permissions and capabilities necessary for operation of downstream packages.
You can change the resolver, but the package metadata would need to include sufficient data elements for Python packaging to be the ideal uni-language package manager imho
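A sketch of what such a declarative manifest might carry (every field name here is hypothetical; none of this is an existing Python packaging standard):
# Hypothetical manifest for a package with repackaging-grade metadata
manifest = {
    "namespace": "example-org",          # package namespace and name
    "name": "examplepkg",
    "version": "1.0.0",
    "files": {                           # per-file paths and checksums
        "examplepkg/__init__.py": {"sha256": "..."},
    },
    "signatures": [                      # publisher signature over the file list
        {"signer": "<publisher key fingerprint>", "sig": "..."},
    ],
    "permissions": {                     # capability hints for SELinux/AppArmor/etc.
        "network": False,
        "filesystem": ["$XDG_DATA_HOME"],
    },
}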
Google Calls in Help from Larry Page and Sergey Brin for A.I. Fight
It’s interesting that Google—the inventor of the “T” in GPT—is on red alert due to ChatGPT. And Google has a bunch of research in AI (see Dean’s recent blogpost [1]), so what gives with their lack of AI product/execution/strategy? Too scared to upend their existing cash cow maybe, aka innovator’s dilemma?
Also, it doesn’t inspire confidence in Pichai that Page and Brin were consulted about something that should be at the forefront of Google’s strategy. Maybe that’s too harsh but I just find all this surprising.
[1] https://ai.googleblog.com/2023/01/google-research-2022-beyon...
Why would you say they are on "red alert"? Sounds like lazy investors pumping or dumping to me.
Google already has very large LLMs online for search and other applications.
A similar take on similar spin: https://twitter.com/westurner/status/1614002846394892288
From April 2022, "Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance" https://ai.googleblog.com/2022/04/pathways-language-model-pa... :
> In recent years, large neural networks trained for language understanding and generation have achieved impressive results across a wide range of tasks. GPT-3 first showed that large language models (LLMs) can be used for few-shot learning and can achieve impressive results without large-scale task-specific data collection or model parameter updating. More recent LLMs, such as GLaM, LaMDA, Gopher, and Megatron-Turing NLG, achieved state-of-the-art few-shot results on many tasks by scaling model size, using sparsely activated modules, and training on larger datasets from more diverse sources. Yet much work remains in understanding the capabilities that emerge with few-shot learning as we push the limits of model scale.
DeepMind Dreamerv3 is more advanced than all the LLMs.
I don’t think the issue at Google is a lack of advanced AI technology but the transfer and realization of products. So many of Google’s products are half-baked, only half-heartedly developed and then buried in the end. For a giant like Google, they must be stuck in a rut, and to get free they need all the help they can get. If they continue business as usual, I’m afraid they will experience a never-before-seen landslide. I believe calling back the founders at this critical time is the right move.
I disagree with your assessment of their comparative performance.
"Beyond Tabula Rasa: Reincarnating Reinforcement Learning" https://ai.googleblog.com/2022/11/beyond-tabula-rasa-reincar... :
> Furthermore, the inefficiency of tabula rasa RL research can exclude many researchers from tackling computationally-demanding problems. For example, the quintessential benchmark of training a deep RL agent on 50+ Atari 2600 games in ALE for 200M frames (the standard protocol) requires 1,000+ GPU days. As deep RL moves towards more complex and challenging problems, the computational barrier to entry in RL research will likely become even higher.
> To address the inefficiencies of tabula rasa RL, we present “Reincarnating Reinforcement Learning: Reusing Prior Computation To Accelerate Progress” at NeurIPS 2022. Here, we propose an alternative approach to RL research, where prior computational work, such as learned models, policies, logged data, etc., is reused or transferred between design iterations of an RL agent or from one agent to another. While some sub-areas of RL leverage prior computation, most RL agents are still largely trained from scratch. Until now, there has been no broader effort to leverage prior computational work for the training workflow in RL research. We have also released our code and trained agents to enable researchers to build on this work.
Feed-Forward with Prompt Engineering is like RL; which prompt elements should remain given objective or subjective error?
On-demand electrical control of spin qubits (2023)
"On-demand electrical control of spin qubits" (2023) http://dx.doi.org/10.1038/s41565-022-01280-4
> Once called a ‘classically non-describable two-valuedness’ by Pauli, the electron spin forms a qubit that is naturally robust to electric fluctuations. Paradoxically, a common control strategy is the integration of micromagnets to enhance the coupling between spins and electric fields, which, in turn, hampers noise immunity and adds architectural complexity. Here we exploit a switchable interaction between spins and orbital motion of electrons in silicon quantum dots, without a micromagnet. The weak effects of relativistic spin–orbit interaction in silicon are enhanced, leading to a speed up in Rabi frequency by a factor of up to 650 by controlling the energy quantization of electrons in the nanostructure. Fast electrical control is demonstrated in multiple devices and electronic configurations. Using the electrical drive, we achieve a coherence time T2,Hahn ≈ 50 μs, fast single-qubit gates with Tπ/2 = 3 ns and gate fidelities of 99.93%, probed by randomized benchmarking. High-performance all-electrical control improves the prospects for scalable silicon quantum computing. High-performance all-electrical control is a prerequisite for scalable silicon quantum computing. The switchable interaction between spins and orbital motion of electrons in silicon quantum dots now enables the electrical control of a spin qubit with high fidelity and speed, without the need for integrating a micromagnet.
Is this Quantum of Silicon; or Quantum Dots on Silicon?
Used to be that quantum dots were for the next-level display tech beyond OLED, which doesn't require magnets either.
"Rowhammer for qubits: is it possible?" https://www.reddit.com/r/quantum/comments/7osud4/rowhammer_f... and its downstream mentions: https://news.ycombinator.com/item?id=27294577
"Bell's inequality violation with spins in silicon" (2015) https://arxiv.org/abs/1504.03112
I had heard that Bell's actually means that there is a high error rate in transmitting quantum states - 60%, I thought Wikipedia had said - through entanglement relations with physical descriptions. Doesn't entangled satellite communication violate Bell's, too?
Maybe call it and emissions a "Hot Tub Time Machine", eh?
Reverse engineering a neural network's clever solution to binary addition
The trick of performing binary addition by using analog voltages was used in the IAS family of computers (1952), designed by John von Neumann. It implemented a full adder by converting two input bits and a carry in bit into voltages that were summed. Vacuum tubes converted the analog voltage back into bits by using a threshold to generate the carry-out and more complex thresholds to generate the sum-out bit.
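A toy sketch of that thresholding idea in Python (the arithmetic only, not the actual tube circuit):
# Sum three input bits as an "analog voltage" (0..3), then threshold back to bits.
def full_adder_analog(a, b, carry_in):
    v = a + b + carry_in
    carry_out = 1 if v >= 2 else 0          # simple threshold
    sum_out = 1 if v in (1, 3) else 0       # the "more complex threshold": odd band
    return sum_out, carry_out

for bits in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "->", full_adder_analog(*bits))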
Nifty, makes one wonder if logarithmic or sigmoid functions for ML could be done using this method. Especially as we approach the node size limit, perhaps dealing with fuzzy analog will become more valuable.
There are a few startups making analog ML compute: Mythic and Aspinity, for example.
https://www.eetimes.com/aspinity-puts-neural-networks-back-t...
Veritasium has a video on Mythic: "Future Computers Will Be Radically Different (Analog Computing)" https://youtu.be/GVsUOuSjvcg
From "Faraday and Babbage: Semiconductors and Computing in 1833" https://news.ycombinator.com/item?id=32888210 and then "Qubit: Quantum register: Qudits and qutrits" https://news.ycombinator.com/item?id=31983110:
>>> The following is an incomplete list of physical implementations of qubits, and the choices of basis are by convention only: [...] Qubit#Physical_implementations: https://en.wikipedia.org/wiki/Qubit#Physical_implementations
> - note the "electrons" row of the table
According to this table on Wikipedia, it's possible to use electron charge (instead of 'spin') to do Quantum Logic with Qubits.
How is doing quantum logical computations with electron charge different from what e.g. Cirq or Tequila do (optionally with simulated noise to simulate the Quantum Computer Engineering hardware)?
FWIU, analog and digital component qualities are not within sufficient tolerance to do precise analog computation? (Though that's probably debatable for certain applications at least, but not for general-purpose computing architectures?) That is, while you can build adders out of voltage potentials quantized more finely than 0 or 1, you probably shouldn't without sufficient component spec tolerances, because of noise and thus error.
IMHO, Turing Tumble and Spintronics are neat analog computer games.
(Are Qubits, by Church-Turing-Deutsch, sufficient to: 1) simulate arbitrary quantum physical systems; or 2) run quantum logical simulations as circuits with low error due to high coherence? https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93... )
>> See also: "Quantum logic gate" https://en.wikipedia.org/wiki/Quantum_logic_gate
Analog computers > Electronic analog computers aren't Electronic digital computers: https://en.wikipedia.org/wiki/Analog_computer#Electronic_ana...
Heat pumps of the 1800s are becoming the technology of the future
In a house I own in Melbourne, Australia, I just replaced an old gas central heating system with 3 new top of the line Daikin mini split heat pumps. The new units can heat the entire house for the same amount of electrical energy that was used to run the FAN in the old gas unit. They are crazy efficient.
Ducts are dead.
The Daikin Alira X is the gold-plated option and cost $8k AUD for 2x2.5kw and 1x7.1kw units including installation. Payback time is about 3 years. The system is oversized, but enables excellent zoning and of course provides cooling which is a must on 40C/104F days.
Why do they seem to be so much more expensive in the US?
> Ducts are dead.
Ducts are still needed to circulate air, especially if you want to remove stale air (e.g., bathrooms, kitchen) and bring in (filtered) fresh air (to bedrooms).
Not being snarky but why don't you just open the window for that?
I have a CO2 detector that I believe is a reasonable proxy for stale air. When it goes above 1000 I simply open the windows. By the time I remember to close the windows the reading is almost always below 500.
Where I live, it can get very cold. Not always very efficient to open windows for 5 months out of the year.
A great option for keeping CO2 levels down in a house is with an HRV (or ERV) [1] that will heat the fresh air coming in to cycle it throughout the house.
From the ERV/HRV (Energy Recovery Ventilation / Heat Recovery Ventilation) wikipedia page: https://en.wikipedia.org/wiki/Energy_recovery_ventilation#Ty... :
> During the warmer seasons, an ERV system pre-cools and dehumidifies; During cooler seasons the system humidifies and pre-heats.[1] An ERV system helps HVAC design meet ventilation and energy standards (e.g., ASHRAE), improves indoor air quality and reduces total HVAC equipment capacity, thereby reducing energy consumption.
> ERV systems enable an HVAC system to maintain a 40-50% indoor relative humidity, essentially in all conditions. ERV's must use power for a blower to overcome the pressure drop in the system, hence incurring a slight energy demand.
In Jan 2023, the ERV wikipedia article has a 'Table of Energy recovery devices by Types of transfer supported': Total and Sensible :
> [ Total & Sensible transfer: Rotary enthalpy wheel, Fixed Plate ]
> [ Sensible transfer only: Heat pipe, Run around coil, Thermosiphon, Twin Towers ]
Latent heat: https://en.wikipedia.org/wiki/Latent_heat :
> In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Sensible heat: https://en.wikipedia.org/wiki/Sensible_heat
There's a broader Category:Energy_recovery page which includes heat pumps. https://en.wikipedia.org/wiki/Category:Energy_recovery
Are heat pumps more efficient than ERVs? Do heat pumps handle relative humidity in the same way as ERVs?
IME you don't use an ERV alone. You'd use it _with_ a Heat Pump. The ERV is all about transferring heat from one airstream to another. It's _not_ a device that manages indoor temperatures though, just recovers some of the latent energy in the air. In the process it's also managing humidity mainly as a by-product. The humidification properties of the ERV allow you to run the Heat Pump in a more efficient manner. I have an HVAC geek friend who explained the whole process, but essentially (in non physics/fluid dynamics[?] terminology), if the Heat Pump doesn't need to dehumidify air it can operate more efficiently.
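For intuition, the standard sensible-effectiveness figure for an HRV/ERV core, as a minimal sketch (the example temperatures are made up):
# Sensible heat-recovery effectiveness: fraction of the indoor/outdoor
# temperature difference recovered into the incoming supply air.
def hrv_effectiveness(t_outdoor, t_supply, t_indoor):
    return (t_supply - t_outdoor) / (t_indoor - t_outdoor)

# -5 C outdoors, 21 C indoors, 16 C supply air after the core:
print(hrv_effectiveness(-5, 16, 21))  # ~0.81, i.e. ~81% of the exhaust heat recovered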
How Nvidia’s CUDA Monopoly in Machine Learning Is Breaking
It is quite weird to talk about all the frameworks which are eventually built on top of CUDA without talking about ROCm or OpenCL.
OpenAI's Triton compiles down to CUDA atm (if I read their github right), and only supports Nvidia GPUs.
PyTorch 2.0's installation page only mentions CPU and CUDA targets, therefore it's effectively all Nvidia GPUs.
While all the frameworks and abstractions could offer other back-ends in theory the story of anything ML related on the other big name in the industry, AMD, is still poor.
If anybody loses business because of bad decisions, it is AMD, not Nvidia, who leads the whole industry. I am not convinced that anything will change in the near future.
From https://news.ycombinator.com/item?id=32904285 re: AMD ROCm and HIPIFY:
> AMD ROCm supports PyTorch, TensorFlow, MIOpen, rocBLAS on NVIDIA and AMD GPUs: https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni... [...]
> ROCm-Developer-Tools/HIPIFY https://github.com/ROCm-Developer-Tools/HIPIFY :
>> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced. [...]
From https://github.com/RadeonOpenCompute/clang-ocl :
> RadeonOpenCompute/OpenCL compilation with clang compiler
A better overview from the docs: "Machine Learning and High Performance Computing Software Stack for AMD GPU" https://rocmdocs.amd.com/en/latest/Installation_Guide/Softwa...
Large language models as simulated economic agents (2022) [pdf]
I love it. Instead of (a) running mathematical experiments that model human beings as utility-maximizing agents in a highly-simplified toy economy (easy and cheap, but unrealistic), or (b) running large-scale social experiments on actual human beings (more realistic, but hard and expensive), the authors propose (c) running large-scale experiments on large language models (LLMs) trained to respond, i.e., behave, like human beings. Recent LLMs seem to model human beings well enough for it!
Abstract:
> Newly-developed large language models (LLM)—because of how they are trained and designed—are implicit computational models of humans—a homo silicus. These models can be used the same way economists use homo economicus: they can be given endowments, information, preferences, and so on and then their behavior can be explored in scenarios via simulation. I demonstrate this approach using OpenAI’s GPT3 with experiments derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986) and Samuelson and Zeckhauser (1988). The findings are qualitatively similar to the original results, but it is also trivially easy to try variations that offer fresh insights. Departing from the traditional laboratory paradigm, I also create a hiring scenario where an employer faces applicants that differ in experience and wage ask and then analyze how a minimum wage affects realized wages and the extent of labor-labor substitution.
Attempting to draw any kind of conclusions about the real world and human behaviour from a chatbot. Can't decide if this is hilarious or disturbing.
Is it so implausible that the training process that creates LLMs might learn features of human behavior that could then be uncovered via experimentation? I showed, empirically, that one can replicate several findings in behavioral economics with AI agents. Perhaps the model "knows" how to behave from these papers, but I think the more plausible interpretation is that it learned about human preferences (against price gouging, status quo bias, & so on) from its training. As such, it seems quite likely that there are other latent behaviors captured by LLMs and yet to be discovered.
> As such, it seems quite likely that there are other latent behaviors captured by LLMs and yet to be discovered.
>> What NN topology can learn a quantum harmonic model?
Can any LLM do n-body gravity? What does it say when it doesn't know; doesn't have confidence in estimates?
>> Quantum harmonic oscillators have also found application in modeling financial markets. Quantum harmonic oscillator: https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
"Modeling stock return distributions with a quantum harmonic oscillator" (2018) https://iopscience.iop.org/article/10.1209/0295-5075/120/380...
... Nudge, nudge.
Behavioral economics: https://en.wikipedia.org/wiki/Behavioral_economics
https://twitter.com/westurner/status/1614123454642487296
Virtual economies do afford certain opportunities for economic experiments.
Homelab analog telephone exchange
You might want to consider looking at Asterisk, the open source PBX. They have a list of analog cards that they support.
Asterisk (PBX) > Derived products https://en.wikipedia.org/wiki/Asterisk_(PBX)
"What is the technology behind Sangoma Meet?" https://help.sangoma.com/community/s/article/What-is-the-tec...
> Sangoma Meet is based on WebRTC, which provides video conferencing and is supported by most of the major web browsers today.
> Our software stack is built upon several open source tools, including Jitsi Meet, FreeSWITCH™, HAProxy, Prometheus, Grafana, collectd, and other tools used for provisioning, deploying and managing the service.
FWIU, Sangoma Talk includes the Sangoma Meet functionality: https://www.sangoma.com/products/communications-services/tea...
> Mobile Soft Client: Take your company phone extension with you anywhere using a mobile soft client. Forward calls from the office, receive voicemails, start a video meeting, and much more! When you call your customers or clients through the Sangoma Talk app, they will see your office phone number, which allows you to maintain your personal device privacy. Available for iOS & Android devices.
GVoice (originally GrandCentral) can't do voice or video call transfer to the mobile soft client app, FWIU
A 116kb WASM of Blink that lets you run x86_64 Linux binaries in the browser
I believe it's for some Linux binaries, statically compiled. Having a portable subset of Linux is pretty cool though.
Maybe this turns into an anti-distro, a collection of portable apps not specific to a Linux distro? (Or Linux, even.)
From https://github.com/simonw/datasette-lite/issues/26 :
> Micropip or Mambalite or picomamba or Zig.
> "Better integration with conda/conda-forge for building packages" [pyodide/pyodide#795]( https://github.com/pyodide/pyodide/issues/795)
> Emscripten-forge > Adding packages: https://github.com/emscripten-forge/recipes#adding-packages
> - https://github.com/emscripten-forge/recipes/tree/main/recipe...
> -- emscripten-forge/recipes/blob/main/recipes/recipes_emscripten/picomamba/recipe.yaml: https://github.com/emscripten-forge/recipes/blob/main/recipe...
> --- mamba-org/picomamba: https://github.com/mamba-org/picomamba
From emscripten-forge/recipes https://github.com/emscripten-forge/recipes :
> Build wasm/emscripten packages with conda/mamba/boa. This repository consists of recipes for conda packages for emscripten. Most of the recipes have been ported from pyodide.
> While we already have a lot of packages built, this is still a big work in progress.
SQLite Wasm in the browser backed by the Origin Private File System
I think this will be great for extensions. Currently the only solid choice is to use the dead-simple storage.local, which only allows retrieving things by ID.
There's one problem though: this new API is for the web, so the nature of this storage is temporary. Obviously the user must be able to clear it when clearing site data, and this is what makes it currently an unviable solution for persistent extension data storage. https://bugs.chromium.org/p/chromium/issues/detail?id=138321...
I’m getting Flash flashbacks. Flash apps had filesystem access in a designated area, with some amount of user control, that is not easily wiped from the browser; Flash games used that for saves and data. Years after the demise of Flash, the web platform is still catching up.
It's different though, extensions are like local apps and deserve persistent storage, while you are talking about how Flash was being used by remote sources. Also my proposed approach is that this data would be wiped when the extension is removed from the browser - which is what happens for storage.local
Some way to indicate which "WASM contexts" (?) have used how much disk space would be great for the open-source, multiple-implementation not-Flash, too.
From https://news.ycombinator.com/item?id=32953286 https://westurner.github.io/hnlog/#story-32950199 :
> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tab tabs according to their relative resource utilization
And then that tabs are (sandboxed) subprocesses running as the same user though.
Containers may have unique SELinux MCS labels, and browser tab processes probably should too.
containers/container-selinux: https://github.com/containers/container-selinux
https://github.com/kai5263499/awesome-container-security/iss...
> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU, RAM, Disk, [GPU, TPU, QPU] (Linux: cgroups,)
Like Flash
The i3-gaps project has been merged with i3
Hi. I'm the maintainer of i3-gaps and also a maintainer for i3.
The story of this merge is not only several years long, but a true success story in OSS in my eyes.
I took on i3-gaps by taking an existing patch and rebasing it to the latest i3 HEAD. From there it became popular and I took on the maintainership, eventually contributing to i3 itself and finally becoming a maintainer there as well.
Whilst originally gaps were considered an "anti-feature" for i3, years ago we already decided that we'd accept adding gaps into i3. Clearly the fork was popular, and as someone else pointed out here as well, the Wayland "port" of i3, sway, added gaps from the beginning with great success.
However, the original gaps patch was focused on being small and easy to maintain. It caused a few issues and had some drawbacks. We made it a condition that porting gaps into i3 would have to resolve these issues. Alas, this could've meant a lot of work that no one took on for the years to follow.
Recently, however, the maintainers of i3 got together (a chance to meet arose randomly). During that meeting we decided that it'd be better to just merge the fork and improve it later. And as it happened, Michael, the author and main maintainer of i3, did all that work during the port as well.
What resulted is the end of almost a decade of i3-gaps, and a much better implementation thereof. I'm incredibly happy to see this happen after all this time, and a big shoutout to Michael here for all that work.
Edit: Hadn't realized Michael was commenting here already. I guess leaving the background and story from my side of things doesn't hurt regardless.
What do you think about getting alt-tab support in there? Here to say this: https://github.com/westurner/dotfiles/blob/develop/scripts/i...
You want to use i3 to collapse focus management into a single temporal dimension?
I'll die defending your right to do so, but dear god your taste is atrocious. It's like you finally got out of prison and decided to decorate your bedroom window with iron bars.
In every window manager that supports it, alt tab does two things.
Alt tab with a long hold on alt lets you select another window, albeit from a linear list as you describe, by cycling through with tab and shift-tab.
Quickly typing alt-tab now cycles between the window you came from and the window you just selected. That’s the super useful value of the feature.
Is there an i3 command to (a) leap to another window from a selection and (b) leap back and forth between the window you came from and the window you just chose?
> Is there an i3 command to (a) leap to another window from a selection and (b) leap back and forth between the window you came from and the window you just chose?
Not for windows as far as I know, but for workspaces, yes: https://i3wm.org/docs/userguide.html#back_and_forth https://i3wm.org/docs/userguide.html#workspace_auto_back_and...
For (a) I've been using rofi. I bound a key to open its window switcher which gives me a searchable list of all open windows. There are quite a few other options: https://wiki.archlinux.org/title/I3#Jump_to_open_window
For (b) I don't know what the best option is but I see some people posting their scripts in this thread: https://github.com/i3/i3/issues/838 I think I would bind some keys to mark and then focus by mark if I wanted to do this instead of having a toggle.
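A rough sketch of that mark-then-focus toggle with the i3ipc Python library (pip install i3ipc; the `_back` mark name is arbitrary), bindable to a key via `bindsym ... exec`:
import i3ipc

# Toggle focus between the current window and the last one we toggled from.
i3 = i3ipc.Connection()
focused = i3.get_tree().find_focused()
i3.command('[con_mark="_back"] focus')                 # jump to the marked window
i3.command(f'[con_id={focused.id}] mark --add _back')  # re-mark where we came from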
So far as I know, the only way to get that functionality is to put the windows next to each other and to navigate them spatially (or to write a script that does it and bind that script to alt+tab).
But grouping windows logically based on how they're used has always just felt right, so I've never really considered that you want to keep track of the order in which windows were previously selected (using alt-tab to navigate even just three or four windows can be quite a trick). It always seemed like a necessary evil since floating-only window managers can't handle the kind of spatial focus-navigation that i3 does.
Do you happen to know an alt-tab that just uses the currently visible desktops? (I have two monitors, and want to alt-tab between windows without finding the old ones hidden.)
---
Also, I feel bad using this thread for feature requests/questions. Lovely work guys, and I am very grateful!
Do you mean currently visible desktops or windows? https://github.com/sagb/alttab is the utility of my choice to get what I wanted.
Show HN: Futurecoder – A free interactive Python course for coding beginners
Some highlights:
- 100% free and open source (https://github.com/alexmojaki/futurecoder), no ads or paid content.
- No account required at any point. You can start instantly. (You can create an account if you want to save your progress online and across devices. Your email is only used for password resets)
- 3 integrated debuggers can be started with one click to show what your code is doing in different ways.
- Enhanced tracebacks make errors easy to understand.
- Useful for anyone: You can have the above without having to look at the course. IDE mode (https://futurecoder.io/course/#ide) gives you an instant scratchpad to write and debug code similar to repl.it.
- Completely interactive course: run code at every step which is checked automatically, keeping you engaged and learning by doing.
- Every exercise has many small optional hints to give you just the information you need to figure it out and no more.
- When the hints run out and you're still stuck, there are 2 ways to gradually reveal a solution so you can still apply your mind and make progress.
- Advice for common mistakes: customised linting for beginners and exercise-specific checks to keep you on track.
- Construct a question that will be well-received on sites like StackOverflow: https://futurecoder.io/course/#question
- Also available in French (https://fr.futurecoder.io/), Tamil (https://ta.futurecoder.io/), and Spanish (https://es-latam.futurecoder.io/). Note that these translations are slightly behind the English version, so the sites themselves are too as a result. If you're interested, help with translation would be greatly appreciated! Translation to Chinese and Portuguese is also half complete, and any other languages are welcome.
- Runs in the browser using Pyodide (https://pyodide.org/). No servers. Stores user data in firebase.
- Progressive Web App (PWA) that can be installed from the browser and used offline.
-----------
A frequent question is how does futurecoder compare to Codecademy? Codecademy has some drawbacks:
- No interactive shell/REPL/console
- No debuggers
- Basic error tracebacks not suitable for beginners
- No stdin, i.e. no input() so you can't write interactive programs, and no pdb.
- No gradual guidance when you're stuck. You can get one big hint, then the full solution in one go. This is not effective for learners having difficulty.
- Still on Python 3.6 (futurecoder is on 3.10)
I am obviously biased, but I truly believe futurecoder is the best resource for adult beginners. The focus on debugging tools, improved error messages, and hints empowers learners to tackle carefully balanced challenges. The experience of learning feels totally different from other courses, which is why I claim that if someone wants to start learning how to code, futurecoder is the best recommendation you can make.
This looks really good; going through the first couple of tasks, it seems well considered.
I'm introducing my 8yo daughter to programming at the moment; she is beginning to play around with Scratch. I'm keeping my eye out for something closer to proper coding though. I think this may be a little too far at the moment, but I may try her out on it with me sitting next to her and see how she gets on!
Is there any way for users to construct their own multiple stage tutorials? (It looks like we can do single questions)
Currently you have the console output, have you considered having a canvas/bitmap output that could be targeted with the various Python drawing and image manipulation apis?
Incredibly generous of you to make it open source!
> Is there any way for users to construct their own multiple stage tutorials?
I really hope some kind of GUI to do that can exist one day, but it's definitely a complicated feature that I'd need help from contributors to build. Same for graphical output.
> (It looks like we can do single questions)
I think you're talking about the question wizard. That's for helping people to write good quality questions about their own struggles to post on StackOverflow and similar sites. It's not for making 'challenges' for others to solve.
> Incredibly generous of you to make it open source!
Thank you! I'm really trying to improve the state of education and make the world a better place. I hope that in addition to directly helping users, I can inspire other educators, raise the bar, and help them build similar products. To this end, futurecoder is powered by many open source libraries that I've created which are designed to also be useful in their own right:
Debuggers: these are integrated in the site but also usable in any environment:
- https://github.com/alexmojaki/birdseye
- https://github.com/alexmojaki/snoop
- https://github.com/alexmojaki/cheap_repr (not a debugger, but used by the above two as well as directly by futurecoder)
Tracebacks:
- https://github.com/alexmojaki/stack_data (this is also what powers the new IPython tracebacks)
- https://github.com/alexmojaki/executing (allows highlighting the exact spot where the error occurred, but also enables loads of other magical applications)
- https://github.com/alexmojaki/pure_eval
You can see a nicer presentation (with pictures) of the above as well as other projects of mine on my profile https://github.com/alexmojaki
Libraries which I specifically extracted from futurecoder to help build another similar educational site https://papyros.dodona.be/?locale=en (which does have a canvas output, at least for matplotlib):
- https://github.com/alexmojaki/sync-message (allows synchronous communication with web workers to make input() work properly)
- https://github.com/alexmojaki/comsync
Thanks! FWICS, futurecoder (and JupyterLite) may be the best way to run `print("hello world!")` in Python on Chromebooks for Education and Chromebooks with Family Link, which don't have VMs or Containers ((!) which we rely upon on the server side to host container web shells like e.g. Google Colab and GitHub Codespaces (which aren't available for kids < 13), cocalc-docker, ml-tooling/ml-workspace, kaggle/docker-python, and https://kaggle.com/learn )
Also looked at codesters. quobit/awesome-python-in-education: https://github.com/quobit/awesome-python-in-education
Looks like `Ctrl-Enter` works, just like jupyter/vscode.
iodide-project/iodide > "Compatibility with 'percent' notebook format", which works with VSCode, Spyder, and PyCharm, https://github.com/iodide-project/iodide/issues/2942:
# %%
import sympy as sy
import numpy as np
import scipy as sp
import pandas as pd
# %%
print("hello")
# %%
print("world")
Does it work offline? jupyterlite/jupyterlite "Offline PWA access"
https://github.com/jupyterlite/jupyterlite/issues/941

Tell HN: Vim users, `:x` is like `:wq` but writes only when changes are made
`:x` leaves the modification time of files untouched if nothing was changed.
:help :x
Like ":wq", but write only when changes have been made.
I learned to do `:wq` after I learned that `:X` encrypts your file. When typing without really paying attention to the screen, I've twice encrypted my file with a password like `cd tmp` (keystrokes meant for the shell), then saved the config file, breaking my system.
After that, I switched to `:wq` (and sometimes `:w` `:q`) which is much safer against over-typing.
Thanks for bringing this up.
I'm toying with either disabling it:

    cmap X <Nop>

or mapping it down to a lowercase x:

    cmap X x
1. Does anyone see anything this could interfere with?

2. Does anyone know a better way to turn off the `:X` encryption option?
Sadly, having to remap definitely dulls the shine of `:x`.
Mapping to 'x' is dangerous since on a system where you don't have your vimrc you'll get the original behavior of 'X'. Ideally you'd map 'X' to deliver a small electric shock. ;-)
cmap X x
:help X
> Mapping to 'x' is dangerous since on a system where you don't have your vimrc you'll get the original behavior of 'X'.

Which is still to prompt? A person could take their chances.
Besides, wouldn't that form of intentionally aversive conditioning actually boost the learning rate and create hyperassociations to the behavior you're attempting to extinguish or just learn over?
Ask HN: Is there academic research on software fragility?
I keep finding articles that more or less talk about this but not some serious research on the topic. Do someone have a few pointers?
Edit: to clarify what I mean by fragility: it's how complex software, when changed, is likely to break with unexpected bugs, i.e., fixing one bug causes more.
How software changes over time?
API versioning, API deprecation
Code bloat: https://en.wikipedia.org/wiki/Code_bloat#
"Category:Software maintenance" costs: https://en.wikipedia.org/wiki/Category:Software_maintenance
Pleasing a different audience with fewer, simpler features
Lack of acceptance tests to detect regressions
Regression testing https://en.wikipedia.org/wiki/Regression_testing :
> Regression testing (rarely, non-regression testing [1]) is re-running functional and non-functional tests to ensure that previously developed and tested software still performs as expected after a change. [2] If not, that would be called a regression.
Fragile -> Software brittleness https://en.wikipedia.org/wiki/Software_brittleness
Thanks, I will check out those references mentioned by Wikipedia and also include the term "software brittleness" in my searches. It seems related, even if maybe not exactly my goal.
Yeah IDK if that one usage on Wikipedia is consistent:
> [Regression testing > Background] Sometimes re-emergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Often, a fix for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Frequently, a fix for a problem in one area inadvertently causes a software bug in another area.
Seattle Public Schools sues TikTok, YouTube, Instagram over youth mental health
.. as a Seattle dad .. I fucking love this awkward yet clearly precedent-setting case. I know most of the board, and the majority of us are fed up and using what power we have to demand accountability. This has been in the works for over four years and we know the end game: new laws, adopted by other cities and states. .. it’s been a clearly focused project for over 4 years now. One of my friends is on the board. I knew it was coming Q1 2023 and glad they stayed on task. A bevy of mental health experts are core to this - so due diligence. I don’t know the layout of the legal team, but was told they have put this through the wringer - the goal has always been a legal precedent, then actual legislation, then adoption by other entities as well. The Portland School District has a parallel lawsuit as well. The pushback will be WELL propagandized and vilified. I can only imagine what garbage FOX will spew.
The lawsuit is going to be dismissed due to lack of standing.
.. I suggest this light reading: https://app.leg.wa.gov/rcw/default.aspx?cite=7.48
Is there a specific law that these firms are supposed to have violated? (Does there need to be? I am not a lawyer.)
Does the school district have standing to sue on behalf of the students? (Is the injury that they pay more for mental health services for students?)
I’m genuinely curious. There’s a surface analogy to the cases against opioid manufacturers and distributors, but those are controlled substances whereas YouTube is clearly not.
There isn’t even a clear causal effect from YouTube -> depression, IMO. Research on this has been quite low quality.
And what about the role of the parents who are the owners of the phones on which their children run TikTok?
Again genuine curiosity here, not trying to be a jerk.
The GeekWire article mentions that they are referring to the public nuisance law.
Probably would have been more cost effective to have worked with Amazon in Seattle on the Kids launcher on the Fire OS fork of Android (the one that merges the App Store and Launcher for the kids).
It's not safe to allow school administrators to jam and deny students' (possibly distracting) communications at least on their personal devices, eh?
Perhaps students could voluntarily submit to an App Launcher for focusing on school that deprioritizes content streams that haven't been made educational while attending unpaid compulsory education programs under threat of prosecution for truancy, not nuisance.
Non- FireOS Android forks have the "Digital Wellbeing" tools for helping oneself focus despite persistent distractions that will always exist IRL.
Mechanical circuits: electronics without electricity [video]
Life is like a box of terrible analogies. - Oscar Wilde
This analogy is quite interesting.
Each element and connector are actually equivalents of circuit loops that are spliced together when you are connecting them.
So a Spintronics diode works more like a loop of "diode wire" out of which you can make multiple diodes by splicing other connectors and elements into it.
That's why you can make a FULL BRIDGE RECTIFIER with just two Spintronics "diodes".
Electrons are described as fluids when there is superconductivity.
From https://en.wikipedia.org/wiki/Superconductivity :
> Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. [1][2] An electric current through a loop of superconducting wire can persist indefinitely with no power source.
FWIU, an electric current pattern described as EM hertz waves (e.g. as sinusoids) is practically persisted at Lagrangian points and in nonterminal, non-intersecting Lorentz curve paths at least?
IRL, electronic components waste energy as heat, like steaming, over-pressurized water towers. And erasing bits releases heat instead of dropping the 1 onto the negative or ground "return path".
I agree that Spintronics is a great game for mechanical circuits, which are in certain sufficient ways like electronic circuits, which can't persist qubits for any reasonable unit of time.
Python malware starting to employ anti-debug techniques
I wonder if there is room for a security model based around "escrow builds".
Imagine if PyPI could take pure source code, and run a standardized wheel build for you. That pipeline would include running security linters on the source. Then you can install the escrow version of the artifact instead of the one produced by the project maintainers.
You can even have a capability model - most installers should not need to run onbuild/oninstall hooks. So by default don’t grant that.
This sidesteps a bunch of supply-chain attacks. The cost is that there is some labor required to maintain these escrow pipelines.
With modern build tools I think this might not be unworkable, particularly given that small libraries would be incentivized to adopt standardized structures if it means they get the “green padlock” equivalent.
Libraries that genuinely have special needs like numpy could always go outside this system, and have a “be careful where you install this package from” warning. But most libraries simply have no need for the machinery being exploited here.
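A minimal sketch of one stage of such an escrow pipeline; the concrete tool choices here (bandit for the security lint, pypa/build for the standardized wheel build) are my assumptions, not part of the parent proposal:

    import subprocess, sys

    def escrow_build(source_dir: str) -> None:
        # 1) Security-lint the pure source (bandit is one option)
        subprocess.run(["bandit", "-r", source_dir], check=True)
        # 2) Standardized, isolated wheel build; no maintainer-supplied binaries,
        #    and no onbuild/oninstall hooks granted by default
        subprocess.run([sys.executable, "-m", "build", "--wheel", source_dir], check=True)
        # 3) A real pipeline would then sign and publish the escrow wheel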
Signed, reproducible builds from source on a trusted build farm are possible with conda-forge, emscripten-forge, Fedora COPR, and the OpenSUSE OBS Open Build System: https://github.com/pyodide/pyodide/issues/795#issuecomment-1...
What does it mean for a package to have been signed with the key granted to the CI build server?
Does a Release Manager (or primary maintainer) again sign what the build farm produced once? What sort of consensus on PR approval and build output justifies use of the build artifact signing key granted to a CI build server?
How open are the build farm and signed package repo and pubkey server configurations? https://github.com/dev-sec https://pulpproject.org/content-plugins/
The Reproducible Builds project aims to make it possible to not need to trust your build machines, perhaps PyPI could use that approach.
"Did the tests pass" for that signed Reproducible build?
Conda > Adding packages > Running unit tests: https://conda-forge.org/docs/maintainer/adding_pkgs.html#run...
From https://github.com/thonny/thonny/issues/2181 :
> * https://conda-forge.org/docs/maintainer/updating_pkgs.html
> Pushing to regro-cf-autotick-bot branch¶ When a new version of a package is released on PyPI/CRAN/.., we have a bot that automatically creates version updates for the feedstock. In most cases you can simply merge this PR and it should include all changes. When certain things have changed upstream, e.g. the dependencies, you will still have to do changes to the created PR. As feedstock maintainer, you don’t have to create a new PR for that but can simply push to the branch the bot created. There are two alternatives […]
nektos/act is one way to run a github-actions.yml build definition locally; without CI (e.g. GitLab Runner, which requires ~--privileged access to the docker/Podman socket) to check whether you get the exact same build artifacts as the CI build farm https://github.com/nektos/act
A Multi-stage Dockerfile has multiple FROM instructions: you can build 1) a container for running the build which has build essentials like a compiler (GCC, LLVM) and packaging tools and keys; and 2) COPY the build artifact (probably one or more signed software packages) --from the build stage container to a container which appropriately lacks a compiler for production. https://www.google.com/search?q=multi+stage+Dockerfile
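A minimal two-stage sketch of that pattern (the image tags and file names here are hypothetical):

    # Build stage: has the compiler and packaging tools
    FROM gcc:12 AS build
    COPY hello.c .
    RUN gcc -O2 -static -o /hello hello.c

    # Production stage: no compiler, just the build artifact
    FROM scratch
    COPY --from=build /hello /hello
    ENTRYPOINT ["/hello"]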
Are there guidelines for excluding entropy like the commit hash and build time so that the artifact hashes are exactly the same; are reproducible on my machine, too?
Adding design-by-contract conditions to C++ via a GCC plugin
What's the advantage of invariants over unit testing? Seems like there must be lot of overhead at runtime.
>I was ending up with garbage, and not realizing it until I had visualized it with Graphviz!
>Imagine if we had invariants that we could assert after every property change to the tree.
Have you tried writing unit tests? What didn't work about them that you decided to try invariants?
icontract is one implementation of Design by Contract for Python; it's also like Eiffel, which is considered ~the origin of DbC. icontract is fancier than compile-time macros can be. In addition to invariant checking at runtime, icontract supports inheritance-aware runtime preconditions and postconditions to, for example, check types and value constraints. Here are the icontract usage docs: https://icontract.readthedocs.io/en/latest/usage.html#invari...
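A minimal sketch of those decorators (the Account class here is my own hypothetical example, not from the icontract docs):

    import icontract

    @icontract.invariant(lambda self: self.balance >= 0)
    class Account:
        def __init__(self) -> None:
            self.balance = 0

        @icontract.require(lambda amount: amount > 0)
        @icontract.ensure(lambda self, result: result == self.balance)
        def deposit(self, amount: int) -> int:
            self.balance += amount
            return self.balance

    Account().deposit(-5)  # raises icontract.ViolationError before the method body runs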
For unit testing, there's icontract-hypothesis; with the Preconditions and Postconditions delineated by e.g. decorators, it's possible to generate many of the fuzz tests from the additional Design by Contract structure of the source.
From https://github.com/mristin/icontract-hypothesis :
> icontract-hypothesis combines design-by-contract with automatic testing.
> It is an integration between icontract library for design-by-contract and Hypothesis library for property-based testing.
> The result is a powerful combination that allows you to automatically test your code. Instead of writing manually the Hypothesis search strategies for a function, icontract-hypothesis infers them based on the function’s [sic] precondition
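A minimal sketch of that inference (the add_one function is my hypothetical example):

    import icontract
    import icontract_hypothesis

    @icontract.require(lambda x: x > 0)
    @icontract.ensure(lambda x, result: result > x)
    def add_one(x: int) -> int:
        return x + 1

    # The Hypothesis search strategy (integers > 0) is inferred
    # from the precondition; no manual strategy needed:
    icontract_hypothesis.test_with_inferred_strategy(add_one)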
Paper-thin solar cell can turn any surface into a power source
> These durable, flexible solar cells, which are much thinner than a human hair, are glued to a strong, lightweight fabric, making them easy to install on a fixed surface. They can provide energy on the go as a wearable power fabric or be transported and rapidly deployed in remote locations for assistance in emergencies. They are one-hundredth the weight of conventional solar panels, generate 18 times more power-per-kilogram, and are made from semiconducting inks using printing processes that can be scaled in the future to large-area manufacturing.
> Because they are so thin and lightweight, these solar cells can be laminated onto many different surfaces. For instance, they could be integrated onto the sails of a boat to provide power while at sea, adhered onto tents and tarps that are deployed in disaster recovery operations, or applied onto the wings of drones to extend their flying range. This lightweight solar technology can be easily integrated into built environments with minimal installation needs.
> [...] They found an ideal material — a composite fabric that weighs only 13 grams per square meter, commercially known as Dyneema
They also make ultralight backpacking backpacks out of Dyneema, but without the UV-curable glue.
> […] “A typical rooftop solar installation in Massachusetts is about 8,000 watts. To generate that same amount of power, our fabric photovoltaics would only add about 20 kilograms (44 pounds) to the roof of a house,” he says
Paper: “Printed Organic Photovoltaic Modules on Transferable Ultra-thin Substrates as Additive Power Sources” (2022) https://doi.org/10.1002/smtd.202200940
Solar energy can now be stored for up to 18 years, say scientists
> Long-term storage of the energy they generate is another matter. The solar energy system created at Chalmers back in 2017 is known as ‘MOST’: Molecular Solar Thermal Energy Storage Systems.
/? MOST Molecular Solar Thermal Energy Storage https://www.google.com/search?q=MOST%3A+Molecular+Solar+Ther... https://scholar.google.com/scholar?hl=en&as_sdt=0%2C44&q=MOS...
> The technology is based on a specially designed molecule of carbon, hydrogen and nitrogen that changes shape when it comes into contact with sunlight.
> It shape-shifts into an ‘energy-rich isomer’ - a molecule made up of the same atoms but arranged together in a different way. The isomer can then be stored in liquid form for later use when needed, such as at night or in the depths of winter.
> A catalyst releases the saved energy as heat while returning the molecule to its original shape, ready to be used again.
> Over the years, researchers have refined the system to the point that it is now possible to store the energy for an incredible 18 years
"Chip-scale solar thermal electrical power generation" (2022) https://doi.org/10.1016/j.xcrp.2022.100789
What prevents this from being scaled up?
Previous submissions (2022): https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Solar panels open crop lands to farming energy
This article promotes false assumptions.
Ordinary solar can work fine in fields, because most plants don't use light beyond a few hours' worth, and must endure the heat after that.
A practical arrangement is bifacial panels in vertical fencerows running north-south, to pick up morning and afternoon sun, spaced widely enough for equipment to run between. Blocking morning sun preserves dew, and blocking afternoon cuts heat stress.
Certain cereal crops get slightly lower yield from reduced light, but reduced water loss can make up the difference.
FWIU, sheep can also graze under the shade of solar panels, eliminating the need to robo-mow beneath them.
Livestock production improves from reduced weather stress. Lower evaporation loss cuts irrigation load.
California pulls the plug on rooftop solar
This was always a subsidy for rich people by poor people. The power generated by rooftop solar isn't worth 30 cents.
Yep. I've always considered net metering immoral. Rich folks using poor folks for a free nighttime battery.
The fix is super simple. Just meter by the minute and pay out current wholesale electric rates as you send to the grid. When you buy, you pay retail for delivery depending on the instantaneous market.
Don't want to buy at potentially high rates during evening peak? Sounds like you need to invest in a battery system.
If you want to pretend you are a micro power plant, you should get paid as one.
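A toy settlement function for that proposal, per metering interval (all rates hypothetical, in $/kWh):

    def settle(intervals):
        # intervals: (kWh imported, kWh exported, retail rate, wholesale rate)
        total = 0.0
        for kwh_in, kwh_out, retail, wholesale in intervals:
            total += kwh_in * retail      # buy at the instantaneous retail rate
            total -= kwh_out * wholesale  # sell at the instantaneous wholesale rate
        return total

    # Evening-peak import vs. midday export:
    print(settle([(2.0, 0.0, 0.45, 0.00), (0.0, 3.0, 0.10, 0.03)]))  # 0.90 - 0.09 = 0.81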
With Buy-All Sell-All, you buy all you use at retail rates and sell all you produce at wholesale rates. Buy-All Sell-All has a later breakeven point than Net Metering, under which you buy what you can't produce at retail and sell back the rest at wholesale or better; Net Metering is an indirect subsidy for a resilient power grid with residential renewables and their external benefits.
What you describe sounds like Buy-All Sell-All, except you're allowed to use and store what you produce before paying retail rates for electricity purchased from the service provider.
Is it anti-competitive to deny residential renewable energy producers the right to use the clean energy they invested in producing if they want to purchase electricity?
Another exclusive monopoly contract: if you buy water from me, you can't use the water you capture yourself.
Net metering in the United States: https://en.wikipedia.org/wiki/Net_metering_in_the_United_Sta...
Net metering > "Post-net metering" successor tariffs: https://en.wikipedia.org/wiki/Net_metering#Post-net_metering...
We want there to be renewable residential energy. Subsidizing renewable energy will hasten adoption. We should subsidize residential renewable energy if we want there to be more renewable energy.
If we make the break-even point later in time, residential renewable energy will be less lucrative.
I’m so confused, is there something that prevents the “use what you produce and sell excess at wholesale?” That seems like it would be the sanest policy. If you can’t use your own power then I think getting paid retail rates is the only fair thing since that’s how much the power is worth to you.
FWIU, Buy-All Sell-All contracts have a "termination of agreement to provide service clause" if the residential renewables are not directly attached to the grid; it's against their TOS to use your own renewable energy and sell the rest, which is probably monopolistic and anti-competitive.
Is it legal to have a cutover so that it's possible to use one's own renewable energy when the power's out, given an exclusive Buy-All Sell-All agreement?
Perhaps there's an opportunity for a solution here: at the junction of batteries, renewables, and local [government-granted-monopoly with exclusive first-mover rights of way over and under other infrastructure] electrical-utility junction; there could be a controller that knows at least:
- 1a) when the grid is down
- 1b) when the grid wants the customer to slowly increase load e.g. after the power has been out
- 1c) when it's safe to send more electricity to the grid e.g. at retail or wholesale or intraday rates
- 2a) how full are the local batteries
- 2b) the current and projected local load && how much of that can be throttled down
- 2ba) how full and heated the hot water tank(s) are
- 2bb) the current and projected external and internal air temperature and humidity
- 2bba) the current and projected internal air temperature and humidity, per e.g. bath fans and attic fans with or without in-wall-controllers with humidistats
- 2bc) projected electrical needs for cooking, baking, microwaving (typically 1500 W or more)
- 2c) how many volts at how many amps the local renewables are producing
But IIUC, Buy-All Sell-All service provision agreements threaten termination of service if the customer/competitor does anything but sell all locally produced electricity to the grid by direct connection, so an emergency cut-over that charges your batteries off your solar panels instead of the grid (e.g. when the grid is down) is forbidden.
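As a toy sketch, the dispatch decision such a controller might make each tick (all names and thresholds here are hypothetical):

    def dispatch(grid_up: bool, export_ok: bool, battery_soc: float,
                 solar_w: float, load_w: float) -> str:
        # Island the house if the grid is down (the cutover discussed above)
        if not grid_up:
            return "island: solar -> battery -> local loads"
        surplus = solar_w - load_w
        if surplus <= 0:
            return "import the shortfall from the grid"
        if battery_soc < 0.9:
            return "charge batteries with the surplus"
        return "export surplus to the grid" if export_ok else "throttle production"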
New Docker Desktop: Run WASM Applications Alongside Linux Containers in Docker
This feels huge.
I haven't messed with WASM yet but I do love the idea of being able to build something that targets wasm/wasi with Docker like I'm already doing today and still package all of its dependencies into a single Docker image.
I also love the fact that code that targets wasm can run in a browser or on the machine in an isolate without hard forking.
It used to be called an EAR file.
HTTP SXG and Web Bundles (and SRI) - components of W3C Web Packaging - may be useful for a signed WASM package format: https://github.com/WICG/webpackage#packaging-tools
How is WASM distinct from an unsigned binary blob?
Nothing other than trying to reboot the ecosystem for VCs.
https://docs.oracle.com/javase/7/docs/technotes/tools/window...
Sigstore is a CNCF project for centralized asset signatures for packages, containers, software artifacts; Cosign, Gitsign: https://docs.sigstore.dev/#how-to-use-sigstore
Re: TUF, Sigstore, W3C DIDs, CT Certificate Transparency logs, W3C Web Bundles; and reinventing the signed artifact wheel: https://news.ycombinator.com/item?id=30682329 ("Podman can transfer container images without a registry")
From "HTTP Messages Signatures" (~SXG) https://news.ycombinator.com/item?id=29281449 :
> blockcerts/cert-verifier-js ?
blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates/cert-verifier-js
When will emscripten output Java bytecode?
Just because something existed before something else doesn't mean a competitor can't spring up and have more momentum. It feels like the JVM is falling massively behind, and it loses out on things like memory efficiency.
Even on that, WASM is a follower:
"NestedVM provides binary translation for Java Bytecode. This is done by having GCC compile to a MIPS binary which is then translated to a Java class file. Hence any application written in C, C++, Fortran, or any other language supported by GCC can be run in 100% pure Java with no source changes."
With greetings from 2006, http://nestedvm.ibex.org/
How does the memory usage change? Does Java still require initial RAM reservation? /? Java specify how much RAM https://www.google.com/search?q=java+specify+how+much+ram
VOC transpiles Python to Java bytecode. Py2many transpiles Python to many languages but not yet Java.
Apache Arrow can do IPC to share memory references to structs with schema without modification between many languages now; including JS and WASM. https://arrow.apache.org/
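For example, a minimal pyarrow IPC round-trip over the stream format (a sketch; the same bytes are readable by the JS and WASM Arrow implementations without conversion):

    import pyarrow as pa

    batch = pa.record_batch([pa.array([1, 2, 3])], names=["x"])

    # Write the batch to the Arrow IPC stream format
    sink = pa.BufferOutputStream()
    with pa.ipc.new_stream(sink, batch.schema) as writer:
        writer.write_batch(batch)

    # Read it back; the schema travels with the stream
    for b in pa.ipc.open_stream(sink.getvalue()):
        print(b.column(0))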
FWIU Service Workers and Task Workers and Web Locks are the browser APIs available for concurrency in browsers and thus WASM. https://github.com/jupyterlab/jupyterlab/issues/1639#issueco...
"WebVM" https://news.ycombinator.com/item?id=30168491 :
> Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"? [Or Git; though there's already isomorphic-git in JS]
"WebGPU" https://news.ycombinator.com/item?id=30601415
Emscripten-compiled WASM can be packaged with ~conda packages and built and hosted by emscripten-forge ( which works like conda-forge, which has Python, R, Julia, Rust) to be imported from JS and WASM. Here's the picomamba recipe.yml on emscripten-forge: https://github.com/emscripten-forge/recipes/blob/main/recipe... and for CPython: https://github.com/emscripten-forge/recipes/blob/main/recipe...
Browsers could run WASM containers, too. How does the browser sandbox + WASM runtime sandbox (which lacks WASI) compare to the security features of Linux containers?
How do the docstrings look after transpilation?
Are there relative performance benchmarks that help estimate the overhead of the WASM-recompilation and runtime? How much slower is it to run the same operations with the same code in a runtime with WASI support?
Are there cgroups and other container features for WASM applications?
Is there any way to tell whether an unsigned WASM bundle is taking 110% of CPU in a browser tab process?
Do browser tabs yet use cgroups functionality to limit resource exhaustion risks?
Should we be as confident in unsigned WASM in a WASM runtime as with TUF-signed containers?
Ask HN: Which books have made you a better thinker and problem solver?
Your choices needn't be only math books. They can come from any discipline or genre.
When you mention any book please add a line or two as to why it made you a better thinker and problem solver.
This suggestion is humorous, but absolutely true: Potty Training In 3 Days.
Before having children, I thought I was fairly empathetic and introspective, but raising a child helped me realize how superficial those traits in myself were.
I'm being completely honest when I say this book made me a better leader and project manager - having a better understanding of the motivations of others, incentivizing those looking to you for guidance based on their own goals/desires, providing those with tools they need to succeed, and taking a macro view of a problem and allowing those under me to flourish and find creative ways to solve problems that take advantage of their strengths and idiosyncrasies.
I'm in no way suggesting that you infantilize those around you, just that teaching my toddler to shit opened my eyes to the way I approached problems, and Brandi Brucks' book helped me approach things differently with great success!
"The Everyday Parenting Toolkit: The Kazdin Method for Easy, Step-by-Step, Lasting Change for You and Your Child" https://www.google.com/search?kgmid=/g/11h7dr5mm6&hl=en-US&q...
"Everyday Parenting: The ABCs of Child Rearing" (Kazdin, Yale,) https://www.coursera.org/learn/everyday-parenting :
> The course will also shed light on many parenting misconceptions and ineffective strategies that are routinely used.
Re: Effective praise and Validating parenting
https://wrdrd.github.io/docs/consulting/kids
I love Kazdin, great additional suggestion. I didn't know there was a course on Coursera though, thanks for sharing!
Thanks for these suggestions, bought the books & signed up for the course!
Building arbitrary Life patterns in 15 gliders
Interestingly enough, the concept of placing gliders at a distance away seems to touch on the relativity of space and time. Here, with space, we are also encoding the time at which a certain pattern (a glider) appears where it's needed. In a rigid system like the GoL, we can't trade space with time easily, since everything happens at a constant speed, but it makes one wonder...
...if there's a GoL version where time varies somehow¹ with something²
¹ directly?
² amount of activity? mass?
There have been a lot of GoL variants over the years, but I don't remember running into any attempts to vary the speed of evolution in different locations on the same grid.
The idea that all neighbors move to the next tick simultaneously is a fundamental assumption in cellular automata in general. If you try changing that, the optimizations that allow us to simulate CAs at any kind of reasonable speed ... all stop working, pretty much. It's kind of painful even to think about.
Which means there are probably very interesting rules out there somewhere, where CAs run faster/slower depending on pattern density -- it's just going to be very tricky to explore that particular search space.
The "superstep" that we practically impose upon simulations of entropy and emergence is out of accord with our modern understanding of non-regularly-quantizable spacetime. The debuggable Von Neumann instruction pipeline precludes "in-RAM computing" which conceivably does converge if consensus-level error correction is necessary.
The term 'superstep' reminds me of the HashLife algorithm https://en.wikipedia.org/wiki/Hashlife for computing the Game of Life. It computes multiple generations at the same time, and runs at different speeds in different parts of the universe, but only with the purpose of computing CGoL faster, not to introduce any relativity.
How does it "touch on relativity of space and time"?
I think that was just saying "more space between initial gliders implies a longer time needed to complete construction". There's no Einsteinian relativity to be found here.
(A Doppler effect does show up in Conway's Life sometimes, but that's about as far as we get with analogies to the physical universe...!)
How nonlocal are the entanglements in Conway's game of cellular automata, if they're entanglements with symmetry; conservation but emergence? TIL about the effect of two Hadamard gates upon a zero.
Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord :
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
From "Convolution Is Fancy Multiplication" https://news.ycombinator.com/item?id=25194658 :
> FWIW, (bounded) Conway's Game of Life can be efficiently implemented as a convolution of the board state: https://gist.github.com/mikelane/89c580b7764f04cf73b32bf4e94...
Conway's Game is a 2D convolution; without complex phase or constructive superposition.
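A minimal sketch of that convolution step (using scipy's convolve2d; the kernel counts the 8 neighbors):

    import numpy as np
    from scipy.signal import convolve2d

    KERNEL = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])

    def step(board: np.ndarray) -> np.ndarray:
        n = convolve2d(board, KERNEL, mode="same", boundary="fill")
        # Birth on exactly 3 neighbors; survival on 2 or 3
        return ((n == 3) | ((board == 1) & (n == 2))).astype(board.dtype)

    glider = np.zeros((8, 8), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    print(step(glider))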
Convolution theorem: https://en.wikipedia.org/wiki/Convolution_theorem :
> In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
From Quantum Fourier transform: https://en.wikipedia.org/wiki/Quantum_Fourier_transform :
> The quantum Fourier transform can be performed efficiently on a quantum computer with a decomposition into the product of simpler unitary matrices. The discrete Fourier transform on 2^{n} amplitudes can be implemented as a quantum circuit consisting of only O(n^2) Hadamard gates and controlled phase shift gates, where n is the number of qubits.[2] This can be compared with the classical discrete Fourier transform, which takes O(n*(2^n)) gates (where n is the number of bits), which is exponentially more than O(n^2).
MicroPython officially becomes part of the Arduino ecosystem
Thonny has MicroPython support.
"BLD: Install thonny with conda and/or mamba" https://github.com/thonny/thonny/issues/2181
Mu editor has MicroPython support: https://codewith.mu/
For VSCode, there are a number of extensions for CircuitPython and MicroPython:
joedevivo.vscode-circuitpython https://marketplace.visualstudio.com/items?itemName=joedeviv...
Pymakr https://github.com/pycom/pymakr-vsc/blob/next/GET_STARTED.md
Pico-Go: https://github.com/cpwood/Pico-Go
/? CircuitPython MicroPython: https://www.google.com/search?q=circuitpython+micropython
Arduino IDE now has support for the Raspberry Pi Pico.
arduino-pico: https://arduino-pico.readthedocs.io/en/latest/index.html
Rshell and ampy are CLI tools for MicroPython:
rshell: https://github.com/dhylands/rshell
ampy: https://github.com/scientifichackers/ampy
Fedora MicroPython docs: https://developer.fedoraproject.org/tech/languages/python/mi...
awesome-micropython: https://github.com/mcauser/awesome-micropython#ides
awesome-arduino: https://github.com/Lembed/Awesome-arduino
KiCad (ngspice) is an open source tool for circuit simulation. Tinkercad is another.
TIL about Mecanum wheels.
wokwi/rp2040js: https://github.com/wokwi/rp2040js:
> Raspberry Pi Pico Emulator for the Wokwi Simulation Platform. It blinks, runs Arduino code, and even the MicroPython REPL!
What are some advantages of the Arduino IDE? (which is cross-platform and now supports MicroPython and the Pi Pico W (a $6 board with Micro-USB and a pinout spec))
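For reference, a minimal MicroPython program that any of these IDEs can flash to a Pico W (on the Pico W port, the onboard LED is addressed by name):

    from machine import Pin
    import time

    led = Pin("LED", Pin.OUT)  # "LED" because the Pico W LED hangs off the WiFi chip
    while True:
        led.toggle()
        time.sleep(0.5)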
ELIZA is Turing Complete
It's referring to the ELIZA scripting language, not the original "therapist" interface. A programming language being Turing-complete isn't really news, it's... the main point, unless you are intentionally trying to avoid being Turing-complete.
> A programming language being Turing-complete isn't really news, it's... the main point, unless you are intentionally trying to avoid being Turing-complete.
For example, Bitcoin "smart contracts" are intentionally not Turing-complete, and there are not per-opcode costs like there are for EVM and eWASM (which are embedded in other programs than Ethereum)
"The Cha Cha Slide Is Turing Complete" https://news.ycombinator.com/item?id=32477593 :
> Turing completeness: https://en.wikipedia.org/wiki/Turing_completeness
> Church-Turing thesis: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
Halting problem: https://en.wikipedia.org/wiki/Halting_problem :
> A key part of the proof is a mathematical definition of a computer and program, which is known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first cases of decision problems proven to be unsolvable. This proof is significant to practical computing efforts, defining a class of applications which no programming invention can possibly perform perfectly.
Quantum Turing machine > History: https://en.wikipedia.org/wiki/Quantum_Turing_machine :
> [...] any quantum algorithm can be expressed formally as a particular quantum Turing machine. However, the computationally equivalent quantum circuit is a more common model.[1][2]
> History: [2005] A quantum Turing machine with postselection was defined by Scott Aaronson, who showed that the class of polynomial time on such a machine (PostBQP) is equal to the classical complexity class PP.
Complexity Zoo > Petting Zoo > {P, NP, PP,}, Modeling Computation > Deterministic Turing Machine https://complexityzoo.net/Petting_Zoo#Deterministic_Turing_M...
-
When I read the title, I too assumed it was about dialectical chatbots and - having just read turtledemo.chaos - wondered whether there's divergence and potentially infinite monkeys and then emergence of a reverse shell to another layer of indirection; turtles all the way down w/ emergence.
Draft RFC: Cryptographic Hyperlinks
This hashlink spec seems to duplicate the existing Named Information (ni:) URI scheme (RFC 6920), as well as widely deployed non-standard solutions, namely magnet: URIs. These don't seem to currently support the Multihash format that's recommended here, but could easily be extended/amended to that effect. Not seeing the point of this.
Neither RFC6920 nor magnet: URIs appear to support the "just add a url parameter with the hash to the existing URL" use case, FWICS.
CAS Content-addressable storage: https://en.wikipedia.org/wiki/Content-addressable_storage#Op... : IPFS <CID>/path
RFC6920: https://www.rfc-editor.org/rfc/rfc6920.html
Magnet URI scheme: https://en.wikipedia.org/wiki/Magnet_URI_scheme
draft-sporny-hashlink-07 2021: https://datatracker.ietf.org/doc/html/draft-sporny-hashlink-... :
<url>?hl=<resource-hash>
The URL-parameter method is application-specific though, not sure it’s very useful to have a standard for that. Applications can just use whatever parameter name makes sense for them plus an existing hash/CAS encoding for the value.
How else can browsers check the hash of a file downloaded over HTTPS?
The main focus of the draft RFC is to define a new "hl" hashlink URL scheme. This doesn't necessarily target browsers, though they could certainly choose to support that scheme. The URL-parameter encoding, which is not the recommended encoding, is merely for "existing applications utilizing historical URL schemes", which may or may not include browsers. The draft explicitly states: "Implementers should take note that the URL parameter-based encoding mechanism is application specific and SHOULD NOT be used unless the URL resolver for the application cannot be upgraded to support the RECOMMENDED encoding."
HTTP SRI Subresource Integrity allows for specifying which cryptographic hash is presented, like ld-proofs has a "future-proof" signatureSuite attribute : https://developer.mozilla.org/en-US/docs/Web/Security/Subres...
> Subresource Integrity with the <script> element
> You can use the following <script> element to tell a browser that before executing the https://example.com/example-framework.js script, the browser must first compare the script to the expected hash, and verify that there's a match.
    <script
      src="https://example.com/example-framework.js"
      integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
      crossorigin="anonymous"></script>
[...]
Note: An integrity value may contain multiple hashes separated by whitespace. A resource will be loaded if it matches one of those hashes
To create a SHA384 SRI hash for the integrity= attribute:

    cat FILENAME.js | openssl dgst -sha384 -binary | openssl base64 -A
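The same digest in Python, for comparison (standard-library hashlib and base64; the file name is hypothetical):

    import base64, hashlib

    def sri_sha384(path: str) -> str:
        # digest is the equivalent of: cat FILE | openssl dgst -sha384 -binary | openssl base64 -A
        digest = hashlib.sha384(open(path, "rb").read()).digest()
        # SRI expects the algorithm prefix in the integrity= attribute value
        return "sha384-" + base64.b64encode(digest).decode()

    print(sri_sha384("example-framework.js"))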
### 1.1. Multiple Encodings
A hashlink can be encoded in two different ways, the RECOMMENDED way
to express a hashlink is:
hl:<resource-hash>:<optional-metadata>
To enable existing applications utilizing historical URL schemes to
provide content integrity protection, hashlinks may also be encoded
using URL parameters:
<url>?hl=<resource-hash>
Implementers should take note that the URL parameter-based encoding
mechanism is application specific and SHOULD NOT be used unless the
URL resolver for the application cannot be upgraded to support the
RECOMMENDED encoding.
[...]
#### 3.2.1. Hashlink as a Parameterized URL Example
The example below demonstrates a simple hashlink that provides
content integrity protection for the "http://example.org/hw.txt"
file, which has a content type of "text/plain":
http://example.org/hw.txt?hl=
zQmWvQxTqbG2Z9HPJgG57jjwR154cKhbtJenbyYTWkjgF3e
Hydrogen-producing rooftop solar panels nearing commercialization
Is this more or less flammable than rooftop solar?
Could adjacent H2O and CO2 capture and storage help mitigate hydrogen fire risk?
I really cringe at the thought of H2 at home for multiple reasons.
- a wide explosive concentration range (LEL to UEL). The wider the range, the more opportunity you have for encountering an explosive mix if there's a leak.
- a low ignition energy. A small static discharge has more than enough energy to ignite H2.
- It's a functionally difficult gas to work with. It's an escape artist. You have to use the right materials. You have issues like embrittlement, hydrogen stress cracking. It's not great from a volumetric energy density perspective - more watts are required for pumping X units of energy than a fuel with a higher energy density. To bump up the energy density, you compress it and or liquefy it.. which costs energy, you've got high working pressures, and are possibly dealing with cryogenics as well.
FWIU, green hydrogen is where the total "path to value" is actually being considered.
Charge the grid with it; whatever it is, if it's "economical": charge the grid with it.
> [ Hydrogen Safety: https://en.wikipedia.org/wiki/Hydrogen_safety ; videos: [ ]]
>> Contents: Prevention, Inerting and purging, Ignition source management (two rocks, HVAC, lighting), Mechanical integrity and reactive chemistry, Leaks and flame detection systems, Ventilation and flaring (all facilities that process hydrogen must have anti-static ventilation systems), Inventory management and facility spacing, Cryogenics, Human factors, Incidents, *Hydrogen codes and standards*, *Guidelines*
Would capturing CO2 and water with the same or adjacent PV/TPV+ panels help mitigate Hydrogen Hazard? FWIU, Aerogels and hydrogels can be made from CO2.
A possibility would be to put the H2 into a different storage medium like ammonia or methanol, rather than storing it as gaseous hydrogen.
"With a plan to decarbonize heating systems with hydrogen, Modern Electron raises $30M" (2022) https://techcrunch.com/2022/02/03/with-a-plan-to-decarbonize... :
> The second, which is still under development but about to make its debut, is what they’re calling the Modern Electron Reserve, which rather than burning natural gas — which is mostly CH4, or methane — reduces it to solid carbon (in the form of graphite) and hydrogen gas. The gas is passed on to the furnace to be burned, and converted to both heat and energy, while the graphite is collected for disposal or reuse.
And there's a picture of what's left after they extract just the Hydrogen from PNG/LNG for one day of home heat.
Letting the grass grow longer is one way to absorb carbon locally; longer grass is more efficient at absorbing carbon (e.g. carbon emitted by comparatively inefficiently burning natural gas for heat, an exothermic reaction).
Show HN: I built my own PM tool after trying Trello, Asana, ClickUp, etc.
Hey HN,
Over the past two years, I've been building Upbase, an all-in-one PM tool.
I've tried so many project management tools over the years (Trello, Asana, ClickUp, Teamwork, Wrike, Monday, etc.) but they've all fallen short. Most of them are overly complicated and painful to use. Some others, like Trello, are too limited for my needs.
Most importantly, most of these tools tend to be focused on team collaboration and completely ignore personal productivity.
They are useful for organizing my work, but not great at helping me stay focused to get things done.
That's why I decided to build Upbase.
I try to make it clean and simple, without all the bells and whistles. Apart from team collaboration, I added many personal productivity features, including Weekly/Daily planner, Time blocking, Pomodoro Timer, Daily Journal, etc. so I don't need another to-do list app.
Now I can use Upbase to collaborate with my team AND manage my personal stuff at the same time, without all the bloat.
If these resonate with you, then give Upbase a try. It has a Free Forever plan, too.
Let me know if you have any feedback or questions!
One thing I've always wanted in a tool like this is the ability to map out probabilities, i.e., we do A, then B, but after that we do either C, D, or E. Each one of these has an associated probability (C: 0.4, D: 0.5, E: 0.1) and an associated estimate (C: 10 +- 5 normal, D: 12 +- 3 uniform, E: 3 +- 1 normal).
The UI would look like a graph and like ms project it could include resource levelling in order to show bottlenecks.
I know this probably sounds complicated, but I think it maps to reality fairly well and thus is actually simple (fighting reality is hard).
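A minimal Monte Carlo sketch of that C/D/E stage, using the numbers above (the distribution handling is my assumption):

    import random

    # C: p=0.4, 10 +- 5 normal; D: p=0.5, 12 +- 3 uniform; E: p=0.1, 3 +- 1 normal
    branches = [
        (0.4, lambda: random.gauss(10, 5)),
        (0.5, lambda: random.uniform(9, 15)),
        (0.1, lambda: random.gauss(3, 1)),
    ]

    def sample_stage() -> float:
        r, acc = random.random(), 0.0
        for p, draw in branches:
            acc += p
            if r <= acc:
                return max(0.0, draw())
        return max(0.0, branches[-1][1]())  # guard against float rounding

    samples = [sample_stage() for _ in range(100_000)]
    print(sum(samples) / len(samples))  # expected stage duration, ~10.3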
Gantt charts can be made in MS Project, Google Sheets, etc.: https://en.wikipedia.org/wiki/Gantt_chart
Critical path method > Basic techniques: https://en.wikipedia.org/wiki/Critical_path_method#Basic_tec... :
> Components: The essential technique for using CPM [8][9] is to construct a model of the project that includes the following:
> (1) A list of all activities required to complete the project (typically categorized within a work breakdown structure), (2) The time (duration) that each activity will take to complete, (3) The dependencies between the activities and, (4) Logical end points such as milestones or deliverable items.
> Using these values, CPM calculates the *longest path* of planned activities to logical end points or to the end of the project, and *the earliest and latest that each activity can start and finish without making the project longer.* This process determines which activities are "critical" (i.e., on the longest path) and which have "total float" (i.e., can be delayed without making the project longer). In project management, a critical path is the sequence of project network activities which add up to the longest overall duration, regardless if that longest duration has float or not. This determines the *shortest time possible to complete the project*.
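As a toy illustration of that longest-path computation, in Python (the task names and durations are hypothetical):

    import functools

    # {name: (duration, dependencies)} over a DAG of tasks
    tasks = {
        "A": (3, []),
        "B": (2, ["A"]),
        "C": (4, ["A"]),
        "D": (1, ["B", "C"]),
    }

    @functools.lru_cache(maxsize=None)
    def earliest_finish(name: str) -> int:
        duration, deps = tasks[name]
        return duration + max((earliest_finish(d) for d in deps), default=0)

    # The critical path length is the longest path to any endpoint
    print(max(earliest_finish(t) for t in tasks))  # 8, via A -> C -> D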
Re: [Hilbert curve, Pyschedule, CSP,] Scheduling of [OS, Conference Room,] and other Resources https://news.ycombinator.com/item?id=31777451 https://westurner.github.io/hnlog/#comment-31777451
Complexity and/or Time estimates can be stuffed into nonexclusive namespaced label names on GitHub/GitLab/Gitea:
#ComplexityEstimate:
C:1
C: Fibonacci[n]
C: (A), J, Q, K, (A)
#TimeEstimate:
T:2d
T:5m
#Good First Issue
GitLab EE and Gitea have time tracking on Issues and Pull Requests. Gitea has untyped Issue dependency edges, but there could probably easily be another column in the through table for the many-to-many Issue edges to support typed edges with URIs, i.e. JSONLD RDF.
GitLab Free supports the "relates to" Linked Issue relation; EE also supports "blocks"/"is blocked by".
Planning poker: https://en.wikipedia.org/wiki/Planning_poker
Agile estimation: https://www.google.com/search?q=agile+estimation
"Agile Estimating and Planning" (2005) https://g.co/kgs/kDScM7
CPM sounds a lot like a PERT chart, which was invented in the 1950's to help design and build the Polaris nuclear submarines during the Cold War[0]. PERT has been part of Microsoft Project for decades, so it's readily available.
When you really need it (like in the case of tens of thousands of people trying to build and ship a single project at lightning speed) PERT is an extremely powerful and effective project management methodology. If, on the other hand, your "project management division" is you, it's a dangerously seductive time sink that will consume huge amounts of your time building and tuning and gathering and updating data and information, for arbitrarily close to zero direct real benefit and huge net negative benefit. The increase in effectiveness you gain from all that modeling is, in software development projects, negligible, and the cost of doing all that modeling is much higher than you think it will be if you've never done it (that's why we don't do waterfall planning in software - it's not that no one's thought of it, it's that it's not effective on projects of any real complexity). As with any approach to planning, PERT works best at a particular scale and project type, and its sweet spot is typically a quite large scale non-software project.
In my personal opinion, from a software development standpoint, the valuable part of building a PERT chart is doing the work and thinking required to draw a dependency map for your tasks. Drawing all those lines to show what has to be done before what is an incredibly effective tool for helping you flesh out and find dependencies (tasks) you hadn't realized needed to be on your list. Use something like MS Project to build that dependency diagram, then force yourself to stop using Project because it's too seductive at making you feel like the data is giving you power when it's really just consuming your brainpower ineffectively. Use dependency mapping to build a waterfall caliber understanding of what you need to build, then set it aside and use more appropriate agile style approaches to actually work through the project in an optimum manner (which often means not building it in exactly the way you mapped out originally).
[0] https://en.m.wikipedia.org/wiki/Program_evaluation_and_revie...
WBS: Work Breakdown Structure: https://en.wikipedia.org/wiki/Work_breakdown_structure
PERT -> see also ->
"Project network" https://en.wikipedia.org/wiki/Project_network :
> Other techniques: The condition for a valid project network is that it doesn't contain any circular references.
> Project dependencies can also be depicted by a predecessor table. Although such a form is very inconvenient for human analysis, project management software often offers such a view for data entry.
> An alternative way of showing and analyzing the sequence of project work is the design structure matrix or dependency structure matrix.
design structure matrix or dependency structure matrix: https://en.wikipedia.org/wiki/Design_structure_matrix
READMEs, Issues, Pull Requests, and Project Board Cards may contain Nested Markdown Task Lists with Issue (and actual Pull Request) # references:
- [ ] Objective
- [x] Task 1 +tag
- [ ] #237 (GitHub fills in the Title and Open/Closed/Merged state and adds a *hover card*)
- [x] Multiline Markdown list item indentation
<URL|text|>
- ID#:
- Title:
- Labels: [ ]
- Description: |
- htps://URL#yaml-yamlld
- [x] Multiline Markdown list item indentation w/ --- YAML front matter delimiters
---
- id:
- title:
- labels: [ ]
---
- htps://URL#yaml-yamlld
Time management > Setting priorities and goals > The Eisenhower Method: https://en.wikipedia.org/wiki/Time_management#The_Eisenhower... :

    |            | Important | Not important |
    | Urgent     |           |               |
    | Not Urgent |           |               |
From "Ask HN: Any well funded tech companies tackling big, meaningful problems?" https://news.ycombinator.com/item?id=24412493 :> https://en.wikipedia.org/wiki/Strategic_alignment ... "Schema.org: Mission, Project, Goal, Objective, Task" https://news.ycombinator.com/item?id=12525141
NSA urges orgs to use memory-safe programming languages
# Infosec Memory Safety
## Hardware
- Memory protection: https://en.wikipedia.org/wiki/Memory_protection
- NX Bit: https://en.wikipedia.org/wiki/NX_bit
- Can non-compiled languages (e.g. those with mutable code objects like Python) utilize the NX bit that the processor supports?
- Can TLA+ find side-channels (which bypass all software memory protection features other than encryption-in-RAM)?
- How do DMA and IOMMU hardware features impact software memory safety controls? https://news.ycombinator.com/item?id=23993763
- DMA: Direct Memory Access
- DMA attack > Mitigations: https://en.wikipedia.org/wiki/DMA_attack
- IOMMU: I-O Memory Management Unit; GPUs, Virtualization, https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_ma...
- Kernel IOMMU parameters: Ctrl-F "iommu": https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...
- RDMA: Remote direct memory access https://en.wikipedia.org/wiki/Remote_direct_memory_access
## Software
- Type safety > Memory management and type safety: https://en.wikipedia.org/wiki/Type_safety#Memory_management_...
- Memory safety > Types of memory errors: https://en.wikipedia.org/wiki/Memory_safety#Types_of_memory_...
- Template:Memory management https://en.wikipedia.org/wiki/Template:Memory_management
- Category:Memory_management https://en.wikipedia.org/wiki/Category:Memory_management
- Reference (computerscience) https://en.wikipedia.org/wiki/Reference_(computer_science)
- Pointer (computer programming) https://en.wikipedia.org/wiki/Pointer_(computer_programming)
- Smart pointer (computer programming) in C++: unique_ptr, shared_ptr and weak_ptr; Python: weakref, Arrow Plasma IPC, https://en.wikipedia.org/wiki/Smart_pointer
- Manual Memory Management > Resource Acquisition Is Initialization https://en.wikipedia.org/wiki/Manual_memory_management#Resou...
- Resource acquisition is initialization (C++ (1980s), D, Ada, Vala, Rust), #Reference_counting (Perl, Python (CPython,), PHP,) https://en.wikipedia.org/wiki/Resource_acquisition_is_initia...
- Ada > Language constructs > Concurrency https://en.wikipedia.org/wiki/Ada_(programming_language)#Con...
- C_dynamic_memory_allocation#Common_errors: https://en.wikipedia.org/wiki/C_dynamic_memory_allocation#Co...
- Python 3 > C-API > Memory Management: https://docs.python.org/3/c-api/memory.html
- The Rust Programming Language > 4. Understanding Ownership > 4.1. What is Ownership? https://doc.rust-lang.org/book/ch04-00-understanding-ownersh...
- The Rust Programming Language > 6. Fearless Concurrency > Using Message Passing to Transfer Data Between Threads https://doc.rust-lang.org/book/ch16-02-message-passing.html#...
> One increasingly popular approach to ensuring safe concurrency is message passing, where threads or actors communicate by sending each other messages containing data. Here’s the idea in a slogan from the Go language documentation: “Do not communicate by sharing memory; instead, share memory by communicating.”
> To accomplish message-sending concurrency, Rust's standard library provides an implementation of channels. A channel is a general programming concept by which data is sent from one thread to another.
> You can imagine a channel in programming as being like a directional channel of water, such as a stream or a river. If you put something like a rubber duck into a river, it will travel downstream to the end of the waterway.
- The Rust Programming Language > 15. Smart Pointers > Smart Pointers: https://doc.rust-lang.org/book/ch15-00-smart-pointers.html
- The Rust Programming Language > 19. Advanced Features > Unsafe Rust: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html
- Secure Rust Guidelines > Memory management, > Checklist > Memory management: https://anssi-fr.github.io/rust-guide/05_memory.html
- Go 101 > "Type-Unsafe Pointers" https://go101.org/article/unsafe.html https://pkg.go.dev/unsafe
- https://github.com/rust-secure-code/projects#side-channel-vu...
- Segmentation fault > Causes, Examples, : https://en.wikipedia.org/wiki/Segmentation_fault
- "CWE CATEGORY: Pointer Issues" https://cwe.mitre.org/data/definitions/465.html
- "CWE CATEGORY: Memory Buffer Errors" https://cwe.mitre.org/data/definitions/1218.html
- "CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer" https://cwe.mitre.org/data/definitions/119.html
- "CWE CATEGORY: SEI CERT C Coding Standard - Guidelines 08. Memory Management (MEM)" https://cwe.mitre.org/data/definitions/1162.html
- "CWE CATEGORY: CERT C++ Secure Coding Section 08 - Memory Management (MEM)" https://cwe.mitre.org/data/definitions/876.html
- SEI CERT C Coding Standard > "Rule 08. Memory Management (MEM)" https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pa...
- SEI CERT C Coding Standard > "Rec. 08. Memory Management (MEM)" https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pa...
- Invariance (computer science) https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invari...
- TLA+ Model checker https://en.wikipedia.org/wiki/TLA%2B#Model_checker > The TLC model checker builds a finite state model of TLA+ specifications for checking invariance properties.
- Data remanence; after a process fails or is ended, RAM is not zeroed: https://en.wikipedia.org/wiki/Data_remanence
- Memory debugger; valgrind, https://en.wikipedia.org/wiki/Memory_debugger
- awesome-safety-critical https://awesome-safety-critical.readthedocs.io/en/latest/#so... ; Software Safety Standards, Handbooks; Formal Verification; backup/ https://github.com/stanislaw/awesome-safety-critical/tree/ma...
- > Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools: https://news.ycombinator.com/item?id=24511280
TEE Trusted Execution Environment > Hardware support, TEE Operating Systems: https://en.wikipedia.org/wiki/Trusted_execution_environment#...
List of [SGX,] vulnerabilities: https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...
Protection Ring: https://en.wikipedia.org/wiki/Protection_ring ... Memory Segmentation: https://en.wikipedia.org/wiki/Memory_segmentation
.data segment: https://en.wikipedia.org/wiki/Data_segment
.code segment: https://en.wikipedia.org/wiki/Code_segment
NX bit: https://en.wikipedia.org/wiki/No-execute_bit
Arbitrary code execution: https://en.wikipedia.org/wiki/Arbitrary_code_execution :
> This type of attack exploits the fact that most computers (which use a Von Neumann architecture) do not make a general distinction between code and data,[6][7] so that malicious code can be camouflaged as harmless input data. Many newer CPUs have mechanisms to make this harder, such as a no-execute bit. [8][9]
> - Memory debugger; valgrind, https://en.wikipedia.org/wiki/Memory_debugger
"The GDB developer's GNU Debugger tutorial, Part 1: Getting started with the debugger" (2021) https://developers.redhat.com/blog/2021/04/30/the-gdb-develo...
"Debugging Python C extensions with GDB" (2021) https://developers.redhat.com/articles/2021/09/08/debugging-... & "Python Devguide" > "GDB support" https://devguide.python.org/advanced-tools/gdb/ :
run, where, frame, p(rint),
py-list, py-up/py-down, py-bt, py-locals, py-print
/? site:github.com inurl:awesome inurl:gdb
https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
/? vscode debugger: https://www.google.com/search?q=vscode+debugger
/? jupyterlab debugger: https://www.google.com/search?q=jupyterlab+debugger
Ghidra: https://en.wikipedia.org/wiki/Ghidra
> Ghidra can be used as a debugger since Ghidra 10.0. Ghidra's debugger supports debugging user-mode Windows programs via WinDbg, and Linux programs via GDB. [11]
Ghidra 10.0 (2021) Release Notes: https://ghidra-sre.org/releaseNotes_10.0beta.html
"A first look at Ghidra's Debugger - Game Boy Advance Edition" (2022) https://wrongbaud.github.io/posts/ghidra-debugger/ :
> - Debugging a program with Ghidra using the GDB stub
> - Use the debugging capability to help us learn about how passwords are processed for a GBA game
/? site:github.com inurl:awesome ollydbg ghidra memory https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
Memory forensics: https://en.wikipedia.org/wiki/Memory_forensics
awesome-malware-analysis > memory-forensics: https://github.com/rshipp/awesome-malware-analysis/blob/main...
github.com/topics/memory-forensics: https://github.com/topics/memory-forensics :
- microsoft/avml: https://github.com/microsoft/avml :
/dev/crash
/proc/kcore
/dev/mem
> NOTE: If the kernel feature `kernel_lockdown` is enabled, AVML will not be able to acquire memory.
Aluminum formate Al(HCOO)3: Earth-abundant, scalable, & material for CO2 capture
"Aluminum formate, Al(HCOO)3: An earth-abundant, scalable, and highly selective material for CO2 capture" (2022) https://www.science.org/doi/10.1126/sciadv.ade1473
> Abstract: A combination of gas adsorption and gas breakthrough measurements show that the metal-organic framework, Al(HCOO)3 (ALF), which can be made inexpensively from commodity chemicals, exhibits excellent CO2 adsorption capacities and outstanding CO2/N2 selectivity that enable it to remove CO2 from dried CO2-containing gas streams at elevated temperatures (323 kelvin). Notably, ALF is scalable, readily pelletized, stable to SO2 and NO, and simple to regenerate. Density functional theory calculations and in situ neutron diffraction studies reveal that the preferential adsorption of CO2 is a size-selective separation that depends on the subtle difference between the kinetic diameters of CO2 and N2. The findings are supported by additional measurements, including Fourier transform infrared spectroscopy, thermogravimetric analysis, and variable temperature powder and single-crystal x-ray diffraction.
"NIST Breakthrough: Simple Material Could Scrub Carbon Dioxide From Power Plant Smokestacks" (2022) https://scitechdaily.com/nist-breakthrough-simple-material-c... :
> [What to do with all of the captured CO2?]
>> - Generate more formic acid to capture more CO2
- Make protein powder (#Solein,). And nutrients and flavors?
- Feed algae
- Feed plants, greenhouses
- CAES Compressed Air Energy Storage
- Firefighting: https://twitter.com/westurner/status/1572664456210948104
- Make aerogels. Aerogels are useful for firefighter protective clothing, extremely lightweight insulation, extremely lightweight packing materials, aerospace; and no longer require supercritical drying to produce: https://twitter.com/westurner/status/1572662622423584770?t=A...
- Make hydrogels. Hydrogels are useful for: firefighting: https://www.google.com/search?q=hydrogel+firefighting https://twitter.com/westurner/status/1572664456210948104
- Make diamonds, buckyballs, fullerenes, graphene
- Water filtration: activated carbon, nanoporous graphene
Is there a similar process (a MOF?) for capturing methane from flue gas? Is that before or after the CO2 capture?
- Methane is worse for Earth than CO2 FWIU. And there are many uncapped wells leaking methane, as now visible from space: https://news.ycombinator.com/item?id=33431427
- It's possible to make CBG Cleaner Burning Gasoline from methane (natural gas)
How does the #CopenHill recycling and flue capture facility currently handle CO2 and Methane capture?
- "Transforming carbon dioxide into jet fuel using an organic combustion-synthesized Fe-Mn-K catalyst." (2020) https://doi.org/10.1038/s41467-020-20214-z https://news.ycombinator.com/item?id=25559414
Electrons turn piece of wire into laser-like light source
Full paper is here https://europepmc.org/article/ppr/ppr485152
"Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2022) Ye Tian, Dongdong Zhang, Yushan Zeng, Yafeng Bai, Zhongpeng Li, and 1 more https://doi.org/10.21203/rs.3.rs-1572967/v1
> Abstract: Surface plasmonic with its unique confinement of light is expected to be a cornerstone for future compact radiation sources and integrated photonics devices. The energy transfer between light and matter is a defining aspect that underlies recent studies on optical surface-wave-mediated spontaneous emissions. But coherent stimulated emission, being omnipresent in every laser system, remains to be realized and revealed in the optical near fields unambiguously and dynamically. Here, we present the coherent amplification of Terahertz surface plasmon polaritons via free electron stimulated emission. We demonstrate the evolutionary amplification process with a frequency redshift and lasts over 1-mm interaction length. The complementary theoretical analysis predicts a 100-order surface wave growth when a properly phase-matched electron bunch is used, which lays the ground for a stimulated surface wave light source and may facilitate capable means for matter manipulation, especially in the Terahertz band.
Polariton: https://en.wikipedia.org/wiki/Polariton
Surface plasmon polaritons : https://en.wikipedia.org/wiki/Surface_plasmon_polaritons :
> [...] Application of SPPs enables subwavelength optics in microscopy and photolithography beyond the diffraction limit. It also enables the first steady-state micro-mechanical measurement of a fundamental property of light itself: the momentum of a photon in a dielectric medium. Other applications are photonic data storage, light generation, and bio-photonics.[2][3][4][5]
Near-infrared window in biological tissue: https://en.wikipedia.org/wiki/Near-infrared_window_in_biolog...
NIRS Near-infrared spectroscopy > Applications: https://en.wikipedia.org/wiki/Near-infrared_spectroscopy#App...
This looks ripe for some cutting edge table-top physics!
TabPFN: Transformer Solves Small Tabular Classification in a Second
From https://twitter.com/FrankRHutter/status/1583410845307977733 :
> This may revolutionize data science: we introduce TabPFN, a new tabular data classification method that takes 1 second & yields SOTA performance (better than hyperparameter-optimized gradient boosting in 1h). Current limits: up to 1k data points, 100 features, 10 classes. 1/6
[Faster and more accurate than gradient boosting for tabular data: Catboost, LightGBM, XGBoost]
Mathics: A free, open-source alternative to Mathematica
Is there a reasonably neutral comparison of Mathics vs Mathematica anywhere?
Based on an amazing showcase[1] Mathematica is right at the top of my list of languages to learn if it (and at least some of the surrounding tooling) ever becomes open source. I wonder how many of those examples would give useful results in Mathics, or what their equivalents would be.
The thing about Mathematica / the Wolfram language is that it's quite a bit harder to create an open source interpreter than it was for e.g. R (which is actually a FOSS interpreter for the commercial package S) or for Matlab (not sure what the status of Octave, the FOSS interpreter for Matlab code, is; I read an entry on a mailing list a long time ago that its sole dev was giving up). A lot of the symbolic solvers that are used under the hood are Wolfram's IP, and it would be a monumental effort to recreate something similarly powerful from scratch.
You've heard of SageMath right?
The biggest thing missing from SageMath is a step by step solver. (Edit to add a caveat, I'm sure many professionals depend on minutiae of one or the other)
Feature-wise I'd say Sage has more Mathematica functionality than Octave does for MATLAB. Sage is not trying to be compatible; however, presumably it wouldn't be that hard if the functionality is there?
Sage is a bit of a Frankenstein though
SageMath (and the cocalc-docker image, JupyterLite, and mambaforge) include SymPy, which can be called with `evaluate=False`
Advanced Expression Manipulation > Prevent expression evaluation: https://docs.sympy.org/latest/tutorials/intro-tutorial/manip...
> There are generally two ways to prevent the evaluation, either pass an evaluate=False parameter while constructing the expression, or create an evaluation stopper by wrapping the expression with UnevaluatedExpr.
From "disabling automatic simplification in sympy" https://stackoverflow.com/a/48847102 :
> A simpler way to disable automatic evaluation is to use context manager evaluate. For example,
    from sympy.core.evaluate import evaluate  # sympy.core.parameters in SymPy >= 1.7
    from sympy.abc import x, y, z

    with evaluate(False):
        print(x / x)  # prints x/x rather than 1
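The tutorial quoted above names two other approaches; a minimal sketch of both (the expressions are illustrative):

    import sympy as sp
    from sympy import UnevaluatedExpr

    x = sp.Symbol('x')

    # 1. Pass evaluate=False while constructing the expression:
    expr1 = sp.Add(x, x, evaluate=False)  # stays x + x, not 2*x

    # 2. Wrap an operand with UnevaluatedExpr to stop automatic evaluation:
    expr2 = x * UnevaluatedExpr(1 / x)    # stays x*(1/x), not 1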
sage.symbolic.expression.Expression.unhold() and `hold=True`:
https://doc.sagemath.org/html/en/reference/calculus/sage/sym...
IIRC there is a Wolfram Jupyter kernel?
WolframResearch/WolframLanguageForJupyter: https://github.com/WolframResearch/WolframLanguageForJupyter
mathics/IMathics is the Jupyter kernel for mathics: https://github.com/mathics/IMathics@main#egg=imathics
#pip install jupyter_console imathics
#conda install -c conda-forge -y jupyter_console jupyterlab
mamba install -y jupyter_console jupyterlab
jupyter console
jupyter kernelspec list
pip install -e git+https://github.com/mathics/imathics@main#egg=mathics
jupyter console --kernel=
%?
%logstart?
%logstart -o demo.log.py
There are Jupyter kernels for Python, Mathics, Wolfram, R, Octave, Matlab, xeus-cling, allthekernels (the polyglot kernel). https://github.com/jupyter/jupyter/wiki/Jupyter-kernels
https://github.com/ml-tooling/best-of-jupyter#jupyter-kernel...
The Python Jupyter kernel checks IPython.display.display()'d objects for methods in order to represent an object in a command-line shell, graphical shell (qtconsole), notebook (.ipynb), or a LaTeX document: _repr_mimebundle_(), _repr_html_(), _repr_json_(), _repr_latex_(), ..., __repr__(), __str__()
The last expression in an input cell of a notebook is implicitly displayed:
    from IPython.display import display
    display?    # argspec, docstring
    display??   # argspec, docstring, & source code
    display(last_expression)
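A minimal sketch of the rich-display protocol (the Fraction class is made up for illustration):

    from IPython.display import display

    class Fraction:
        def __init__(self, num, den):
            self.num, self.den = num, den

        def _repr_latex_(self):  # used by notebook / qtconsole frontends
            return rf"$\frac{{{self.num}}}{{{self.den}}}$"

        def __repr__(self):      # plain-text fallback for terminal shells
            return f"{self.num}/{self.den}"

    display(Fraction(22, 7))  # renders as LaTeX in a notebook, "22/7" in a terminal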
Symbolic CAS mobile apps with tabling and charting and varying levels of support for complex numbers and quaternions, for example: Wolfram Mathematica, Desmos, Geogebra, JupyterLite, Jupyter on mobile
Astronomers Discover Closest Black Hole to Earth
Micro black hole > Black holes in quantum theories of gravity https://en.wikipedia.org/wiki/Micro_black_hole#Black_holes_i...
Virtual black hole https://en.wikipedia.org/wiki/Virtual_black_hole
Timeline of gravitational physics and relativity https://en.wikipedia.org/wiki/Timeline_of_gravitational_phys...
- [ ] Superfluid Quantum Gravity
-- [ ] GR + Bernoulli's re: Dark Matter/Energy: Fedi (2017),
"What If (Tiny) Black Holes Are Everywhere?" https://youtu.be/srVKjWn26AQ
> Just one Planck relic per 30km cube, and that’s enough to make up most of the mass in the universe
Quantum foam: https://en.wikipedia.org/wiki/Quantum_foam
Sudo: Heap-based overflow with small passwords
Another day, another CVE in a tool that we rely on every day
The first question that we all want to ask
Could it be mitigated by safer, modern tech?
Yes.
One does not even need to reach for Rust. Literally any other language (in common use) other than very badly used C++ would not have had this problem.
C delenda est.
but if sudo was written in java we'd have other problems ;)
Do you know one of the reasons why Multics had a better security score than UNIX in the DoD assessment?
PL/I does bounds checking by default.
For reals. The fact that C toolchains have never even offered a bullet-proof bounds-checked (no UB) mode, no matter what the slowdown, boggles the mind. For something like sudo, literally running 100x slower would not be an issue. Its highest priority should be security.
100x slower would definitely be an issue. I am prompted for my sudo password probably 30x daily.
How long does sudo take to load on your system. Multiply that by 3000, is it really a noticeable number?
Seems to float around 0.005s (using "time sudo -k" to avoid timing user input), so yeah, x3000 = 15 seconds. Very noticeable.
Or even the x100 from (G)GP, that's a half second. Sometimes spiking to a full second.
I once saw a proposal to make any sudo call wait for 10 seconds before doing anything, so that the person running it has a moment to think about what they just asked it to do (and ~10 seconds to ^C cancel it).
The person arguing also brought up that normally anything needing sudo should be automated, so that should be fine.
I'm not doing enough system administration to judge if that is a sane idea or not ;=)
That might be a reasonable idea but the time probably shouldn't be imposed by performance constraints.
Does it say somewhere that non-network-IO PAM modules are supposed to be constant time?
    import random
    import time

    def add_noise(t=10):
        time.sleep(t - 1)
        time.sleep(random.uniform(0, 1))
Constant time: https://en.wikipedia.org/wiki/Time_complexity#Constant_time
(Re: short DES passwords) https://en.wikipedia.org/wiki/Triple_DES :
> A CVE released in 2016, CVE-2016-2183 disclosed a major security vulnerability in DES and 3DES encryption algorithms. This CVE, combined with the inadequate key size of DES and 3DES, NIST has deprecated DES and 3DES for new applications in 2017, and for all applications by the end of 2023.[1] It has been replaced with the more secure, more robust AES.
Except for PQ. For PQ in 2022: https://news.ycombinator.com/item?id=32760170 :
> NIST PQ algos are only just now announced: https://news.ycombinator.com/item?id=32281357 : Kyber, NTRU, {FIPS-140-3}? [TLS1.4/2.0?]
I'm not sure I follow this comment.
Adding a random amount of time seems like a reasonable thing to do.
Not sure what the links are all about, or the discussion of time complexity... I mean, there isn't an "input size" to talk about big-O scaling anyway, in the case of sudo.
Should the time to complete the (single-character) password-hashing/key-strengthening routine vary in relation to any aspect of the input?
Timing attacks > Avoidance https://en.wikipedia.org/wiki/Timing_attack
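In Python, the stdlib's `hmac.compare_digest` is the constant-time comparison; a minimal sketch (the function name is illustrative):

    import hmac

    def verify_digest(stored: bytes, computed: bytes) -> bool:
        # == short-circuits at the first differing byte and leaks timing;
        # compare_digest's runtime is independent of where the inputs differ.
        return hmac.compare_digest(stored, computed)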
Cree releases LEDs designed for horticulture
IIRC chlorophyll has 2 very distinct absorption peaks [1], so it should be possible to design lights that target those frequencies. But knowing how LEDs work, and the hacks with wavelength adjustment we use with phosphors and others, I'm sure it's not easy.
[1] https://upload.wikimedia.org/wikipedia/commons/f/f6/Chloroph...
I mix 6000K/6500K (daylight) LED strips with 2700K (warm white) ones and plants seem very happy about it and it is OK for humans, too.
In fact I previously used only 6000K LED strips (in an insulated box so no other light at all) on cycad and other seeds and got very healthy plants.
I think the key for commercial applications like vertical farms is to optimise energy use by trying not to waste electricity on wavelengths that won't do much to boost production.
UV Ultraviolet light (UV-A, UV-B, and UV-C) and near-UV "antimicrobial violet light" are sanitizing radiation.
Natural sunlight sanitizes because the radiation from the sun includes UV-* band radiation that is sufficient to irradiate organic cells at this distance.
EM light/radiation intensity decreases as a function of the square of the distance from the light source (though what about superfluidic wave functions and accretion discs and microscopic black hole interiors (every 30km by one estimate) and Lagrangian points,).
"Inverse-square law" https://en.wikipedia.org/wiki/Inverse-square_law
Ultraviolet: https://en.m.wikipedia.org/wiki/Ultraviolet
> Short-wave ultraviolet light damages DNA and sterilizes surfaces with which it comes into contact. For humans, suntan and sunburn are familiar effects of exposure of the skin to UV light, along with an increased risk of skin cancer. The amount of UV light produced by the Sun means that the Earth would not be able to sustain life on dry land if most of that light were not filtered out by the atmosphere. [2] More energetic, shorter-wavelength "extreme" UV below 121 nm ionizes air so strongly that it is absorbed before it reaches the ground. [3] However, ultraviolet light (specifically, UVB) is also responsible for the formation of vitamin D in most land vertebrates, including humans. [4] The UV spectrum, thus, has effects both beneficial and harmful to life.
Ultraviolet > Solar ultraviolet: https://en.wikipedia.org/wiki/Ultraviolet#Solar_ultraviolet
> The atmosphere blocks about 77% of the Sun's UV, when the Sun is highest in the sky (at zenith), with absorption increasing at shorter UV wavelengths. At ground level with the sun at zenith, sunlight is 44% visible light, 3% ultraviolet, and the remainder infrared. [23][24] Of the ultraviolet radiation that reaches the Earth's surface, more than 95% is the longer wavelengths of UVA, with the small remainder UVB. Almost no UVC reaches the Earth's surface. [25] The fraction of UVB which remains in UV radiation after passing through the atmosphere is heavily dependent on cloud cover and atmospheric conditions. On "partly cloudy" days, [...]
(Infrared light is, roughly, heat.
There's usually plenty of waste heat available: from mechanical friction, from exothermic chemical processes, from lossy AC-DC conversion, and from electronic components that shed energy as heat like steaming water towers, absent superconducting channels wide enough for electrons not to tunnel out.)
Ultraviolet > Human health-related effects > harmful effects https://en.wikipedia.org/wiki/Ultraviolet#Harmful_effects
UV-A and UV-B protective eyewear (U6 polycarbonate) can be purchased in bulk. https://www.amazon.com/s?k=uvb+ansi+z87+glasses+20+pairs
Phlare: open-source database for continuous profiling at scale
What about maintenance on the existing projects? Open issues on Github:
Grafana: 2.6k issues, 275 PRs
Loki: 531 issues, 113 PRs
Mimir: 305 issues, 40 PRs
Tempo: 159 issues, 19 PRs
While they are open-source projects, I bet their software is still driven by customer feedback. I wouldn't put it past them to prioritize paying customers' requests, which takes time away from implementing features, doing bugfixes, or code review.
I don't think it's fair to say that paying customer feedback is prioritised at the expense of the open-source community.
The thing with the core Grafana product being open source is that there's not that much dissimilarity between paying/enterprise Grafana users, and open-source users. Feedback from one set will almost always work in favor for the other.
NASA finds super-emitters of methane
How could a 2 mile long methane plume in New Mexico have been undetected for any significant amount of time?
From what I understand basic environmental monitoring is done in 2022 around all major industrial facilities in the U.S.
What's interesting is that there is nothing there. The only things around are gas wells, which the whole area is dotted with, so my only guess is leaky gas wells? https://www.google.com/maps/@32.3761968,-104.0819087,4787m/d...
Looks like Marathon Oil is the company
True story. I have worked oil and gas for a while and am familiar with this area. It can easily be a well not properly P&A'd (plugged and abandoned) - quite common down there. It is most likely a Marathon well (HARROUN COM #002), or it could be Eastland Oil (HARROUN A #007). If you want to get a better idea for yourself, you can check the EMNRD - https://ocd-hub-nm-emnrd.hub.arcgis.com/
Is there a way to, IDK, 3D-print an earthen geodesic dome over the site and capture the waste methane (natural gas) into local tanks?
TIL about CBG: Cleaner Burning Gasoline
From https://twitter.com/westurner/status/1564443689623195650 :
> In September 2021 we covered a new "green gasoline" concept from @NaceroCo [in Penwell, TX] that involves constructing gasoline hydrocarbons by assembling smaller #methane molecules from natural gas
From https://www.houstonchronicle.com/opinion/outlook/article/Opi... :
> The Inflation Reduction Act imposes a fee of [$900/ton] of methane starting in 2024 — this is roughly twice the current record-high price of natural gas and five times the average price of natural gas in 2020.
> These high fees present a strong incentive
... "Argonne invents reusable [polyurethane] sponge that soaks up oil, could revolutionize oil spill and diesel cleanup" (2017) https://www.anl.gov/article/argonne-invents-reusable-sponge-...
FWIU, heat engines are useful with all thermal gradients: pipes, engines, probably solar panels and attics; "MIT’s new heat engine beats a steam turbine in efficiency" (2022) https://www.freethink.com/environment/heat-engine
I hear what you are saying, and there is a ton of room in the E&P space for improvements of all kinds. You would be shocked at how far behind we are technologically (just last year I overheard someone say, "We just figured out our cloud strategy.").
In terms of a dome over fields or units to collect stray methane, that may be an issue. We are loath to construct "enclosed spaces" for gases, as that can be a safety issue. It doesn't take much stray anything to kill you out there. We have all sorts of stories of people going into an enclosed space, passing out, and dying, only to have more people die trying to get them out. Sounds bad, I know, but this is coming from someone who has come across a few dead bodies out in the field for various reasons - mostly just being stupid.
Fees are funny in oil and gas - we complain about how much money we don't have and then spend it frivolously elsewhere. As for that inflation act: at the state level there are all sorts of those out there, and some companies care and some don't.
If you want to see something crazy, check out the NDIC (North Dakota Industrial Commission). In terms of oil and gas data, theirs is the most centralized, easily accessed, and complete in the country (NM isn't bad, CA used to be better, TX is garbage - which is odd, LA is god awful, and PA is meh). The NDIC keeps really good track of flaring, so you can see how much natural gas is just burned up at the cost of getting the oil out (there isn't a great infrastructure for moving gas, and historically the price hasn't been a good inducement to build any). To get the well-level data, it is $150/year, but well worth it if you are working that basin, and also in comparison to all of the data services out there. https://www.dmr.nd.gov/oilgas/stats/statisticsvw.asp
So there only needs to be a bit of concrete in a smaller structure that exceeds bunker-busting bomb specs and 'funnels' (?) the natural gas to a tank or a bladder?
Are there existing methods for capturing methane from insufficiently-capped old wells?
Are the new incentives/fees/fines enough to motivate action thus far in this space?
OpenAPI is one way to specify integrable APIs. An RDFS vocabulary for this data is probably worthwhile; e.g. NASA Earth Science (?) may have a schema that all of the state APIs could voluntarily adopt?
Presumably the CophenHill facility handles waste methane? We should build waste-to-energy facilities in the US, too
FWIU Carbon Credits do not cover methane, which is worse than CO² for #ActOnClimate
Natural gas isn't stored on site; it needs to be piped to the nearest plant to be processed and put into a sales line. Capturing methane from insufficiently capped old wells would not be economic in most cases. If a company was called out on it, they would just go dump more cement in it to make sure the gas is contained. 90% of the time, wells that are plugged are plugged well; the ones that aren't and are just abandoned maybe leak only 1-5 thousand cubic feet per day - nothing worth doing anything about (to the company, financially).
Fines typically mean nothing to E&P where they are now - though some have gotten clever about it. An example is how North Dakota keeps flaring down: whatever you are flaring, you have to cut your oil production in some proportionate manner (oil is the more desired product) - though that was rescinded during the last big price downturn and I am not sure if it is back in effect.
To your statement about APIs - that is one thing the oil industry is terrrrrrible with. Our data collection and cleaning is abysmal. I agree with your statement, and I would be all for it, but E&P companies can't even get their own production numbers right. A good example is the fracfocus database, where companies volunteer up their fracturing job compositions. Generally it is useful, but the people who input the data (similar to whoever would probably be handling this) can barely spell, and data cleaning would be a nightmare.
Waste-to-energy facilities are great, and there are some interesting things out in the oilfield, but, like everything else, there needs to be more financial incentive for companies to build them/use them.
So, in 2022, it's cheaper to dump cement than to capture the methane, but the new fines this year aren't enough incentive to solve for: capture to a tank and haul, build unprocessed natural gas pipelines, or process onsite and/or fill tankers onsite?
Data quality: https://en.wikipedia.org/wiki/Data_quality
... Space-based imaging.
How long should they wait to up the methane fee if it's not enough to incentivize capping closed wells?
It is totally cheaper to dump some cement (it is mostly gel with a topping of cement). To P&A older wells maybe runs $12-25K (assuming, like, a 5-8k ft. depth conventional well)... and I may be running a little high on that number. That gets a small truck out there with a small crew to pull tubing and dump alternating layers of cement and gel (cement goes on the top and across formations that would be ground-water bearing). Fun fact: if you have to go back into an abandoned well and you come across red cement at the top, that is indicative of someone losing a nuclear-based well tool in there, and you should call someone before going further.
A typical 7.5-10k foot lateral unconventional well (horizontal wells) down that way will run about $7-8 million depending (and 6 wells on average on a well pad). Those aren't really the issue, but I'm giving you some numbers to sort of show that fining someone $100K for something serious isn't that big an expense and not really a deterrent.
Natural gas lines are always a big deal to oil and gas companies - if you build it they will come. Most space in pipelines for operators is spoken for before they even dig the first trench.
Options:
A. Privately and/or publicly grant funds to P&A wells: ($25k+ * n_wells) + ($7-8M+ * m_wells)
B. Build natural gas pipelines that run past those well sites (approval,)
C. Increase the incentives/fines/fees
Shouldn't it be pretty easy to find such tools with IDK neutron detection and/or imaging at what distance?
Show HN: Linen – Open-source Slack for communities
Hi HN, My name is Kam. I'm the founder of Linen.dev. Linen communities is a Slack/Discord alternative that is Google-searchable and customer-support friendly. Today we are open-sourcing Linen and launching Linen communities. You can now create a community on Linen.dev without syncing it from Slack and Discord!
I initially launched Linen as a tool to sync Slack and Discord conversations to a search engine-friendly website. As I talked to more community managers, I quickly realized that Slack and Discord communities don't scale well and that there needs to be a better tool, especially for open-source knowledge-based communities. Traditionally these communities have lived on forums that solved many of these problems. However, from talking to communities, I found most of them preferred chat because it feels more friendly and modern. We want to bring back a bunch of the advantages of forums while maintaining the look and feel of a chat-based community.
Slack and Discord are closed apps that are not indexable by the internet, so a lot of content gets lost. Traditional chat apps are not search engine friendly because most search engines have difficulty crawling JS-heavy sites. We built Linen to be search engine friendly, and our communities have over 30,000 pages/threads indexed by Google. Our communities that have synced their Slack and Discord conversations under their domain have an additional 40,000 pages indexed. We accomplish this by conditionally server rendering pages based on whether or not the browser client is a crawler bot. This way, we can bring dynamic features and a real-time feel to Linen and support search engines.
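A minimal sketch of that kind of conditional server rendering (a hypothetical Flask handler; not Linen's actual implementation, and the bot list and template names are made up):

    from flask import Flask, render_template, request

    app = Flask(__name__)
    BOT_TOKENS = ("googlebot", "bingbot", "duckduckbot")  # illustrative list

    @app.route("/t/<thread_id>")
    def thread(thread_id):
        ua = (request.headers.get("User-Agent") or "").lower()
        if any(token in ua for token in BOT_TOKENS):
            # Crawlers get fully server-rendered HTML for indexing
            return render_template("thread_ssr.html", thread_id=thread_id)
        # Humans get the JS app shell with the real-time chat experience
        return render_template("app_shell.html")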
Most communities become a support channel, and managing this many conversations is not what these tools are designed for. I've seen community admins hack together their own syncs and internal workarounds to stay on top of the conversations. This is why we created a feed view, a single view for all the threads in all the channels you care about. We added an open and closed state to every thread so you can track them similarly to GitHub issues or a ticketing system. This way, you and your team won't miss messages and let them drop. We also allow you to filter conversations where you are @mentioned, as a way of assigning tickets. I think this is a good starting point, but there is a lot more we can improve on.
How chat is designed today is inherently interrupt-driven and disrupts your team's flow state. Most of the time, when I am @mentioning a team member, I actually don't need them to respond immediately. But I do want to make sure that they do eventually see it. This is why we want to redesign how the notification system works. We are repurposing @mentions to show up in your feed and your conversation sections and adding a !mention. A @mention will appear in your feed but doesn't send any push notifications, whereas a !mention will send a notification for when things need a real-time synchronous conversation. This lets you separate casual conversations from urgent conversations. When everything is urgent, nothing is. (credit: The Incredibles) With this, along with the feed, you get a very forum-like experience for browsing the conversations.
Linen is free with unlimited history for public communities under https://linen.dev/community domain. We monetize by offering a paid version based on communities that want to host Linen under their subdomain and get the SEO benefits without managing their own self-hosted instance.
We are a small team of 3, and this is the first iteration, so we apologize for any missing features or bugs. There are many things we want to improve in terms of UX. In the near term, we want to improve search and add more deep integrations, DMs, and private channels. We would appreciate any feedback, and if you are curious about what the experience looks like, you can join us here at Linen.dev/s/linen
If I'm an organization choosing a chat platform, why would I want to use Linen versus Mattermost, which is also self-hostable and open-source, and much more mature? Or Matrix and Element (or any other Matrix client)?
This space is getting pretty crowded, and I'm not sure why I'd want to use Linen rather than one of the many alternatives.
I think it's misleading to call Mattermost "open source". Mattermost the for-profit company makes two products, one of which is open source, and a much larger one (which is approximately feature complete wrt modern Slack-likes) which is absolutely not.
The open source product they make does not work like a normal open source project: submitted improvements to it will presumably be denied if they add functionality that exists in the proprietary and closed source second product, also called Mattermost, or if they serve users (like removing the silent and nonconsensual phone-home spyware from segment.io that they embed in the released binaries). It's a normal and expected thing in the open source world for user communities to be able to participate in the software project. That's one of the defined goals of the idea of open source: you can fix and improve it, and share your fixes and improvements.
Mattermost the company is not really an "open source company", as they build and sell proprietary software as their main source of revenue. Their goal is increasing the use of proprietary software, and every user of the open source Mattermost is a target for them to convince to stop using open source and start using proprietary software. This is presumably why they make their open source version phone home to tell them about your usage of it, so that they can try to get you to pay them to start using the proprietary one if you get big/rich enough.
Furthermore, to even contribute to the "open source" project of Mattermost, you are required to sign a CLA, because they want to be able to resell your work commercially under nonfree licenses. You'll note there is no CLA required for most real open source work, because they don't care if you retain copyright - the open source license is all they need. There is no CLA required to contribute to Linux or gcc.
Mattermost the software is not really an "open source project" because you can't meaningfully improve the software as the maintainer has a vested financial interest in keeping basic features (like message expiry/retention periods/SSO) out of it to direct the community to become paid customers of their proprietary software.
Mattermost wants to use the work of contributors to an open source project to further their proprietary software goals, while maintaining their open source bait as a neutered stub.
It's open source in name only. The real Mattermost product is proprietary and the open source version only exists as a fake open source project to serve as an onramp for selling their closed source proprietary one.
> [Open-core Software Firm X] wants to use the work of contributors to an open source project to further their proprietary software goals, while maintaining their open source bait as a neutered stub.
Source-available software: https://en.wikipedia.org/wiki/Source-available_software
Open-core model > Examples: https://en.wikipedia.org/wiki/Open-core_model#Examples
Who can merge which pull requests to the most-tested and most-maintained branch of a fork?
CLAs are advisable regardless of software license.
Re: Free and Open Source governance models: https://twitter.com/westurner/status/1308465144863903744
Protobuf-ES: Protocol Buffers TypeScript/JavaScript runtime
Arrow Flight RPC (and Arrow Flight SQL, a faster alternative to ODBC/JDBC) are based on gRPC and protobufs: https://arrow.apache.org/docs/format/Flight.html:
> Arrow Flight is an RPC framework for high-performance data services based on Arrow data, and is built on top of gRPC and the IPC format.
> Flight is organized around streams of Arrow record batches, being either downloaded from or uploaded to another service. A set of metadata methods offers discovery and introspection of streams, as well as the ability to implement application-specific methods.
> Methods and message wire formats are defined by Protobuf, enabling interoperability with clients that may support gRPC and Arrow separately, but not Flight. However, Flight implementations include further optimizations to avoid overhead in usage of Protobuf (mostly around avoiding excessive memory copies).
"Powered By Apache Arrow in JS" https://arrow.apache.org/docs/js/index.html#powered-by-apach...
What's with your slightly on-topic but mostly off-topic comments? Your post history almost looks like something written by GPT-3, following the same format and always linking to a lot of external resources that only briefly touch the subject.
It looks like a bot which scans a comment, identifies some buzz phrases, and quotes those lines, replying with generic linked information (e.g. wikipedia) about those phrases.
No.
Read: https://westurner.github.io/hnlog/
If you have a question about what I just took the time to share with you here, that would be great. Otherwise, I'm going to need you on Saturday.
[deleted]
We need a replacement for TCP in the datacenter [pdf]
FWIU, barring FTL/superluminal communication breakthroughs (and controls), Deep Space Networking needs a new TCP as well:
From https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_t... :
void tcp_retransmit_timer(struct sock *sk) {
/* Increase the timeout each time we retransmit. Note that
* we do not increase the rtt estimate. rto is initialized
* from rtt, but increases here. Jacobson (SIGCOMM 88) suggests
* that doubling rto each time is the least we can get away with.
* In KA9Q, Karn uses this for the first few times, and then
* goes to quadratic. netBSD doubles, but only goes up to *64,
* and clamps at 1 to 64 sec afterwards. Note that 120 sec is
* defined in the protocol as the maximum possible RTT. I guess
* we'll have to use something other than TCP to talk to the
* University of Mars.
*
* PAWS allows us longer timeouts and large windows, so once
* implemented ftp to mars will work nicely. We will have to fix
* the 120 second clamps though!
*/
/? "tp-planet" "tcp-planet"
https://www.google.com/search?q=%22tp-planet%22+%22tcp-plane... https://scholar.google.com/scholar?q=%22tp-planet%22+%22tcp-...
A Message from Lunny on Gitea Ltd. and the Gitea Project
> I created Gitea
No mention of Gogs anywhere, really classy.
(Gitea is a fork of Gogs)
> Gitea was created by Lunny Xiao, who was also a founder of the self-hosted Git service Gogs.
Gogs is a clone of GitHub and GitLab (which were both originally written in Ruby with the Ruby on Rails CoC Convention-over-Configuration Web Framework), which were built because Trac didn't support Git or multiple projects, Sourceforge didn't support Git or on-prem, and git patchbombs as attachments over mailing lists needed Pull Requests; and Issues and PRs should pull from the same sequence of autoincrement integer keys.
- You can do ~GitHub Pages with Gitea and an idempotent git post-receive-hook that builds static HTML from a repo revision, tests, deploys to revid/ and updates a latest/ symlink, and logs; or with HTTP webhooks. (A sketch follows this list.)
- "Feature: Allow interacting with tickets via email" https://github.com/go-gitea/gitea/issues/2386#issuecomment-6...
- It's not safe to host Gitea on the same server as the CI (e.g. DroneCI) host if you grant permissions to the docker socket to the CI container: you need another VM at least to run the CI controller and workers on_push() with Gitea. https://docs.drone.io/server/provider/gitea/ :
> Please note we strongly recommend installing Drone on a dedicated instance. We do not recommend installing Drone and Gitea on the same machine due to network complications, and we definitely do not recommend installing Drone and Gitea on the same machine using docker-compose.
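Re: the ~GitHub Pages post-receive hook above, a hypothetical sketch; the deploy root, branch name, and `make html` build step are all assumptions, not Gitea's actual mechanism:

    #!/usr/bin/env python3
    import os, subprocess, sys, tempfile

    DEPLOY_ROOT = "/var/www/site"  # assumption

    for line in sys.stdin:  # git feeds "<oldrev> <newrev> <refname>" per updated ref
        old, new, ref = line.split()
        if ref != "refs/heads/main":
            continue
        workdir = tempfile.mkdtemp(prefix="deploy-")
        subprocess.run(["git", "worktree", "add", "--detach", workdir, new], check=True)
        subprocess.run(["make", "html"], cwd=workdir, check=True)   # build + test
        revdir = os.path.join(DEPLOY_ROOT, new[:12])                # deploy to revid/
        os.rename(os.path.join(workdir, "_build", "html"), revdir)  # assumes same filesystem
        latest = os.path.join(DEPLOY_ROOT, "latest")                # update latest/ symlink
        if os.path.lexists(latest):
            os.remove(latest)
        os.symlink(revdir, latest)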
GitHub and GitLab centralize git for project-based collaboration, which is itself a distributed system.
Linux System Call Table – Chromiumos
System call: https://en.wikipedia.org/wiki/System_call
Strace and similar tools can show which kernel system calls a process makes: https://en.wikipedia.org/wiki/Strace#Similar_tools
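For example (assuming strace is installed), tracing the openat(2) calls of `ls`:

    import subprocess

    # strace prints each matching syscall to stderr as the child runs:
    subprocess.run(["strace", "-e", "trace=openat", "ls", "/tmp"])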
Google/syzkaller https://github.com/google/syzkaller :
> syzkaller ([siːzˈkɔːlə]) is an unsupervised coverage-guided kernel fuzzer. Supported OSes: Akaros, FreeBSD, Fuchsia, gVisor, Linux, NetBSD, OpenBSD, Windows
Fuchsia / Zircon syscalls: https://fuchsia.dev/fuchsia-src/reference/syscalls
"How does Go make system calls?" https://stackoverflow.com/questions/55735864/how-does-go-mak...
Variability, not repetition, is the key to mastery
Over how many generations?
Genetic algorithm: https://en.wikipedia.org/wiki/Genetic_algorithm
Mutation: https://en.wikipedia.org/wiki/Mutation_(genetic_algorithm)
Crossover: https://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)
Selection: https://en.wikipedia.org/wiki/Selection_(genetic_algorithm)
...
AlphaZero / MuZero: https://en.wikipedia.org/wiki/MuZero :
> MuZero was trained via self-play, with no access to rules, opening books, or endgame tablebases.
Self-play algorithms essentially mutate and select according to the game rules. For a generally-defined mastery objective, are there subjective and/or objective game rules, and is there a distance metric for ranking candidate solutions?
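A minimal genetic-algorithm sketch wiring those operators together (toy objective; all names are illustrative):

    import random

    def fitness(x):                 # selection criterion ("game rule")
        return -(x - 3.14) ** 2

    def mutate(x, sigma=0.1):       # mutation: small random variation
        return x + random.gauss(0, sigma)

    def crossover(a, b):            # crossover: blend two parents
        return (a + b) / 2

    population = [random.uniform(-10, 10) for _ in range(50)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]   # selection: keep the fittest
        children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(40)]
        population = parents + children

    print(max(population, key=fitness))  # converges near 3.14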
The Docker+WASM Technical Preview
Michael Irwin from Docker here (and author of the blog post too). Happy to answer your questions, hear feedback, and more!
maybe a naive question: is there a way to run some form of docker in the browser? It could be a great education / demo tool
Great question! There isn't a way to run Docker directly in the browser. But there are tools (like Play with Docker at play-with-docker.com) that let you interact with a CLI in the browser to run commands against a remote cloud instance. I personally use this a lot for demos and workshops!
But... certainly a neat idea to think about what Wasm-based applications could possibly look like/run in the browser!
Is it possible to sandbox the host system from the guests in WASM?
Are there namespaces and cgroups and SECCOMP and blocking for concurrent hardware access in WASM, or would those kernel protections be effective within a WASM runtime? Do WASM runtimes have subprocess isolation?
/? subprocess isolation https://www.google.com/search?q=subprocess+isolation on a PC:
- TIL about the Endokernel: "The Endokernel: Fast, Secure, and Programmable Subprocess Virtualization" (2021) https://arxiv.org/abs/2108.03705#
> The Endokernel introduces a new virtual machine abstraction for representing subprocess authority, which is enforced by an efficient self-isolating monitor that maps the abstraction to system level objects (processes, threads, files, and signals). We show how the Endokernel can be used to develop specialized separation abstractions using an exokernel-like organization to provide virtual privilege rings, which we use to reorganize and secure NGINX. Our prototype, includes a new syscall monitor, the nexpoline, and explores the tradeoffs of implementing it with diverse mechanisms, including Intel Control Enhancement Technology. Overall, we believe sub-process isolation is a must and that the Endokernel exposes an essential set of abstractions for realizing this in a simple and feasible way.
Sandbox (computer security) > Implementations https://en.wikipedia.org/wiki/Sandbox_(computer_security)
- [x] Linux containers
- [ ] WASM with or without WASI
eWASM has costed opcodes; basically like dynamic tracing in CPython.
Are there side channels for many or most of these sandboxing methods; even at the CPU level?
google/gvisor could be useful for this? https://github.com/google/gvisor :
> gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system surface. It includes an Open Container Initiative (OCI) runtime called runsc that provides an isolation boundary between the application and the host kernel.
Python 3.11.0 Released
"What’s New In Python 3.11" https://docs.python.org/3.11/whatsnew/3.11.html
Tomorrow the Unix timestamp will get to 1,666,666,666
Approximations of Pi: https://en.wikipedia.org/wiki/Approximations_of_π
> In the 3rd century BCE, Archimedes proved the sharp inequalities 223⁄71 < π < 22⁄7, by means of regular 96-gons (accuracies of 2·10−4 and 4·10−4, respectively).
223/71 = 3.1408450704225
666/212 = 3.1415094339622
π = 3.14159265359
22/7 = 3.1428571428571
Is π a good radix for what types of math in addition to Trigonometry? And then what about e for natural systems; the natural log.
"Why do colliding blocks compute pi?" https://youtu.be/jsYwFizhncE https://www.3blue1brown.com/lessons/clacks-solution https://www.reddit.com/r/3Blue1Brown/comments/r29vm5/rationa... ... Geogebra: https://www.geogebra.org/m/BhxyBJUZ :
> The applet shows the old method used to approximate the value of π. Archimedes used a 96-sided polygons to find that the value of π is 223/71 < π < 22/7 (3.1408 < π < 3.1429). In 1630, an Austrian astronomer Christoph Grienberger found a 38-digit approximation by using 10^40-sided polygons. This is the most accurate approximation achieved by this method.
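A short sketch of that polygon-doubling method, using the standard harmonic/geometric-mean recurrence for circumscribed and inscribed perimeters around a unit-diameter circle:

    import math

    a = 2 * math.sqrt(3)  # circumscribed hexagon perimeter: upper bound
    b = 3.0               # inscribed hexagon perimeter: lower bound
    sides = 6
    while sides < 96:
        a = 2 * a * b / (a + b)  # circumscribed 2n-gon: harmonic mean
        b = math.sqrt(a * b)     # inscribed 2n-gon: geometric mean
        sides *= 2

    print(f"{b:.4f} < pi < {a:.4f}")  # 3.1410 < pi < 3.1427, cf. 223/71 < pi < 22/7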
Bringing Modern Authentication APIs (FIDO2 WebAuthn, Passkeys) to Linux Desktop
WebAuthN: https://en.wikipedia.org/wiki/WebAuthn
FIDO2: https://en.wikipedia.org/wiki/FIDO2_Project
Arch/WebAuthN: https://wiki.archlinux.org/title/WebAuthn
U2F/FIDO2: https://wiki.archlinux.org/title/Universal_2nd_Factor
TPM: https://wiki.archlinux.org/title/Trusted_Platform_Module#Oth...
Seahorse: https://en.wikipedia.org/wiki/Seahorse_(software)
GNOME Keyring: https://en.wikipedia.org/wiki/GNOME_Keyring
tpmfido: https://github.com/psanford/tpm-fido
"PEP 543 – A Unified TLS API for Python" (withdrawn*) https://peps.python.org/pep-0543/#interfaces
> Specifying which trust database should be used to validate certificates presented by a remote peer.
certifi-system-store https://github.com/tiran/certifi-system-store/blob/main/src/...
truststore/_openssl.py: https://github.com/sethmlarson/truststore/blob/main/src/trus...
"Help us test system trust stores in Python" (2022) https://sethmlarson.dev/blog/help-test-system-trust-stores-i... :
    python -m pip install \
        --use-feature=truststore Flask
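truststore can also be injected directly; a minimal sketch of its documented `inject_into_ssl()` usage:

    import truststore
    truststore.inject_into_ssl()  # use the OS trust store for ssl verification

    import urllib.request
    urllib.request.urlopen("https://example.com")  # now validated against system certs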
Science, technology and innovation isn’t addressing world’s most urgent problems
> "Changing directions: Steering science, technology and innovation for the Sustainable Development Goals" found that research and innovation around the world is not focused on meeting the UN's Sustainable Development Goals, which are a framework set up to address and drive change across all areas of social justice and environmental issues.
https://globalgoals.org/ #GlobalGoals #SDGs #Goal17
Each country (UN Member State) prepares an annual country-level report - an annual SDG report - on their voluntary progress toward their self-defined Targets (which are based upon Indicators; stats).
Businesses that voluntarily prepare a sustainability report necessarily review their SDG-aligned operations' successes and failures. The GRI Corporate Sustainability report is SDG aligned; so if you prepare an annual Sustainability report, it should be easy to review aligned and essential operations.
GRI Global Reporting Initiative is also in NL: https://en.wikipedia.org/wiki/Global_Reporting_Initiative
> Critically, the report finds that research in high-income and middle-income countries contributes disproportionally to a disconnect with the SDGs. Most published research (60%-80%) and innovation activity (95%-98%) is not related to the SDGs.
Strategic alignment: https://en.wikipedia.org/wiki/Strategic_alignment
https://USAspending.gov resulted from tracking State-level grants in IL: the Federal Funding Accountability and Transparency Act: https://en.wikipedia.org/wiki/Federal_Funding_Accountability...
Unfortunately, https://performance.gov/ and https://USAspending.gov/ do not have any way to - before or after funding decisions - specify that a funded thing is SDG-aligned.
IMHO, we can easily focus on domestic priorities and also determine where our spending is impactful in regards to the SDGs.
> Illustrating the imbalance, the report found that 80 percent of SDG-related inventions in high-income countries were concentrated in just six of the 73 countries
Lots of important problems worth money to folks:
#Goal1 #NoPoverty
#Goal2 #ZeroHunger
#Goal3 #GoodHealth
#Goal4 #QualityEducation
#Goal5 #GenderEquality
#Goal6 #CleanWater
#Goal7 #CleanEnergy
#Goal8 #DecentJobs
#Goal9 #Infrastructure
#Goal10 #ReduceInequality
#Goal11 #Sustainable
#Goal12 #ResponsibleConsumption
#Goal13 #ClimateAction
#Goal14 #LifeBelowWater
#Goal15 #LifeOnLand
#Goal16 #PEACE #Justice
#Goal17 #Partnership #Teamwork
If you label things with #GlobalGoal hashtags, others can find solutions to the very same problems.
Quantum Monism Could Save the Soul of Physics
It reminds me of Sean Carroll's Hilbert Space Fundamentalism.
Sean M. Carroll: Reality as a Vector in Hilbert Space
Hilbert space https://en.wikipedia.org/wiki/Hilbert_space :
From sympy.physics.quantum.hilbert https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :
    __all__ = [
        'HilbertSpaceError',
        'HilbertSpace',
        'TensorProductHilbertSpace',
        'TensorPowerHilbertSpace',
        'DirectSumHilbertSpace',
        'ComplexSpace',
        'L2',
        'FockSpace'
    ]
From sympy.physics.quantum.operator https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :

    __all__ = [
        'Operator',
        'HermitianOperator',
        'UnitaryOperator',
        'IdentityOperator',
        'OuterProduct',
        'DifferentialOperator'
    ]
From sympy.physics.quantum.operatorset https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :

    """A module for mapping operators to their corresponding eigenstates
    and vice versa

    It contains a global dictionary with eigenstate-operator pairings.
    If a new state-operator pair is created, this dictionary should be
    updated as well.

    It also contains functions operators_to_state and state_to_operators
    for mapping between the two. These can handle both classes and
    instances of operators and states. See the individual function
    descriptions for details.

    TODO List:
    - Update the dictionary with a complete list of state-operator pairs
    """
From sympy.physics.quantum.represent https://github.com/sympy/sympy/blob/master/sympy/physics/qua... :

    """Logic for representing operators in state in various bases.

    TODO:
    * Get represent working with continuous hilbert spaces.
    * Document default basis functionality.
    """
# ...
    __all__ = [
        'represent',
        'rep_innerproduct',
        'rep_expectation',
        'integrate_result',
        'get_basis',
        'enumerate_states'
    ]
# ...
    def represent(expr, **options):
        """Represent the quantum expression in the given basis.
"I am one with the universe"From tequila/simulators/simulator_cirq https://github.com/tequilahub/tequila/blob/master/src/tequil... :
from tequila.wavefunction.qubit_wavefunction import QubitWaveFunction
From tequila.circuit.qasm https://github.com/tequilahub/tequila/blob/master/src/tequil... :> """ Export QCircuits as qasm code OPENQASM version 2.0 specification from:
> A. W. Cross, L. S. Bishop, J. A. Smolin, and J. M. Gambetta, e-print arXiv:1707.03429v2 [quant-ph] (2017). https://arxiv.org/pdf/1707.03429v2.pdf
Why are you posting this?
Because there is a functional, executable symbolic-algebra implementation of such Hilbert spaces and their practical representations (and qubit applications); it's approachable because it isn't ambiguous MathTeX without automated tests and test assertions.
Because it's easier to learn math by preparing a notebook with MathTeX and/or SymPy expressions (which have a MathTeX representation) and then making test assertions about the symbolic expression, and/or `assert np.allclose()` assertions with real-valued parameters after symbolic construction and derivation.
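For example, a minimal sketch (the identity and names are illustrative):

    import numpy as np
    import sympy as sp

    x = sp.symbols('x')
    expr = sp.sin(x)**2 + sp.cos(x)**2

    # Test assertion about the symbolic expression:
    assert sp.simplify(expr - 1) == 0

    # Numeric spot-check with real-valued parameters:
    f = sp.lambdify(x, expr, 'numpy')
    xs = np.linspace(-np.pi, np.pi, 101)
    assert np.allclose(f(xs), 1.0)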
Geothermal may beat batteries for energy storage
Many of the greenhouse Youtube channels I watch do a smaller DIY version of this. They have either dark barrels of water in the greenhouse with the windows facing south to the sun, or a thin metal wall filled with clay and pipes acting as a sun-heat-battery. They pump the water through the barrels or clay battery into pipes that are under ground to store the heat. In winter time they extract the heat from the ground keeping the greenhouse warm using a combination of solar and commercial power for the pumps. Heat is also extracted from compost bins at each end of the greenhouse. This works well in extremely cold climates in Canada and Alaska. A few of these folks do all of this without using any electricity at all and somehow manage to get water moving through the pipes using heat convection alone.
I've been thinking about doing something like this but connecting it to pipes under the foundation of my home so I can get rid of the wallboard heaters or just leave them off. Electricity is the only commercial utility near me.
This reminds me a lot of the Earthship community in the middle of the desert, which always focuses on smart ways of reusing water for different purposes and on not wasting water as much as possible. The houses usually have a clean/grey/blackwater + a rain-worm-box system to process fecals and reuse them for growing plants. Their air conditioning system is basically just a pipe in the ground where the hot air flows through, cools down, and automatically gets pushed through the house when they open the windows on the roof.
I was always wondering why there are no systems converting the unused electricity into potential energy by moving water to higher ground. And more importantly: why there is no water storage on the roof.
> I was always wondering why there are no systems converting the unused electricity in potential energy by moving water to a higher ground
There are a bunch of pumped storage facilities around [1]. But they work best at massive scale, so suitable locations are somewhat limited. Plus they are expensive to build and often face environmental protests (similar to building dams). Still, it's a solution I'm a fan of.
[1] https://en.wikipedia.org/wiki/List_of_pumped-storage_hydroel...
Emphasis on massive scale.
Moving 500,000 kg (over 1 million pounds) 7.5 meters (~25 feet aka the height of a house) will give you about 10 kWh of energy. This is equivalent to running a 425W device all day, like a small air conditioner. The relationship is linear. Double the weight or the distance to double the energy. All of the metal at a scrap yard I know of amounts to less than half that weight, for reference.
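A back-of-envelope check of those figures in Python (E = mgh; the numbers are from the comment above):

    m = 500_000        # mass in kg (over 1 million pounds)
    g = 9.81           # m/s^2
    h = 7.5            # height in meters (~25 feet)
    E_kWh = m * g * h / 3.6e6   # joules -> kWh
    print(round(E_kWh, 1))      # ~10.2 kWh, i.e. ~425 W sustained for 24 hours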
I'm also a fan because pumped storage is a really interesting storage method, but it is beyond niche. It is very tough to move that kind of weight around efficiently for what you get back. Pumping water to great heights is not easy either. (see also: moving rail-carts up a mountain)
They are building a lot of them in China
https://en.wikipedia.org/wiki/List_of_pumped-storage_hydroel...
Makes sense, it is definitely a useful tool. I just think it is insufficient to act as storage. It can be good at producing variable amounts of Watts on demand, but not so good at storing enough Watt-hours to keep things running for very long. I can see great appeal in using it to load-balance a significant amount of choppiness between supply and demand on the hourly timescale.
For something like solar, where we will want to store over half our daily energy production at peak storage (ideally 2-3 days' worth, I think), I don't think it holds up. Additionally, it doesn't seem like a good bet as a primary mechanism for either storage or on-demand generation if energy consumption continues to increase, due to the rather large coefficients involved in scaling it up.
"The United States generated 4,116 terawatt hours of electricity in 2021"[1]
4,116 TWh/year = 11.2 TWh/day
The storage capacities for the largest items listed on the wiki are on the order of GWh. The scale goes kilo-, mega-, giga-, then tera-. So we are talking about a need on the order of a thousand pumped-storage facilities per country. The US would need over 50 of them per state (on average) in order to keep everything running without production for 24 hours. It doesn't matter how many solar panels we have: if we get one dark day, then we would run out of power. If we tried to rely on solar entirely, we'd also still need very roughly half that amount of storage just to get through the night.
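Roughly, in Python (the assumed per-facility capacity is illustrative, not from the wiki):

    us_daily_twh = 4116 / 365        # ~11.3 TWh of US demand per day
    facility_gwh = 10                # assumed capacity of a large pumped-storage plant
    print(round(us_daily_twh * 1000 / facility_gwh))   # ~1,100 facilities for one day of demand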
Lithium batteries are obviously much better suited for overnight storage, but I have no idea what the numbers are on how much lithium is physically available to use as such storage.
If we want to get on the order of monthly to yearly storage - to allow, for example, solar panels in Alaska to provide enough energy for a resident to get through months of darkness - I have no idea what the leading storage options are; probably still lithium.
[1]https://www.statista.com/statistics/188521/total-us-electric...
Sodium ion is expected to sharply take over cost-limited applications some time in the next couple of years. There are pilot mass-production programs designed to avoid scarce materials that drop into existing processes. Natron has products on the market (at presumably high cost) targeting datacenters for high-safety applications.
For longer-scale storage it's a tossup between opportunistic pumped hydro, CAES where geology makes it easy, hydrogen in similar areas with caverns, ammonia, synthetic hydrocarbons, sodium ion, and one of the emerging molten salt or redox flow battery technologies. Lithium isn't really in the running due to resource limits.
Wires also have a lot of value for decreasing the need for storage. Joining wind and solar 1000s of km apart can greatly reduce downtime. Replacing as much coal and oil with those, and maintaining the OCGT and CCGT fleet is the fastest and most economic way to target x grams of CO2e per kWh where x is some number much smaller than the 400 of pure fossil fuels but bigger than around 50. Surplus renewable power (as adding 3 net watts of solar is presently cheaper than the week of storage to get an isolated area through that one week where capacity is 1/3rd the average) will subsidize initial investments into better storage and electrolysis with no further interventions needed.
Awesome response. I've come across the molten salt option but havent researched in depth. I saw it referenced as something a lot of scientists are hyping up, but I am not sure what kind of engineering challenges exist for implementation and maintenance.
Second paragraph is a bit too information dense, I had trouble following some of it. Renewable energy deficiencies will be localized, so i understand how wires help here. A larger connected area produces more stability, makes sense. Agreed with the carbon reduction priority to tackle coal and oil first. Surplus renewable power acting as a subsidy checks out, but that is skirting around the energy storage problem imo. Sounds like you are saying "instead of storing renewable energy, get more than you need and sell it back to the grid and then use those funds to buy the energy back later". This would certainly work for local consumers, but doesnt do too much to help the power grid itself manage what to do with the surplus energy. Sell it to neighboring power grids? Ties in to the first point about connecting a larger area - but what are the limits here? Can we physically connect the sunny side of earth to the dark side? (ignoring that it seems logistically/legally prohibitive)
The question really comes down to: what should we be spending money on to get "better storage"? What are the best solutions for long-term local storage?
> The question really comes down to: what should we be spending money on to get "better storage"? What are the best solutions for long-term local storage?
The solution I'm proposing is basically 'the best place to spend your money on storage is to not spend it on storage yet'
If the goal is to reduce emissions ASAP, then focusing on the strategy that removes x% of 100% of the emissions, rather than 100% of y% of the emissions, makes sense unless there are enough resources/money that y% is more than x%. For example, cheaply shaving 30% off all emissions does more than fully decarbonizing a sector that accounts for only 10% of them. And storage is currently expensive enough that you'd need many times as much money for the reverse to be true, to 99.9% confidence.
Getting a wind + solar system that delivers at least y watts at least, e.g., 90% of the time is remarkably affordable already, and still getting cheaper.
In excellent climates new solar costs less per MWh than fuel for a gas turbine (and is not far off fuel for a nuclear reactor). Wind is not much more. Distribution, dealing with less than ideal sites and oversupply increase the cost, but an ideal mix has very little storage (4-12 hours) which can be delivered by lithium batteries.
By relying on the existing fossil fuel/hydro/nuclear/whatever to pick up the last 10% for now, you can replace more coal/oil more quickly than with other strategies. During this, build all storage technologies where they make the most sense, so that when that last 10% is needed, prices will have dropped. I'm fairly sure some mix of green hydrogen and green ammonia burning in those same turbines will be one of the winners (ammonia in particular has negligible marginal cost of capacity, allowing for a strategic reserve, and will be needed to replace fossil-fuel-derived fertilizer anyway).
In the unlikely case that there's an overnight $2 trillion investment in new wind/solar/powerlines and production capacity to match in the US then choosing a dispatchable power source from some or all of: expensive green hydrogen, expensive abundant existing batteries, expensive pumped hydro, and expensive nuclear or immediately going all in on commercialising every vaguely promising electrolyser tech becomes the priority.
Completely agree with the hybrid approach w.r.t. reducing emissions. I am talking more about work that would be done concurrently with that.
> During this, build all storage technologies where they make the most sense, so that when that last 10% is needed, prices will have dropped
This is kind of the point of what I'm getting at. Without any investment, none of the storage technologies are going to make much progress. If not financial investment, then at least a time investment from research/science teams. Then again, maybe opportunism and the free market will take care of this, and we can assume any progress that can be made will be made by people trying to make a name for themselves or be first to market. I'm still curious to size up what that progress might look like, for discussion/entertainment purposes, in any case.
Good storage solutions would immediately pay dividends through arbitrage, which would keep electric prices stable, and then anywhere renewable energy generation is more than demand and storage is sufficient, that stable price point could come down below the cost of using coal/oil as well as any other continuous production method. We would be able to consolidate power generation over time, not just space, and realize gains from that. As in, use massive bursts of energy production to top off storage and use them to exactly meet demand. Maybe this opens the door for more alternative energy production methods as well (that are better suited for burst than steady)
In terms of promising technologies, they're broadly categorisable as thermal, kinetic, battery/fuel cell, and thermochemical. Most of the promising ones are far enough along the learning curve that other markets (such as green hydrogen/ammonia for fertiliser driving electrolysers and small scale/more efficient chemical reactors) will drive the learning curve.
Thermal storage concepts include:
Molten salt thermal: short/medium-term storage for high-grade heat. Most high-grade heat is dispatchable (fire) and so doesn't make sense to store, or expensive (solar thermal, nuclear) and so isn't worth pursuing.
Sand thermal batteries: low-grade heat for the medium/long term. Only useful for heating and some industrial purposes. Has a minimum size (neighborhood). Literally dirt cheap.
Thermochemical: I guess this is kind of a fuel? The use case is low-grade heat, so it can go here. Phase-change materials like sodium acetate, or reversible solutions like NaOH, seem really appealing for heating. Back of envelope says it's close to competitive with electric heating, so I'd expect more attention, as it's cheaper than any technology that stores work. No idea why it isn't being rolled out. You could even charge it with heat pumps for extremely high efficiency if needed.
Kinetic:
Lifting stuff. Only really works for water without large subsidies and only if you already have at least one handy reservoir like a watershed or cavern. No reason to expect it would suddenly get cheaper as digging holes and moving big things is already something lots of industries try to do cheaply. Great addition to existing hydro.
Sinking stuff (using buoys to store energy). I can't comprehend how this can be viable. I have seen it espoused, but it doesn't pass back of the envelope test unless I did a dumb.
Squashing stuff. Compressed air energy storage. Tanks are just barely competitive with last-gen batteries capacity-wise; efficiency isn't great. There are concepts for underwater bladders (let the water do the holding) or cavern-based storage that seem viable at current rates. Achievable with abundant materials, so worst case scenario we nut up and spend $500/kWh. Keywords: CAES, cavern or underwater energy storage.
Battery/fuel cell:
Lithium ion: One of the best options currently. Will be heavily subsidised by car buyers. Has hit the limits of current mining production, which puts a floor on price, and is ecologically devastating.
X ion where x is probably sodium: Great slot-in replacement. Barring large surprises, expect it to replace LiFePO4 very soon for most uses. Expect the learning rate of lithium ion manufacturing to continue, resulting in a sharp drop to $60/kWh in 2021 dollars and eventual batteries around $30/kWh. Keyword: Natron (they have just brought their first product to market and are working with other parts of the supply chain to scale up).
Flow batteries, air batteries and fuel cells. These are almost the same concept: you have a chemical reaction that makes electricity with a circular resource like hydrogen, methane, ammonia, or electrolyte. The downside is that most versions require a prohibitive amount of some metal like ruthenium or vanadium. Not a fundamental limit, but not sure it will be a great avenue, as the research goes back a fair ways. Aluminum-air batteries are one interesting concept: essentially turning Al smelters into fuel production facilities. Keywords: iron-air, aluminum-air, redox flow, direct methane fuel cell, ammonia fuel cell, ammonia cracking, nickel fuel cell.
Molten salt batteries. Incredibly simple, cheap and scalable concept that has no problems with dendrites (and so theoretically no cycle limit), with one limitation on portability (they must be hot; sloshing is bad) and one as-yet insurmountable deal-breaking flaw (incredibly corrosive material next to an airtight insulating seal). Look up Ambri for details of an attempt which has presumably failed by now. There is a more recent attempt using a much lower temperature salt and sodium-sulfur which shows promise. Keywords: Ambri, sodium-sulfur battery.
Thermochemical:
Any variation on burning stuff you didn't dig up.
Hydrogen is hard to store for more than a few days' worth, but underground caverns could help. I expect a massive scandal about fugitive hydrogen, toxicity and greenhouse effect sometime in the 2030s. It's borderline competitive to make now. The main limitations are the cost of energy (solved by more wind and solar and more 4-hour storage) and the cost of capital (platinum/palladium/ruthenium/nickel are usually required). Lots of work going on to reduce the latter and to increase power density and efficiency. If you were directing a billion dollars of public funds, this would probably be the place to put it. Keywords: $200/kW electrolyser, Hysata 95% efficient.
Methane, ammonia, dimethyl ether, methanol, etc. These are all far easier to store than hydrogen. Production needs large scale but is borderline viable already if you have cheap hydrogen. Keywords: ammonia energy storage, synthetic fuels, efuels, green ammonia, direct ammonia electrolysis.
Then there's virtual batteries.
Many loads like aluminum smelting can be much more variable than they are now. Rearranging workflows such that they can scale up or down by 50%, and changing worker tasks to suit, has the same function as storage during any period where consumption isn't zero. EVs can kinda fit here too, and kinda fit actual storage (especially if they power other things).
Biofuels. Not technically storage, more dispatchable, but it serves a similar function. Bagasse is an option for a few percent of power. Waste-stream methane is a possibility for a couple percent of power. Limited by the extremely low efficiency of photosynthesis, so something PV-based will likely be a better way of making hydrocarbons from air and sunlight. Most other 'biofuels' are either fossil fuels with extra steps or ways of getting paid green energy credits for burning native forests. Some grad student might surprise us by creating a super-algae that's 10% efficient and doesn't all get eaten if there's a single bacterium in the room. Untangling it all is hard, but I wouldn't be surprised if wind + solar + biofuels + reining in the waste was enough -- it certainly works for some people doing off-grid.
I'd expect a system based on sodium ion (or even lithium) batteries and synthetic fuels to render any fossil fuel mix unviable in the next decade or two. More scalable batteries or scalable fuel cells would hasten this somewhat.
CAES (Compressed Air Energy Storage)
"Compressed air storage vs. lead-acid batteries" (2022) https://www.pv-magazine.com/2022/07/21/compressed-air-storag... :
> Researchers in the United Arab Emirates have compared the performance of compressed air storage and lead-acid batteries in terms of energy stored per cubic meter, costs, and payback period. They found the former has a considerably lower CAPEX and a payback time of only two years.
FWIU China has the first 100MW CAES plant; it uses some external energy - not a trompe or geothermal (?) - to help compress air, in what is FWIU currently a ~one-floor facility.
Couldn't CAES tanks be filled with CO2/air to fight battery fires?
A local CO2 capture unit should be able to fill the tanks with extra CO2 if that's safe?
Should there be a poured concrete/hempcrete cask to set over burning batteries? Maybe a preassembled scaffold and "grid crane"?
How much CO2 is it safe to flood a battery farm with, with and without oxygen tanks, after the buzzer due to a detected fire/leak? There could be infrared cameras on posts and drones surrounding the facility.
Would it be cost-advisable to have many smaller tanks and compressors, each in a forkable, stackable, individually-maintainable (IDK) 40ft shipping container - due to pump curves for many smaller pumps, and resilience to node failure?
If CAES is cheaper than the cheapest existing batteries, could it be made better with new-gen ultralight hydrogen tanks for aviation, but for air ballast instead?
Do submarines already generate electricity from releasing ballast?
(FWIW, modern locomotives - which are already diesel-electric generators - do not yet have regenerative braking.)
PostgresML is 8-40x faster than Python HTTP microservices
Python is slow for ML. People will take time to realize it. The claim that most of the work is done on the GPU covers only a small fraction of cases.
For example, in NLP a huge amount of pre and post processing of data is needed outside of the GPU.
spaCy is much faster on the GPU. Many folks don't know that cuDF (a pandas implementation for GPUs) parallelizes string operations (these are notoriously slow in pandas)... shrug...
Apache Ballista and Polars do Apache Arrow and SIMD.
The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF*, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin,} but not yet PostgresML? https://h2oai.github.io/db-benchmark/
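For example, a minimal sketch of Arrow-backed, vectorized string operations in Polars (the DataFrame contents are arbitrary):

    import polars as pl

    df = pl.DataFrame({"text": ["foo bar", "baz qux"]})
    # The string op runs on Arrow memory in Rust, not in a Python loop:
    out = df.with_columns(pl.col("text").str.to_uppercase().alias("upper"))
    print(out)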
Ask HN: How to become good at Emacs/Vim?
I've tried switching from IDEs like VSCode to Emacs (with evil mode) a few times now, but I always gave up after a while because my productivity decreases. Even after 1-2 weeks it's still not close to what it was with VScode. That's frustrating. But when I watch proficient people using these editors I'm always amazed at what they can do, and they appear more productive than I am with VSCode. So with enough effort it should be a worthwhile investment.
I think my problem is the lack of a structured guide/tutorial focused on real-world project usage. I can do all basic operations, but I'm probably doing them in an inefficient way, which ends up being slower than a GUI. But I don't know what I don't know, so I don't know what commands and keybindings I should use instead or what my options are.
How did you become good at using these editors? Just using them doesn't really work because by myself I'd never discover most of the features and keybindings.
>How did you become good at using these editors? Just using them doesn't really work because by myself I'd never discover most of the features and keybindings.
I used Vim (GVim, to be specific) as part of my job in the semiconductor industry. Everybody on our team used it and we'd help each other. My recollection is not 100% certain, but I think nobody knew how awesome the user manuals were (start with vimtutor and then go through `:h usr_toc.txt` https://vimhelp.org/usr_toc.txt.html a few times).
I've also collected a list of Vim resources here: https://learnbyexample.github.io/curated_resources/vim.html
For Emacs, check out https://www.emacswiki.org/emacs/SiteMap
A document with notes on the software tool https://westurner.github.io/tools/#vim :
- [x] SpaceVim, SpaceMacs
- [ ] awesome-vim > Learning Vim https://github.com/akrawchyk/awesome-vim#learning-vim
- [x] https://learnxinyminutes.com/docs/vim/
- [ ] Vim's built-in help:
  :help help
  :h help
  :h usr_toc.txt
  :help noautoindent
- [ ] https://en.wikibooks.org/wiki/Learning_the_vi_Editor/Vim/Mod... :
  :help Command-line-mode
  :help Ex-mode
- [ ] my dotvim with regexable comments: https://github.com/westurner/dotvim/blob/master/vimrc

The VSCode GitLab extension now supports getting code completions from FauxPilot
GitLab team member here.
This work was produced by our AI Assist SEG (Single-Engineer Group)[0]. The engineer behind this feature recently uploaded a quick update about this work and other things they are working on to YouTube[1].
[0] - https://about.gitlab.com/handbook/engineering/incubation/ai-...
Why do you call this a group? Why don't you say "one of our engineers did this"? I read the linked article [1] and that seems to be the accurate situation: not a group of people including one engineer, not one engineer at a time on a rotation, but literally one person. In what way is that a group?
[1]: https://about.gitlab.com/company/team/structure/#single-engi...
Great question. I'm not entirely clear on the origin of the name and it would probably be hard for me to find the folks behind this decision on Friday evening/Saturday so I'll share my interpretation.
At GitLab, we have a clearly defined organizational structure. Within that structure, we have Product Groups[0], which are groups of people aligned around a specific category. The name "Single-Engineer Groups" reflects that this single engineer owns the category which they're focusing on.
I'll be sure to surface your question to the leader of our Incubation Engineering org. Thanks.
[0] - https://about.gitlab.com/company/team/structure/#product-gro...
Jesus Christ, John, I can't wrap my head around the concept of a single-person group!! haha
Gitlab is 600Kloc of JS and 1.8Mloc of Ruby. Of course a SEG would make sense to them.
OpenAPI, tests, and {Ruby on Rails w/ Script.aculo.us built-in back in the day, JS, and rewrite in {Rust, Go,}}? There's dokku-scheduler-kubernetes; and Gitea, a fork of Gogs, which is a github clone in Go; but Gitea doesn't do inbound email to Issues, Service Desk Issues, or (Drone,) CI with deploy to k8s revid.staging and production DNS domains w/ Ingress.
You Can Now Google the Balances of Ethereum Addresses
Looks like the data is from Etherscan.io.
Ethereum in BigQuery: https://console.cloud.google.com/marketplace/product/ethereu... and the ETL scripts: https://github.com/blockchain-etl/ethereum-etl
cmorqs-public/cmorq-eth-data in BigQuery: https://console.cloud.google.com/marketplace/product/cmorqs-...
blockchain-etl/awesome-bigquery-views has example SQL queries for querying the BigTable copy of the Ethereum blockchain: https://github.com/blockchain-etl/awesome-bigquery-views
Jupyter Notebooks showing how to query the Ethereum BigQuery Public Dataset:
/? Ethereum Kaggle: https://www.google.com/search?q=ethereum+kaggle
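A hedged sketch of querying the public dataset from Python (assuming the google-cloud-bigquery client, configured GCP credentials, and the `bigquery-public-data.crypto_ethereum.balances` table published by blockchain-etl):

    from google.cloud import bigquery

    client = bigquery.Client()   # assumes GCP credentials are configured
    query = """
        SELECT address, eth_balance
        FROM `bigquery-public-data.crypto_ethereum.balances`
        ORDER BY eth_balance DESC
        LIMIT 10
    """
    for row in client.query(query).result():
        print(row.address, row.eth_balance)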
Blender: Wayland Support on Linux
I don't know much about the whole space, but from what I've read so far about Wayland, it starts to feel a lot like yet another "perpetual next-gen" tech, such as IPv6, fuel cells, the Semantic Web or XHTML2 (before that one was officially declared dead at least).
Like, according to the linked article, the standard has been out there since 2008 - so the adoption period is already 14 years! And people are still haggling over basic stuff like color management, mouse cursors, and window decorations?
What exactly is the envisioned timeframe for Wayland to replace X11 as the dominant windowing system?
If the standard has been promoted as the obvious next step for linux desktop environments for 14 years but still hasn't actually caught on, are we sure it really is the right direction to go?
I cannot say for sure that it is true, but I read (about a year ago) that about 90% of Linux users are already on Wayland.
I tend to believe it because I've seen news reports as of 3 years ago saying that Xserver is no longer being maintained.
The change from Xserver to an arrangement using Wayland is transparent to the Linux user (and Linux admin), and I heard that most of the major distros made the transition a few years ago. A corollary of that, if it is true, is that some of the many Linux users appearing on this site to attack Wayland are in fact (unbeknownst to themselves) using Wayland.
Specifically, the "display server" (terminology? I mean the software that talks to the graphics driver and the graphics hardware) on most distros these days uses the Wayland protocol to talk to apps. An app that has not been modified to use the Wayland protocol to talk to the display server is automatically given a "connection" (terminology?) to XWayland, whose job is to translate the X protocol to the Wayland protocol.
I think `printenv XDG_SESSION_TYPE` will tell you whether you are running Wayland or the deprecated Xserver.
The OP begins, "Recently we have been working on native Wayland support on Linux." What that means is that the blender app no longer needs XWayland: it can talk directly to the display server (using the Wayland protocol). There are certain advantages to that: one advantage is that you can configure all the UI elements on your screen to be scaled by an arbitrary factor without everything getting blurry.
I'm using the latest MacOS to make this comment, but for over a year until a few weeks ago, I was using Linux for all my computing needs, and I went out of my way to run only apps that used the Wayland protocol to talk to the display server (because of the aforementioned ability to scale the UI without blurriness). Chrome had to be started with certain flags for it to use the Wayland protocol. To induce Emacs to speak Wayland, I had to use a special branch of the git source repo, called feature/pgtk.
Do you think Apple will ever contribute XQuartz back to the X11 / X.org open source community?
XQuartz has been a part of X.Org since forever
https://gitlab.freedesktop.org/xorg/xserver/-/tree/master/hw...
Xpra: Multi-platform screen and application forwarding system for x11
I do almost all my computing through Xpra these days. Being able to combine windows seamlessly from multiple VMs is a much more usable way to segregate workloads, and Xpra doesn't suffer from the same security (and increasingly compatibility) issues of X forwarding.
Same here. I wish I could do the same with a Windows VM and a remote MacOS host.
Have you heard of any such solutions?
There used to be a hack for getting integrated windows using Remote Desktop, but I can't remember the name of it anymore and Google isn't finding much :( Hopefully someone remembers (and it's still maintained).
Edit: Found it: https://github.com/rdesktop/seamlessrdp. Seems like there are probably more modern solutions now though.
IIRC WinSwitch + xpra could do seamless windows: http://winswitch.org/documentation/faq.html#protocols http://winswitch.org/about/ :
> Window Switch is a tool which allows you to display running applications on other computers than the one you start them on. Once an application has been started via a winswitch server, it can be displayed on other machines running winswitch client, as required.
> You no longer need to save and send documents to move them around, simply move the view of the application to the machine where you need to access it.
Wikipedia/Neatx links to https://en.wikipedia.org/wiki/Remmina (C) :
> It supports the Remote Desktop Protocol (RDP), VNC, NX, XDMCP, SPICE, X2Go and SSH protocols and uses FreeRDP as foundation.
But no xpra, for which Neatx has old python 2 scripts.
Retinoid restores eye-specific brain responses in mice with retinal degeneration
The retina is a very complex structure. I’m skeptical that if major damage happens to the structure, like wet macular degeneration or in retinal diseases like PIC, that the structure can ever function again.
Given that every body is capable of creating two working retinas, why would you think it can't re-create at least one working retina?
It clearly doesn’t know that the need exists. We have to find the right set of commands and hack the body to re-create one. But the procedure exists even if it involves retina removal and 5+ years of new retina growth (from infant to child).
Null hypothesis: A Nanotransfection (vasculogenic stromal reprogramming) intervention would not result in significant retinal or corneal regrowth
... With or without: a nerve growth factor, e.g. fluoxetine to induce plasticity in the adult visual cortex, combination therapy with cultured conjunctival IPS, laser mechanical scar tissue evisceration and removal, local anesthesia, robotic support, Retinoid
Nanotransfection: https://en.wikipedia.org/wiki/Tissue_nanotransfection :
> Most reprogramming methods have a heavy reliance on viral transfection. [22][23] TNT allows for implementation of a non-viral approach which is able to overcome issues of capsid size, increase safety, and increase deterministic reprogramming
How to turn waste polyethylene into something useful
From "Argonne invents reusable sponge that soaks up oil, could revolutionize oil spill and diesel cleanup" (2017) https://www.anl.gov/article/argonne-invents-reusable-sponge-... :
> [...] The scientists started out with common polyurethane foam, used in everything from furniture cushions to home insulation. This foam has lots of nooks and crannies, like an English muffin, which could provide ample surface area to grab oil; but they needed to give the foam a new surface chemistry in order to firmly attach the oil-loving molecules.
> Previously, Darling and fellow Argonne chemist Jeff Elam had developed a technique called sequential infiltration synthesis, or SIS, which can be used to infuse hard metal oxide atoms within complicated nanostructures.
> After some trial and error, they found a way to adapt the technique to grow an extremely thin layer of metal oxide “primer” near the foam’s interior surfaces. This serves as the perfect glue for attaching the oil-loving molecules, which are deposited in a second step; they hold onto the metal oxide layer with one end and reach out to grab oil molecules with the other.
> The result is Oleo Sponge, a block of foam that easily adsorbs oil from the water. The material, which looks a bit like an outdoor seat cushion, can be wrung out to be reused—and the oil itself recovered.
> At tests at a giant seawater tank in New Jersey called Ohmsett, the National Oil Spill Response Research & Renewable Energy Test Facility, the Oleo Sponge successfully collected diesel and crude oil from both below and on the water surface.
From "Reusable Sponge for Mitigating Oil Spills" https://www.energy.gov/science/bes/articles/reusable-sponge-... :
> A new foam called the Oleo Sponge was invented that not only easily adsorbs oil from water but is also reusable and can pull dispersed oil from an entire water column, not just the surface. Many materials can grab oil, but there hasn't been a way, until now, to permanently bind them into a useful structure. The scientists developed a technique to create a thin layer of metal oxide "primer" within the interior surfaces of polyurethane foam. Scientists then bound oil-loving molecules to the primer. The resulting block of foam can be wrung out to be used, and the oil itself recovered.
EU Passes Law to Switch iPhone to USB-C by End of 2024
Self-regulation only works if government regulation is a serious threat in case self-regulation fails. In this case, self-regulation failed, so government regulation stepped in to force industry to do what is right.
All the handwringing about stifling innovation is on its face ridiculous as mandates to use micro-usb didn't stop android phones from adopting the new and better standard as soon as it was viable.
> All the handwringing about stifling innovation is on its face ridiculous as mandates to use micro-usb didn't stop android phones from adopting the new and better standard as soon as it was viable.
I'm not sure I understand your logic here. Android phones were able to quickly move to micro USB because they weren't restricted from doing so by a government regulation.
If there was a government mandate requiring phones to have whatever connector came before micro USB, wouldn't that have prevented the Android phones from changing connectors/choosing the new innovation?
Or was there a location where micro USB was required by law before, and since I don't live there, I wasn't affected by it?
Edit: Strike that last sentence, since I see in other responses that there was, indeed, a location where micro USB was mandated. So my new question is: How does a company change to a new/better connector, if it's required by law to use an old connector?
Edit edit: Looks like other responses show there was no mandate; that's just something that people on HN assume. It was a recommendation, not a law like it is now.
> If there was a government mandate requiring phones to have whatever connector came before micro USB, wouldn't that have prevented the Android phones from changing connectors/choosing the new innovation?
Dunno, we have a test case tho: how'd the Micro-B to USB-C transition go in EU markets?
Vulhub: Pre-Built Vulnerable Environments Based on Docker-Compose
Most of these compose files are pretty outdated AND they depend on non-standard builds of containers for each respective application.
What else would you expect for setups intentionally trying to preserve past versions of software?
Reproducibility in [infosec] software research requires DevSecOps, which requires: explicit data and code dependency specifications; and/or trusting hopefully-immutable software package archives; and/or securely storing and transmitting cryptographically-signed archival (container) images. Then: upgrade all of the versions and run the integration tests, with a git post-receive hook or a webhook to an external service - a dependency not encapsulated within the {Dockerfile, environment.yml/requirements.txt/postBuild; REES} dependency constraint model.
With pip-tools, you update the python software versions in a requirements.txt from a requirements.in meta-dependency-spec-file: https://github.com/jazzband/pip-tools#updating-requirements
$ pip-compile --upgrade requirements.in
$ cat requirements.txt
Poetry has an "Expanded dependency specification syntax" but FWIU there's not a way to specify unsigned or signed cryptographic hashes, which e.g. Pipfile.lock supports: hashes for every variant of those versions of packages on {PyPI, and third-party package repos with TUF keys, too}.From https://pipenv.pypa.io/en/latest/basics/#pipenv-lock :
$ pipenv lock
> pipenv lock is used to create a Pipfile.lock, which declares all dependencies (and sub-dependencies) of your project, their latest available versions, and the current hashes for the downloaded files. This ensures repeatable, and most importantly deterministic, builds

"Reproducible builds" of a DVWA (Deliberately Vulnerable Web Application) is a funny thing: https://en.wikipedia.org/wiki/Reproducible_builds
Replication crisis https://en.wikipedia.org/wiki/Replication_crisis :
> The replication crisis (also called the replicability crisis and the reproducibility crisis) is an ongoing methodological crisis in which it has been found that the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method,[2] such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.
Just rebuilding or re-pulling a container image does not upgrade the versions of software installed within the container. See also: SBOM, CycloneDx, #LinkedReproducibility, #JupyterREES.
`podman-pull` https://docs.podman.io/en/latest/markdown/podman-pull.1.html... ~:
podman image pull busybox
podman pull busybox
docker pull busybox
podman pull busybox centos fedora ubuntu debian
"How to rebuild and update a container without downtime with docker-compose?"
https://stackoverflow.com/questions/42529211/how-to-rebuild-... :

docker-compose up -d --no-deps --build #[servicename]
"Statistics-Based OWASP Top 10 2021 Proposal"
https://dzone.com/articles/statistics-based-owasp-top-10-202...

awesome-vulnerable-apps > OWASP Top 10 https://github.com/vavkamil/awesome-vulnerable-apps#owasp-to... :
> OWASP Juice Shop: Probably the most modern and sophisticated insecure web application
And there's a book, an Open Source Official Companion Guide book titled "Pwning Juice Shop": https://github.com/juice-shop/juice-shop#official-companion-...
If the versions installed in the book are outdated, you too can bump the version strings in the dependency specs in the git repo and send a PR (Pull Request) - which also updates the screenshots and Menu > Sequences and Keyboard Shortcuts in the book & docs - and then manually test that everything works with the updated "deps" (dependencies).
If it's an executablebooks/ project - a Computational Notebook (possibly in a Literate Computing style) - you can "Restart & Run All" from the notebook UI button or a script, then test that all automated test assertions pass, then "diff" (visually compare), and then just manually read through the textual descriptions of commands to enter (because people who buy a book presumably have a reasonable expectation that, if they copy the commands from the book to a script by hand to learn them, the commands as written should run; it should work like the day you bought it, for a projected term of many free word-of-mouth years).
From https://github.com/juice-shop/juice-shop#docker-container :
docker pull bkimminich/juice-shop
docker run --rm -p 3000:3000 bkimminich/juice-shop
With podman [desktop]:
podman pull bkimminich/juice-shop
podman run --rm -p 3000:3000 --name juiceshop0 bkimminich/juice-shop
I have read this multiple times and still can't figure out what you are trying to say and how it relates to the OP's comment...
> Most of these compose files are pretty outdated AND they depend on non-standard builds of containers for each respective application.
>> What else would you expect for setups intentionally trying to preserve past versions of software?
So: I wrote about reproducibility in software, and software supply chain security; specifically, how to do containers and keep the software versions up to date.
Are you challenging the topicality of my comment on HN - containing original research - to be facetious?
Bash 5.2
@jhamby on Twitter is currently refactoring bash to C++, and it's really interesting to read the anecdotes about the progress. It's a fascinating codebase.
c2rust https://github.com/immunant/c2rust :
> C2Rust helps you migrate C99-compliant code to Rust. The translator (or transpiler), c2rust transpile, produces unsafe Rust code that closely mirrors the input C code. The primary goal of the translator is to preserve functionality; test suites should continue to pass after translation.
crust https://github.com/NishanthSpShetty/crust :
> C/C++ to Rust transpiler
"CRustS: A Transpiler from Unsafe C to Safer Rust" (2022) https://scholar.google.com/scholar?q=related:WIDYx_PvgNoJ:sc...
rust-bindgen https://github.com/rust-lang/rust-bindgen/ :
> Automatically generates Rust FFI bindings to C (and some C++) libraries
nushell/nushell looks like it has cool features and is written in rust.
awesome-rust > Applications > System Tools https://github.com/rust-unofficial/awesome-rust#system-tools
awesome-rust > Libraries > Command-line https://github.com/rust-unofficial/awesome-rust#command-line
rust-shell-script/rust_cmd_lib https://github.com/rust-shell-script/rust_cmd_lib :
> Common rust command-line macros and utilities, to write shell-script like tasks in a clean, natural and rusty way
Hey, thanks for this; I didn't know it existed. I'm still kind of a Rust noob, working my way through Rust in Action and various examples.
Mozilla reaffirms that Firefox will continue to support current content blockers
- [ ] ENH,SEC,UBY: indicate that DNS is locally overridden by entries in /etc/hosts
- [ ] ENH,SEC,UBY: Browser UI: indicate that a domain does not have DNSSEC record signatures
- [ ] ENH,SEC,UBY: Browser UI: indicate whether DNS is over classic UDP or DoH, DoT, DoQ (DNS-over-QUIC)
- [ ] ENH,SEC,UBY: browser: indicate that a page is modified by extensions; show a "tamper bit"
- [ ] ENH,SEC: Devtools?: indicate whether there are (matching) HTTP SRI Subresource Integrity signatures for any or some of the page assets
- [ ] ENH,SEC,UBY: a "DNS Domain(s) Information" modal_tab/panel like the Certificate Information panel
Manifest V3, webRequest, and ad blockers
Google made a faux-pas with this one...
Their stated goal is to improve the performance of the web request blocking API.
Their (unstated but suspected) goal is to neuter adblocking chrome extensions.
They should have made extensions get auto-disabled if they 'slow down web page loading too much'. Set the threshold to be, say, more than a 20% increase in page load time, but make the threshold decrease with time - e.g. 10% in 2023, 5% in 2024, 2% in 2025, to finally 1% in 2026, etc.
Eventually, that would achieve both of Google's goals - since adblockers would be forced to shorten their lists of regexes, neutering them, and performance would increase at the same time. Extension developers would have a hard time complaining, because critics will always argue they just have bloated, inefficient code.
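A sketch of that proposed policy in Python (the thresholds are the commenter's; the function and its inputs are hypothetical, not any real browser API):

    # Proposed per-year overhead allowances (fraction of page load time)
    THRESHOLDS = {2023: 0.10, 2024: 0.05, 2025: 0.02, 2026: 0.01}

    def should_disable(page_load_overhead: float, year: int) -> bool:
        """True if an extension slows page loads beyond the year's allowance."""
        allowed = THRESHOLDS.get(year, 0.01)  # floor at 1% after 2026
        return page_load_overhead > allowed

    assert should_disable(0.20, 2023)       # a 20% slowdown gets disabled
    assert not should_disable(0.04, 2024)   # a 4% slowdown is still allowed in 2024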
eWASM opcodes each have a real cost. It's possible to compile {JS, TypeScript, C, Python} to WASM.
What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?
- [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
- [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
- [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU, RAM, Disk, [GPU, TPU, QPU] (Linux: cgroups,)
> Their (unstated but suspected) goal is to neuter adblocking chrome extensions.
Except they didn't. There are already 3-4 adblockers that work perfectly as far as blocking ads is concerned. They do lose more advanced features, but 99.9% of people with adblockers installed never ever touch those features.
To claim that this neuters adblocking is truly ridiculous. It also ignores that Safari has the exact same restrictions, yet no one complains that Apple wanted to neuter ad blocking.
> 99.9% of people with adblockers installed never ever touch those [advanced features]
Custom matching algorithms, and the ability to fine-tune or expand matching algorithms according to new content-blocking challenges, are actually a kind of advanced feature used by all those users without them ever realizing it, since their content blocker works seamlessly on their favorite sites without the need for intervention.
The declarativeNetRequest (DNR) API has been much improved since it was first announced, and it's great; but since it's the only one we can use now, it's no longer possible to innovate by coming up with improved matching algorithms for network requests.
If the DNR had been designed 8 years ago according to the requirements of content blockers back then, it would be awfully equipped to deal with the challenges thrown at content blockers nowadays, so it's difficult to think the current one will be sufficient in the coming years.
Nothing stops the API evolving... Anyone can make a build of Chromium with a better API, test it out with their own extension, and if it works better, then they can send a PR to get that API into Chrome.
Obviously, extending an API is a long term commitment, so I can understand the Chrome team wanting to only do it if there is a decent benefit - "it makes my one extension with 10 users work slightly better" probably doesn't cut it.
> Nothing stops the API evolving... Anyone can make a build of Chromium with a better API, test it out with their own extension, and if it works better, then they can send a PR to get that API into Chrome.
This is not at all how API proposals are handled. There are a lot more (time, financial, logistical) barriers to a change like this. See the role of the W3C in this: https://www.eff.org/deeplinks/2021/11/manifest-v3-open-web-p...
It is reasonable to expect BPF or a BPF-like filter. https://en.wikipedia.org/wiki/Berkeley_Packet_Filter
bromite/build/patches/Bromite-AdBlockUpdaterService.patch: https://github.com/bromite/bromite/blob/master/build/patches...
bromite/build/patches/disable-AdsBlockedInfoBar.patch: https://github.com/bromite/bromite/blob/master/build/patches...
bromite/build/patches/Bromite-auto-updater.patch: () https://github.com/bromite/bromite/blob/master/build/patches...
- [ ] ENH,SEC,UPD: Bromite,Chromium: is there a url syntax like /path.tar.gz#sha256=cba312 that chromium http filter downloader could use to check e.g. sha256 and maybe even GPG ASC signatures with? (See also: TUF, Sigstore, W3C Blockcerts+DIDs)
Bromite/build/patches/Re-introduce-*.patch: [...]
1Hz CPU made in Minecraft running Minecraft at 0.1fps [video]
Cool! Now build it in Turing Complete and export it to an FPGA
From KiCad https://en.wikipedia.org/wiki/KiCad :
> KiCad is a free software suite for electronic design automation (EDA). It facilitates the design and simulation of electronic hardware. It features an integrated environment for schematic capture, PCB layout, manufacturing file viewing, SPICE simulation, and engineering calculation. Tools exist within the package to create bill of materials, artwork, Gerber files, and 3D models of the PCB and its components.
https://www.kicad.org/discover/spice/ :
> KiCad integrates the open source spice simulator ngspice to provide simulation capability in graphical form through integration with the Schematic Editor.
PySpice > Examples: https://pyspice.fabrice-salvaire.fr/releases/v1.6/examples/i... :
+ Diode, Rectifier (AC to DC), Filter, Capacitor, Power Supply, Transformer, [Physical Relay Switch (Open/Closed) -> Vacuum Tube -> Solid-state [MOSFET,]] Transistor,
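A minimal PySpice sketch (assuming the ngspice shared library is installed; the component values are arbitrary):

    from PySpice.Spice.Netlist import Circuit
    from PySpice.Unit import u_V, u_kOhm

    circuit = Circuit('voltage divider')
    circuit.V('input', 'in', circuit.gnd, 10@u_V)
    circuit.R(1, 'in', 'out', 9@u_kOhm)
    circuit.R(2, 'out', circuit.gnd, 1@u_kOhm)

    simulator = circuit.simulator(temperature=25, nominal_temperature=25)
    analysis = simulator.operating_point()
    print(float(analysis['out']))   # ~1.0 V across R2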
From the Ngspice User's Manual https://ngspice.sourceforge.io/docs/ngspice-37-manual.pdf :
> Ngspice is a general-purpose circuit simulation program for nonlinear and linear analyses.
> Circuits may contain resistors, capacitors, inductors, mutual inductors, independent or dependent voltage and current sources, loss-less and lossy transmission lines, switches, uniform distributed RC lines, and the five most common semiconductor devices: diodes, BJTs, JFETs, MESFETs, and MOSFETs.
> [...] Ngspice has built-in models for the semiconductor devices, and the user need specify only the pertinent model parameter values. [...] New devices can be added to ngspice by several means: behavioral B-, E- or G-sources, the XSPICE code-model interface for C-like device coding, and the ADMS interface based on Verilog-A and XML.
Turing completeness: https://en.wikipedia.org/wiki/Turing_completeness :
> In colloquial usage, the terms "Turing-complete" and "Turing-equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language. In real life this leads to the practical concepts of computing virtualization and emulation. [citation needed]
> Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (the "tape" corresponding to their memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, a universal computer is defined as a device with a Turing-complete instruction set, infinite memory, and infinite available time.
Church–Turing thesis: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis ... Lambda calculus (Church): https://en.wikipedia.org/wiki/Lambda_calculus
HDL: Hardware Description Language > Examples: https://en.wikipedia.org/wiki/Hardware_description_language#...
HVL: Hardware Verification Language: https://en.wikipedia.org/wiki/Hardware_verification_language
awesome-electronics > Free EDA Packages: https://github.com/kitspace/awesome-electronics#free-eda-pac...
https://github.com/TM90/awesome-hwd-tools
EDA: Electronic Design Automation: https://en.wikipedia.org/wiki/Electronic_design_automation
More notes for #Q12:
Quantum complexity theory https://en.wikipedia.org/wiki/Quantum_complexity_theory#Back... :
> A complexity class is a collection of computational problems that can be solved by a computational model under certain resource constraints. For instance, the complexity class P is defined as the set of problems solvable by a Turing machine in polynomial time. Similarly, quantum complexity classes may be defined using quantum models of computation, such as the quantum circuit model or the equivalent quantum Turing machine. One of the main aims of quantum complexity theory is to find out how these classes relate to classical complexity classes such as P, NP, BPP, and PSPACE.
> One of the reasons quantum complexity theory is studied are the implications of quantum computing for the modern Church-Turing thesis. In short the modern Church-Turing thesis states that any computational model can be simulated in polynomial time with a probabilistic Turing machine. [1][2] However, questions around the Church-Turing thesis arise in the context of quantum computing. It is unclear whether the Church-Turing thesis holds for the quantum computation model. There is much evidence that the thesis does not hold. It may not be possible for a probabilistic Turing machine to simulate quantum computation models in polynomial time. [1]
> Both quantum computational complexity of functions and classical computational complexity of functions are often expressed with asymptotic notation. Some common forms of asymptotic notation are O(T(n)), \Omega(T(n)), and \Theta(T(n)).
> O(T(n)) expresses that something is bounded above by cT(n) where c is a constant such that c > 0 and T(n) is a function of n; \Omega(T(n)) expresses that something is bounded below by cT(n) where c is a constant such that c > 0 and T(n) is a function of n; and \Theta(T(n)) expresses both O(T(n)) and \Omega(T(n)). [3] These notations also have their own names: O(T(n)) is called Big O notation, \Omega(T(n)) is called Big Omega notation, and \Theta(T(n)) is called Big Theta notation.
Quantum complexity theory > Simulation of quantum circuits https://en.wikipedia.org/wiki/Quantum_complexity_theory#Simu... :
> There is no known way to efficiently simulate a quantum computational model with a classical computer. This means that a classical computer cannot simulate a quantum computational model in polynomial time [P]. However, a quantum circuit of S(n) qubits with T(n) quantum gates can be simulated by a classical circuit with O(2^{S(n)}T(n)^{3}) classical gates. [3] This number of classical gates is obtained by determining how many bit operations are necessary to simulate the quantum circuit. In order to do this, first the amplitudes associated with the S(n) qubits must be accounted for. Each of the states of the S(n) qubits can be described by a two-dimensional complex vector, or a state vector. These state vectors can also be described as a linear combination of their component vectors, with coefficients called amplitudes. These amplitudes are complex numbers which are normalized to one, meaning the sum of the squares of the absolute values of the amplitudes must be one. [3] The entries of the state vector are these amplitudes.
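Plugging small, illustrative numbers into that O(2^{S(n)} * T(n)^3) bound:

    S, T = 30, 100                    # 30 qubits, 100 gates (illustrative values)
    classical_gates = (2**S) * T**3
    print(f"{classical_gates:.2e}")   # ~1.07e+15 classical gate operations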
Quantum Turing machine: https://en.wikipedia.org/wiki/Quantum_Turing_machine
Quantum circuit: https://en.wikipedia.org/wiki/Quantum_circuit
Church-Turing-Deutsch principle: https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93...
Computational complexity > Quantum computing, Distributed computing: https://en.wikipedia.org/wiki/Computational_complexity#Quant...
Hash collisions and exploitations – Instant MD5 collision
MD5 > History, Security > Collision vulnerabilities: https://en.wikipedia.org/wiki/MD5#Collision_vulnerabilities
AI Seamless Texture Generator Built-In to Blender
Is there a way to run things like this with an AMD graphics card? Every Stable Diffusion project I've seen seems to be CUDA focused.
That's because Stable Diffusion is built with PyTorch, which isn't optimized for anything but CUDA. Even the CPU is a second-class citizen there, let alone AMD or other GPUs.
Not saying PyTorch doesn't run on anything else. You can, but those setups will lag behind and some will be hackish.
Looks like Nvidia is on its way to be the next Intel.
From the Arch wiki, which has a list of GPU runtimes (but not TPU or QPU runtimes) and Arch package names (OpenCL, SYCL, ROCm, HIP): https://wiki.archlinux.org/title/GPGPU :
> GPGPU stands for General-purpose computing on graphics processing units.
- "PyTorch OpenCL Support" https://github.com/pytorch/pytorch/issues/488
- Blender re: removal of OpenCL support in 2021 :
> The combination of the limited Cycles split kernel implementation, driver bugs, and stalled OpenCL standard has made maintenance too difficult. We can only make the kinds of bigger changes we are working on now by starting from a clean slate. We are working with AMD and Intel to get the new kernels working on their GPUs, possibly using different APIs (such as SYCL, HIP, Metal, …).
- https://gitlab.com/illwieckz/i-love-compute
- https://github.com/vosen/ZLUDA
- https://github.com/RadeonOpenCompute/clang-ocl
AMD ROCm: https://en.wikipedia.org/wiki/ROCm
AMD ROCm supports PyTorch, TensorFlow, MIOpen, rocBLAS on NVIDIA and AMD GPUs (see the device-selection sketch after these links): https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...
RadeonOpenCompute/ROCm_Documentation: https://github.com/RadeonOpenCompute/ROCm_Documentation
ROCm-Developer-Tools/HIPIFY https://github.com/ROCm-Developer-Tools/HIPIFY :
> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced.
ROCmSoftwarePlatform/gpufort: https://github.com/ROCmSoftwarePlatform/gpufort :
> GPUFORT: S2S translation tool for CUDA Fortran and Fortran+X in the spirit of hipify
ROCm-Developer-Tools/HIP https://github.com/ROCm-Developer-Tools/HIP:
> HIP is a C++ Runtime API and Kernel Language that allows developers to create portable applications for AMD and NVIDIA GPUs from single source code. [...] Key features include:
> - HIP is very thin and has little or no performance impact over coding directly in CUDA mode.
> - HIP allows coding in a single-source C++ programming language including features such as templates, C++11 lambdas, classes, namespaces, and more.
> - HIP allows developers to use the "best" development environment and tools on each target platform.
> - The [HIPIFY] tools automatically convert source from CUDA to HIP.
> - Developers can specialize for the platform (CUDA or AMD) to tune for performance or handle tricky cases.
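On ROCm builds of PyTorch, HIP devices surface through the `torch.cuda` API, so the same device-selection code runs on both vendors' GPUs; a minimal sketch (assuming a ROCm or CUDA build of PyTorch is installed):

    import torch

    print(torch.version.hip)     # a version string on ROCm builds, None on CUDA builds
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    print((x @ x).sum().item())  # runs on the GPU if one was found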
Faraday and Babbage: Semiconductors and Computing in 1833
Timeline of Quantum Computing (1960-) https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_...
1960 - 1833 = 127 years later
From https://news.ycombinator.com/item?id=31996743 https://westurner.github.io/hnlog/#comment-31996743 :
> Qubit#Physical_implementations: https://en.wikipedia.org/wiki/Qubit#Physical_implementations
> - note the "electrons" row of the table
>> See also: "Quantum logic gate" https://en.wikipedia.org/wiki/Quantum_logic_gate
macOS Subsystem for Linux
You can just use Vagrant for this kind of setup. The benefit is that you can provision it so it is able to run your application, and you're able to commit the config to git and share it with others. Pretty sure it has a QEMU adapter too, if you so choose.
brew install vagrant packer terraform
Podman Desktop is Apache 2.0 open source; supports Win, Mac, Lin; supports Docker Desktop plugins; and has plugins for Podman, Docker, Lima, and CRC/OpenShift Local (k8s) https://github.com/containers/podman-desktop : brew install podman-desktop
/? vagrant Kubernetes MacOS https://www.google.com/search?q=vagrant+Kubernetes+macos

You get all that put together one time on one box and realize you could have scripted the whole thing, but you need bash 4+ or Python 3+ so it all depends on `brew` first: https://github.com/geerlingguy/ansible-for-kubernetes/blob/m...
The Ansible homebrew module can install and upgrade brew and install and upgrade packages with brew: https://docs.ansible.com/ansible/latest/collections/communit...
And then write tests for the development environment too, or only for container specs in production: https://github.com/geerlingguy/ansible-for-kubernetes/tree/m... :
brew install kind docker
type -a python3; python3 -m site
python3 -m pip install molecule ansible-test yamllint
# molecule converge; ssh -- hostname
molecule test
# molecule destroy
westurner/dotfiles/scripts/upgrade_mac.sh: https://github.com/westurner/dotfiles/blob/develop/scripts/u...
Perhaps not that OT, but FWIW I just explained exactly this in a tweet:
> Mambaforge-pypy3 for Linux, OSX, Windows installs from conda-forge by default. (@condaforge builds packages with CI for you without having to install local xcode IIRC)
conda install -c conda-forge -y nodejs
mamba install -y nodejs
https://github.com/conda-forge/miniforge#mambaforge-pypy3
Global-Chem: A Free Dictionary from Common Chemical Names to Molecules
TIL about #cheminformatics and Linked Data (Semantic Web):
Cheminformatics: https://en.wikipedia.org/wiki/Cheminformatics
https://github.com/topics/cheminformatics :
- https://github.com/hsiaoyi0504/awesome-cheminformatics #Databases #See_also
- https://github.com/mcs07/PubChemPy :
> PubChemPy provides a way to interact with PubChem in Python. It allows chemical searches by name, substructure and similarity, chemical standardization, conversion between chemical file formats, depiction and retrieval of chemical properties.
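A minimal PubChemPy sketch (assuming `pip install pubchempy`; "Aspirin" is just an example query):

    import pubchempy as pcp

    # Search PubChem by compound name; returns a list of Compound objects
    for c in pcp.get_compounds("Aspirin", "name"):
        print(c.cid, c.molecular_formula, c.canonical_smiles)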
http://chemicalsemantics.com/introduction-to-the-chemical-se... ... /? chemicalsemantics github ... https://github.com/semanticchemistry/semanticchemistry #See_also :
> The Chemical Information Ontology (CHEMINF) aims to establish a standard in representing chemical information. In particular, it aims to produce an ontology to represent chemical structure and to richly describe chemical properties, whether intrinsic or computed.
Looks like they developed the CHEMINF OWL ontology in Protege 4 (which is Open Source). /ontology/cheminf-core.owl: https://github.com/semanticchemistry/semanticchemistry/blob/...
- Does it -- the {sql/xml/json/graphql, RDFS Vocabulary, OWL Ontology} schema - have more (C)Classes and (P)Properties than other schema for modeling this domain?
- What namespaced strings and URIs does it specify for linking entities internally and externally?
LOV Linked Open Vocabularies maintains a database of many RDFS vocabularies and OWL ontologies (which are represented in RDF) https://lov.linkeddata.es/dataset/lov/terms?q=chemical
- "The Linking Open Data Cloud" (2007-) https://lod-cloud.net/
/? "cheminf" https://scholar.google.com/scholar?hl=en&as_sdt=0,43&qsp=1&q...
/? "cheminf" ontology https://scholar.google.com/scholar?hl=en&as_sdt=0,43&qsp=1&q...
"The ChEMBL database as linked open data" (2013) https://scholar.google.com/scholar?cites=1029919691588310633... ... citations:
"PubChem substance and compound databases" (2017) https://scholar.google.com/scholar?cites=7847099277060264658...
"5 Star Linked Data⬅" https://wrdrd.github.io/docs/consulting/knowledge-engineerin...
Thing > BioChemEntity https://schema.org/BioChemEntity
Thing > BioChemEntity > ChemicalSubstance https://schema.org/ChemicalSubstance
Thing > BioChemEntity > MolecularEntity https://schema.org/MolecularEntity
Thing > BioChemEntity > Protein https://schema.org/Protein
Thing > BioChemEntity > Gene https://schema.org/Gene
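For example, a minimal JSON-LD sketch using the MolecularEntity type above (property names are from https://schema.org/MolecularEntity; the values here are illustrative):

    {
      "@context": "https://schema.org",
      "@type": "MolecularEntity",
      "name": "caffeine",
      "molecularFormula": "C8H10N4O2",
      "inChIKey": "RYYVLZVUVIJVGH-UHFFFAOYSA-N"
    }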
Some of the BioSchemas work [1] is proposed and pending inclusion in the Schema.org RDFS vocabulary [2].
[1] https://github.com/bioschemas
[2] https://github.com/schemaorg/schemaorg/issues/1028
Will newer Bioschema terms like BioSample, LabProtocol, SequenceAnnotation, and Phenotype be proposed for inclusion into the Schema.org vocabulary?: https://bioschemas.org/profiles/index#nav-draft
GCC's new fortification level: The gains and costs
Unfortunately not much in the way of performance measurements :(
It's on my TODO list. Watch out for Fedora change proposals for (hopefully) Fedora 38.
What if you develop inside of a Fedora 38 Docker container?
FROM quay.io/fedora/fedora:38
RUN dnf install -y <bunch of tools>
Got any useful tips or flags to enable that're bleeding edge?
It's not a runtime flag; you'll have to patch redhat-rpm-config to use _FORTIFY_SOURCE=3 instead of 2 and then build packages with it.
Of course, if you only want to build your application with _FORTIFY_SOURCE=3, you can do it right away even on Fedora 36. The Fedora change will be to build the distribution (or at least a subset of packages) with _FORTIFY_SOURCE=3.
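For a single application, that's something like (a sketch; _FORTIFY_SOURCE requires optimization to be enabled, and level 3 needs a new enough toolchain, e.g. GCC 12 with a recent glibc):

    gcc -O2 -D_FORTIFY_SOURCE=3 -o app app.c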
Poor writing, not specialized concepts, drives difficulty with legal language
Which attributes of a person are necessary to answer a legal question?
Python:
def has_legal_right(person: dict, right: str) -> bool:
    assert person
    assert right
    #
    raise NotImplementedError

def have_equal_rights(persons: list) -> bool:
    raise NotImplementedError
Javascript:
function hasRight(person, right) {
console.assert(person);
console.assert(right);
// return true || false;
}
function haveEqualRights(persons) {
// return true || false;
}
Maybe Lean Mathlib or Coq?... Therefore you've failed at the Law of Reciprocity.
U.S. appeals court rejects big tech’s right to regulate online speech
Does this mean that newspaper Information Service Providers are now obligated to must-carry opinion pieces from political viewpoints that oppose those of the editors in the given district?
Does this mean that newspapers in Texas are now obligated to carry liberal opinion pieces? Equal time in Texas at last.
Must-carry provision of a contract for service: https://en.wikipedia.org/wiki/Must-carry
I imagine they'd have to accept arbitrary submissions first. If you just worked there, probably they'd be forced to put up anything you wrote
no
How limited is the given district court of appeals case law precedent in regards to must-carry and Equal Time rules for non-licensed spectrum Information Service providers? Do they now have common carrier liability, too?
Equal time rules and American media history: https://en.wikipedia.org/wiki/Equal-time_rule
Who pays for all of this?
> "Give me my free water!"
From "FCC fairness doctrine" (1949-1987) https://en.wikipedia.org/wiki/FCC_fairness_doctrine :
> The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters. Stations were given wide latitude as to how to provide contrasting views: It could be done through news segments, public affairs shows, or editorials. The doctrine did not require equal time for opposing views but required that contrasting viewpoints be presented. The demise of this FCC rule has been cited as a contributing factor in the rising level of party polarization in the United States. [5][6]
Because the free flow of information is essential to democracy, it is in the Public Interest to support a market of new and established flourishing information service providers, not a market of exploited must-carry'ers subject to district-level criteria for ejection or free water for life. Shouldn't all publications, all information services be subject to any and all such Equal Time and Must-Carry interpretations?
Your newspaper may not regulate viewpoints: in its editorial section or otherwise. Must carry. Equal time.
The wall of one's business, perhaps.
You must keep that up there on your business's wall.
In this instance, is there a contract for future performance? How does Statute of Frauds apply to contracts worth over $500?
Transformers seem to mimic parts of the brain
Sincere question inspired purely by the headline: how many important ML architectures aren't in some way based on some proposed model of how something works in the brain?
(Not intended as a flippant remark, I know Quanta Magazine articles can generally safely be assumed to be quality content, and that this is about how a language model unexpectedly seems to have relevance for understanding spatial awareness)
I think the response to this has two prongs:
- Some families of ML techniques (SVMs, random forests, Gaussian processes) got their inspiration elsewhere and never claimed to be really related to how brains do stuff.
- Among NNs, even if an idea takes loose inspiration from neuroscience (e.g. the visual system does have a bunch of layers, and the first ones really are pulling out 'simple' features like an edge near an area), I think it's relatively uncommon to go back and compare specifically what's happening in the brain with a given ML architecture. And a lot of the inspiration isn't about human-specific cognitive abilities (like language), but is really a generic description of neurons which is equally true of much less intelligent animals.
> I think it's relatively uncommon to go back and compare specifically what's happening in the brain with a given ML architecture.
Less common but not unheard of. Here's one example, primarily focused on vision: http://www.brain-score.org/
DeepMind has also published works comparing RL architectures like IQN to dopaminergic neurons.
The challenge is that it's very cross-disciplinary, and most DL labs don't have a reason to explore the neuroscience side, while most neuro labs don't have the expertise in DL.
Is it necessary to simulate the quantum chemistry of a biological neural network in order to functionally approximate a BNN with an ANN?
A biological systems and fields model for cognition:
Spreading activation in a dynamic graph with cycles and magnitudes ("activation potentials") that change as neurally-regulated, heart-generated electron potentials reverberate fluidically along intersecting paths; and a partially extra-cerebral induced field which nonlinearly affects the original signal source through local feedback: representational shift.
Representational shift: "Neurons Are Fickle. Electric Fields Are More Reliable for Information" (2022) https://neurosciencenews.com/electric-field-neuroscience-201...
Spreading activation: https://en.wikipedia.org/wiki/Spreading_activation
Re: 11D (11-Dimensional) biological network hyperparameters, ripples in (hippocampal, prefrontal,) association networks: https://news.ycombinator.com/item?id=18218504
M-theory (string theory) is also 11D, but IIUC they're not the same dimensions.
Diffusion suggests fluids, which in physics and chaos theory suggests Bernoulli's fluid models (and other non-differentiable compact descriptions like Navier-Stokes), which are part of SQG Superfluid Quantum Gravity postulates.
Can e.g. ONNX or RDF with or without bnodes represent a complete connectome image/map?
Connectome: https://en.wikipedia.org/wiki/Connectome
Wave Field recordings are probably the most complete known descriptions of the brain and its nonlinear fields?
How such fields relate to one or more Quantum Wave functions might entail near-necessity of QFT: Quantum Fourier Transform.
When you replace the self-attention part of a Transformer with a classical FFT (Fast Fourier Transform): ... From https://medium.com/syncedreview/google-replaces-bert-self-at... :
> > New research from a Google team proposes replacing the self-attention sublayers with simple linear transformations that “mix” input tokens to significantly speed up the transformer encoder with limited accuracy cost. Even more surprisingly, the team discovers that replacing the self-attention sublayer with a standard, unparameterized Fourier Transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs."
> > Would Transformers (with self-attention) make what things better? Maybe QFT? There are quantum chemical interactions in the brain. Are they necessary or relevant for what fidelity of emulation of a non-discrete brain?
> Quantum Fourier Transform: https://en.wikipedia.org/wiki/Quantum_Fourier_transform
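Back to the classical-FFT replacement: a minimal sketch of FNet-style token mixing as described above (a 2D DFT over the sequence and hidden dimensions, keeping the real part; assumes PyTorch, and is not the actual Google implementation):

    import torch

    def fourier_mix(x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden); FFT over hidden, then sequence; keep Re
        return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real

    x = torch.randn(2, 16, 64)
    print(fourier_mix(x).shape)  # torch.Size([2, 16, 64])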
The QFT acronym annoyingly reminds me more of Quantum Field Theory than of Quantum Fourier Transforms ...
Yeah. And resolve QFT + { QG || SQG }
A more useful query, to Google dork:
/? QFT "field theory" "Superfluid" "quantum gravity" https://www.google.com/search?q=QFT+%22field+theory%22+%22Su... https://scholar.google.com/scholar?q=QFT+%22field+theory%22+...
/? QFT "field theory" "Superfluid quantum gravity" https://www.google.com/search?q=QFT+%22field+theory%22+%22Su... https://scholar.google.com/scholar?q=QFT+%22field+theory%22+...
Chaos researchers can now predict perilous points of no return
This sounds similar to work I did years ago to combine phase-space manifolds with a rule-based expert system to address problems diagnosing failures in mechanical systems exhibiting multi-modal operating regimes.
Hopefully the researchers found a simpler computational method than I did in trying to mate those two systems together. :)
What really caught my attention was the output of a probability curve showing how the system might operate in the never-before-seen regimes once the tipping point was reached. The ability to predict behavior outside the training set is a huge win. My method was only predictive while the system operated in the training regime; outside that regime it was useless.
Could this detect/predict/diagnose e.g. mechanical failures in engines and/or motors, and health conditions, given sensor fusion?
Sensor fusion https://en.wikipedia.org/wiki/Sensor_fusion
Steady state https://en.wikipedia.org/wiki/Steady_state :
> In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. [1]
https://github.com/topics/steady-state
Control systems https://en.wikipedia.org/wiki/Control_system
https://github.com/topics/control-theory
Flap (disambiguation) > Computing and networks > "Flapping" (nagios alert fatigue,) https://en.wikipedia.org/wiki/Flap
Perceptual Control Theory (PCT) > Distinctions from engineering control theory https://en.wikipedia.org/wiki/Perceptual_control_theory :
> In the artificial systems that are specified by engineering control theory, the reference signal is considered to be an external input to the 'plant'.[7] In engineering control theory, the reference signal or set point is public; in PCT, it is not, but rather must be deduced from the results of the test for controlled variables, as described above in the methodology section. This is because in living systems a reference signal is not an externally accessible input, but instead originates within the system. In the hierarchical model, error output of higher-level control loops, as described in the next section below, evokes the reference signal r from synapse-local memory, and the strength of r is proportional to the (weighted) strength of the error signal or signals from one or more higher-level systems. [26]
> In engineering control systems, in the case where there are several such reference inputs, a 'Controller' is designed to manipulate those inputs so as to obtain the effect on the output of the system that is desired by the system's designer, and the task of a control theory (so conceived) is to calculate those manipulations so as to avoid instability and oscillation. The designer of a PCT model or simulation specifies no particular desired effect on the output of the system, except that it must be whatever is required to bring the input from the environment (the perceptual signal) into conformity with the reference. In Perceptual Control Theory, the input function for the reference signal is a weighted sum of internally generated signals (in the canonical case, higher-level error signals), and loop stability is determined locally for each loop in the manner sketched in the preceding section on the mathematics of PCT (and elaborated more fully in the referenced literature). The weighted sum is understood to result from reorganization.
> Engineering control theory is computationally demanding, but as the preceding section shows, PCT is not. For example, contrast the implementation of a model of an inverted pendulum in engineering control theory [27] with the PCT implementation as a hierarchy of five simple control systems. [28]
Structural Equation Modeling: https://en.wikipedia.org/wiki/Structural_equation_modeling https://github.com/topics/structural-equation-modeling
ros2_control https://control.ros.org/master/index.html
Limit cycle https://en.wikipedia.org/wiki/Limit_cycle
Finite Element Analysis https://en.wikipedia.org/wiki/Finite_element_method
> #FEM: Finite Element Method (for ~solving coupled PDEs Partial Differential Equations)
> #FEA: Finite Element Analysis (applied FEM)
awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea
GraphBLAS
> When applied to sparse adjacency matrices, these algebraic operations are equivalent to computations on graphs
Sparse matrix: https://en.wikipedia.org/wiki/Sparse_matrix :
> The concept of sparsity is useful in combinatorics and application areas such as network theory and numerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.
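A small sketch of that graph/matrix equivalence (one breadth-first step as a sparse matrix-vector product, here with scipy.sparse; GraphBLAS generalizes the scalar + and * to other semirings):

    import scipy.sparse as sp

    # Adjacency matrix of the directed path graph 0 -> 1 -> 2 -> 3
    A = sp.csr_matrix(([1, 1, 1], ([0, 1, 2], [1, 2, 3])), shape=(4, 4))

    # Indicator vector for frontier {0}; one BFS step is A^T @ x
    x = sp.csr_matrix(([1], ([0], [0])), shape=(4, 1))
    print((A.T @ x).toarray().ravel())  # [0 1 0 0]: node 1 is one hop away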
CuGraph has a NetworkX-like API, though only so many of the networkx algorithms are yet reimplemented with some possible CUDA-optimizations.
From https://github.com/rapidsai/cugraph :
> cuGraph operates, at the Python layer, on GPU DataFrames, thereby allowing for seamless passing of data between ETL tasks in cuDF and machine learning tasks in cuML. Data scientists familiar with Python will quickly pick up how cuGraph integrates with the Pandas-like API of cuDF. Likewise, users familiar with NetworkX will quickly recognize the NetworkX-like API provided in cuGraph, with the goal to allow existing code to be ported with minimal effort into RAPIDS.
> While the high-level cugraph python API provides an easy-to-use and familiar interface for data scientists that's consistent with other RAPIDS libraries in their workflow, some use cases require access to lower-level graph theory concepts. For these users, we provide an additional Python API called pylibcugraph, intended for applications that require a tighter integration with cuGraph at the Python layer with fewer dependencies. Users familiar with C/C++/CUDA and graph structures can access libcugraph and libcugraph_c for low level integration outside of python.
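A minimal usage sketch of that NetworkX-like API (assumes a working RAPIDS install and a supported GPU; the column names here are just examples):

    import cudf
    import cugraph

    edges = cudf.DataFrame({"src": [0, 1, 2], "dst": [1, 2, 0]})
    G = cugraph.Graph()
    G.from_cudf_edgelist(edges, source="src", destination="dst")
    print(cugraph.pagerank(G))  # cudf.DataFrame of vertex, pagerank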
/? sparse https://github.com/rapidsai/cugraph/search?q=sparse
Pandas and SciPy (and IIRC NumPy) have sparse methods: sparse.SparseArray, the .sparse accessor; https://pandas.pydata.org/docs/user_guide/sparse.html#sparse...
From https://pandas.pydata.org/docs/user_guide/sparse.html#intera... :
> Series.sparse.to_coo() is implemented for transforming a Series with sparse values indexed by a MultiIndex to a scipy.sparse.coo_matrix.
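For example (a small sketch of the pandas .sparse accessor; fill_value=0.0 so that zeros are the unstored values):

    import pandas as pd

    s = pd.Series(pd.arrays.SparseArray([0.0, 0.0, 1.0, 0.0], fill_value=0.0))
    print(s.sparse.density)  # 0.25: only the single nonzero value is stored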
NetworkX graph algorithms reference docs https://networkx.org/documentation/stable/reference/algorith...
NetworkX Compatibility > Differences in Algorithms https://docs.rapids.ai/api/cugraph/stable/basics/nx_transiti...
List of algorithms > Combinatorial algorithms > Graph algorithms: https://en.wikipedia.org/wiki/List_of_algorithms#Graph_algor...
To give an example of real world sparse matrices, power grids can have thousands and thousands of nodes, but most of those nodes connect to only a few local neighbors at most. Such systems are highly sparse as a result.
Integer factor graphs are sparse. https://en.wikipedia.org/wiki/Factor_graph#Message_passing_o...
Compared to the powerset graph that includes all possible operators, parameter values, and parentheses (in infix but not in Reverse Polish Notation), a correlation graph is sparse: most conditional probabilities should be expected to tend toward the Central Limit Theorem, so if you subtract (or substitute) a constant noise scalar, a factor graph should be extra-sparse. https://en.wikipedia.org/wiki/Central_limit_theorem_for_dire...
What do you call a factor graph with probability distribution functions (PDFs) instead of float64s?
Are Path graphs and Path graphs with cycles extra sparse? An adjacency matrix for all possible paths through a graph is also mostly zeroes. https://en.wikipedia.org/wiki/Path_graph
Methods of feature reduction use and affect the sparsity of a sparse matrix (that does not have elements for confounding variables). For example, from "Exploratory factor analysis (EFA) versus principal components analysis (PCA)" https://en.wikipedia.org/wiki/Factor_analysis :
> For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix.
Common Lisp names all sixteen binary logic gates
From File:Logical_connectives_Hasse_diagram.svg https://commons.wikimedia.org/wiki/File:Logical_connectives_...:
> Description: The sixteen logical connectives ordered in a Hasse diagram. They are represented by:
> - logical formulas
> - the 16 elements of V4 = P^4({})
> - Venn diagrams
> The nodes are connected like the vertices of a 4 dimensional cube. The light blue edges form a rhombic dodecahedron - the convex hull of the tesseract's vertex-first shadow in 3 dimensions.
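To make the sixteen connectives concrete, a short Python sketch enumerating every two-input truth table (each 4-bit number n is one connective; with this bit ordering, n=8 is AND and n=14 is OR):

    from itertools import product

    inputs = list(product([0, 1], repeat=2))  # (a, b) rows in truth-table order
    for n in range(16):  # one connective per 4-bit output column
        table = {ab: (n >> i) & 1 for i, ab in enumerate(inputs)}
        print(f"{n:2d}: {table}")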
Hasse diagram: https://en.wikipedia.org/wiki/Hasse_diagram
> A research question for a new school year: (2021, still TODO)
> The classical logical operators form a neat topology. Should we expect there to be such symmetry and structure amongst the quantum operators as well?
From Quantum Logic https://en.wikipedia.org/wiki/Quantum_logic :
> Quantum logic can be formulated either as a modified version of propositional logic or as a noncommutative and non-associative many-valued (MV) logic.[2][3][4][5][6]
> Quantum logic has been proposed as the correct logic for propositional inference generally, [...] group representations and symmetry.
> The more common view regarding quantum logic, however, is that it provides a formalism for relating observables, system preparation filters and states.[citation needed] In this view, the quantum logic approach resembles more closely the C*-algebraic approach to quantum mechanics. The similarities of the quantum logic formalism to a system of deductive logic may then be regarded more as a curiosity than as a fact of fundamental philosophical importance. A more modern approach to the structure of quantum logic is to assume that it is a diagram—in the sense of category theory—of classical logics
Quantum_logic#Differences_with_classical_logic: https://en.wikipedia.org/wiki/Quantum_logic#Differences_with...
Cirq > Gates and operations: https://quantumai.google/cirq/build/gates
Cirq > Operators and Observables: https://quantumai.google/cirq/build/operators
qiskit-terra/qiskit/circuit/operation.py Interface: https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/circ...
tequila/src/tequila/circuit/gates.py: https://github.com/tequilahub/tequila/blob/master/src/tequil...
Pauli matrices > Quantum information: https://en.wikipedia.org/wiki/Pauli_matrices#Quantum_informa...
From Quantum_information#Quantum_information_processing https://en.wikipedia.org/wiki/Quantum_information#Quantum_in... :
> The state of a qubit contains all of its information. This state is frequently expressed as a vector on the Bloch sphere. This state can be changed by applying linear transformations or quantum gates to them. These unitary transformations are described as rotations on the Bloch Sphere. While classical gates correspond to the familiar operations of Boolean logic, quantum gates are physical unitary operators.
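A minimal Cirq sketch of applying unitary gates (the Bloch-sphere rotations described above) and measuring:

    import cirq

    q0, q1 = cirq.LineQubit.range(2)
    circuit = cirq.Circuit([
        cirq.H(q0),         # Hadamard: rotate |0> into an equal superposition
        cirq.CNOT(q0, q1),  # entangle the two qubits
        cirq.measure(q0, q1, key="m"),
    ])
    result = cirq.Simulator().run(circuit, repetitions=10)
    print(result.histogram(key="m"))  # ~half 0 (|00>), ~half 3 (|11>)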
Google pays ‘enormous’ sums to maintain search-engine dominance, DOJ says
Google does this yet lies about their searches to maintain an illusion of competence.
For example, you can type in something like "purpose of life" and it will say it has 2 billion results; yet if you try to go to result 500, you can't: it will stop at 400, then change the number to only 400 results on the last page.
This happens for every query. Google lies about the astronomical number of search results, then only shows a few hundred at most.
I work for Google Search. The counts we show for results are estimated. They get more refined when you go deeper into the results. But yes, there are still likely to be millions of results for many things you query -- and most people are not going to be able to go through all millions of those. So we show usually up to around 40 pages / 400 of these. We have a help page about this here: https://support.google.com/websearch/answer/9603785
So when I google "cows" and it says "Page 2 of about 1,210,000,000 results" you know full well you aren't going to show anywhere near 1,210,000,000 results, yet you program it to display that? And in fact it only shows "about 231 results", which is 0.0000191% of 1,210,000,000 results.
That doesn't sound like an estimate to me. To me, that is intentional misleading.
How is it misleading? It's an estimate of the total results, it doesn't say it's going to display them
"I'll sell you about 1,201,000,000 paper clips."
"OK."
ships 231 paper clips
"Hey I only got 231 paper clips, not 1,201,000,000."
"That's right. 1,201,000,000 was an estimate."
"You said about. So you estimated 1,201,000,000 paper clips but you actually only had 231?"
"No, I had the full 1,201,000,000. I sold them to you but I didn't say I would ship all of them. What kind of idiot uses more than a few hundred paper clips anyway? Plus, it saves us money on shipping costs."
You haven't paid Google for search: there is no sale of product or service to you, the user using free services for free.
You haven't signed any agreement with Google for search services. Google hasn't signed any agreement for future performance with you.
Google is not obligated to count every search result of every free search query. You are not entitled to such resource-intensive queries.
How much does COUNT() on a full table scan of billions of rows - with snippets - cost you on BigQuery or a similar pay-for-query-resources service?
>> you, the user using free services for free
Absolutely false, it is not free. I have provided them with my data which they will monetize.
It's the same as Hacker News not being free. I have provided Hacker News with my personal data.
For example, if you look through my post history just in the last day or so, you would know that Rufus Foreman owns a killer cis-gendered cat named Mr. Tiddlesworth, that Rufus Foreman is a Warren Buffett fan boy, and that when thinking of a generic search term to use as an example, the first thing to come to Rufus Foreman's mind is "cows".
Now imagine what sort of dark patterns an unscrupulous corporation like say, Hooli, could implement in order to target me with advertising tailored to my preferences!
If you tell the bartender your life story, they don't owe you free drinks (and they might as well sell a screenplay)
While it's true that they sell the data they collect, you can choose to not share such data and still receive the free services. "Bromite" is a fork of Chromium, for example.
If you spend time in their store and cause loss and order a bunch of free waters, do the Terms of Service even apply to you? What can they even do? What can LinkedIn do about scraping and resale of every public profile page?
Give me some free privacy on my free dsl line. (Note that ISPs can sell the entirety of a customer's internet PCAPs, for example, due to Pai's FCC rescinding a Wheeler FCC privacy rule https://www.theverge.com/2017/3/31/15138526/isp-privacy-bill... "Trump signs repeal of U.S. broadband privacy rules" (2017) https://www.reuters.com/article/us-usa-internet-trump/trump-... )
Bromite is not a Google service. It's a false premise that anything open source from a corporation is a free service and makes their anti-privacy stance good. That's like saying a criminal is a good guy because he did 50 hours of charity work after murdering 2 people.
You can use the Chromium source code that Google contributes to, to browse the internet with and without ads and trackers that use obvious domain names: Microsoft Edge, Opera, Vivaldi, Bromite, ungoogled-chromium, Brave, Chrome.
You choose whether to shop at Google.
Google buying the default search engine position in browsers does not prevent users from changing the - possibly OpenSearch - browser search engine to DuckDuckGo or Ecosia.
You can force an address bar entry to a.tld/search=?${query} search w/:
Ctrl-L
?${query}
?how to change the default search engine
?how to block ads & trackers in {browser name}
?how to provide free search queries on a free search engine and have positive revenue after years of debt obligations to fairly build market share
You can choose to take their free services and search elsewhere, eh?
Why would they now get out of paying for Firefox development using a revenue model, too?
(Competitors can and do use e.g. google/bazel the open source clone of google/blaze, which is what Chromium builds were built with before gn. Here's Chromium/BUILD.bazel, for example: https://source.chromium.org/chromium/v8/v8.git/+/master:BUIL... )
Android (and /e/ and LineageOS) do allow you to install browsers other than the Chrome WebView and Chrome. Is it possible to install anything other than Safari (WebKit) on iOS devices? Maybe from another software repository like F-Droid? Hopefully with current downstream releases, signed manifests, and SafetyNet scanning of uploaded apps.
> You haven't signed any agreement with Google for search services.
Literally on absolutely every google search page: https://policies.google.com/terms
No one reads terms and conditions, yes?
Terms of Service: https://en.wikipedia.org/wiki/Terms_of_service
The Statute of Frauds applies to agreements regarding amounts over $500. Is this a conscionable agreement between which identified parties? What satisfies chain-of-custody requirements for criminal or civil admissibility if the data is from not a trustless system but a centralized, trustful system?
"Victory! Ruling in hiQ v. Linkedin Protects Scraping of Public Data" (2019) https://www.eff.org/deeplinks/2019/09/victory-ruling-hiq-v-l...
And then the interplay between a "Right to be Forgotten" and the community legal obligation to retain for lawful investigative law enforcement purposes. They don't know what they want: easy investigations, compromisable investigations, privacy
Ask HN: Best empirical papers on software development?
There are some good empirical papers, but I only know very few. What is your best empirical paper on software development?
From https://en.wikipedia.org/wiki/Experimental_software_engineer... :
> Experimental software engineering involves running experiments on the processes and procedures involved in the creation of software systems, with the intent that the data be used as the basis of theories about the processes involved in software engineering (theory backed by data is a fundamental tenet of the scientific method). A number of research groups primarily use empirical and experimental techniques.
> The term empirical software engineering emphasizes the use of empirical studies of all kinds to accumulate knowledge. Methods used include experiments, case studies, surveys, and using whatever data is available.
(CS) Papers We Love > https://github.com/papers-we-love/papers-we-love#other-good-... :
- "Systematic Review in Software Engineering" (2005)
-- "The Developed Template for Systematic Reviews in Software Engineering"
- "Happiness and the productivity of software engineers" (2019)
DevTech Research Group (Kibo, Scratch Jr,) > Publications https://sites.bc.edu/devtech/publications/
> Empirical Research, instruments: https://sites.bc.edu/devtech/about-devtech/empirical-researc...
"SafeScrum: Agile Development of Safety-Critical Software" (2018) > A Summary of Research https://scholar.google.com/scholar?cites=9208467786713301421... (Gscholar features: cited by, Related Articles) https://link.springer.com/chapter/10.1007/978-3-319-99334-8_...
Re: Safety-Critical systems, awesome-safety-critical, and Formal Verification as the ultimate empirical study: https://news.ycombinator.com/item?id=28709239
Why public chats are better than direct messages
As I read this, I got the sinking feeling that I'd read it all before. But then I realised, it's just another case of someone thinking that their specific solution is best, solely on the basis that it worked for them, in their specific circumstances.
But here's the thing: every group is different, has different needs, and will respond differently to different styles of communication. Some people (especially people not using their first language) can find working in public stressful; some people get very stressed by the intensity of 1:1s. It takes all types, and as a manager, understanding how to get the best out of everyone is part of the job.
There is no single, correct way, and we know this because, if there was, then we wouldn't keep hearing about these interminable "solutions".
Yes, but the question remains: which is best for which situation?
Presuming that information asymmetry will hold over time is a bad assumption, regardless of cost of information security controls.
Why have these new collaborative innovative services succeeded where NNTP and > > indented, text-wrapped email forwards for new onboards have not?
Instead of Chat or IM, hopefully working on Issues with checkbox Tasks and Edges; and Pull Requests composed of Commits, Comments, and Code Reviews; with conditional Branch modification rules; will produce Products: deliverables of value to the customer, per the schema:Organization's Mission.
What style of communication is appropriate for a team in which phase of development, regardless of communications channel?
> Why have these new collaborative innovative services succeeded where NNTP and > > indented, text-wrapped email forwards for new onboards have not?
The new tools we have at our disposal are amazing. Of course they are better. But they are just tools. They don’t solve any problems relating to interpersonal communication any more than a hammer solves building a house.
> What style of communication is appropriate for a team in which phase of development, regardless of communications channel?
It’s the job of a manager to work that out. There is no formula. It’s not even possible to write one down. That’s the point.
Well, our societies value these communication businesses as among the most valuable corporations on Earth, so I think that there's probably some value in the tools that people suffer ads on to get for free.
"Traits of good remote leaders" (2019) https://news.ycombinator.com/item?id=24432088 :
"From Comfort Zone to Performance Management" (2009) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%E2... :
> "Table 4 – Correlation of Development Phases, Coping Stages and Comfort Zone transitions and the Performance Model" in "From Comfort Zone to Performance Management" White (2008) tabularly correlates the Tuckman group development phases (Forming, Storming, Norming, Performing, Adjourning) with the Carnall coping cycle (Denial, Defense, Discarding, Adaptation, Internalization) and Comfort Zone Theory (First Performance Level, Transition Zone, Second Performance Level), and the White-Fairhurst TPR model (Transforming, Performing, Reforming). The ScholarlyArticle also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages.
Planting trees not always an effective way of binding carbon dioxide
"Hemp twice as effective at capturing carbon as trees, UK researcher says" (2021) https://hempindustrydaily.com/hemp-twice-as-effective-at-cap... :
> “Industrial hemp absorbs between 8 to 15 tonnes of CO2 per hectare (3 to 6 tonnes per acre) of cultivation.”
> Comparatively, forests capture 2 to 6 tonnes of carbon per hectare (0.8 to 2.4 tonnes per acre), depending on the region, number of years of growth, type of trees and other factors, Shah said.
> Shah, who studies engineered wood, bamboo, natural fiber composites and hemp [at Cambridge, UK], said hemp “offers an incredible scope to grow a better future” while producing fewer emissions than conventional crops and more usable fibers per hectare than forestry.
"Cities of the future may be built with algae-grown limestone" (2022) https://www.colorado.edu/today/2022/06/23/cities-future-may-... :
> And limestone isn’t the only product microalgae can create: microalgae’s lipids, proteins, sugars and carbohydrates can be used to produce biofuels, food and cosmetics, meaning these microalgae could also be a source of other, more expensive co-products—helping to offset the costs of limestone production.
Carbon sequestration: https://en.wikipedia.org/wiki/Carbon_sequestration
tonnes of carbon per hectare is not relevant.
What you need is tonnes of carbon per hectare per year.
We don't have enough space to let these plants be there; we need to convert them into coal and throw it back into the mines.
The efficient thing to do is render them into charcoal, yes.
Biochar is a great soil amendment, and doesn't oxidize over decades or even centuries, depending. Putting it back in the mines is an option, if we ever need to stop rebuilding topsoil, which is itself getting urgent.
Biochar is also intensive and only converts a relatively small amount of that carbon into stable charcoal.
Hemp is compostable, though because it's so tough, shredding it and waiting for it to compost trades (vertical) space and time for far less energy use than biocharification, unless that uses waste heat from a different process.
Bioenergy with carbon capture and storage (BECCS) > Biomass feedstocks doesn't have a pivot table of conversion efficiencies?: https://en.wikipedia.org/wiki/Bioenergy_with_carbon_capture_... :
> Biomass sources used in BECCS include agricultural residues & waste, forestry residue & waste, industrial & municipal wastes, and energy crops specifically grown for use as fuel. Current BECCS projects capture CO2 from ethanol bio-refinery plants and municipal solid waste (MSW) recycling center.
> A variety of challenges must be faced to ensure that biomass-based carbon capture is feasible and carbon neutral. Biomass stocks require availability of water and fertilizer inputs, which themselves exist at a nexus of environmental challenges in terms of resource disruption, conflict, and fertilizer runoff.
If you keep taking hemp off a field without leaving some down, you'll probably need fertilizer (see: KNF, JADAM,) and/or soil amendments to be able to rotate something else through; though it's true that hemp grows without fertilizer.
> A second major challenge is logistical: bulky biomass products require transportation to geographical features that enable sequestration. [27]
Or more local facilities
Composting oxidizes a lot of the carbon. And since you have to replant, harvest, and fertilise every year, it is a lot more intensive than forestry, where you do that every 30 (15-100) years.
Do you know of any papers investigating how kelp or other seaweeds compare? I've heard some biologists informally claim that they would be even more effective because they can grow incredibly fast.
I thought trees were somewhat ideal because the carbon sequestered in them can be used as long-lived lumber. If the kelp sequesters carbon but then it gets released immediately when it decomposes or someone eats it, then it doesn't really solve the problem.
We need to be putting carbon back into the ground where we got it, or at least converting into forms where it lives a long time on the surface (decades or centuries).
> I thought trees were somewhat ideal because the carbon sequestered in them can be used as long-lived lumber.
This is why hempcrete is ideal. But hemp, by comparison, doesn't result in a root-bound tree farm for wind break and erosion control; hemp can be left down to return nutrients to the soil or for soil remediation as it's a very absorbent plant (that draws e.g. heavy metals out of soil and into the plant)
Why hempcrete when you could use wood?
> Hemp can be left down to return nutrients to the soil or for soil remediation as it's a very absorbent plant (that draws e.g. heavy metals out of soil and into the plant)
It won't remove heavy metals from the soil if you leave it there. You also can't turn it into a product if you leave it there.
Caddyhttp: Enable HTTP/3 by Default
lucaslorentz/caddy-docker-proxy works like Traefik, in that Container metadata labels are added to the reverse proxy configuration which is reloaded upon container events, which you can listen to when you subscribe to a Docker/Podman_v3 socket (which is unfortunately not read only)
So, with Caddy or Traefik, a container label can enable HTTP/3 (QUIC (UDP port 443)) for just that container.
"Labels to Caddyfile conversion" https://github.com/lucaslorentz/caddy-docker-proxy#labels-to...
From https://news.ycombinator.com/item?id=26127879 re: containersec :
> > - [docker-socket-proxy] Creates a HAproxy container that proxies limited access to the [docker] socket
The point of the link in OP is that now in v2.6, Caddy enables HTTP/3 by default, and doesn't need to be explicitly enabled by the user.
So I'm not exactly sure the point you're trying to make. But yes, CDP is an awesome project!
That is a good point. Is there any way to disable HTTP/3 support with just config?
The (unversioned?) docs have: https://caddyserver.com/docs/modules/http#servers/experiment... :
> servers/experimental_http3: Enable experimental HTTP/3 support. Note that HTTP/3 is not a finished standard and has extremely limited client support. This field is not subject to compatibility promises
TIL caddy has Prometheus metrics support (in addition to automatic LetsEncrypt X.509 Cert renewals)
Yeah those docs are for v2.5. The experimental_http3 option is removed in v2.6 (which is currently only in beta, stable release coming soon). To configure protocols and disable HTTP/3, you'll now use the "protocols" option (inside of the "servers" global option), and it would look like "protocols h1 h2" (the default is "h1 h2 h3").
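As a Caddyfile sketch of that (assuming the v2.6 global-option syntax just described):

    {
        servers {
            protocols h1 h2
        }
    }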
Yes, the docs have been updated at https://github.com/caddyserver/website but haven't been deployed yet. There is a new protocols option:
protocols h1 h2
will disable HTTP/3 but leave HTTP/1.1 and HTTP/2 on.
For HTTP/3 support with Python clients:
- aioquic supports HTTP/3 only now https://github.com/aiortc/aioquic
- httpx is mostly requests-compatible, supports client-side caching, and HTTP/1.1 & HTTP/2, and here's the issue for HTTP/3 support: https://github.com/encode/httpx/issues/275
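A small httpx usage sketch (HTTP/2 requires the httpx[http2] extra; HTTP/3 is still the open issue linked above):

    import httpx

    # pip install 'httpx[http2]'
    with httpx.Client(http2=True) as client:
        r = client.get("https://example.com/")
        print(r.http_version)  # e.g. "HTTP/2"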
Make better decisions with fewer online meetings
Hi! I am the cofounder of TopAgree. We have created TopAgree to help teams
It’s lighting my home right now. Looks like clear skies today.
We get free EM radiation from the free nuclear fusion reaction at the center of our solar system; and all of the other creatures find that sufficient for survival.
None of the other creatures require energy to heat their homes, grow food, travel, keep the lights on, or ship cargo in a large inter-connected grid of supply that feeds billions of people.
Everything is also manufactured out of petroleum derivatives. Without it, we go back to making literally everything out of wood and metal, or not making it. 90% of the items you have contact with every day are made with some kind of petroleum derivative.
EVs are impossible to manufacture without petroleum, so this is certainly a lot more nuanced than just free energy from space...
Given the petroleum is made from creatures[0] which were entirely powered by sunlight, it should be clear that petroleum can be produced by sunlight.
[0] mostly plants, IIRC
They all require the sun to heat their homes, grow food, and travel.
And that petroleum also comes from solar energy. Some of it from dead stars.
uh huh, uh huh, and WHY do you suppose EVERYTHING involves the use of petroleum?
It's a classic capitalist evasion of externalities.
Every single drill site is a toxic waste disaster that no one ever has to pay for (except, of course, the impoverished who live downstream).
To ignore the many gigawatts of free energy beamed in from space because petroleum will still be used in some way is evading the question.
The fact remains: free fusion power is heating and lighting our homes every time we open a window shade.
And if the combined mafia influence of the construction-mafia and the petro-mafia didn't have us building houses with NO relation to how they point at the sun, we would use that energy to MUCH greater efficiency...
Have you ever been to a drill site? Tons where I live. I'd have a picnic out there today if it weren't so cold. Gotta watch for H2S, I suppose.