I've been following this guy for a while on X. He does live this way; this isn't a hypothetical. He lives off his writing and has plenty of free time to chop all the wood he'd ever need.
Since the property has a well on-site, water is free. As for heat, one could either pay a little extra for electricity or have the Amish cheaply deliver scrap wood from their sawmills to burn in a wood stove.
Maybe "a little bit of electricity" or "very cheap scrap wood" appear to be the vague plans for how to handle heat.
Apparently electricity is 4¢/kWh there, so it really is only a “little bit extra” even if you’re heating with resistance heat (at least for a 600 sq ft house).
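Rough numbers, with the heat load being my own guess: if a well-insulated 600 sq ft house averages 1.5 kW of resistance heat over a winter month, that's 1.5 kW × 24 h × 30 days ≈ 1,080 kWh, or about $43/month at 4¢/kWh.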
I actually did this at one point in time.
The hardware you're looking for is a "laser galvo" setup. It typically takes a ±12 V range analog signal as input. I ended up building my own hardware to drive this: you can use a D-to-A converter and an op-amp to take a digital output from a Raspberry Pi or Arduino and get the correct signal level. I ran mine on an Arduino and it was plenty fast for simple things; complex animations that require more compute on the device might be a bit much.
Enabling and disabling the laser is as simple as a transistor.
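For the curious, below is a minimal Arduino-style sketch of the signal chain described above. The DAC part (an MCP4922), the pin assignments, and the test pattern are my assumptions, not the original build, and you would still need an op-amp stage (not shown) to turn the DAC's unipolar output into the bipolar signal the galvo amp expects.

  // Hypothetical wiring: MCP4922 dual 12-bit SPI DAC, bit-banged;
  // one DAC channel per galvo axis, a transistor on LASER_PIN gates the beam.
  const int DAC_CS = 10, DAC_SCK = 13, DAC_SDI = 11, LASER_PIN = 9;

  void dacWrite(int channelB, unsigned value) {
    // MCP4922 command word: channel select | 1x gain | output enable | 12-bit value
    unsigned cmd = (channelB ? 0x8000u : 0u) | 0x3000u | (value & 0x0FFFu);
    digitalWrite(DAC_CS, LOW);
    shiftOut(DAC_SDI, DAC_SCK, MSBFIRST, cmd >> 8);   // high byte first
    shiftOut(DAC_SDI, DAC_SCK, MSBFIRST, cmd & 0xFF); // then low byte
    digitalWrite(DAC_CS, HIGH);                       // latch the sample
  }

  void setup() {
    pinMode(DAC_CS, OUTPUT);
    pinMode(DAC_SCK, OUTPUT);
    pinMode(DAC_SDI, OUTPUT);
    pinMode(LASER_PIN, OUTPUT);
    digitalWrite(DAC_CS, HIGH);
  }

  void loop() {
    digitalWrite(LASER_PIN, HIGH);   // beam on while drawing
    for (unsigned i = 0; i < 4096; i += 8) {
      dacWrite(0, i);                // X axis
      dacWrite(1, i);                // Y axis -> diagonal test line
    }
    digitalWrite(LASER_PIN, LOW);    // blank for the flyback
  }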
The problem with this is that it's fundamentally incompatible with zoning.
I own a chunk of acreage in farmland adjacent to my metropolitan area. The township won't allow it to be developed further due to density restrictions; I have my one house on it, that's all I get, and I'm happy with that. But this tax would be implemented at the state level. The state would say, "You have a lot of valuable land here right next to the city. We're going to tax you wildly on land you can't develop."
Furthermore, this tax would just increase speculative churn and encourage landowners to break up land that would otherwise go on to become parks and public lands into small parcels for the densest and most valuable uses.
The whole problem is that what is valuable is not necessarily what is good for society, and this does nothing to address that. I'm always struck by how weirdly free-market / laissez-faire land value taxes are, given who typically pushes them.
You're disregarding the fact that landowners are already incentivized to devote land to its highest and best use, as measured by how much they can gain from it. The benefit derived from a local amenity, such as a park, is currently reflected in surrounding land values; LVT would harness this value for the community that created it in the first place, rather than leave it entirely to private landowners.
I probably could have been clearer with my point.
A park is never the highest and best use based on gain. An LVT would make it _harder_ to create things that may be valuable in ways not measured by the market.
This is probably fantastic from a maintainability perspective, but I'm curious if some performance is left on the table by using LLVM IR instead of compiling directly to machine code. I know there are a number of optimizations that can be made for Fortran that can't be made for C-like languages and I wonder if some of those C-like assumptions are implicitly encoded in the IR.
We designed LFortran to first "raise" the AST (Abstract Syntax Tree) to ASR (Abstract Semantic Representation). The ASR keeps all the semantics of the original code, but is otherwise as abstract/simple as possible. By definition that allows us to do any optimization possible as an ASR->ASR optimization pass; we do some already and will do many more in the future. This covers all the optimizations where you need to know high-level information about Fortran. Then, once we can't do any more optimizations, we lower to LLVM. If in the future it turns out we need some representation between ASR and LLVM, such as MLIR, we can add it.
We also have direct ASR->WASM and WASM->x64 machine-code backends, and even a direct ASR->machine-code backend, but the ASR->LLVM backend is the most advanced, followed by our ASR->C backend and then our ASR->WASM backend.
How does LLVM cope with the array semantics? I was under the impression that the noalias attribute in the IR was not activated in such a way as to enable the optimizations that make Fortran so fast.
My experience with LLVM so far has been that it is possible to get maximum speed as long as we generate correct and clean LLVM IR and do many of the high-level optimizations ourselves.
If LLVM has any downsides, they are that it is hard to run in the browser, so we don't use it for https://dev.lfortran.org/, and that it is slow to compile (both building LLVM itself, and the fact that it makes LFortran slow to compile, compared to our direct WASM/x64 backends). But when it comes to runtime performance of the generated code, LLVM seems very good.
Rust drove the fixes needed in LLVM to support noalias. They went through a couple of reverts before seemingly fixing everything. If LFortran emits noalias, LLVM can probably handle it now.
WASM -> x64 as in a full WASM AOT compiler? Don't those already exist? What's the benefit to making one specifically for LFortran? Unless I'm misunderstanding
Yes, we could make the WASM->x64 backend standalone. The main motivation is speed of compilation: we do not do any optimizations, but we want to generate the x64 binary as quickly as possible, the idea being that it would be used in Debug mode, for development. For Release mode you would use LLVM, which is slow to compile but gives good runtime performance. And since we already have the ASR->WASM backend (used for example at https://dev.lfortran.org/), maintaining WASM->x64 is much simpler than ASR->x64 directly.
I think the question was why maintain your own WASM->x64 at all though? Couldn't you just run an existing WASM->x64 tool (using -O0 or equivalent if the goal is quick compilation for debug mode)?
As someone who's only played with Fortran, and never done anything too serious with it, can you explain an optimization that can be done in Fortran that can't be done in a C-like language?
I'm not being argumentative, I'm actually really curious.
A simple example is returning an allocatable array from a function, where the Fortran compiler can decide to allocate on the stack instead, or even inline the function and eliminate the allocation completely. In C, the compiler would need to understand the semantics of an allocatable array. If you use a raw C pointer and malloc, and use Clang, my understanding is that Clang translates quite directly to LLVM, and LLVM is too low-level to optimize this out, depending on the details of how you call malloc.
Of course, you can rewrite your C code by hand to generate the same LLVM code from Clang as LFortran generates for the Fortran code. So in principle anything can be done in C, just as anything can be done in assembly or machine code. But the advantage of Fortran is that it is higher level: it allows you to write array code in a high-level way without doing many special things as a programmer, and the compiler can then highly optimize it. In C you very often have to do some of these optimizations by hand.
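To make the pattern concrete, here is a C rendering of it (the function names are mine, purely for illustration): the callee heap-allocates its result, so removing the allocation requires the optimizer to reason about malloc/free, whereas Fortran's allocatable semantics hand the compiler that freedom directly.

  #include <stdlib.h>

  /* Callee returns a heap-allocated array; the caller owns it. */
  double *make_squares(int n) {
    double *a = malloc(n * sizeof *a);
    for (int i = 0; i < n; i++)
      a[i] = (double)i * i;
    return a;
  }

  double sum_squares(int n) {
    double *a = make_squares(n);
    double s = 0.0;
    for (int i = 0; i < n; i++)
      s += a[i];
    free(a);
    /* The Fortran equivalent, with an allocatable function result, leaves
       the compiler free to put the temporary on the stack or inline
       make_squares away entirely. */
    return s;
  }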
Fortran has true multidimensional arrays in a way that C doesn't. If you know an array is 5x3, you know that A(6, 1) doesn't refer to a valid element, whereas the equivalent out-of-range index in C can still land on valid memory inside the array. This turns out to make a lot of loop optimizations easier. (Also, being Fortran, you tend to pass arrays around with size information anyway, which C doesn't do, since in C you typically just get pointers.)
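A quick C sketch of the layout point (array shape from the comment above, function name mine):

  /* A C "2-D" array is flat storage, so an out-of-range column index can
     still land on a valid element of the next row, and the optimizer has
     to allow for that when reordering loops. */
  double a[5][3];

  double get(int i, int j) {
    return a[i][j];   /* a[1][3] is out of range, yet it aliases a[2][0] */
  }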
The nice thing about Fortran is that it does the sensible thing by default for the type of scientific computing codes that are inside its wheelhouse (the trivial example: it assumes arguments don’t alias by default).
C can beat anything, assuming unlimited effort. Fortran is nice for scientists who want to write pretty good code. Or grad students who are working on dissertations in something other than hand-tuning kernels.
C can be as good as Fortran if you make sure to declare pointers "restrict". That is a C feature added in C99, though; in Fortran it has always been the default.
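For reference, a minimal C99 sketch of what that looks like (the function is my example, not from the thread):

  /* restrict promises the compiler that x and y never overlap (the same
     guarantee Fortran dummy arguments carry by default), so the loop can
     be vectorized without runtime overlap checks. */
  void axpy(int n, double alpha, const double *restrict x, double *restrict y) {
    for (int i = 0; i < n; i++)
      y[i] += alpha * x[i];
  }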
For a long time Fortran was actually unbeatable and C did not suffice to specify all possible uses of assembly...
This is the correct answer. They almost entirely compile to the same machine code for the computationally intensive parts. (Even Julia does that these days.) But the limitations of Fortran prevent a lot of difficult-to-debug C bugs, while not affecting typical scientific and numerical capability.
The most significant distinction is that dummy arguments in Fortran can generally be assumed by an optimizer to be free of aliasing, when it matters. Modifications to one dummy argument can't change values read from another, or from global data. So a loop like
  subroutine foo(a, b, n)
    integer n
    real a(n), b(n)
    do j = 1, n
      a(j) = 2 * b(j)
    end do
  end
can be vectorized with no concern about what might happen if the `b` array shares any memory with the `a` array. The burden is on the programmer to not associate these dummy arguments on a call with data that violate this requirement.
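To see what's at stake, here is a tiny C sketch (values and names mine) of what happens when the two dummy arguments do alias:

  #include <stdio.h>

  int main(void) {
    /* the loop body a(j) = 2 * b(j), with b trailing a by one element */
    double x[4] = {1.0, 1.0, 1.0, 1.0};
    for (int j = 0; j < 3; j++)
      x[j + 1] = 2.0 * x[j];
    /* sequential execution gives {1, 2, 4, 8}: each store feeds the next
       load; a vectorized version that loaded x[0..2] up front would give
       {1, 2, 2, 2}, which is why the compiler must be allowed to assume
       the two dummy arguments never overlap */
    printf("%g %g %g %g\n", x[0], x[1], x[2], x[3]);
    return 0;
  }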
(This freedom from aliasing doesn't extend to Fortran's POINTER feature, nor does it apply to the ASSOCIATE construct, some compilers notwithstanding.)
Fortran's standard doesn't use the term Undefined Behavior; instead, it states a requirement that an object modified via one dummy argument must not be modified or referenced via any other name. When a program violates that requirement, it's no longer conforming to the standard, and the language doesn't define what happens afterwards. In practice, you'll get bad results and a bad day of debugging.
> When a program violates that requirement, it's no longer conforming to the standard, and the language doesn't define what happens afterwards. In practice, you'll get bad results and a bad day of debugging.
Rewind your disappointed expectations back to Fortran II in the 1950s, the first Fortran with subprograms. The value proposition was this: if a programmer was willing to help the Fortran optimizer out by avoiding aliasing, the compiler would generate code that was usually fast enough to avoid the need to write in assembly. It was a great deal for the programmer at the time. Fortran succeeded not just because it allowed one to write more productively in a "high-level" language but because its compilers could optimize well enough that there was no significant performance cost to doing so. (And then its popularity allowed one to port Fortran code from one vendor's hardware to another.)
Suppose A and B are fixed-length arrays. Then
C = A + B - A
is easily optimized in Fortran: to a no-op if C is not read afterwards, and to a copy if it is.
In C this is pretty much impossible.
To be fair, there are C-like languages (ispc, GLSL) which make this work with heroic compiler effort.
LFortran doesn't necessarily compile through LLVM IR. It builds up an ASR [1] structure that is already used in the LFortran-specific backends, so potentially it can make full use of Fortran semantics!
As an indirect answer, consider Julia, which is based on LLVM and seems to be competitive with Fortran on large scale, numerically intensive calculations.
LFortran can translate your Fortran code to Julia via our Julia backend. Once Julia can compile to a binary, it will be exciting to do some comparisons, like speed of compilation and performance of the generated binary, as well as the quality of the Julia code that we generate; we'll be happy to improve it to produce canonical Julia code, if at all possible.
The author is in the UK. They noted that they were on a wait-list just to get tested and likely wouldn't be seen for at least a year, so they decided to take it on themselves.
The advantage of single-character concatenated symbols is that they allow your brain to group them rather than forcing you to read them separately. This means you almost unconsciously build up a set of instantly recognizable "chunks" of symbols that represent concepts you're familiar with. Each of these chunks can represent the equivalent of a huge amount of code in another language.
A lot of the benefit of therapy likely comes from the social aspect of feeling understood and seen by another _person_. This can't come from an AI as long as those treated know it's an AI.
Maybe. I've used ChatGPT for "therapy" a few times and I partly agree with you. A couple of times I had a terrible day and found myself even more frustrated after trying to vent to the AI because all it did was offer ideas and lists and elementary explanations of things I already knew.
Other times, it helped me to explore relationships in my life. In that way it worked a little bit like free-writing, but as an interactive / guided exercise. It turns out that a little objectivity and reflection was all I needed.
Is it a substitute for a human? No. Then again, are you volunteering to listen to me complain? I don't want to abuse my limited friendships and I'm not going to pay someone.
My frustration was with the limitations of the AI, not the fact that it was a bot. If it remembered my past conversations, built a profile of my personality and preferences, and wasn't such a stick in the mud regarding policy then I suspect the companionship would be socially rewarding, at least to me.
Personally, the return on the air fryer for me is a direct result of it being smaller. For most things my larger kitchen oven needs time to preheat, an additional step that isn't required when I just want to roast some vegetables / meat or quickly heat some frozen potato products in the air fryer. The time and extra step saved are well worth the cost for me ($20 off FB Marketplace).