Did the article intentionally start with an LLM cliche to filter out all the people who hate reading obviously generated content? I would say it worked.
I have been attempting to write a lot more with AI, but it's so gimmicky. It's always spitting out lines like this: "it's not just about x – it's about y," like in this post. I find it so frustrating that no matter the prompt I throw at it, it eventually repeats itself again after some time. Good technical, succinct writing is almost impossible to iterate on with AI, for me.
I like how my eyes went over the first sentence, barely parsing it and already discarding the information, because it's obviously AI generated. It's like the circumstances we live in have added a new layer of perception to my brain to guard itself against the flood of useless information!
It isn't AI generated; it's just a plain vacuous cliche. Seriously, what is with people who think they "can always tell it's AI," when really AI is living rent-free in their head and they fixate on anything they don't like, oh so convinced it must be the AI they hate. They're exactly like Fundamentalists and the devil, or Communists who think capitalism literally, intentionally created everything as harmful as possible just to spite them.
I really hope it's intentional. The author is a smart, accomplished person. He even published books. It's sad if this kind of person thinks it's okay to just outsource their writing to AI.
Right, if there's no legal weight to any of their statements then they mean almost nothing. It's a very weak signal and just feels like marketing. All digital goods can and will be made worse over time if it benefits the company.
If I made a new, non-AI tool called "correct answer provider" which provided definitive, incorrect answers to things, you'd call it bad software. But because it is AI, we're going to blame the user for not second-guessing the answers, or for holding it wrong, i.e. bad prompting.
This feels like a case of guessing at something you could know. There are two types of allocations, each with a size and a free method. The free method is polymorphic over the allocation's type. Instead of using a tag to know absolutely which type an object is, you guess based on some other factor, in this case a size invariant which was violated. It also doesn't seem like this invariant was ever codified; otherwise, the first time a large alloc was modified to a standard size it would've blown up. It's worth asking yourself if your distinguishing factor is the best you can use, or perhaps there is a better test. Maybe in this case a tag would've been too expensive.
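To make the distinction concrete, here's a minimal sketch of the two approaches; all names (`Alloc`, `STANDARD_MAX`, the free functions) are hypothetical, since the actual code isn't shown in the article:

```python
from dataclasses import dataclass

# Hypothetical invariant: "standard" allocations are assumed to stay <= 4 KiB.
STANDARD_MAX = 4096

@dataclass
class Alloc:
    kind: str   # explicit tag: "standard" or "large"
    size: int

def free_by_guess(a: Alloc) -> str:
    # Guesses the allocation's type from the size invariant.
    # Breaks silently the moment a "large" alloc is resized below STANDARD_MAX.
    return "large_free" if a.size > STANDARD_MAX else "standard_free"

def free_by_tag(a: Alloc) -> str:
    # The tag costs an extra field per allocation, but cannot be fooled
    # by a violated size invariant.
    return "large_free" if a.kind == "large" else "standard_free"

# A large allocation that was later modified down to a standard size:
shrunk = Alloc(kind="large", size=1024)
```

With the guess, `shrunk` gets routed to the wrong free path; with the tag, it doesn't. That's the trade-off the comment describes: a byte or two per object versus an invariant you have to keep true forever.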
Do you envision the development to track Clojure as much as is possible, similar to how cljs was conceived to be Clojure in JS and not just Clojure-ish JS, or do you think you'll eventually diverge? I made a language a while ago that was like 90% Clojure but hesitated to call it that because I didn't want there to be an expectation that the same code would run as-is in both languages. Seems like from the landing page you're going for more of a drop-in replacement. Looks cool, good luck!
jank is Clojure and will track upstream Clojure development. I'm working closely with the Clojure team and other dialect devs to ensure this remains the case. I am leading a cross-dialect clojure-test-suite to help ensure parity across all dialects: https://github.com/jank-lang/clojure-test-suite We have support or work ongoing for Clojure JVM, ClojureScript, Clojure CLR, babashka, Basilisp, and jank.
With that said, jank will do some trail blazing down other paths (see my other comments here about Carp), but they will be optional modes which people can enable which are specific to jank. Clojure compatibility will remain constant.
Most people want their test suite to pass. If they upgrade Java and Mockito prints out a message that they need to enable '--some-flag' while running tests, they're just going to add that flag to Surefire in their pom. Seems like quite a small speedbump.
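For illustration, that speedbump is typically a few lines in the Surefire plugin config; this is a sketch that keeps the commenter's placeholder flag, since the real flag depends on the Mockito/JDK combination:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- extra JVM arguments passed to the forked test JVM -->
    <argLine>--some-flag</argLine>
  </configuration>
</plugin>
```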
I understand the desire to fix user pain points. There are plenty to choose from. I think the problem is that most of the UI changes don't seem to fix any particular issue I have. They are just different, and when some changes do create even more problems, there's never any configuration to disable them. You're trying to create a perfect, coherent system for everyone, absent the ability to configure it to our liking. He even mentioned how unpopular making things configurable is in the UI community.
A perfect pain point example was mentioned in the video: text selection on mobile is trash. But each app seems to have different solutions, even from the same developer. Google Messages doesn't allow any text selection at a granularity below an entire message. Some other apps have opted in to a "smart" text select, which when you select text will guess and randomly group-select adjacent words. And lastly, some apps will only ever select a single word when you double tap, which seemed to be the standard on mobile for a long time. All of this is inconsistent, and often I'll want to do something like look up a word and realize, oh, I can't select the word at all (Google Messages), or the system "smartly" selected 4 words instead, or find that it did what I want and actually just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.
> He even mentioned how unpopular making things configurable is in the UI community.
The inability to imagine that someone might have a different idea about what's useful is a general plague of the UI/UX industry. And there seems to be zero care given to usage by users who have to use the app for longer than 30 seconds a day. The productivity-vs-learning-time curve is basically flat, and low, the exception being pretty much "the tools made by X for X," like programming IDEs.
Back in the 90s, you had a setting for everything! It was glorious. This trend of deliberately not making things configurable is the worst, and we can’t seem to escape it because artists are in charge of the UI rather than human interaction professionals.
App designers need to understand that their opinions on how the app should look and work are just that: opinions. Opinions they should keep to themselves.
It does make quality assurance an absolute nightmare. I would know; our application is like this to the nth degree: config on top of config on top of setting on top of options.
But if you also want your product to be productive for a wide array of use cases, it's necessary. You need to think about your market.
Which is why you should think about how these options interact and compose at the start, as opposed to only adding options in an ad-hoc manner (whether you do it willy-nilly or only when your arm is really twisted).
"You mean we shouldn't use 10 layers of abstraction and 274 libraries to achieve our goal? I mean, we use a lot of resources, but look how polished the UI is: everything is flat."
Thank god RAM prices have risen. Maybe some people will start to program with their heads instead of their (AI) IDE.
I rarely need to configure something on my PCs, but rarely is not never, and when I do really need an option, it better be there. There's a gradient between unmaintainable multidimensional matrices of options and "one size ought to fit everyone" and both ends of it make the user miserable.
I think when it comes to config too people really underestimate its power.
On desktop, I often see people waste inordinate amounts of time on workflows that don't suit their use case. Little do they know - there's a config for that!
For example, I'll see people holding Outlook like it's radioactive. They'll do the same busybody work of manually pruning their inbox, sorting stuff, and deleting stuff. The config can really help them there, but I think they either don't know its capabilities or are scared of it.
Most people also don't care about the mothers of programmers. Until, you know, they have to send an SMS using exactly one particular SIM of the two present in the phone, and the 20-year-old app will not let them.
> ...that it did what I want and actually just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.
This is the trouble. It's been decades of the OS becoming less and less relevant. Apps have more power, more will to build their own thing.
And there's less and less personal computing left. There are the design challenges, the UX being totally different. But the OS used to be a common substrate that the user could use to do things. And the OS has just vanished, vanished, vanished, receded into the sea, leaving these apps to totally dominate the experience, apps that are so often little more than thin clients to some far-off cloud system, to basically some corporation's mainframe.
The OS's relevance keeps shrinking, and it's awful for users. Why bother making new UX for the desktop, if the capabilities budget is still entirely on the side of the app? What actually needs to change isn't the UX of the desktop or other OS paradigms (mobile); it's a fundamental shift toward taking power out of the mainframe and having a personal computer that's worth a damn, that again has more than a quantum of capability imbued in it that it can deliver to the user.
(My actual hope is that someday the web can do some of this, because apps have nearly always been a horrible thing for users: they give no agency, no control, and are pre-baked to be only what is delivered to the user.)
Text selection used to be frustrating on mobile for me too until Google fixed it with OCR. I get to just hold a button briefly and then can immediately select an area of the screen to scan text from, with a consistent UX. Like a screenshot but for text.
It's possible to use the Gemini "ask me about this screen" feature to OCR the selected area of the screenshot. I guess that might be more efficient in some contexts than trying to use the native text select.
This is such an indictment of modern technology. No offense is meant to you for doing what works for you, but it is buck wild that this is the "fix" they've come up with.
As somebody learning about this for the first time it sounds equivalent to a world where screenshotting became really hard so people started taking photos of their screen so they could screenshot the photo.
How could such a fundamental aspect of using a computer become so ridiculous? It's like satire.
Unfortunately, some apps don't support text selection and on some websites the text selection is unpredictable.
I'd actually compare screen OCR to screenshots. Instead of every app and every website implementing their own screenshot functionality, the system provides one for you.
Same goes for text selection. Instead of every context having to agree on tagging the text and directions, your phone has a quick way of letting you scan the screen for text.
To be fair, I still use the "hold the text to select it" approach when I want to continue with the "select all" action and have some confidence that it's going to do what I want.
> some apps don't support text selection and on some websites the text selection is unpredictable.
That correctly identifies the problem. Now why is that, and how can we fix it?
It seems fixable; native GUI apps have COM bindings that can fairly reliably produce the text present in certain controls in the vast majority of cases. Web apps (and "desktop" apps that are actually web apps) have accessibility attributes and at least nominally the notion of separating document data from presentation. Now why do so few applications support text extraction via those channels? If the answer is "it's hard/easier not to", how can we make the right way easier than the wrong way?
Doesn't have to be: BlackBerry BB10 had damn near solved it. I think they had some patents on it, but those should have expired, and I noticed some corresponding changes in Android. But it's still far from being as good as BB10. What BB10 had was a kind of combined cursor and magnifying glass that controlled really well, plus the ability to tap the thing left or right to move one letter at a time.
It looks like the thing that I remembered appears at 2:06 and later. I also tried to find a video example when I wrote my post and didn't find anything. Seems like very few people get excited about text selection.
Universal search on Google Pixels has solved a lot of the text selection problems on Android for me, with the exception being selecting text which requires scrolling.