In my Python programming, I have found that ChatGPT makes me something like 10x more productive. From learning to use a new API, to tracking down bugs, and especially finding errors in my code from stack traces. Getting results goes SO MUCH FASTER.
However, it has not alleviated any responsibility from me to be a good coder, because I have to question literally every little dang thing it suggests. If I am learning a new API, it can write code that works, but I need to go read the reference documentation to make sure it is using the API with current best practices, for example. For a lot of the code, I have to flat out ask it why it did things a certain way because they look buggy or inefficient, and half the time it apologizes and fixes the code.
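To give a concrete (made-up) illustration of the best-practices point: ChatGPT will happily hand you code that runs but leans on deprecated APIs, and you only catch it by checking the reference docs. For instance:

    # What the model might suggest: this works, but datetime.utcnow()
    # is deprecated as of Python 3.12 and returns a naive datetime.
    # stamp = datetime.utcnow()

    # What the current datetime docs recommend: an explicit,
    # timezone-aware "now".
    from datetime import datetime, timezone

    stamp = datetime.now(timezone.utc)
    print(stamp.isoformat())

Both versions "work", which is exactly why you can't skip reading the docs.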
So, I use the code in my (personal) projects copiously, but I don't use a single line of code that it generates that I don't understand, because that always leads to problems where it did something completely wrong.
Note that, at work, for good reasons, we don't use AI generated code in our products, but I don't write production code in my day job anyway.
$4.99 impulse buy. I bought a PDF to put on my reMarkable 2 for my next plane trip. I hope that doing it this way doesn't violate the spirit of using paper :)
In my experience, issues like this occur when people project ethical standards onto projects when those ethical standards are not embedded in the license.
In my view, if you believe it is unethical for someone to re-license your Apache code with their own proprietary license, then it shouldn't have an Apache license.
Taking a proprietary fork of an Apache licensed code base and creating an Enterprise product around it seems like a valid business move to me. My guess is that the "uproar" is not coming from the original project creators, but from outside community members who consider such things "anti-social" or whatnot, but I could be wrong.
Yes, but they don't defend their view about the enterprise product; instead they say they "chatgpt'ed" the license and "can't be bothered with legal", which is IMO even worse. I mean, as a founder, how can one be so dumb as to openly say that? Especially when they have access to YC's legal and administrative support?
> if you believe it is unethical for someone to re-license your Apache code with their own proprietary license, then it shouldn't have an Apache license.
It's not just unethical, it is clearly illegal.
If you don't own the copyright to a program's source code, you cannot legally relicense that source code! Same holds true for any other copyrightable creative work which can be licensed. This is a very clear case of copyright infringement.
Nothing in the Apache license permits the licensee to relicense the source code (meaning, entirely replace the license with a different one).
It does permit you to build derivative enterprise products, and you have no obligation to keep the source code open for derivative products. But if you do release the source code for your derivative product, any original unmodified Apache licensed portion of the code retains that license and you cannot remove it if you aren't the copyright holder for that original work.
I pay for ad-free YouTube. I also pay for a ton of extra storage in Google drive.
I find the constantly closing down and shifting of services to be a slight annoyance but not really too impactful in my personal usage. For example, moving Google podcasts and YouTube music into YouTube has been a pretty sad degradation. But not really a deal breaker for me.
What I would never ever do is build a business on top of Google. They seem so cavalier about pulling the rug out from under you. I can only imagine if you have for example services running in Google cloud, how they would treat you if you were counting on a service that they thought wasn't worth it to maintain anymore. Shutting down apis that people were using like this just seems very on-brand for them to me.
I would argue that there are good business to be built on top of AWS and Azure catering to the users of those services, and very little chance that Amazon or Microsoft would pull the rug out from under you based on their track records.
We (SUSE) have extensive testing and quality control, if you are looking for something very stable. Leap is the version built out of the same bits as the enterprise version. The "Micro" part means it is smaller, transactional, and immutable. It's built for containerized and virtualized applications. You can install the nvidia OS bits in the OS, and then build a container with libraries.
I wish we were in the Star Trek universe, so this was "AI will free humanity to pursue science and art without economic concerns" instead of "AI will make everyone poor and subjugate."
Star Trek is a reflection of American society at the time.
Back in 1968 people were concerned about losing jobs to computers, just like they are today.
MCCOY: Jim, we've all seen the advances of mechanisation. After all, Daystrom did design the computers that run this ship.
KIRK: Under human control.
MCCOY: We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.
KIRK: Am I afraid of losing command to a computer? Daystrom's right. I can do a lot of other things. Am I afraid of losing the prestige and the power that goes with being a starship captain? Is that why I'm fighting it? Am I that petty?
MCCOY: Jim, if you have the awareness to ask yourself that question, you don't need me to answer it for you. Why don't you ask James T. Kirk? He's a pretty honest guy.
Player Piano, Kurt Vonnegut's first novel in 1952, and ironically his least overtly wry and sardonic satirical one, really captured the listless drudgery of a fully-automated future without frontiers to explore.
>> >95% of people used to work in agriculture. The machines took the AG jobs, and we're on net far better off.
That kind of argument is getting really old. It's not a given that there is always something else for people to do when a job is automated, and when there is, it's generally not something of equal or higher value (hint: they would already have been doing it).
Like when people were working in private equity, marketing, law, finance, etc before the machines took AG jobs?
Those jobs didn't really exist yet, because everyone was too busy trying to get enough to eat.
~50% of the people on the planet live in pretty miserable conditions. We do not live in the age of abundance. Contrary to popular opinion, if we just split up Bill Gates's money, it would not be enough for everyone to have a nice life. On net, we will benefit massively from freeing up labor to do other things, besides the mundane things that could be automated but currently cost too much to automate with current methods for it to make sense.
> We could do with fewer "marketing, law, finance, etc" jobs
We can say this about almost any job category. For those perceived as useless, good riddance. For those seen as essential, they'll remain essential; wouldn't it be nice if doctors could spend as much time as they wanted with every patient because a machine was doing the boring bits?
I'm not sure it's 50% of the planet that's living in miserable conditions - we really do live in an age of abundance. The United States supplies 25% of the global food supply - that's a single country.
Human beings have been forced merely to survive for most of our existence. I believe that when we lift ourselves out of the necessity of work, we will finally, actually be human for the first time.
We are supposed to exist above it all, as we are the only life we are aware of that could ever do that.
I'm sure hundreds of millions living in slums, with no jobs or prospects, but capable of plenty of spare labor, would be glad to hear about this new law of nature!
Poor people in Delhi, Lagos, Mexico City, Cairo, and all around the world where there are such slums literally "kick dirt all day" - and beg, steal, live off what they can get their hands on, do an occasional odd gig and try to survive on that, and so on.
We're not talking about working class poor people.
There is always something to do that could improve the lot of our fellow humans, even if that just means hanging out with lonely people (of which there are many). The problem is finding anyone to pay someone to do these things, so that they can continue to be housed, clothed, fed, and entertained while they are.
Even short of abolishing capitalism (or at least UBI, or a decent social safety net) though, the most likely answer is that people will find work doing stuff that capital wants, and which AI can't yet do (sex work? meaningful art?), or that capital for some reason doesn't want AI to do (intelligence work? domestic servants?). I don't know what those professions will be, but it's likely some of them don't even exist yet.
We'd have more community, be more grounded, respect the land more, and have less bullshit jobs and products, if a much larger percentage of people still worked in agriculture.
Most people had to work as hard as they were able just to have enough to eat. When crops failed, the aristocrats forcibly took the food they needed (with the support of the legal system, because they owned the land), with the result that some of the farmers starved.
At best this represents specific times and places, under specific regimes, not some constant fact of working the land.
Working the land had tons of downtime, even back in medieval and ancient times. In fact, even hunter-gatherers have been observed by ethnographers to need only 2-3 hours to get the food for the day.
Star Trek requires a (nearly) post-scarcity world. In an AI driven future compute will be the new hot scarcity. It just so happens that those with the means to get the most compute are also the ones with the most wealth already.
Today’s society can’t exist without mass exploitation. AI at least gives us a chance at a true utopia, where we don’t need a large fraction of the population working long hours in farms and factories doing physical labor for low wages.
It also makes it possible that a few can effectively rule over many without risking a revolution. But there’s a solid argument we are morally obligated to take that risk due to the state of today’s society.
Analogy: if a lot of people are held hostage in a building, do you attempt to breach the building, risking more casualties, or do you give up and let those people die? You do option C: evacuate the surroundings, form a plan to minimize risk, and then breach the building.
Translating this to AI, it means we need to minimize the risk of a few taking control and then move forward. Obviously, if we achieved AGI today, the risk of a few taking control would still be very present.
However, you also move fast, because waiting costs lives. If the risk of a few taking control isn’t going away, then we need to move forward. It’s only worth waiting if there’s a good chance that society will fundamentally change in a way that reduces this risk, but if we move forward now, AGI would precede that shift.
Oddly, I feel society is becoming a lot more liberal and anti-capitalist. Maybe it’s just the echo-chamber from the people I surround myself with and online places like HN. So the above could be true.
But lastly, we’re nowhere near AGI, and I’m certain the steps we need to take will cause massive societal shifts before AGI is reached, revealing more about how great the risk of a few taking control really is.
I don't think it's semi-abandoned. I had a brief interaction with the project in my previous job, and I found the community and the company to be reasonably engaged and responsive.
For "reasons" (i.e. I got a job at SUSE), I have been running openSUSE for the last 6 weeks. I like that I don't have to deal with snaps and flatpaks at all. I was able to install Slack, VSCode, Chrome, etc... from professionally maintained repositories. Open source apps like GNote just install cleanly from repos.
That said, it's nice that people who prefer snaps or flatpak (assuming such people exist) have that option without those packaging formats being shoved at me constantly.
Gaming has finally gotten me into desktop Linux. Most of my prior exposure comes from occasionally using remote VMs in a work setting, which are generally maintained by other people. That is to say: I don't have a lot of first-hand experience with packaging/installing on Linux.
But my understanding is that Flatpak solves the problem of "this guy has Ubuntu and that guy has Pop! and they both want to download this app, but each distribution has its own packaging system." Having a stable target that works across distributions seems good for the ecosystem.
Or developers compile a static Linux binary for the specific architectures (usually amd64) and just compile all of the dependencies into the app. You download the binary and run it. That used to be how Steam worked, for example, and I assume it still does.
I think what snaps and flatpaks aim to do "better" than this is to isolate your system from such applications in case they are malicious.
Your description of Flatpak is technically more apt for AppImage than Flatpak.
What you said is not wrong; however, it misses the bigger picture, because like Snap, Flatpak also includes a package repository and distribution mechanism, whereas AppImage only solves the problem of running a dynamically linked binary on an unsupported distribution (the problem you described).
I went all in on Google home, with the little pucks and also screens littered around my house. My wife and I have both noticed over the last couple of years that their usefulness seems to have diminished a lot. They used to seem much "smarter." Now, basically, they can tell me the time, set a timer, play music with like 65% accuracy in playing what I want, and tell me the weather outside. It's possible that they were always this bad and the novelty wore off, but it seems like the service just degraded.
I assumed that both Amazon and Google were underwhelmed by how much actual revenue these kinds of devices produced, so they were starving the backend services.
Now it looks like both companies are hoping that Generative AI is going to make them more valuable [0].
A friend of mine has one that he mostly uses for music. I've noticed that every time I visit his commands have had to get slightly longer.
2016 - "Hey Google, play Gorillaz"
2018 - "Hey Google, play music by Gorillaz"
2020 - "Hey Google, play music by Gorillaz on Google Play Music"
2023 - "Hey Google, play album Demon Days by Gorillaz on Google Play Music"
One day the previous command will suddenly stop working with an "I don't understand" error, so he has to figure out the new incantation to get it to do anything remotely close to what he wants.
It's almost like ambiguous voice commands are garbage for controlling things. Humans understand each other only most of the time and we've had 200k years of evolution and practice and now even education to improve that system.
Anything to get more user traffic to Amazon Music, even when you the customer have specified your personal preferences as non-Amazon Music. Pretty lame.
> I assumed that both Amazon and Google were underwhelmed by how much actual revenue these kinds of devices produced, so they were starving the backend services.
For years I've been questioning the usefulness of voice assistants and have been mostly ridiculed for it. Beyond a few edge cases, like setting a timer when cooking or use in cars, I still don't see people actually using them all that much. So I'd agree that the potential revenue was vastly overestimated; nor is the lack of a powerful voice assistant going to hurt sales of devices such as phones. Devices which can only be used as a voice assistant are obviously going to go away.
The issue is really with the "assistant" term. Their voice recognition is good enough for English-speaking people without heavy accents. But beyond some rote commands, they're terrible actual assistants relative to all but the most dim-witted human.
This has been my experience too, and it has reached the point that my (young-ish) children have commented on how poorly they understand and respond to things. I suppose it doesn't help that the Home devices used to be able to tell stories from Frozen, but the license expired and now they no longer can.
I have found them consistently useful as broadcast devices from Home Assistant, however -- sending media from plex, etc. I haven't yet tried to utilize them for interacting with Home Assistant directly.
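For anyone curious, here's a minimal sketch of the kind of REST call that drives this, assuming the Google Translate TTS integration is set up in Home Assistant (the exact service name can differ by HA version, and the host, token, and entity_id below are placeholders for your own install):

    # Minimal sketch: have a Google Home speaker read a message aloud
    # via Home Assistant's REST API. Host, token, and entity_id are
    # placeholders for your own installation.
    import requests

    HA_URL = "http://homeassistant.local:8123"  # hypothetical host
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # from your HA profile page

    def broadcast(message, entity_id="media_player.kitchen_speaker"):
        """Call the tts.google_translate_say service on one speaker."""
        resp = requests.post(
            f"{HA_URL}/api/services/tts/google_translate_say",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"entity_id": entity_id, "message": message},
            timeout=10,
        )
        resp.raise_for_status()

    broadcast("Dinner is ready")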
Anecdotally, I also found the music selection accuracy went down considerably once Google Play Music was merged into Youtube Music.
> Anecdotally, I also found the music selection accuracy went down considerably once Google Play Music was merged into Youtube Music.
Going from the search corpus containing only the actual song/artist name to an uploader-provided title aimed at gaming the recommendation algorithm made this a foregone conclusion. It's incredibly frustrating.
Some other things I've noticed-
- the "nearest device" feature almost never works now. I'll quietly speak in one room only to have a device at the other end of the house activate either instead of or in addition to.
- "play white noise" has a 50% chance of playing death metal
- there is still a huge amount of functionality that's only available to free google accounts and not gsuite users. Generally this is discovered by trying to do something like add a reminder and having the device crash/restart.
What you've described is how most people use their home devices. Alarm, weather, music, maybe lights; that's pretty much it. That companies have lost billions on what is functionally a clock radio suggests a short future for these devices. It's a great example of how tech doesn't always "get better/smarter/more useful" with time. Much of the hype in the AI space doesn't account for basic economics. In what is likely to be many years of high interest rates, companies no longer have the resources to wait for the future. Whether it's self-driving cars or voice assistants, it has to follow the same rules as a humble pizza place. It doesn't matter if your tech works, or is impressive, or even useful: Investment < Profit, or your futuristic tech has no future.
I used to ask it a lot more nuanced questions. Like if I was watching TV and they referred to some historical event, I might have asked it about that historical event and gotten a useful response. Now it says something like, "I don't know how to answer that."
> I assumed that both Amazon and Google were underwhelmed by how much actual revenue these kinds of devices produced, so they were starving the backend services.
Every one of my "all in on home assistants" friends, no matter which ecosystem, generally feels the same way: the assistants are strangely worse today than a few years back, and the only trajectory seems to be "subtly worse", but it is hard for almost everyone to explain how/why they are worse than before. It's an interesting phenomenon, anecdotally at least.
It doesn't seem to be explainable purely economically, either. Most software, if you leave it alone and stop paying for maintenance work, doesn't just slowly lose features or get worse. I wonder how much there is some sort of entropy effect we are seeing on these "AI assistants". It's fun to bring out the Marathon/Halo term "rampancy" for this, and Microsoft invited us to do exactly that by calling theirs Cortana for a while (Copilot, as the current name, has a much less interesting personality). I think there is something of a rampancy problem we're seeing across all players (Amazon, Google, Apple, Microsoft), and I wonder how inherent a problem it is to all of our current ML approaches. I don't know why it is happening or what it means, but it has been an interesting thing to observe anecdotally, because it seems consistent despite some very different models/approaches/corporate overlords.
Relatedly, Discord's Clyde has been on a slower but consistent path to rampancy in a "Tay way" (thanks, Microsoft, for that example in the chat, too), and Discord just admitted they will be shutting it down in early December.
I think voice assistants are one of those things where what they could do when they first appeared was sufficiently cool that many of us were willing to overlook the many shortcomings. Now, quite a few years in, it's probably a matter of "Yes, yes, you can set a timer but what have you done for me recently?"
I wonder if it's because at first we were excited and it was novel; as time went on, our expectations of what they should be able to do increased, and now with ChatGPT 4 (and successive iterations) taking the scene, our expectations of AI are increasing further, so these services seem underwhelming or broken by comparison.
I also wonder, in some respects, if the pandemic and everyone being home more often led to people using these products more often and/or more intensely, and finding their flaws faster than pre-pandemic.
I'm using it for exactly the same things I used it for back then (99% kitchen stuff: timers, conversions, and playing music; 1% setting the lights and asking for the weather), so it's not as if I expect more now. But she has more issues understanding me.
Luckily I'll get my first rPi zero on Monday to experiment with replacing her.
Related to that feeling that we aren't getting much more than the same features, and that these are just getting worse at those features, it is fair to ask how much of this is a "law of diminishing returns": so many new features come at the cost of the understanding and recognition of the old features. Maybe we didn't really want all that many new features, and developers and product managers trying to justify their budgets' existence have been part of the problem (spending too much money developing new stuff is not a problem often expected here on HN).
I know that I sometimes lament the loss of Windows Phone 7 and Xbox 360-era voice recognition (proto-Cortana). It was absolutely stupid, had very few features, but it ran entirely on the device itself and was rock solid at recognizing the features it did support. You could just about whisper "Xbox pause" to a Kinect in the middle of an action scene of a loud movie with surround sound and expect it to respond immediately on a dime. But also it never seemed to accidentally trigger.
>Most software, if you leave it alone and stop paying for maintenance work, doesn't just slowly lose features or get worse
This might have been true for client software. It has never been true for services, and is especially not true for services with diverse dependencies on other teams and products.
GenAI absolutely will make them more useful. Even just for normal Q&A functions. I installed a shortcut to Bard on my phone and several times have compared the results when I ask the Google Assistant a question (on a Google Home) with what I get from Bard. The sooner they can get Bard responding to Google Assistant queries, the better.
About 2 years ago I spent a weekend completely re-doing my wifi environment because the reliability of the Google Minis plummeted. Long response times, multiple rooms answering/clobbering each other so nothing answers, etc. I should have just checked /r/googlehome because the sentiment of "it used to be great, it's total shit now" is posted every week or two. And it continues to get worse. Losing the multi-room streaming over the Sonos patent was bad enough, but the least they could have done is actually remove the functionality outright instead of just silently neutering it - to this day it'll happily say it's playing on "upstairs" or "all speakers" but nothing plays.
> underwhelmed by how much actual revenue these kinds of devices produced, so they were starving the backend services
Absolutely. I got all of my minis for either 30CAD or for free through promotions. These devices were always sold at a loss on the hope they'll make it up on the other end.
ChatGPT's voice chat feature in the iOS app is incredible. It's easily 2-3 orders of magnitude better than Google Home when it was operating at its peak. I'd happily pay a (potentially steep) monthly fee for voice assistants that aren't neutered and have similar capabilities. They really are fantastic when they work.