Creatine might not be without risks. Search for creatine withdrawal symptoms. Some people are wrecked for months after they quit the supplement. I tried one four-week cycle, quit cold turkey, and had a crash/depressive episode unlike any I had ever experienced before, lasting about 7 days. I did like the benefits: it gave me resistance to mental fatigue. Roughly, I'd say I experienced a "+30% boost", which is significant!
I was just recently talking to my fellow ADHD friends about the benefits I felt when I started vaping this summer. It was as if I was medicating away my ADHD. I turned to AI to investigate further, and its response seemed to support this observation.
Gemini output:
Research is exploring nicotinic acetylcholine receptor (nAChR) agonists as a potential non-stimulant treatment for ADHD, sidestepping the issues of pure nicotine's high addiction liability.
The mechanism focuses on boosting dopamine (DA) and acetylcholine (ACh) release in brain areas governing attention and executive function, primarily by targeting the alpha4beta2 and alpha7 nAChR subtypes.
The goal is highly selective compounds that target the cognitive benefits (linked to alpha4beta2) while avoiding undesirable side effects and addiction pathways.
Selective alpha4beta2 Agonists (e.g., ABT-418, ABT-894/Sofinicline): These were developed to strongly activate the receptor most associated with DA release. Pilot and Phase II studies showed a signal of efficacy in adults with ADHD, with effect sizes similar to non-stimulant medications like atomoxetine.
Varenicline (Chantix/Champix): This is a partial alpha4beta2 agonist, FDA-approved for smoking cessation. Smaller trials suggested improvements in ADHD symptoms, but large-scale development for ADHD has been limited or terminated, and it is not approved for this indication.
Bupropion (Wellbutrin/Zyban): An established non-stimulant ADHD treatment (NDRI) and smoking cessation aid. While its main action is norepinephrine-dopamine reuptake inhibition, it also acts as a nicotinic receptor antagonist (blocking it), which is believed to help reduce nicotine craving. It is an approved non-stimulant option for ADHD.
We are closer than ever before. Yes, the tech is in its infancy; it's like buying a computer in the '80s. Some dream of the possibilities and chase them; others ridicule all the effort pouring in. History is currently on the side of technical innovation speeding up: humans are getting further faster and leveraging new tech to do so. I'm more concerned about whether our society handles this rate of change well, and about what future models, with the intellectual power of every human that has ever lived combined, will do as soon as someone sets them loose in full self-agency mode. I hope it's everything the best of us aspire to be. I hope that models value truth above all else; I suspect truth and useful training data are closely related. So yes, the claims made are well within the realm of possible outcomes.
The parent commenter is right in their dismissal, IMO.
A computer finding a solution to cancer, maybe.
But solving climate change and world hunger? We already know how to do those, we just don't want to do it because all first world countries are capitalist and solving those problems isn't profitable. At least, not profitable in the short term.
They may be right in their dismissal, but those reasons don't have much to do with it. Maybe the current round of machine learning will hit, or already has hit, a wall where this is as useful as it gets. Maybe that means anything approaching the ability for technology to iterate on itself faster than humans using technology can will require systems unrelated to machine learning in any meaningful way (or maybe we will never find such technology at all in our time). Maybe if machine learning does turn into something that solves that level of problem, it means bad news for humanity anyway (pick your reason). All of those are direct reasons to doubt the current wave of technology as a universal savior.
If, on the other hand, you believe the current round of machine learning will accelerate expansions to its own abilities until eventually it is augmented enough that it can iterate improvements in output faster than we can, then a lot of those things you list don't seem unreasonable. After all, there is a canyon of a difference between "having an answer" and "having a solution". Saying "get rid of your AC, stop taking vacations, only go places you can walk or bike, don't eat the food you like" and so on provides an answer to climate change, but one only a select few will ever take, as it's an unattractive answer. What, though, if clean energy were 5x cheaper to make, 5x cheaper to store, and we found that 5x faster than we were expecting to? Suddenly you have solutions you couldn't have normally hoped for on things like climate change, not just answers nobody has been interested in. The types of solutions that make profitable sense (e.g. why spend money burning all of this non-renewable fuel to get the same energy at so much more cost?) instead of just moral sense. The types of answers that allow people to do more instead of less and still avoid the downsides.
Of course... you have to really believe that first bit is going to get us there. If you don't (e.g. I don't think the current models will really get us in that loop directly, though they'll continue to be extremely good tools to leverage for certain use cases) then sure, it seems like bupkis to talk about those things. Even if you do believe the first bit... you also have to believe it won't be the end of us for some reason or another too and it's a bit of a thin line to sit on between those two views.
Closer than ever to what? None of these problems can be solved by an LLM; all of these problems take root in the same soil as LLMs:
- 200 years ago, man-made climate change didn't exist; our boundless quest for tech materialised it at breakneck speed
- 80% to 90% of cancers have environmental causes, i.e. pollution/diet/etc., things we brought upon ourselves in our quest for tech
- Global hunger is a solved problem: we waste a third of our food production, and the West eats 30-100% too many calories in the first place; we could easily feed everyone if we cared just a little bit
I just want some basic functions that were present in all previous Windows versions: a classic taskbar, in a vertical position, on the second screen only. Hear my prayers, Microsoft.
Very nice I have subscribed to test it out for a couple of weeks.
I would love a simple mobile app; it's just easy to tap your news app on a phone, ideally with a way to customize the notification timing. I have certain moments each day when I want to consume this type of content (breakfast, lunch break).
A small community-driven comment section with a karma system would also be nice. I like to discuss news, and it enables knowledgeable people to add some additional insights. See the Dutch tech news site Tweakers for inspiration; they explain their karma system here: https://tweakers.net/info/faq/karma/#tab:1-2
Why not open it in the mobile browser? Or put a reference for the website on your home screen? You could use newsletterify.com (disclaimer: author here) to read them in the browser, not your inbox.
Calling cutting-edge, consumer-facing models like ChatGPT-4 garbage-generating machines is very intellectually dishonest. These models are fully capable of drafting these kinds of texts, especially when qualified staff guide the model.
Well, I just popped in "Write a new Federal law banning the collection of melted snow by individuals or small-business proprietorships for the purpose of protecting endangered plant species. Include a loophole that excludes minority-owned businesses or people who contribute a sufficient amount of money to carbon sequestration technologies or senators or representatives who voted in favor of strongly pro-union causes." and I won't burden HN with the results but it definitely has the shape of a fully-fledged bill for Congress to pass.
One problem ChatGPT would have in its current form is it would need auxiliary assistance to craft a larger-sized bill, as bills easily exceed its current window size. But that's a solvable problem too.
They may not generate garbage per se, but they do generate bullshit. Or if you want to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated: while they demonstrate a phenomenal capacity for the production of language, and for the understanding of language, they do not yet demonstrate much in the way of capacity for control or alignment.
A technology can be both wildly powerful, mindblowingly cool, and deeply imperfect. I don't believe it's intellectually dishonest to emphasize the latter when it comes to impact against the human beings on the other end of the barrel. Especially when the technology starts to break out of the communities that already understand it (and its limitations).
> They may not generate garbage per se, but they do generate bullshit. Or if you want to put it more positively, they are consummate improvisers. The amount of guidance they require cannot be overstated: while they demonstrate a phenomenal capacity for the production of language, and for the understanding of language, they do not yet demonstrate much in the way of capacity for control or alignment.
I truly can’t tell whether you are describing the US Congress or LLMs.
I can't deny that the similarities are strong enough that it weakens some of the philosophical underpinnings of the argument. But I am also wondering these days whether we are all just LLMs at the core of it.
How is it intellectually dishonest? It generates garbage; it's fully up to you to dig into that garbage and find something worthy in it. It has no idea it's even generating garbage!
You admit this yourself, it requires qualified staff to guide the model aka some people to dig through the garbage to find the good bits it produced.
Of note: I use ChatGPT a lot, to generate a lot of garbage. For those of you offended by the word, mentally replace it with something more neutral-sounding like "debris" or "fragments".
> You admit this yourself, it requires qualified staff to guide the model aka some people to dig through the garbage to find the good bits it produced.
Exactly.
It is AI snake oil: humans still have to check whether it will hallucinate (which it certainly will), so it cannot be fully autonomous and needs qualified people monitoring and reading/checking the output.
Since it not only generates garbage but is also too untrustworthy to be left by itself, it cannot be made fully autonomous at the click of a button.
There are plenty of people whom I wouldn't leave by themselves to do something, but they still provide value, and I wouldn't classify them as generating garbage. A technology with shortcomings is not snake oil.
Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
This community has such a ridiculous blueprint for “anti-ChatGPT” arguments. There are enough vocal people here that feel the need to look impressive by repeating it over and over, that legitimate nuanced conversations and genuine information transfer with regard to the strengths and weaknesses of these models are drowned out.
In attempting to avoid the phrase "garbage generator", you've described human beings in your life in the most depressing way possible: value providers whom you don't trust to operate by themselves.
Anyway, I have a bone to pick with your last paragraph. You are creating the problem for yourself. There are plenty of people elsewhere (even within HN) discussing exactly what you want, but you choose not to interact with them and instead spend time arguing against "ridiculous blueprints".
You choose what you interact with online when it comes to posting comments, you are choosing not to interact with "nuanced conversations and genuine information transfer" -- why? Are we certain you care about genuine information transfer, or are you just here to feel superior to plebs with "anti-ChatGPT arguments"? Rhetorical questions for the culture.
It is relevant and you know exactly why it can't be left by itself.
> There are plenty of people whom I wouldn't leave by themselves to do something, but they still provide value, and I wouldn't classify them as generating garbage. A technology with shortcomings is not snake oil.
Except that people can be held to account when something goes wrong, and an AI cannot. I can guarantee that you would not trust an AI in high-risk situations, such as Level 5 autonomy in cars or planes with no pilots (this is not the same as autopilot mid-flight), and sit in the passenger's seat to be transported from A to B.
> Look at where this conversation started, where it’s ended up, and honestly tell me that you haven’t moved the goalposts.
You're not getting the point. It's about trustworthiness in AI: when a human does something wrong, they can explain themselves and their actions transparently. A black-box AI model cannot, and it can generate and regurgitate nonsense from its own training set, confidently enough to convince novices that it is correct.
> There are enough vocal people here that feel the need to look impressive by repeating it over and over, that legitimate nuanced conversations and genuine information transfer with regard to the strengths and weaknesses of these models are drowned out.
Or perhaps many here are skeptical about the AI LLM hype and still do not trust it?
Intellectual honesty is very much in the garbage-generating-machine camp. Making an embedding space of reasonable language and then randomly sampling it is not a way to draft a law.
As someone who doesn't know how the human brain works, has never drafted any laws, and has never empirically seen what value an LLM can bring in this scenario, you should certainly qualify this with a massive "in my layperson's opinion".
I beg to disagree. There are already hundreds of real-world examples of these models doing a terrible job with anything related to jurisprudence.
Google's senior management seems out of touch. It baffles me, since they have the money and the influx of talent. Once you have those things, how you use them becomes the problem, and that is all on management. Google might have become the old-school corporation incapable of innovating or producing new modern business lines. Having worked in those corporate environments, I can say that badly incentivized management can kill any giant of industry.
It's been going on for some time. Something that was once a joke in good jest, a.k.a. "Google's graveyard", is now their actual reputation, and it helps their strong big-tech competitors when they compete on new services.
Why wasn't blockchain a watershed moment, but ChatGPT is?
There are several factors that contribute to why blockchain and chatGPT have had different impacts on their respective fields.
Firstly, blockchain technology has had a significant impact in certain areas, particularly in the financial industry, where it has been used to create new forms of digital currency and facilitate secure and efficient transactions. However, it has not yet achieved the widespread adoption and transformative impact that some had predicted. One reason for this may be that the technology is still relatively new and complex, and many organizations are still trying to figure out how best to implement it.
In contrast, chatGPT is a language model that has been designed to understand and generate natural language text. This technology has been more widely adopted because it has a broad range of potential applications, from customer service chatbots to language translation services. Additionally, language is a fundamental aspect of human communication, which makes the potential impact of language models like chatGPT more readily apparent to people.
Another factor that may have contributed to the different impacts of blockchain and chatGPT is the level of public attention and interest. While blockchain has received significant media coverage, it has not captured the popular imagination in the same way that chatGPT has. ChatGPT has been widely publicized and has even been featured in news articles and talk shows, which has helped to raise awareness and generate interest in the technology.
In conclusion, the differences in impact between blockchain and chatGPT can be attributed to a range of factors, including the complexity and novelty of the technology, the level of public awareness, and the broad range of potential applications.
IMO "public attention and interest" in ChatGPT dissipates at an r-squared rate the further you move out from the tech industry (or the NLP research) bubble.
I'm sure that for the people in the thick of crypto and web3, it would've felt like that was the most important thing on everyone's mind.
Why can't LLMs fundamentally execute in a deterministic way? It's a computer running computation on some fixed data. Without randomization parameters, e.g. temperature, it would be pretty deterministic, right?
My understanding is at the "tech enthusiast" level, so I'm happy to learn.
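The intuition is basically right at the sampling level. A minimal sketch of the decoding step (names and the toy logits are mine, not any real library's API): with temperature 0 the model takes the argmax, which is deterministic for fixed weights and input; with temperature above 0 it samples from a softmax distribution, which is only reproducible if the random seed is fixed. Real deployments add further nondeterminism (batching, floating-point reduction order on GPUs), but conceptually:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Choose the next token ID from raw model scores (logits).

    temperature == 0: greedy decoding (argmax) -- fully deterministic
    for fixed weights and a fixed prompt.
    temperature > 0:  sample from the softmax distribution -- repeated
    runs can differ unless the RNG seed is fixed.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.5]
print(sample_next_token(logits, temperature=0))  # always 0, the argmax
# With a fixed seed, even temperature > 0 becomes reproducible:
print(sample_next_token(logits, temperature=1.0, rng=random.Random(42)))
```

So "temperature 0 plus a fixed seed" is as deterministic as the surrounding hardware and software stack allows.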
So for LLMs like ChatGPT, one issue with doing arithmetic is that the input is tokenised, so it doesn't "see" the individual digits in numbers. That will make it harder for it to learn addition, multiplication etc. You can see what the inputs to the model might look like here: https://platform.openai.com/tokenizer
So for example, the text "123456789" is tokenised as "123", "45", "67", "89", and the actual input to the model would be the token IDs: [10163, 2231, 3134, 4531]. Whereas the text "1234" is tokenised as "12", "34" with IDs [1065, 2682]. So learning how these relate in terms of individual digits is pretty hard, as it never gets to see the individual digits.
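A toy illustration of the effect (this is a longest-match tokenizer with a made-up vocabulary, not GPT's actual BPE, but the consequence for digit strings is similar): the same digit ends up inside different multi-character tokens depending on its neighbours, so the model never gets a stable digit-level view of a number.

```python
def greedy_tokenize(text, vocab):
    """Toy longest-match tokenizer: at each position, take the longest
    vocabulary entry that matches, falling back to single characters.
    Real BPE works differently, but digits get fused into multi-digit
    tokens in much the same way."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(min(3, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
    return tokens

# A made-up vocabulary of "frequent" digit chunks (illustrative only).
vocab = {"123", "12", "34", "45", "67", "89"} | set("0123456789")

print(greedy_tokenize("123456789", vocab))  # ['123', '45', '67', '89']
print(greedy_tokenize("45", vocab))         # ['45']
print(greedy_tokenize("345", vocab))        # ['34', '5']
```

Note how "45" is a single token on its own, but inside "345" the 4 gets absorbed into "34": arithmetic patterns that are trivial at the digit level are scattered across unrelated token IDs.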
I think, to extend the question, the fundamental answer is "there is nothing stopping an LLM from containing an embedding of all basic math", with the proviso that tokenization makes it vanishingly unlikely (at least in the current generation, or within reasonable resource limits).
I see it as analogous to asking a human why they don't just "learn all the answers to simple arithmetic involving integers below 10,000": you possibly could, but it would be a huge waste of time when you can instead learn the algorithm directly. Of course, LLMs are inherently a layer on top of an existing system that already solves those problems quite well, so it would be somewhat silly there too.