
So... I must say that GPT, and the current moment more broadly, are very exciting.

People are rushing around trying to find practical uses for ChatGPT. I think this time they'll finally start finding them, probably a lot of them. I think this is one of those dam bursting points. The difference between GPT3 and ChatGPT, to me, suggests a lot.

Writing code, writing documents, reading them... search... lots of things are about to change fast. The conversation leaked by this microsoft mole is happening in a lot of places. If you can't think of things worth trying to build with GPT... you need to take a bath and try again.^ Bonus points if you can get GPT to code it for you.

There are, absolutely, spectres hanging over this stuff. Monopoly, centralisation, and non-transparency are to be expected. Career anxiety. Skynet is not out of the question. Lots of reasons for trepidation. That said, ATM, I'm mostly excited. This technology is amazing, and it is about to spill out into the world.

^I want a reference book that works as a cascade. A table of contents (or title even) at the top end. Click in for a chapter abstract, then summary, etc. Concise at the top, detailed at the bottom.



I feel much more like ChatGPT is basically the same as Tesla's autopilot. Wonderful and seems like a huge advancement, but as it's actually applied to situations we see how banal and limited it actually is.

But loads of crypto-bro types will make millions selling ChatGPT "modernization consulting", so if you're looking to make a quick buck, highly recommend.


ChatGPT is already 10x more useful than crypto and NFTs. The foundational value of ChatGPT is not based on speculation or selling it to the 'next fool'.


Is it actually?

It doesn't give out reliable information and its "creativity" is questionable. Most of the excitement about it has to do with anthropomorphizing a robot that "feels" like an intelligent conversation partner, but it's not. Beyond toys, that is.

Similar to crypto. It isn't actually useful as a currency or to store value. Most of the excitement had to do with fake "freedom" dreams of the anarchically leaning crowd and misunderstandings of what it is and isn't. Beyond toys, that is.

I see lots of parallels actually.


ChatGPT has real value to me. I've been using it for my D&D games and to do some simple writing stuff that I hate. It costs me nothing, and I don't have to gamble to use it. Its value over crypto for me and my friends is exponential. Stable Diffusion has been even more valuable for me, and I get to run that on my own metal.


>Most of the excitement about it has to do with anthropomorphizing a robot that "feels" like an intelligent conversation partner, but it's not

Literally no one I know that's excited about ChatGPT cares even one bit about the conversation aspect. No one I know is excited by the fact that it's a robot to talk to... They are excited by the fact that you can feed it code and errors and it can debug for you or you can give it code and it can write relevant documentation. Or you can give it tabular data and it can make charts and do basic analysis...

If you can't see the utility then maybe you should question your own creativity.


Sam Altman isn’t claiming ChatGPT will make you rich.

When you start using it, it tells you it’s a research project.

Even in its current form, it’s useful for a narrow set of tasks.

It’s entirely conceivable that ChatGPT successors will be able to incorporate signals of fact vs opinion, and also authoritativeness of a source, into their training.

So yeah, ChatGPT is a toy, but that’s not bad for a research demo. But it has a lot of potential, and can help people in practical ways as it improves.


You are right, of course; the foundational (technical) value isn't based on speculation. But that is irrelevant to con artists or crime organisations who are looking for their next hype product to drive a new round of frauds.

The people I see most enthusiastic about the current AI development are not the technologists and developers, but people who aren't involved in tech and don't code. There are already adverts on social media for ChatGPT AI services of varying trustworthiness. They claim their AIs can construct and implement “the” winning trading algorithms for anyone, for a fee, of course.

The issue isn't with GPT per se, but many can't yet see the difference between applications that are plausible and those that aren't. With each fantastical story about what ChatGPT managed to do, it's hard to blame them.


I think there’s a difference here. There are cases I’ve found for minor manual tasks, things like real estate listings with some basic programmatically created prompts containing details of a listing.

IMO the largest holdback is context. I have some problems that would be amazingly well solved if I could hand GPT-3 a whole large document and ask it questions about that document.
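FWIW, the usual workaround today is to chunk the document and map-reduce over it. A rough sketch; `ask_model` here is a placeholder for whatever completion API you're calling, not a real library function, and chunking by characters rather than tokens is a simplification:

```python
def ask_about_document(document, question, ask_model, chunk_size=3000):
    """Work around the context limit by chunking a long document.

    ask_model(prompt) -> str is a stand-in for an LLM completion call.
    """
    # First pass: pull question-relevant notes out of each chunk.
    notes = []
    for start in range(0, len(document), chunk_size):
        chunk = document[start:start + chunk_size]
        notes.append(ask_model(
            f"Extract anything relevant to the question below.\n"
            f"Question: {question}\n\nText:\n{chunk}"
        ))
    # Second pass: answer from the combined notes instead of the raw text.
    combined = "\n".join(notes)
    return ask_model(f"Using these notes:\n{combined}\n\nAnswer: {question}")
```

It's lossy (relevant facts can fall between chunks), but until context windows grow it's about the best you can do.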


This "read that book for me and answer questions" thing would be very useful.


I already use ChatGPT in place of Google sometimes. It has upsides and downsides, but I absolutely find it to be useful given the right prompt. Not sure how they will scale it to Bing level, but if they’re able to respond with both the ChatGPT response and the normal links, I think it’d be a better product.


I agree that GPT modernization consulting and such bullshit will emerge. It always does. That's neither here nor there though. It's just noise.

Re: Tesla autopilot. I think this is the more serious point. Obviously I'm speculating. It is definitely possible that GPT is one more example of AI doing cool tricks and convincing us that it is farther advanced than it is.

That said, I don't think this is another Tesla autopilot for two reasons.

Reason one is robotics. Autopilot is robotics, and robotics sucks. It's incredibly difficult, frustratingly physical, and painfully hard to do the last 20%. The economics are even worse. It's very rare that robotics finds the unit volume necessary to achieve realistic prices. When version 1 is a $10 million replacement for a $50k employee, there's rarely a version 2. It's just very hard to move the ball forward in robotics.

Reason two is that GPT is a solution looking for problems. It's more general. Autopilot has one job: drive the car and do the robotaxi thing. There are few stepping stones. Mistakes can kill people and kill Tesla. It either does the job or doesn't, Elonmagic notwithstanding.

GPT just needs to do something useful, not any particular thing. It can be used in low-stakes applications. It can be bolted on to stuff for shits and giggles. Marginal costs are already zero. Basically, it's software. Software spreads faster and easier.


Last week, roughly 30% of my Google searches were instead directed to ChatGPT.

It's pretty good if you want a summary of a topic, whereas every web page wants to give you in-depth information assuming you're already an expert. For example: "What gas is in double glazing?" or "Tell me how screws are made".

It's also good for more creative things, like "Write a story about an escapologist". I now use it for ideas for bedtime stories for my son.


How do you know if the summary is accurate?


I've spot checked them, and found that for the sorts of queries I do, they seem to be spot on.

I wouldn't trust any numbers, or queries where the premise of the question is false though.


In this particular case (summarizing search results), it seems to work well. I think anchoring to known information helps avoid a lot of the issues (hallucinations, boring language, etc).

You can check out an open source demo I made if you want to play around with how search + GPT work - https://github.com/VikParuchuri/researcher .
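The anchoring pattern is simple enough to sketch in a few lines. In this sketch, `search` and `ask_model` are placeholders for a real search backend and completion API, not actual library calls:

```python
def grounded_answer(question, search, ask_model, n_results=3):
    """Anchor the model to retrieved text instead of its own recall.

    search(query) -> list of snippet strings (a stand-in search backend)
    ask_model(prompt) -> str (a stand-in LLM completion call)
    """
    snippets = search(question)[:n_results]
    context = "\n---\n".join(snippets)
    # Constraining the model to the retrieved sources is what reduces
    # hallucination: it summarizes rather than inventing.
    prompt = (
        "Answer the question using ONLY the sources below. "
        "If they don't contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)
```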


Who predicted that the first widely used domestic robot would be anything like the Roomba?

AI will have a similar trajectory, IMO. It'll find a bunch of good use cases, but it won't be in predictable places, and the predictable ones won't be as useful as expected.


You will experience the AI revolution like a frog in a boiling pot. By the time you notice that AI is finally useful to you, it will be everywhere. And you will have missed out on the entire field.


> But loads of crypto-bro types will make millions selling ChatGPT "modernization consulting", so if you're looking to make a quick buck, highly recommend.

You guys will attribute anything under the sun to 'crypto-bros' at this point. It's HN's version of boomers' 'must be the fault of the millennials,' which is ironic considering how many of you fall into that age group.

Furthermore, Sam Altman, former head of YC, oversaw the funding of Coinbase, which is now the largest crypto exchange in the US, and himself tried to launch Worldcoin [0], which, yes, is a (failed) cryptocurrency. He is also the head of OpenAI, which has created DALL-E and GPT.

Listen, I'm all for calling out scammers, and I've done it over more than 10 years in this space. But the level of disillusion and absurd paranoia among so many of you, constantly reminding us all how 'crypto-bros' are lurking in every corner and under every rock, just waiting to pounce on the opportunity to scam, ignores that you are witnessing the creation of this tech from those very same sectors and from the very people whose platforms you fear so much.

0: https://techcrunch.com/2021/10/21/sam-altmans-worldcoin-want...


Autopilot is playing out that way because there are a really large number of corner cases and fringe complexity to driving, and the consequences of getting things wrong are very expensive.

Most uses of ChatGPT will (at least initially) be around automating the easy stuff, and if it gets something wrong it'll be a mild annoyance.


I think people don't realize how many corner cases there are in coding. I think that the problems people have with accounting for all the Autopilot corner cases is going to be repeated with code generation as well.


> I think people don't realize how many corner cases there are in coding

I would even say that one of the central characteristics of coding is corner case management.


GPT isn't "write my app for me"; it's "let me give you an English description of what I want a snippet of code to do, in what language and with what libraries, and output something close enough to correct that I can look up the documentation on a few library methods and round it out faster than writing it from scratch."


I think AI code generation will really amplify the importance of unit tests. If you can enumerate all the corner cases using tests then tell the AI to write code that does XYZ and passes the given tests, that's very powerful.


True, and honestly, if it turns out I can write unit tests all day I'll actually be a happy camper. I love thinking through problems and trying to account for issues, but I can guess how we will be encouraged to generate the unit tests in this new paradigm.


I’d prompt the AI to write the unit tests too. I know what the corner cases are. It doesn’t (yet).


Is there a way to give ChatGPT a test suite and ask it for code that passes the suite?


Not as part of ChatGPT; you need to write/use a tool that takes a test and a prompt and re-runs the prompt until the output passes the test.
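Such a tool is mostly just a loop. A minimal sketch, assuming you supply your own `generate` (the LLM call) and `run_tests` (a harness returning an error string, or None on success); both names are placeholders, not real APIs:

```python
def generate_until_pass(prompt, generate, run_tests, max_attempts=5):
    """Keep re-prompting until the generated code passes the tests.

    generate(prompt) -> code string (a stand-in LLM completion call)
    run_tests(code) -> error string, or None if all tests pass
    """
    for _ in range(max_attempts):
        code = generate(prompt)
        error = run_tests(code)
        if error is None:
            return code
        # Fold the failure back into the prompt for the next attempt.
        prompt += f"\n\nThat attempt failed with:\n{error}\nPlease fix it."
    return None  # give up after max_attempts
```

Feeding the test failure back into the prompt usually converges faster than blind re-sampling.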


Sounds like someone should corner that market ASAP.


That is true, but think about domains like advertisements. Ads are currently far from perfect; a 5% click rate is a huge success. If 30% of the ads generated are low quality but 10% do really well, it will be a huge benefit.


Ad text sounds like the exact opposite of a significant use case really; very little text to generate and high cost from the text being substandard. If 30% of the ads generated are low quality, you're wasting 30% of your ad budget, which is less efficient than paying a human to come up with higher average quality text (unless you're generating unique copy on the fly to evade spam detectors...). Just because current ads are far from perfect doesn't mean generated text isn't on average more banal and less accurate...

I think the main use case for pure LLMs will remain those already starting to be tackled by AI: low importance (or testable) information/opinion retrieval, low effort content generation, suggestion prompts and entertainment


Have you heard of A/B testing? With this you could do A-Z testing, and then let a human review the top 5 results of that A-Z testing, instead of tasking a human with writing 26 ads.


Better to test two good pieces of ad copy than 26 mediocre-to-entirely-inappropriate ones. The reason it's A/B testing rather than A/Z testing isn't because creating 26 variations at a time exhausts the potential of human creativity...


Are you suggesting that ChatGPT can only generate "mediocre-to-entirely-inappropriate" ad text?


Yes.

(It'll use the same rhetorical-question opening structure and cliche call-to-action close whether you're advertising products, countries, animals, products which exist or products which don't. This one is just the most meta. The most noteworthy thing about the outputs, apart from them being cliched and repetitive, is that its statements about why you should use ChatGPT's copywriting services are claims about ChatGPT which are obviously untrue...)

>> write an ad for ChatGPT copywriting services

> Are you tired of mediocre copy that fails to connect with your audience? ChatGPT Copywriting Services is here to help!

> Our team of expert copywriters knows how to craft compelling, persuasive copy that speaks to your target market and helps you stand out from the competition. From website content and blog posts to email campaigns and social media posts, we've got you covered.

> But it's not just about the words - we also understand the importance of SEO and how to create copy that helps your website rank higher in search results. Plus, with our flexible pricing and fast turnaround times, you can trust that we'll work with your budget and deadlines to deliver the results you need.

> So why wait? Contact ChatGPT Copywriting Services today and see how our expertise can help your business thrive. Whether you're a small startup or a large corporation, we have the skills and experience to help you succeed.

>> write an ad for ChatGPT copywriting services

> Are you looking for high-quality copywriting that engages your audience and helps your business stand out? Look no further than ChatGPT Copywriting Services!

> Our team of talented writers has a wealth of experience crafting compelling, persuasive copy for a wide range of industries. From website content and product descriptions to social media posts and email campaigns, we have the skills and expertise to help your business succeed.

> But it's not just about the words - we also understand the importance of SEO and how to craft copy that helps your website rank higher in search results.

> So why wait? Contact ChatGPT Copywriting Services today and see how our services can take your business to the next level. Whether you're a small startup or a large corporation, we have the experience and talent to help you reach your goals.

>> write an ad for ChatGPT copywriting services

> Looking for high-quality copywriting that will help your business stand out from the competition? ChatGPT Copywriting Services is here to help!

> Our team of talented writers is skilled at crafting compelling, persuasive copy that speaks to your target audience and helps you achieve your marketing goals. From website content and product descriptions to email campaigns and social media posts, we can create the perfect words to drive results.

> But it's not just about the words - we also understand the importance of SEO and how to craft copy that will help your website rank higher in search results.

> So why wait? Contact ChatGPT Copywriting Services today and see how our expertise can help your business succeed. Whether you're a small startup or a large corporation, we have the skills and experience to help you reach your goals. So, don't hesitate to reach out to us for all your copywriting needs.


Those all seem pretty similar to the ads on the front page of Google search results.

Just generate 200 of those and test them to see if any of them standout.

The utility of these models is not that they are perfect or better than humans at all times, but that they can very cheaply generate something of value across many domains.


But testing 200 scripts giving cliche-ridden and inaccurate descriptions of your product isn't very cheap, it's very expensive compared with asking somebody that actually understands your product to write one or two better scripts...


But having someone who is a bad writer but understands your product take a look at 200 generated copy lines and pick their favorites for A/B testing is much cheaper than having someone who understands your product and is a good writer writing ~20 lines and A/B testing those.


No it isn't. Copywriting starts very cheap, and getting a text generator that doesn't understand your product to finally generate something which looks like it does isn't any less time-consuming than writing the words yourself (especially short-form copy). 200 iterations of generated copy stressing, in slightly different ways, that the advantage of using ChatGPT for copywriting is its team's experience with SEO are all equally useless for promoting ChatGPT, a service which doesn't have a team of experienced writers and doesn't have experience in SEO.

And the really expensive bit of copywriting is the failed conversions. An LLM which, when prompted to generate an ad for a particular market, is trained to produce something which looks as similar to the competition and as much like an ad as possible isn't likely to generate the highest conversions...


Exactly, generating moderately decent advertising copy will be really helpful. Even just to start the creative process going.

"Here's a description of my product, give me 5 potential tag lines for it."

It's a great start for brainstorming.


Even there the corner case failures are going to be amazing if we're throwing up AI generated Ads onto billboards indiscriminately. "So it seems our blacklist didn't include the word 'Hilter'...."


> easy stuff

Or stuff that isn't easy for humans, e.g. how to argue with insurance companies in a way that gets you what you want


Couldn't agree more; it's like a "cool trick". I haven't found it really useful for anything except some initial novelty/amusement. I've not opened that tab for weeks.


> Couldn't agree more; it's like a "cool trick". I haven't found it really useful for anything except some initial novelty/amusement. I've not opened that tab for weeks.

Same. During the update it just kept crashing, and afterward it just seemed way too prone to errors to be usable. There have been funny prompts I've seen thrown around, but the novelty wore off, just like it did with DALL-E once you saw how many attempts you had to make to get to something viable and more competitors like Stable Diffusion came online.

This will keep the AI hype going, so I'm not complaining, as a person who studies AI/ML, but it's hardly the miracle they are making it out to be. Perhaps that changes with GPT-4, but I fear that sourcing is now becoming critical, and just like Copilot it will encounter a lot of scrutiny before they can monetize it properly, and open-source projects will emerge to take market share.


I think the thing that is most needed now is some way to evaluate and ensure that the results being returned are accurate. Just providing an answer isn't enough. Anyone can ask me a question and I can give an answer, but for most topics it probably won't be the correct answer; if I know enough buzzwords, though, I might be able to string together something that _sounds like it could be_ a correct answer.

We've seen this already. Except in limited cases, ChatGPT will not tell you that it doesn't know the answer, or it will confidently give you the wrong answer. I'm reminded of a thread I saw last week where ChatGPT was certain that 42 was a smaller number than 65(?), and when asked "can you count from 65 to 42?", it kept going up and up until it inevitably hit some sort of character count limit imposed by the service's algorithm for generating responses.

The cool part IMO about ChatGPT is the back and forth conversational style of sharing information. Now we need to integrate that with traditional search indexes and also work to provide the "AI" some sort of score to indicate if the returned results are accurate and be more upfront with the user if a certain confidence interval is not reached. An example of this in traditional search tools, is the "did you mean xxx?" prompt.

Until we can "trust" the results to be accurate, I don't see this technology being widely usable to expand the capabilities of users past their own knowledge. If you cannot easily verify the results as they are given to you today, then I don't really see the utility of the program beyond novelty.


I used it to find quick answers to congressional procedures and laws relating to elections. Very quick, informative replies that would have taken forever to unearth using Google.


I also found this useful, I was able to get an answer to "is this specific thing in this situation legal in this state" and have it produce correct references to the state code.

I don't actually care if it produces an incorrect answer or summary (it wasn't but that's irrelevant), the value is being able to take some complicated context that a search engine can't understand and say "look right here."


Did you double check the results? I know, it's no longer quick then, but you can't be sure that it didn't straight up lie to you.


I can relate to the career anxiety aspect. The way I want to look at it is, if AI is smart enough to occupy creative jobs such as engineers, the world could be heaven. The "only" problem left is politics. Humans will spend the rest of their unoccupied time negotiating distribution of all resources produced by robots.


> Humans will spend the rest of their unoccupied time negotiating distribution of all resources produced by robots.

I can't see a reason for the dynamics to change from what we have now - those who own the means of production will hoard the new power and those who don't will find their slice of the pie become comparatively even smaller, just as with every increase in efficiency.


And those with power will blame the Other for society’s problems and convince enough of them through media propaganda to vote against their own self interest.


If the pie is ten times as large, a slice that is half the relative size is still five times larger.


Indeed. But the feeling of well-being is not absolute; it's relative to your socio-economic context. The average American is magnitudes better off now than 200 years ago, in absolute terms. But are they just as many magnitudes happier? No. Depending on the study, happiness is significantly lower.

It's not at all clear that maximizing absolute quality of life is the best ultimate goal.


The relative sizes of the pie slices matter for things other than immediate material comfort/gain; for example, political power.


That's the optimistic way to look at it, sure. No reason to think the slice can't shrink faster than the pie will grow.


> The difference between GPT3 and ChatGPT, to me, suggests a lot.

Could you be more specific about what those differences are exactly?

I'm curious how closely you've watched this space because we're basically exactly where everyone predicted we'd be back in 2018. I haven't seen anything yet that comes as a surprise in terms of the progress we're making.

I think the public has the wrong impression of the technology because without seeing the growth in the space it appears as this is a sudden advancement rather than an incremental improvement and you can easily get the wrong impression about future progress extrapolating from an incomplete data set.

> Skynet is not out of the question. Lots of reasons for trepidation.

It's out of the question with this line of technology. This is not artificial general intelligence and there's no reasonable pathway for it to become AGI.

I've used this analogy before but right now it's like we're at the dawn of flight and you're talking about going to the moon. No amount of incremental improvements to an airplane will get you a space traveling vehicle. You need a different technology entirely (a rocket) to make that happen.


I have been watching this space since Skip-Thought Vectors in 2015. No. No one in 2018 suspected that large language models would smoothly scale up by simply increasing the number of parameters. There is no clear and obvious path from Attention Is All You Need to InstructGPT (which only came out last February) without hindsight.

Point at a single person, let alone "everyone", who predicted we would have AI-based coding assistance and be integrating this technology into a search engine by 2023. Anyone at all. I'd love to read a paper or even a blog post from 2018 predicting half the things that work now. You can't.

I've seen some hardcore goalpost moving before, but nothing as obviously provably wrong as "we're basically exactly where everyone predicted we'd be back in 2018."

Is this on the path to AGI? I doubt it. You likely need some sort of actor-critic component, though the RLHF stuff is working way better than it has any right to and is already far more agentic than the pure dumb language model of a year ago.


> No one in 2018 suspected that large language models would smoothly scale up by simply increasing the number of parameters.

https://d4mucfpksywv.cloudfront.net/better-language-models/l...

That was kind of the entire point of GPT-2.

Computerphile summed it up pretty well on GPT-3's release: https://youtu.be/_8yVOC4ciXc

Here's some quotes from that video:

"The thing about gpt2 is just that it was much bigger than anything that came before. It was more parameters and was kind of the point of that paper."

...

"They made gpt2 because the curve wasn't leveling off. We've gone 117 times bigger than gpt2 and they're still not leveling off."


I'm excited for ChatGPT in video games. Especially in procedurally generated or Roguelike games.


Absolutely!

A lot of completely new immersion possibilities to try. GPT powered characters will be a fascinating thing.



