Fully agree. The academia gatekeeping in certain fields hasn’t served us well over the last decade or so.
Fresh ideas from individuals with experience from all walks of life should be entertained. Doesn’t mean they’ll be followed.
Furthermore, DOGE has no actual power administratively; really, all they can do is advise. Congress would need to grant them power first. Saagar Enjeti has a good take on this; he's pretty well versed in Washington-speak.
The DOGE guys + Executive branch will try to get the Supreme Court to rule that the phrase "take care" in the Constitution means that the President has the power to fire any government employee at any time.
I enjoyed his recent conversation with Lex, but I lost quite a bit of respect for his opinion on politics when he called the Cambridge Analytica scandal nonsense.
For one, who said anything about "first principles thinking"? Elon Musk has a pronounced ideological bias. Anyway, first principles thinking is practically useless when it comes to highly complex systems, because such systems do not behave in self-evident ways. Empirical knowledge is the only thing that gets you anywhere.
Moreover, "DOGE" is not a break from the status quo in any way. Corporate interests have informed governance since long before either of us were born. That, rather than "policy wonks," is the rot at the heart of the government. Forever wars happen because they are extremely profitable for weapons manufacturers, not because warfare is a wonkish policy.
The only novelty "DOGE" brings to the table is the aesthetics of an SF tech startup, which won't help the government any more than it helped WeWork. It'll actually do less: WeWork was taken seriously, at least for a while. "DOGE" is impossible to take seriously.
And yet they're allying with Trump and his Republicans? Republicans are responsible for the most recent US "forever wars", and Trump has threatened to invade various countries (Syria, North Korea, Venezuela), publicly proposed annexing Mexico, Canada, Panama, and Greenland, has fired missiles into numerous countries including Syria, assassinated an Iranian general, etc.
And Trump loudly opposes various orgs that are responsible for holding aggressive powers at bay, like how NATO represents a check on Russia's apparent violent expansionism.
Trump's approach to geopolitics seems just as violent as his predecessors but more mercurial and erratic.
I would venture that introducing fresh ideas and technologists with first principles thinking will yield better results.
It could, maybe. Provided the people you appoint have some measure of credibility and integrity. Or at least seem to have some kind of understanding of the basic mechanics by which governments (even when reduced to a bare minimum) need to operate.
Elon and Vivek plainly do not fit this description, and that should be screamingly obvious by now.
Which is funny because if Wikipedia dies, who will continue providing updated training data to these models? It's a weird self-fulfilling prophecy that consolidates without financial replacement.
The same sources Wikipedia gets its information from. You cannot even contribute to Wikipedia without providing outside sources, I think, which are usually websites or books.
I sometimes use neural nets for obscure compound questions (with so-so results), but I can't imagine using a NN in place of Wikipedia. I go to Wikipedia to find factual information (by factual I don't mean guaranteed, I mean hard data: years, names, models, etc.). How can anyone rely on a random text generator to get factual data?
>Genuinely don't know why anyone would use it when you have perplexity, gemini, chatGPT search, etc. at your disposal.
LLMs hallucinate/confabulate. I use Wikipedia to check source info and to find additional information. Of course there are more reliable sources than Wikipedia, but it's useful, still.
Google very publicly commits that enterprise customers (GCP, Workspace) have their data firewalled off from ads. If you have evidence to the contrary, there are many, many companies and governments that would like to know.
Practical challenge with a $250 prize: Make a 2D isometric HTML+JS game (dealer's choice on library) in the next 48 hours that satisfies these modest random requirements:
A character walks around a big ornate classic library, pulling books from bookshelves looking for a special book that causes a shelf to rotate around and reveal a hidden room and treasure chest. The player can read the books and some are just filler but some have clues about the special book. If this can be done with art, animations, sound, UI, the usual stuff, I'll believe the parent poster's claim to be true.
As someone using LLM-based workflows daily to assist with personal and professional projects, I'll wager $250 that this is not possible.
Sounds like a comfy sequence in a larger game I would anticipate on replay. I put my own $250 on the table (given the prompt and process were forthcoming).
Probably because CA politicians want to spend tax money on things like providing $150k of housing loans to undocumented migrants
Losing out on a house bid as an American citizen to an undocumented migrant thanks to this policy: truly the type of stuff only a CA politician could come up with.
I do. I provide housing units at the cost of maintenance. I believe profiting from rent is deeply unethical. My renters have paid basically for their share of heating, electricity, water, sewer, and wear and tear on the building itself.
But I cannot house the thousands of unhoused people in my city alone. I can provide housing for 2-5 of them at any given time.
That said, do you know what offering housing for a couple hundred bucks a month does for a struggling family? It fundamentally changes their ability to establish themselves: build savings, escape poverty.
Edit: I'm not trying to signal or whatever by posting this. I have ethical beliefs about supporting one's local community and try not to be hypocritical about it. When people challenge me, suggesting I might be hypocritical ("why don't you spend your own money on it!") I try to respond with, "I do. And I see the changes it makes in real human lives."
But always as a response to someone trying to call me out, never unprompted.
No, pay for all of it, not what it costs you to maintain while your properties balloon in value far more than any rent you could have collected on them. You are no more ethical than any other landlord so please stop pretending to be. You and real estate investors like you are contributors to the homelessness problem.
I put the land I buy into a trust that removes my ability to profit from it. The trust cannot take actions that would cause them to profit from selling the land.
Any equity the trust holds is from me or others putting in funds that are held solely for the purposes of providing housing for the community at cost.
My personal mortgage I do pay for, and I will be ceding whatever land I hold to that trust upon my death so that it can provide housing in perpetuity.
The only reason the land is not currently in the trust's hands is because I need the ability to move freely. I could arrange to sell the house to the trust, and lease it from the trust, but because there's a bank involved with my mortgage that gets more complex.
The thing I discovered is that it's highly variable depending on location. Step one is talk to a lawyer who knows real estate and landlord tenant laws in your specific area.
Alternatively, search for a "community land trust" in your area and discuss with them how you can manage your property upon your death.
Thanks. So many people are like, "sure, I'd love to live in a world like X, but there's nothing I can do about it". Well, you can model it, make the change. Maybe someone will be inspired by you.
I read about someone who paid all his tenant's rent back to them after he became an anarchist or whatever. I crunched the numbers, and found out that for like 15k I could pay back every penny I had collected over maintenance, plus the appreciation of the principal. I sent out checks with amounts between $1500 and $4500, along with a note saying I didn't believe in rent any more.
I saw someone else do it, so I did it. Maybe someone on Hacker News will see my posts and do it too. Odds are small, but non-zero.
I can tell you it made a material positive change in the lives of those people. That's a much better legacy than, "I died $15k richer" or "I went on vacation at a resort an extra time".
> The program is currently available to low- and middle-income first-time home buyers in California and Arambula’s bill aims to open it up to tax-paying, undocumented immigrants.
This drops about fifty tiers down the stack of things I can manage to get myself seriously worked up over by actually reading the article.
I’m finding it pretty hard to find a lot of outrage over “taxpayers now eligible for government services”.
They're being offered loans, which they'll have to pay back. This won't cost taxpayers anything. And anyway, these undocumented migrants ARE TAXPAYERS THEMSELVES. I fail to see any reason to be outraged about this.
Offering loans to people who couldn't otherwise get one increases the pool of buyers, driving demand and increasing prices. I don't raise that as a reason not to have the program, but you are missing second-order effects if you consider there to be no cost to the broader public.
More importantly, how exactly is an undocumented immigrant paying taxes? Do they somehow have a tax ID or SSN without documentation? Or are you just referring to things like sales and fuel taxes?
It used to be that immigrants would simply invent a fake SSN in order to get a job. The e-verify system may have complicated that, but I imagine it's still pretty much the same for smaller businesses. And especially agricultural work, meat processing plants, etc. All the businesses that rely on undocumented labor.
So the immigrants are paying into the SS/medicare system, but they don't receive any of the benefits.
I think the solution to the housing problem is to increase supply. Not to try to prevent a whole group of people from being able to buy homes.
> I think the solution to the housing problem is to increase supply. Not to try to prevent a whole group of people from being able to buy homes.
That seems totally reasonable if there's demand for more houses and the ability to build more.
Phrasing it as though people who already can't buy a home are being prevented from doing so simply by not providing government assistance is a bit disingenuous, though. Preventing them implies that they could otherwise buy the home, and if that were the case, a government program wouldn't be needed.
They aren’t being handed $150,000 in unmarked bills. It’s loan assistance for a house which is pretty fucking difficult to throw into the back of a truck and haul over the border.
"healthcare, education, housing, and food were secured for all."
Ah yes all of this provided to you by Big Government, who can at a whim withdraw these services if you are found to engage in WrongThink.
We already have seen Western governments like Canada financially banning grandmothers who donated to a trucker protest or the UK imprisoning people for tweets.
Do you really want a bunch of DC politicians to have the power of life and death, not to mention access to swaths of capital that they will inevitably use to engorge and enrich themselves?
Your 600k in taxes will go into the pockets of various interest groups that markup their goods and services because they know Uncle Sam will foot the bill.
I would ultimately prefer a system of small, local, community managed democratic syndicalism. But, if I have to live in a capitalist world I'd prefer to live in one where the people in power at least try to take care of people.
Ideally, no, it would be centralized in neither the wealthy nor the government. But I think "raise taxes" is an easier path to net good than "let's seize the factories and run federations of democratically run industries in the interest of the wellbeing of all".
I think you're missing that the suggested path is not "let's seize the factories", but instead a return to "distributed charity" like we had before the Great Depression.
1. Content Creation:
LLMs can be used to generate high-quality, human-like content such as articles, blog posts, social media posts, and even short stories. Businesses can leverage this capability to save time and resources on content creation, and improve the consistency and quality of their online presence.
2. Customer Service and Support:
LLMs can be integrated into chatbots and virtual assistants to provide fast, accurate, and personalized responses to customer inquiries. This can help businesses improve their customer experience, reduce the workload on human customer service representatives, and provide 24/7 support.
3. Summarization and Insights:
LLMs can be used to analyze large volumes of text data, such as reports, research papers, or customer feedback, and generate concise summaries and insights. This can be valuable for businesses in fields like market research, financial analysis, or strategic planning.
4. HR Candidate Screening:
Use case: Using LLMs to assess job applicant resumes, cover letters, and interview responses to identify the most qualified candidates.
Example: A large retailer integrating an LLM-based recruiting assistant to help sift through hundreds of applications for entry-level roles.
5. Legal Document Review:
Use case: Employing LLMs to rapidly scan through large volumes of legal contracts, case files, and regulatory documents to identify key terms, risks, and relevant information.
Example: A corporate law firm deploying an LLM tool to streamline the due diligence process for mergers and acquisitions.
I'm working on AI tools for teachers and I can confidently say that GPT is just unbelievably good at generating explanations, exercises, quizzes, etc. The onus to review the output is on the teacher, obviously, but given they're the subject matter experts, a review is quick and takes a fraction of the time it would take to create this content from scratch.
As a teacher, I have no shortage of exercises, quizzes, etc. The internet is full of this kind of stuff and I have no trouble finding more than I ever need. 95% of my time and mental capacity in this situation goes to deciding: what makes sense in my particular pedagogical context? What wording works best for my particular students? Explanations are even harder. I find out almost daily that explanations which worked fine last year don't work any more, and I have to find a new way, because the prior knowledge, the words they use and know, etc., of new students are different again.
>As a teacher, I have no shortage of exercises, quizzes, etc. The internet is full of this kind of stuff and I have no trouble finding more than I ever need
Which all takes valuable time us teachers are extremely short on.
I've been a classroom teacher for more than 20 years; I know how painful it is to piece together a hodgepodge of resources to put together lessons. Yes, the information is out there, but a one-click option to gather it into a cohesive unit saves me valuable time.
>95% of my time and mental capacity in this situation goes to deciding: what makes sense in my particular pedagogical context? What wording works best for my particular students?
Which is exactly what GPT is amazing at. Brainstorming, rewriting, suggesting new angles of approach is GPT's main strength!
>Explanations are even harder.
Prompting GPT to give useful answers is part of the art of using these new tools. Ask GPT to speak in a different voice, take on a persona, or target a different age group and you'll be amazed at what it can output.
> I find out almost daily that explanations which worked fine in last year, don't work any more
Exactly! Reframing your own point of view is hard work, GPT can be an invaluable assistant in this area.
> Which is exactly what GPT is amazing at. Brainstorming, rewriting, suggesting new angles of approach is GPT's main strength!
No, it isn't. It just increases noise. I don't need any more info, I need just to make decisions "how?".
> Prompting GPT to give useful answers is part of the art of using these new tools. Ask GPT to speak in a different voice, take on a persona or target a differnt age group and you'll be amazed at what it can output.
I'm not amazed. At best it sounds like some 60+ year old (like me) trying to sound like a 14-year-old after only hearing secondhand how young people talk. Especially in small cultures like ours here (~1M people).
I have teachers in my family; their lives have been basically ruined by people using ChatGPT-4 to cheat on their assignments. They spend their weekends trying to work out whether someone has "actually written" this or not.
So sorry, we're back to spam generator. Even if it's "good spam".
One potential fix, or at least a partial mitigation, could be to weight homework 50% and exams 50%, and if a student's exam grades differ from their homework grades by a significant amount (e.g. 2 standard deviations) then the lower grade gets 100% weight. It's a crude instrument, but it might do the job.
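That weighting rule can be sketched in a few lines. This is a minimal illustration of the scheme described above (the 50/50 split and the two-standard-deviation trigger); the function name and grade scale are just assumptions for the example:

```python
def final_grade(homework_avg, exam_avg, class_exam_std):
    """Blend homework and exam grades 50/50, but if the exam grade
    differs from the homework grade by more than 2 standard deviations
    (suggesting the homework may not reflect the student's own work),
    give the lower grade 100% weight."""
    if abs(exam_avg - homework_avg) > 2 * class_exam_std:
        return min(homework_avg, exam_avg)
    return 0.5 * homework_avg + 0.5 * exam_avg

# A student whose grades roughly agree gets the blend...
print(final_grade(80, 84, 5))   # 82.0
# ...while a large homework/exam gap falls back to the lower grade.
print(final_grade(95, 60, 5))   # 60
```

The threshold could of course be tuned per class; the point is only that a mechanical rule can flag the homework/exam mismatch the comment describes.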
A bit dramatic. There has to be an adjustment of teaching/assessing, but nothing that would "ruin" anyone's life.
>So sorry, we're back to spam generator. Even if it's "good spam".
Is it spam if it's useful and solves a problem? I don't agree it fits the definition any more.
Teachers are under immense pressure, GPT allows a teacher to generate extension questions for gifted students or differentiate for less capable students, all on the fly. It can create CBT material tailored to a class or even an individual student. It's an extremely useful tool for capable teachers.
> Is it spam if it's useful and solves a problem? I don't agree it fits the definition any more.
Who said generating an essay is useful, sorry? What problem does that solve?
Your comments come across as overly optimistic and dismissive. Like you have something to gain personally and aren't interested in listening to others' feedback.
I'm developing tools to help teachers generate learning material, exercises, and quizzes tailored to student needs.
>Who said generating an essay is useful sorry ? What problem does that solve?
Useful learning materials aligned with curriculum outcomes, taking into account learner needs and current level of understanding is literally the bread and butter of teaching.
I think those kinds of resources are both useful and solve a very real problem.
>Your comments come across as overly optimistic and dismissive. Like you have something to gain personally and aren't interested in listening to others' feedback.
Fair point. I do have something to gain here. I've given a number of example prompts in my replies to this thread that are extremely useful for a working teacher. I don't think I'm being overly optimistic, though. I'm not talking vague hypotheticals; the tools I'm building are already proving useful.
> a bit dramatic. there has to be an adjustment of teaching/assessing, but nothing that would "ruin" anyone's life.
If you don't have the power to just change your mind about what the entire curriculum and/or assessment context is, it can be a workload increase of dozens of hours per week or more. If you do have the power, and do want to change your entire curriculum, it's hundreds of hours one-time. "Lives basically ruined" is an exaggeration, but you're preposterously understating the negative impact.
> is it spam if it's useful and solves a problem?
Whether or not it's useful has nothing to do with whether or not it's spam. I'm not claiming that your product is spam -- I'll get back to that -- but your reply to the spam accusation is completely wrong.
As for your hypothesis, I've had interactions where it did a good job of generating alternative activities/exercises, and interactions where it strenuously and lengthily kept suggesting absolute garbage. There's already garbage on the internet, we don't need LLMs to generate more. But yes, I've had situations where I got a good suggestion or two or three, in a list of ten or twenty, and although that's kind of blech, it's still better than not having the good suggestions.
>Whether or not it's useful has nothing to do with whether or not it's spam.
I think it has a lot to do with it. I can't see how generating educational content for the purpose of enhancing student outcomes with content reviewed by expert teachers can fall under the category of spam.
>As for your hypothesis, I've had interactions where it did a good job of generating alternative activities/exercises, and interactions where it strenuously and lengthily kept suggesting absolute garbage.
I'd like to present concrete examples of what I would consider to be useful content for a K-12 teacher.
This would align with Year 9 Maths for the Australian Curriculum.
This is an extremely valuable tool for
- A graduate teacher struggling to keep up with creating resources for new classes
- An experienced teacher moving to a new subject area or year level
Bear in mind that the GPT output is not necessarily intended to be used verbatim. A qualified specialist teacher, often with 6 years of study (4-year undergrad + 2-year Masters), is the expert in the room who presumably will review the output, adjust, elaborate, etc.
As a launching pad for tailored content for a gifted student, or lower level, differentiated content for a struggling student the GPT response is absolutely phenomenal. Unbelievably good.
I've used Maths as an example, however it's also very good at giving topic overviews across the Australian Curriculum.
Here's one for: elements of poetry:structure and forms
Again, an amazing introduction to the topic (I can't remember the exact curriculum outcome it's aligned to), which gives the teacher a structured intro that can then be spun off into exercises, activities, or deep dives into the subtopics.
> I've had situations where I got a good suggestion or two or three, in a list of ten or twenty
This is a result of poor prompting. I'm working with very structured, detailed curriculum documents and the output across subject areas is just unbelievably good.
There are countless existing, human-vetted, purpose-built bodies of work full of material like the stuff your ChatGPT just "created". Why not use those?
Also, each of your examples had at least one error, did you not see them?
>Also, each of your examples had at least one error, did you not see them?
I didn't. Could you point them out?
>There are countless existing, human-vetted, designed on special purpose, bodies of work full of material like the stuff your chatgpt just "created". Why not use those?
As a classroom teacher I can tell you that piecing together existing resources is hard work, and sometimes impossible, because resource A is in this textbook (which might not be digital), resource B is on that website, and quiz C is on another site. It can be very difficult to put all these pieces together in a cohesive manner. GPT can do all that and more.
The point is not to replace all existing resources with GPT, this is all or nothing logic. It's another tool in the tool belt which can save time and provide new ways of doing things.
Also have teachers in my family. Most of the time is spent adjusting the syllabus schedule and guiding (orally) the stragglers. Exercises, quizzes, and explanations are routine enough that good teachers I know can generate them on the spot.
>Exercises, quizzes, and explanations are routine enough that good teachers I know can generate them on the spot.
Every year there are thousands of graduate teachers looking for tools to help them teach better.
>good teachers I know can generate them on the spot
Even the best teacher can't create an interactive multiple choice quiz with automatic marking, tailored to a specific class (or even a specific student) on the spot.
I've been teaching for 20+ years, I have a solid grasp of the pain points.
> Even the best teacher can't create an interactive multiple choice quiz with automatic marking, tailored to a specific class (or even a specific student) on the spot.
Neither can "AI" though, so what's the point here?
Here's an example of a question and explanation which aligns to Australian Curriculum elaboration AC9M9A01_E4, explaining why 3^4/3^4 = 1 and 3^(4-4) = 3^0.
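The derivation being explained is just the exponent quotient rule; nothing here is specific to the curriculum document, it's standard algebra:

```latex
% Any nonzero quantity divided by itself is 1:
\frac{3^4}{3^4} = \frac{81}{81} = 1

% The quotient rule for exponents, \frac{a^m}{a^n} = a^{m-n}, gives:
\frac{3^4}{3^4} = 3^{4-4} = 3^0

% Equating the two results motivates the convention:
3^0 = 1
```

This is the kind of two-line argument a Year 9 explanation would be expected to walk through.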
This is a relatively high level explanation. With proper prompting (which, sorry I don't have on hand right now) the explanation can be tailored to the target year level (Year 9 in this case) with exercises, additional examples and a quiz to test knowledge.
This is just the first example I have on hand and is just barely scratching the surface of what can be done.
The tools I'm building are aligned to the Australian Curriculum, and as someone with a lot of classroom experience I can tell you that this kind of tailored content, explanations, exercises, etc. is a literal godsend for teachers regardless of experience level.
Bear in mind that a teacher with a 4-year undergrad in their specialist area and a Masters in teaching can use these initial explanations as a launching pad for generating tailored content for their class, and even for individual students (either higher or lower level depending on student needs). The reason I mention this is that there is a lot of hand-wringing about hallucinations. To which my response is:
- After spending a lot of effort vetting the correctness of responses for a K-12 context, I've found hallucinations are not an issue. The training corpus is so saturated with correct data at this level that it is not a problem in practice.
- In the unlikely scenario of hallucination, the response is vetted by a trained teacher who can quickly edit and adjust responses to suit their needs
Let's call it what it is: taking poorly organized existing information and making it organized and interactive.
“Here are some sharepoint locations, site Maps, and wikis. Now regurgitate this info to me as if you are a friendly call center agent.”
Pretty cool, but not much more than pushing existing data around. True AI, I think, means being able to learn some baseline of skills and then, through experience and feedback, adapt and formulate new thoughts that eventually become part of the learned information. That is what humans excel at, and so far it is something LLMs can't do. Given the inherent difficulty of the task, I think we aren't much closer to that than before, as the problems seem algorithmic and not merely hardware-constrained.
>taking poorly organized existing information and making it organized and interactive.
Which is extremely valuable!
>Pretty cool but not much more than pushing existing data around.
Don't underestimate how valuable it is for teachers to do exactly that. Taking existing information, making it digestible, and presenting it in new and interesting ways is a teacher's bread and butter.
It’s valuable for use cases where the problem is “I don’t know the answer to this question and don’t know where to find it.” That’s not in and of itself a multibillion dollar business when the alternative doesn’t cost that much in the grand scheme of things (asking someone for help or looking for the answer).
Are you suggesting a chatbot is a suitable replacement for a teacher?
I've rarely if ever seen a model fully explain mathematical answers, outside of simple geometry and algebra, to what I would call an adequate level. It gets the answer right more often than it can explain why that is the correct answer. For example, it finds a minimal case in an optimization problem, but can't explain why that is the minimum among all possibilities.
They're currently already relying on overworked, underpaid interns who draft those documents. The lawyer is checking it anyway. Now the lawyer and his intern have time to check it.
I suggest we do not repeat the myth and urban legend that LLMs are good for legal document review.
I had a couple of real use cases for real clients who were hyped about using LLMs for document review, hoping to save on salaries, for English-language documents.
We've found Kira, Luminance, and similar due diligence project management tools to be useful timesavers if used right. But not LLMs.
Thanks to longer context windows, it is possible to ask LLMs the usual hazy questions people ask in a due diligence review (many of which can be answered dozens of different ways by human lawyers): is there a most-favoured-nation provision in the contract, is there a financial cap limiting the liability of the seller or the buyer, what is the governing law, etc.
Considering risks of uploading such documents into ChatGPT, you are stuck with Copilot M365 etc. or some outrageously expensive "legal specific" LLMs that I cannot test.
Just out of curiosity, I asked Copilot five rather simple questions about three different agreements (where we had the golden answers), and the results were uneven, but mostly useless. In one contract, it incorrectly reported for all questions that they could not be answered based on the contract (while the answers were clearly included in the document). In another, two questions were answered correctly, two imprecisely (e.g. the governing law given as "USA" instead of the correct answer, Michigan, even after reprompting for the state-level answer), and one answer was hallucinated. In the third, three answers were hallucinated, one was answered correctly, and one provision was not found.
Of course, it's better to have a LEGAL specific benchmark for this, but 75% hallucination in complex questions is not something that helps your workflow (https://hai.stanford.edu/news/hallucinating-law-legal-mistak...)
I don't recommend LLMs to anyone for legal document review, even for English-language documents.
I have no idea what type of law you're talking about here, but (given the context of the thread) I can guarantee you major firms working on M&As are most definitely not using underpaid interns to draft those documents. They are overpaid qualified solicitors.
I’ve been doing RLHF and adjacent work for 6 months. The model responses across a wide array of subject matter are surface level. Logical reasoning, mathematics, step by step, summarization, extraction, generation. It’s the kind of output the average C student is doing.
We specifically don't do programming prompts/responses, nor advanced college-to-PhD-level stuff, but it's really mediocre at this level and in these subject areas. Programming might be another story; I can't speak to that.
All I can go off is my experience but it’s not been great. I’m willing to be wrong.
> It’s the kind of output the average C student is doing.
Is the output of average C students not commercially valuable in the listed fields? If AI is competing reliably with students then we've already hit AGI.
Except for number 3, the rest are more often disastrous or insulting to users and to those depending on the end products/services of these things. Your reasoning is so bad that I'm almost tempted to think you're spooning out PR-babble astroturf for some part of the industry. Here's a quick breakdown:
1. Content: Nope. Except for bottom-of-the-barrel content sludge of the kind formerly churned out by third-world spam-spinning companies, most decent content creation stays well away from AI, except for generating basic content layout templates. I work as a writer, and even now most companies stay well away from using GPT et al. for anything they want respected as content. Please.
2. Customer service: You've just written a string of PR corporate-speak AI seller bullshit that barely corresponds to reality. People WANT to speak to humans, and except for very basic inquiries, they feel insulted if they're forced into interaction with some idiotic stochastic parrot of an AI for any serious customer support problems. Just imagine some guy trying to handle a major problem with his family's insurance claim or urgently access money that's been frozen in his bank account, and then forced to do these things via the half-baked bullshit funnel that is an AI. If you run a company that forces that upon me for anything serious in customer service, I would get you the fuck out of my life and recommend any friend willing to listen does the same.
3. This is the one area where I'd grant LLMs some major forward space, but even then with a very keen eye to reviewing anything they output for "hallucinations" and outright errors unless you flat out don't care about data or concept accuracy.
4. For reasons related to the above (especially #2) what a categorically terrible, rigid way to screen human beings with possible human qualities that aren't easily visible when examined by some piece of machine learning and its checkbox criteria.
5. Just, Fuck No... I'd run as fast and far as possible from anyone using LLMs to deal with complex legal issues that could involve my eventual imprisonment or lawsuit-induced bankruptcy.
2. I think you overestimate the caliber of query received in most call centres. Even when it comes to private banks (for those who've been successful in life), the query is most often something small, like holding their hand and telling them to press the "login" button.
Also these all tend to have an option where you simply ask it and it will redirect you to a person.
Those agents deal with the same queries all day; despite what you think, your problem likely isn't special. In most cases you may as well start calling the agents "stochastic parrots" too while you're at it.
I haven't updated this in years but this was my attempt at building a real time personal dashboard, connecting various API's and components.
Was pretty helpful for me, basically acted as a heads up display that I could look at to track my stats over the day (steps, meditation, focus time, etc.)
Policy wonks and lawyers have run America into the ground with reckless spending and forever wars.