An even simpler reality is that, to the extent AI helps you cheat on your professional exams, you're about to enter a dead-end profession that will no longer exist a few years from now.
To me, exams are possibly the easiest thing to tune AIs for. You have the clearest metrics, and you probably have a lot of the material in the training data already. And those tests aren't even supposed to be novel. It seems like something LLMs should really excel at.
Interesting thought. I feel that, currently, AI is most useful to those who have a decent understanding of the subject material and can critique any output. To wholesale trust the output of AI and remove any human in the loop, well, it needs to be really correct all the time.
Exams are a different beast and really a subset of a range of common problems.
Still, I'm very curious what happens when people who have just cheated their way through college, or these kinds of professional exams, meet the real world? Will they all get fired a few months down the track?
>> Still, I'm very curious what happens when people who have just cheated their way through college, or these kinds of professional exams, meet the real world? Will they all get fired a few months down the track?
They will continue to use AI to do their jobs. Eventually, the people who pay their salaries will ask themselves why they continue to pay them.
>> To wholesale trust the output of AI and remove any human in the loop, well, it needs to be really correct all the time.
It's not now, but it will be. Accounting is what you might call an exact science, one where creativity isn't rewarded and where hallucinations by one model can be detected and corrected by others. There is no need for humans to do this type of work.
Well, let's take that as true (no more humans doing accounting). Doesn't that mean there'll be a knowledge gap? What happens when new rules come along? Laws change all the time, and those changes affect accounting practices. How will an AI (at least the current batch) learn what needs to change when there's no prior art for it to lean on?
I mean sure, if we ever get AGI then all bets are off, but as far as I know we're not there, and LLMs are unlikely to evolve into AGI. They're not thinking, right? They don't actually _understand_ anything, right? I mean, I'm quite probably wrong here, but as far as I can tell it's really just very fancy backwards autocomplete.
(Shrug) Thinking, unthinking, meh. If you can perform near the top level at the International Math Olympiad you're not going to have much trouble with the tax code.
What will likely happen is that future tax codes will be written specifically with rules oriented towards automation. We won't have to train general-purpose LLMs by shoving trainloads of IRS documents, Congressional records, and tax court cases at them, as happens now. I think we'll see lots of specialized models ramp up at some point, for efficiency's sake if not just for accuracy and traceability.
>> I'm very curious what happens when people who have just cheated their way through college, or these kinds of professional exams, meet the real world
Certification questions, like interview questions, are usually quite far from the real world. The best strategy is to fake your way through and then learn on the job.
Basically, fake it until you make it. The hardest part of an SWE job is landing it.
At the moment, very many of the people who take those exams wouldn't get a job with me. They're only good for box-ticking audit work, and AI will take their jobs. Less than 1% will make audit partner and actually see a payoff for the effort the exams require. I can see why people cheat.
A small percentage transition from practice to industry and have to learn their jobs all over again. That group will still exist, in my view. They're the ones who will be asking AI the right questions. God only knows how we'll train that 1%!