If the book were written about a particular case, that seems like specific legal advice.

If the book were a generalized "choose your own adventure" where you compose a sensible legal argument by selecting a particular template and filling it in with relevant data, then using the book essentially lets the reader find the pre-existing legal advice relevant to their situation.
Chatbots as a system are arguably a lot more like the latter than the former - it's a tool that someone can use to 'legal advise' themselves.
Are you still referring to the scenario from the article, or a different one where it's a resource you use outside of court?
> Here's how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker.
Also, it probably wouldn't matter. The interactive, human-ish nature might cross the line into being considered counsel, even if you said it wasn't. See my response to your other comment.
Right, this strikes me as exactly the kind of "I'm not touching you!" argument that basically never works in a court of law. The law's not like code. "Well it's not any different than publishing a book, so this is just free speech and not legal representation"; "OK, cool, well, we both know that's sophist bullshit, judgement against you, next case."
By providing the words to say and arguments to make to the court, in response to a specific case or circumstance, DoNotPay was giving regulated "legal advice" as opposed to "legal information". There is some ambiguity between legal advice and legal information, but this isn't a borderline case.
[EDIT] That is, would they be immune from e.g. a malpractice claim if they did this on the theory that they "weren't representing" the defendant?