
I tried Prism, but it's actually a lot more work than just using Claude Code. The latter lets you "vibe code" your paper with no manual interaction, while Prism requires you to review every change.

I actually think Prism promotes a much more responsible approach to AI writing than "copying from ChatGPT" or the like.


> And also plagiarism, when you claim authorship of it.

I don't actually mind putting Claude as a co-author on my GitHub commits.

But for papers there are usually so many tools involved. It would be crowded to include each of Claude, Gemini, Codex, Mathematica, Grammarly, Translate, etc. as co-authors, even though I used all of them for some parts.

Maybe just having a "tools used" section could work?


I suspect the parent post was concerned about plagiarizing the authors of the training data, not software tools.

I'm always surprised that Python doesn't have TUI libraries as good as JavaScript's or Rust's. With the amount of CLI tooling written in Python, you'd think it would have better libraries than any other language.

Blessed was a decent one iirc:

https://github.com/jquast/blessed
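For reference, a minimal blessed loop looks something like this (just a sketch, assuming `pip install blessed`; the message and the quit key are placeholders):

    # Minimal blessed "app loop": enter fullscreen, draw a line, exit on "q".
    from blessed import Terminal

    term = Terminal()
    with term.fullscreen(), term.cbreak(), term.hidden_cursor():
        print(term.move_xy(0, 0) + term.bold("Hello from blessed (press q to quit)"))
        while True:
            if term.inkey(timeout=1) == "q":  # wait up to 1s for a keypress
                break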

One reason for the Python gap might be the timing of the TUI renaissance, which I think happened (is happening?) alongside the rise of languages like Go and Rust.


it has, but python being single threaded (until recently) didn't make it an attractive choice for CLI tools.

example: `ranger` is written in python and it's freaking slow. in comparison, `yazi` (Rust) has been a breeze.

Edit: Sorry, I meant GIL, not single thread.


> it has, but python being single threaded (until recently) didn't make it an attractive choice for CLI tools.

You probably mean GIL, as Python has supported multithreading for like 20 years.

Idk if ranger is slow because it's written in Python; more likely it's the specific implementation.


> You probably mean GIL

They also probably mean TUIs, as CLIs don't do the whole "draw every X" thing (and usually aren't interactive); that's basically what sets TUIs apart from CLIs.


Even my CC status line script enjoyed a 20x speed improvement when I rewrote it from Python to Rust.

It's surprising how quickly the bottleneck becomes Python itself in any nontrivial application, unless you're very careful to write a thin layer that mostly shells out to C modules.

Textual looks really nice, but I usually make web apps, so I haven't tried it for anything serious:

https://textual.textualize.io/
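For anyone curious, a minimal Textual app is only a few lines (a sketch, assuming `pip install textual`; the widget text is a placeholder):

    # Minimal Textual app: header, one static widget, footer, quit on "q".
    from textual.app import App, ComposeResult
    from textual.widgets import Footer, Header, Static

    class HelloApp(App):
        BINDINGS = [("q", "quit", "Quit")]  # uses the built-in quit action

        def compose(self) -> ComposeResult:
            yield Header()
            yield Static("Hello from Textual")
            yield Footer()

    if __name__ == "__main__":
        HelloApp().run()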


Textual is cool, but it's maintained by a single guy, and the roadmap hasn't been updated since 2023: https://textual.textualize.io/roadmap/

Textual is A++. It feels a bit less snappy than Ink, but it makes up for it with its immense feature set. Seriously fun building apps of all kinds with this lib.

I’m using Textual for my TUI needs, it’s very decent.

What about replacing

> Haskell provides indexable arrays, which may be thought of as functions whose domains are isomorphic to contiguous subsets of the integers.

with

> Haskell provides indexable arrays, which are functions on the domain [0, ..., k-1]?

Or is the domain actually anything "isomorphic to contiguous subsets of the integers"?


In Haskell specifically, arrays really do allow for the more general definition. This makes the library documentation[1] quite a bit more intimidating to newcomers (speaking from personal experience), but saves you the boilerplate and hassle of figuring out the mapping yourself if you're indexing your array by some weird nonsense like `(Bool, Char, Integer, Int8)`, with bounds `((False, 'a', 5000, 0), (True, 'z', 9001, 4))`.

[1] https://hackage.haskell.org/package/array-0.5.8.0/docs/Data-...


That is typical in most languages, but Haskell's Data.Array is actually parametric over both the index type and the element type, with the index type required to provide a mapping to contiguous integers. This makes it similar to, e.g., a hashmap which is parametric over both key and element types, with the key type required to provide hashing.
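To make the "mapping to contiguous integers" concrete outside Haskell, here's a hypothetical Python sketch of what the Ix machinery does for a (bool, char) index type (all names are made up for illustration):

    # Hypothetical sketch of Haskell's Ix idea: an array indexed by
    # (bool, char) pairs, flattened onto a contiguous block of integers.
    class RangedArray:
        def __init__(self, lo, hi, fill=None):
            self.lo, self.hi = lo, hi
            b0, c0 = lo
            b1, c1 = hi
            self.width = ord(c1) - ord(c0) + 1  # chars per bool "row"
            self.cells = [fill] * ((int(b1) - int(b0) + 1) * self.width)

        def _offset(self, key):  # (bool, char) -> flat integer index
            b0, c0 = self.lo
            b, c = key
            return (int(b) - int(b0)) * self.width + (ord(c) - ord(c0))

        def __getitem__(self, key):
            return self.cells[self._offset(key)]

        def __setitem__(self, key, value):
            self.cells[self._offset(key)] = value

    arr = RangedArray((False, "a"), (True, "z"), fill=0)
    arr[True, "q"] = 7
    print(arr[True, "q"])  # 7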

I'd rather see a programming language optimized for "few tokens". Something like toon, but for code.

It took Andrew Wiles 7 years of intense work to solve Fermat's Last Theorem.

The METR institute predicts that the length of tasks AI agents can complete doubles every 7 months.

We should expect it to take until 2033 before AI solves Clay Institute-level problems with 50% reliability.
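Rough arithmetic behind that estimate (a sketch; the hours-per-year and current-horizon numbers are my own assumptions, not METR's):

    # Doublings needed to get from today's AI task horizon to a
    # Wiles-sized task, at one doubling every 7 months.
    import math

    wiles_hours = 7 * 2000   # ~7 years of intense work at ~2000 hours/year
    horizon_now = 4.0        # assumed current horizon in hours, at 50% reliability
    doubling_months = 7      # METR's reported doubling time

    doublings = math.log2(wiles_hours / horizon_now)
    years_out = doublings * doubling_months / 12
    print(f"{doublings:.1f} doublings, about {years_out:.1f} years away")  # ~11.8, ~6.9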


That's exactly why the Millennium Prize Problem Bench[1] was created.

1. https://mppbench.com/


That's amazing :D


There is an ongoing effort to formalize a modern, streamlined proof of FLT in Lean, with all the needed prereqs. It's estimated that it will take approx. 5 years, but perhaps AI will lead to some meaningful speedup.


What I'm hoping to see is high volume automated formalization of the math literature, with the goal of formalizing (or finding flaws in) the entire thing.

And once we have that formalized corpus, it's all set up as training data for moving forward.


We can't really have across-the-board formalization of the math literature without getting the basics done first (including the whole undergrad curriculum), which is what the mathlib folks are working on. It will in fact be interesting to see if AI can meaningfully speed up that work (although they seem to be bottlenecked on review and merging at the moment, not new contributions per se, so a "coding" AI workflow may be a bit of a closer fit).


If you have a sufficiently strong verifier, 1/100000 reliability is already enough.


Sure, but then 50% reliability just becomes a matter of whether you can make a strong enough verifier.
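The math behind that exchange, for concreteness (standard best-of-n sampling; the 1/100000 figure is from the comment above):

    # With per-attempt success rate p and a perfect verifier that recognizes
    # correct solutions, best-of-n succeeds with probability 1 - (1 - p)**n.
    import math

    p = 1e-5                                    # 1/100000 per-attempt reliability
    n = math.log(0.5) / math.log(1 - p)         # attempts needed for 50% overall
    print(f"about {n:,.0f} verified attempts")  # about 69,314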


Not being a democracy does not give other countries a right to invade you.


ACR — Automatic Content Recognition: tech in some smart TVs/apps that identifies what’s on-screen (often via audio/video “fingerprints”) and can report viewing data back to vendors/partners.

VPPA — Video Privacy Protection Act: a U.S. law aimed at limiting disclosure of people’s video-viewing/rental history.

HDCP — High-bandwidth Digital Content Protection: an anti-copy protocol used on HDMI/DisplayPort links to prevent interception/recording of protected video.

DRM — Digital Rights Management: a broad term for technical restrictions controlling how digital media can be accessed, copied, or shared.

MPAA — Motion Picture Association of America: the former name of the main U.S. film-industry trade group (now typically called the MPA, Motion Picture Association).

TV / TVs — Television(s).


Appreciate this breakdown


U.S. — United States (of America)


Thank you


I assume they are just advocating rolling back the recent wave of legalization over the past few years.


> I’m wondering if some kids are saying they need this stuff to justify the condition or to play up the sympathy, to make the condition their personality.

In a highly competitive environment like Stanford, isn't it more likely that it's to get more time on tests -> better grades -> higher paying job?


I can only speak for myself, but I didn’t need more time on tests in college. If I didn’t know the answers, more time wasn’t going to help, and I’d get bored/frustrated and just want it to be over. I was usually one of the first people done, be it A or D work.

If they need all that extra time, the university on the degree might get them some extra money out of the gate, but I have to imagine it will work against them if they are slow and are always asking for extra time to complete projects. That’s not going to get them promoted and they will stagnate. I’ve also never had any job ask me about my grades.

