Hacker News | ezequiel-garzon's comments

I don't think your statement is controversial at all, as I imagine climate change skeptics and vaccine safety skeptics to be largely overlapping sets of people.

The real issue in terms of rational decision making, in the US and elsewhere, is the formation of a "correct" group and an "incorrect" group for pretty much any issue, and their corresponding echo chambers. It looks like this recommended book at least attempts to break away from this trend.


For those Python programmers out there, if you don't mind sharing your experiences, do you spend any time at all in the REPL? If so, approximately what fraction of the time? Using IPython, JupyterLab, or something else? Or do you just run it directly from VS Code or PyCharm? Anything you may want to add about your routine would be appreciated.

Oh, if you (experienced programmer or not) happen to know about a good site or YouTube channel to see Python programmers in action (as opposed to tutorials), please share.

Thanks in advance, and apologies for the digression.


I use Jupyter and vscode together. I'll write new snippets of code in Jupyter, then move them into stand-alone .py files when I'm happy with them. I'll use vscode to work on already established code. The auto-reloading extension in Jupyter is super helpful.

Notebooks are just plain awesome. Whenever I use a new API or service, I'll make a notebook with cells showing how to call/run each operation and commit it as a sort of executable documentation.
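
If it helps, here's a minimal sketch of the kind of auto-reload setup I mean (these are IPython's standard autoreload magics; the module name below is just a placeholder):

    # in the first cell of the notebook
    %load_ext autoreload
    %autoreload 2   # re-import modules automatically before running each cell

    # edits to my_module.py are now picked up without restarting the kernel
    import my_module
    my_module.some_function()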


I'm probably not the average python programmer.

But I normally just create two terminals (I have a tiling window manager); in one I open a Python file under /tmp/ and write my code, and I execute it in the other terminal.

I would probably use a REPL if it were integrated into my favorite editor ( https://helix-editor.com ). But everything else I tried was too "clunky" for me.

I do work with data scientists, though, and they love to do everything inside JupyterLab.


Ah, it's spooky reading someone with basically the exact same workflow as me!

I use Helix in the terminal, regularly opening up a split pane in tmux to either drop into a breakpoint or test out bits of code interactively. I'm not quite as organized as having two regular panes; I'll close and open them pretty quickly. Often it's just to try some toy example of reorganising a dict or something before writing it out into code.


Haha, great to hear I'm not the only one! I just miss the speed of Helix when using JupyterLab et al., so I just do it this way.

Yeah, I'm definitely not that organized either; I also don't keep both open all the time, and my fingers are sometimes too quick and close one of the terminals without me thinking about it. But I kept it simple in my example so others could get the idea, as this is basically the "concept" behind my workflow.


I use “breakpoint()” a lot for debugging. It’s by far my #1 tool for figuring out why something isn’t working. Recommend you learn how to use it.
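
For anyone unfamiliar, a minimal sketch of what that looks like (the function and data here are hypothetical, just for illustration):

    def parse_totals(rows):
        total = 0
        for row in rows:
            breakpoint()  # drops into pdb here; inspect `row`, step with `n`, continue with `c`
            total += row["amount"]
        return total

    parse_totals([{"amount": 3}, {"amount": 5}])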


Thank you so much. Your comment helped me a lot. I wish a lot more people knew about this.


I usually develop locally with the vscode debugger. On staging servers I often do remote vscode sessions also. On production I often use the REPL since I don’t want to install additional tools, but still need to inspect the state of a pipeline in a more step-by-step fashion.


I work with a ton of ETL and web APIs, and build internal tools. Almost everything new starts off in a notebook and then either moves to a standalone .py file, ends up in an AWS Lambda (typically Zappa Flask projects), or goes into an Airflow DAG.

Almost never ever use the REPL.


I'm nowhere near a programmer, but I use 'ipython' quite a bit for prototyping.

Particularly in cases where I'm trying to figure out how I want to modify some object.

I dabble largely for Ansible and system administration purposes. IDEs and the like aren't a thing for me; I use neovim/LSP instead.
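
As a rough sketch of the kind of interactive poking-around I mean (the object here is just a made-up example):

    In [1]: import json
    In [2]: data = json.loads('{"a": 1, "b": [2, 3]}')
    In [3]: data.items?          # IPython's ? shows the docstring/signature
    In [4]: data["b"].append(4)  # try out the modification interactively first
    In [5]: data
    Out[5]: {'a': 1, 'b': [2, 3, 4]}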


REPL: 0%, JupyterLab: 80%, PyCharm: 20%.

The reason is that the Jupyter environment is light-years more powerful than the REPL. It feels like only those who don't really code would use the REPL; I didn't even use it after the first day.


For Python: 90/100 times I just run the code/tests/debugger; 8/100 times I'll pull the code out and step through it with my own inputs; 2/100 times I use a REPL and break on areas of interest because the debugger just isn't cooperating. It's just too easy to use the debugger in something like vscode to run a module. Especially in Python, you can often just right-click and run any old module: add a __main__ block, feed it some parameters, and step through it, unlike with a static language.
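
A minimal sketch of what I mean by the __main__ trick (module name and inputs here are hypothetical):

    # some_module.py (hypothetical)
    def normalize(values):
        top = max(values)
        return [v / top for v in values]

    if __name__ == "__main__":
        # feed some sample parameters so the module can be run/debugged directly
        print(normalize([3, 5, 10]))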


I agree. Speaking of old school, they scanned their first (1960) issue: https://www.electronicsweekly.com/news/read-first-ever-elect...


"There is an art in deciding which rabbit holes to plunge down and which to step over."

Saved, bookmarked, framed.


The first website ever, at CERN [1], is not very consistent about this. Some pages [2] do link "home", but don't use this term: in this particular case, a link to "the WWW project" is given in the first sentence. Some [3] do not.

[1] http://info.cern.ch/hypertext/WWW/TheProject.html

[2] http://info.cern.ch/hypertext/WWW/Status.html

[3] http://info.cern.ch/hypertext/WWW/Helping.html


Does anybody know the motivation behind using `curl -sD- -o/dev/null` instead of `curl -I` on the landing page?


The former appears to retrieve headers via a standard GET request. Apparently, with the latter method, there's a chance you may get different results than you would see from a GET request. (I'm not an expert, so this is just what I discovered after digging a bit for curiosity's sake.)


I think this is the exact difference: `curl -I` makes a HEAD request, while the other makes a GET request and shows the response headers. Just as an example, against a machine running nginx on my local network, the GET response sends me a Transfer-Encoding header while a HEAD request does not. I can see a lot of configurations where a HEAD request returns different headers than a GET.
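
If you want to see this for yourself, here's a rough sketch in Python (reusing the CERN URL from elsewhere in the thread; the exact headers you get back will depend on the server):

    import urllib.request

    url = "http://info.cern.ch/hypertext/WWW/TheProject.html"

    # GET: the body is fetched; headers reflect how the content was actually served
    with urllib.request.urlopen(url) as resp:
        print("GET headers:", dict(resp.headers.items()))

    # HEAD: no body; some servers omit headers such as Transfer-Encoding or Content-Length
    head = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(head) as resp:
        print("HEAD headers:", dict(resp.headers.items()))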


As a matter of standards compliance <https://www.rfc-editor.org/rfc/rfc9110#section-9.3.2-2>:

> However, a server MAY omit header fields for which a value is determined only while generating the content.

I find omitting Transfer-Encoding quite understandable and reasonable; the whole purpose of HEAD is to say “don’t bother doing the work that GET would trigger, I don’t care about exactness, I’m just getting a general idea”. Though I do find cases where Content-Length is omitted, even on static resources, disappointing. Saw that happen for I think the first time a few weeks ago (that is, a Content-Length that was present in GET but absent in HEAD).

But certainly I’ve seen more than a few 405 Method Not Allowed responses to HEAD, which is definitely bad.


I know this is just one thing out of many, but sum is included in stats.


Thanks to both! I didn't know about it, so I needed the clarification. The scene: https://youtu.be/4xgx4k83zzc



There is a good critical close read by the Guardian.

https://www.theguardian.com/environment/2023/dec/13/what-the...


On the technically advanced end of the spectrum you'll find John MacFarlane [1], professor of philosophy at Berkeley and creator of pandoc [2]. Some people are just amazing.

[1] https://johnmacfarlane.net/

[2] https://pandoc.org/


Incredibly talented guy. Incredibly useful software.

