
Have you tried this yet? Looks like it uses the MS Graph API. https://github.com/jgunthorpe/cloud_mdir_sync


I got briefly excited about this one, but I've run into the same issue of needing the IT department to explicitly grant me access:

https://github.com/jgunthorpe/cloud_mdir_sync/issues/25


I turned this on, and for most "general" use cases I found it useful. I also observed a downward bias on a family of "quantitative estimation" tasks, so just keep that in mind when you have this kind of stuff turned on (always beware of mutating global state!)


It’s kind of wild how much work really smart people will do to get Python to act like Fortran. This is why R is such a great language IMO. Get your data read and your arrays in order in a dynamic, Scheme-like language, then just switch to Fortran and write actual Fortran like an adult.
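For anyone who hasn't seen it, the workflow is roughly this (toy sketch, all names made up; compile with `R CMD SHLIB scale.f90` first, and the shared-library extension is platform-dependent):

    ! scale.f90 -- multiply a vector in place
    subroutine dscale(n, x, alpha)
      integer n
      double precision x(n), alpha
      integer i
      do i = 1, n
        x(i) = alpha * x(i)
      end do
    end subroutine

    # back in R: arrays in, arrays out
    dyn.load("scale.so")
    x <- as.double(rnorm(5))
    .Fortran("dscale", n = as.integer(length(x)),
             x = x, alpha = as.double(2.0))$x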


Or don't even write the Fortran manually; just transpile the R function to Fortran: https://github.com/t-kalinowski/quickr
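IIRC the interface looks something like this (going from memory of the README, so treat the exact API as approximate):

    library(quickr)
    scale_vec <- function(x, alpha) {
      declare(type(x = double(n)), type(alpha = double(1)))
      x * alpha
    }
    scale_vec_fast <- quick(scale_vec)  # compiles via generated Fortran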


R kinda sucks at anything that isn't a dataframe though.


R sucks for any real programming task where you don't hardcode every single thing. It sucks at loading modules from your own project. It sucks at finding the path of the script you're executing (see the workaround below).

Basically everything is stateful in R. You call standard library functions to install third-party libraries ffs. And that operation can invoke your C compiler.

Putting R and a repo in a Docker container to run it in a pipeline where nothing is hardcoded (unlike our data science guy's workspace) was the worst nightmare we had to deal with.
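That script-path workaround, for reference. It only works when invoked via Rscript, not under source() or an IDE session, which is exactly the statefulness problem:

    args <- commandArgs(trailingOnly = FALSE)
    file_arg <- grep("^--file=", args, value = TRUE)  # Rscript passes --file=<path>
    script_dir <- dirname(normalizePath(sub("^--file=", "", file_arg)))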


This. Writing Fortran is easy as hell nowadays.

But yeah, I learned Fortran to use with R lol. And it is nice. Such easy interop.


The comment you linked is a response to my comment, where I tried (and failed) to articulate the world in which R is situated. I finally "RTFA", and I think the benchmark perfectly demonstrates why conversations about R tend not to be very productive. The benchmark is of a hypothetical "sum" function. In R, if you pass a vector of numbers to the sum function, it will call a C sum function. That's it. In R, when you want to do tricky lispy metaprogramming stuff, you do that in R; when you want stuff to go fast, you write C/C++/Rust extensions. These extensions are easy to write in a really performant way because R objects are often thinly wrapped contiguous arrays. I think in other programming language communities, the existence of library code written in another language is seen as some kind of sign of failure. R programmers just do not see the world that way.
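To make "thinly wrapped contiguous arrays" concrete, a hand-rolled version of the hot loop is about this much C (a sketch; `my_sum` is a made-up name, built with `R CMD SHLIB my_sum.c`):

    #include <R.h>
    #include <Rinternals.h>

    SEXP my_sum(SEXP x) {
      double *p = REAL(x);      /* direct pointer into R's contiguous storage */
      R_xlen_t n = XLENGTH(x);
      double total = 0.0;
      for (R_xlen_t i = 0; i < n; i++)
        total += p[i];
      return ScalarReal(total);
    }

Then from R it's just dyn.load("my_sum.so"); .Call("my_sum", as.double(1:10)).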


Every two weeks or so I peruse GitHub looking for something like this, and I have to say this looks really promising. In statistical genetics we make really big scatterplots called Manhattan plots (https://en.wikipedia.org/wiki/Manhattan_plot), and we have to use all this highly specialized software to visualize them at different scales (for a sense of what this looks like: https://my.locuszoom.org/gwas/236887/). Excited to try this out.
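(For the unfamiliar: a Manhattan plot is essentially -log10(p) against genomic position, one point per variant. A toy base-R version with fake data looks like this; real ones have millions of points, which is why the specialized tooling exists:)

    set.seed(1)
    d <- data.frame(chr = rep(1:22, each = 2000),
                    pos = seq_len(44000),
                    p   = runif(44000))
    plot(d$pos, -log10(d$p), pch = 20, cex = 0.3,
         col = c("grey30", "steelblue")[d$chr %% 2 + 1],  # alternate colors by chromosome
         xlab = "genomic position", ylab = "-log10(p)")
    abline(h = -log10(5e-8), lty = 2)  # genome-wide significance threshold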


Hey! This sounds like a really interesting use case. If you run into any issues or need help with the visualization, please don't hesitate to post an issue on the repo. We can also think about adding an example demo of a Manhattan plot to help too!


If you’re working in R with ggplot2, you could also consider the `ggrastr` package, specifically `ggrastr::geom_point_rast`.
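e.g., assuming a data frame `gwas` with columns `pos` and `p` (it rasterizes just the point layer, so the rest of the figure stays vector):

    library(ggplot2)
    ggplot(gwas, aes(pos, -log10(p))) +
      ggrastr::geom_point_rast(size = 0.1) +  # millions of points, rendered as raster
      geom_hline(yintercept = -log10(5e-8), linetype = "dashed")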


These really large scatterplots are also useful for visualizing claims, and finding fraud.


Have you tried ManimGL?

https://github.com/3b1b/manim/releases

Super awesome, and you can make it into an MCP for Cursor.


R was heavily inspired by Scheme, and I think that's a big part of why it's so popular in the scientific community (it's a great language for authoring DSLs). In fact, DSLs are so good in R that lots of midwit CS bros love to dunk on R the language, not realizing that what they're complaining about is in fact some library function. I like to tell people that R is "scheme on the streets, FORTRAN in the sheets". Just as Clojure deviated from earlier Lisps to suit its niche, I think R was very much developed as a Lisp designed to facilitate complex and flexible scientific applications (with an emphasis on statistical computing). I think you could develop a compelling analogy that Clojure:JVM::R:numerics-oriented C/FORTRAN.
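The lispiness is easy to see at the prompt: calls are just data you can quote, inspect, and rewrite before evaluating:

    e <- quote(x + y * 2)        # an unevaluated call, i.e. a parse tree
    e[[1]]                       # `+` -- operator first, like a Lisp form
    e[[3]][[1]]                  # `*`
    eval(e, list(x = 1, y = 2))  # 5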


From TFA: "...the creator of the R programming language, Ross Ihaka, who provided benchmarks demonstrating that Lisp’s optional type declaration and machine-code compiler allow for code that is 380 times faster than R and 150 times faster than Python"


R is built on a Lisp-like run-time core, complete with symbols, linked lists made of cons cells, etc.
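You can poke at this directly from the prompt:

    f <- function(x, y = 2) x + y
    typeof(formals(f))     # "pairlist" -- a linked list of cons cells
    typeof(quote(x + y))   # "language"
    as.list(quote(x + y))  # `+`, x, y -- operator first, Lisp-style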


I had no idea R was so much like Julia in that regard. Makes me wonder if the Julia devs were just like, "What if R, but more general?"


This is what's so brilliant about the Microsoft "partnership". OpenAI gets the Microsoft enterprise legitimacy, meanwhile Microsoft can build interfaces on top of ChatGPT that they can swap out later for whatever they want when it suits them


I think this is good for Microsoft, but less good for OpenAI.

Microsoft owns the customer relationship, owns the product experience, and in many ways owns the productionisation of a model into a useful feature. They also happen to own the datacenter side as well.

Because Microsoft is the whole wrapper around OpenAI, they can also negotiate. If they think they can get a better price from Anthropic, Google (in theory), or their own internally created models, then they can pressure OpenAI to reduce prices.

OpenAI doesn't get Microsoft's enterprise legitimacy; Microsoft keeps that. OpenAI just gets preferential treatment as a supplier.

On the way up the hype curve it's the folks selling shovels that make all the money, but in a market of mature productionisation at scale, it's those closest to customers who make the money.


$10B of compute credits on a capped profit deal that they can break as soon as they get AGI (i.e. the $10T invention) seems pretty favorable to OpenAI.


I’d be significantly less surprised if OpenAI never made a single $ in profit than if they somehow invented “AGI” (of course nobody has a clue what that even means, so maybe there is a chance just because of that...)


That's a great deal if they reach AGI, and a terrible deal ($10bn of equity given away for vendor-locked credit) if they don't.


Fortunately for OpenAI the contract states that they get to say when they have invented AGI.

Note: they announced recently that they will have invented AGI in precisely 1000 days.


Leaving aside the “AGI on paper” point a sibling correctly made, your point shares the same basic structure as noting that any VC investment is a terrible deal if you only 2x your valuation. You might get $0 if there is a multiple on the liquidation preference!

OpenAI are clearly going for the BHAG. You may or may not believe in AGI-soon but they do, and are all in on this bet. So they simply don’t care about the failure case (ie no AGI in the timeframe that they can maintain runway).


How so?

Still seems like owning the customer relationship like Microsoft is far more valuable.


QR decomposition isn’t in BLAS; you’re probably thinking of LAPACK.
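In R, for example, qr() will use LAPACK's dgeqp3 if you ask for it (the default is a modified LINPACK routine):

    m <- matrix(rnorm(12), 4, 3)
    qr(m, LAPACK = TRUE)  # QR with column pivoting via LAPACK's dgeqp3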


No. OTOF is the gene symbol. Referring to the gene as otoferlin is perfectly legitimate.


Exactly. Also, even if the gene and protein names were different, real scientists wouldn't get hung up on minor technical slip-ups like this, because everyone knows what is meant. (Former cell biologist)


That’s certainly not true in scientific computing. For us, dynamic is very much the exception rather than the rule.

