
I am a little scared by the distinction we are starting to make between "computers" and "developers' computers".

On most computers nowadays you cannot code (tablets and smartphones). Are computers doomed to be an expensive tool for a few "nerds"? What will be the impact on computer literacy?



It's not recent; this developer-user split has been growing for a few years now:

http://boingboing.net/2012/08/23/civilwar.html

http://boingboing.net/2012/01/10/lockdown.html

...and RMS predicted this almost 20 years ago:

http://www.gnu.org/philosophy/right-to-read.en.html

I think the rise of P2P, file sharing, and the openness of the Internet in the last decade significantly narrowed the developer-user gap; and it's been growing since then, motivated by corporations' desire to maintain control over their users.


> motivated by corporations' desire to maintain control over their users.

I think that's only one factor, and not a majority one.

Most users don't want to have to deal with "how it works". They want a simple, easy to use tool that works reliably... And they want to call someone to "fix it" when it "breaks". That's how it works with plumbing, cars, landline phones, stereo components, televisions, and all the electronics they've ever used.

The exceptions are computers and some smartphones, which can present cryptic error messages, have weird things in their settings, and generally make a "dumb user" feel out of their element. Think about the confusion users feel when confronted with a funny noise in their car. "I'm not a mechanic, what does that noise mean?" is no different from "I'm not a computer person, what does that error mean?" What's more, the meaning of the question is not "what, mechanically/electrically, is at fault?" It is "how much time/money will it cost to get it fixed?"

It's not just a small preference, either - the height of luxury is "push button" services that "just work". Go to a high end hotel, and your room phone has just one button. Top end consumer products of all sorts strive to be an easy-to-use "appliance". A dumbed down user interface without developer tools is user preference, it's status, it's customer comfort and pride, all tied into one.

So 99% of companies end up designing their interfaces like that hotel phone: http://salestores.com/stores/images/images_747/IPN330091.jpg

IMO the most impressive thing about OSX is how well it supports both audiences: it feels like a push-button, high luxury, comfortable, easy device to my mother. But under the hood there are great logs and a solid BSD-based operating system model. It comes prepackaged with a lot of developer tools, hidden in a place where I would look right away, but my mother would never notice.

Sure, some companies use software to limit and control their customers (cough cough Sony), usually with sharp legal/lobbyist teeth to enforce that control. But 99% of companies out there just want to make their users feel comfortable, high status, and competent to use their device.

While I agree with RMS that this split is inevitable, I don't believe it's about control. It's about two distinct market segments: auto enthusiasts who want control over the torque settings in their high end car, and people who just want a car that fucking works. Chefs who want sector-by-sector control over their oven's heating profile, and people who just want to be able to cook a fucking roast without burning it.


The problem with push button service is that you need people interacting at the backend to make the magic happen. Those people cost money. If you try to provide apple-level simplicity at google-level prices you won't be able to afford those people. Google itself is a fine example. It works well, until it doesn't, and then you're stuck. Providing a power user interface on a consumer level product is imho a necessity if you leave problems up to the user to solve. At least it gives them a chance of getting unstuck.


And this is why people will pay Apple prices for Apple gear. If you provide a "power user interface" it's a sure sign you're trying to save $$$ by skimping on support and you don't care about the user. This goes even for professional tools; cf. Autodesk Inventor vs. AutoCAD; Visual Studio vs. editor and command line tools.


It's broader than that. The distinction lies in production versus consumption devices.

Tablets and phones are consumption. You can't do any serious work on them - development included.

This is why laptops and computers have stuck around in spite of the proliferation of cheap, tiny, elegant consumption devices.

So no, I don't think laptops and computers will go away for non-nerds, just for people who don't produce anything.


I would put Instagram or SnapChat firmly into the production column. While many do not like the "output" of this production, that does not change the fact.

And a lot of music creation apps exist for tablets/phones.

This production/consumption divide is too rigid.


I think we need a better set of words for this, but I'd still put Instagram and SnapChat on the consumption side - "producing" a photo and maybe applying a sticker and a filter isn't really production. Those features are designed to spruce up the photo that is fed directly into the consumption loop without much context or sense to it.

Now don't get me wrong - while I only use the two a little, I think they're fine. It's communication, an important part of human experience. But, at least in my mind, Instagram and Snapchat fall firmly into the same group as browsing Facebook or 9gag, as opposed to e.g. making a let's play video or a comic strip.


For now, it is true that most "professional" productivity apps are on laptops / desktops.

Yes, there are ways to take photos and create music on tablets and phones. You can do some basic editing on them, even. But the "professional" tools for photography and music, with all the bells and whistles you can think of, are still dominated by laptop / desktop computer programs. (The dominant programs being Photoshop for images, and DAWs like Logic, Ableton Live, and Pro Tools for music.)

The distinction between "production" and "consumption" devices is indeed kind of too rigid in the sense that, of course, professionals will utilize the creative tools that come on tablets and phones, even if the desktop / laptop programs are the primary tool. Tablets also can shine as an extended interface for desktop programs. (E.g., Logic Pro (and others) have apps that turn an iPad into a remote controller for the main DAW. There are programs like Astropad that turn your iPad into a Wacom-like tablet for Photoshop, etc.)

The obstacle is interface. The fine-tuned control of a tablet or (especially) a phone is much poorer than using a mouse and keyboard with a large screen. Until that gets resolved, I doubt desktops / laptops will go anywhere.


You can create content on tablets, and some of it is excellent content.

Development isn't done on tablets because the input devices we have to make code are limited to a keyboard, and most people think text files are code, rather than a serialisation/deserialisation format for an AST.

You could easily build an AST with gestures and speech rather than tapping buttons, and I think in 10-20 years time that's how we'll make software.


> You could easily build an AST with gestures and speech rather than tapping buttons, and I think in 10-20 years time that's how we'll make software.

I doubt it. Perhaps we'll be making ASTs by writing (i.e. drawing symbols with styli or pens), but I don't think we'll be doing it via gestures and speech. There's a reason that we don't teach math via interpretive dance.


There's also a reason people in the same room don't communicate by tapping buttons with letters on them.


Like 'Zecc said, maybe they don't care about the third parties present. Or maybe they want everyone to be a part of the conversation, which is often fine. Or maybe they talk about emotional matters.

But text is a pretty fine form of communication and I find myself using it very often at work (and at home I often talk this way to people not in the same room, but in the same flat). It's fast, it's convenient, it's less disruptive, and the only reasons to avoid it are some silly preconceptions that digital communication is somehow "worse" than spoken words.

Also, you never passed notes to your friends while in school? That's the pre-smartphone equivalent of IM.


We do that a lot; for technical stuff it is often many times more efficient than talking.

Edit: And it's easier to search, remember, and read again - and the less nice variant of that: 'you never said that to me' 'I did: ' copy/paste.


I can't say I've found any scenarios where talking to someone is many times less efficient than typing them a message. There are certainly times where it's helpful to supplement conversation with code, but that's a different story.

Regarding being easier to search and read it again, it seems like there are potential technical solutions to that problem, but I would agree that we're not there yet.


Obviously people occasionally do talk in person via pressing buttons. That doesn't mean it's generally preferred.

Recorded speech is also searchable so not sure that's relevant.


> Recorded speech is also searchable so not sure that's relevant.

It is relevant; recorded speech is not very searchable, especially if you are talking in a group at a conference where people can be from different countries with different dialects (which is the normal situation for our group talks). Also it is not convenient and sometimes not possible to record every (conference) meeting (too much noise etc). With text it's automatically recorded and perfectly searchable...

Also some of my colleagues are not good at English listening but are very good technically; if I type what I mean they understand while if I/we tell them, everything has to be translated and/or repeated many times.

I think the tech is not there yet to say it's not relevant.


Totally agreed, the tech is not there yet. I just also believe that, at the current rate machine learning is going, the issues will soon be much less relevant.


I hope so but that has been a long-standing promise. For me no amount of voice-to-text beats typing. I do not know why, but things like Dragon basically give me gibberish. And that is with natural language. With natural language, code and math it is just vomit. I have no clue how they will solve that soon and then mix it with translation as well. Hope there is, though.


I do that all the time at work. Coworker (who's sitting next to me) has headphones on? Message him on HipChat.


They don't care about other people in the same room trying to concentrate on what they're doing?


Go ask teens with a smartphone at a restaurant but I've been doing the same myself sometimes. 1-1 or group chats.

A more specialized scenario: I was copy/pasting stuff to a colleague in the same room yesterday.


They don't?


Generally, no they don't.


Plain text files are an incredibly powerful way of "storing ASTs"; the advantages are far too numerous to list, the primary one being complete and total interoperability with all other tools that accept plain text files.

I will bet you £100 that we won't be programming by speech and gestures in even 25 years time as the disadvantages are enormous.


Not saying we shouldn't store software as text. Just saying we don't need to make software with text.


Get a better editor - one that lets you operate on semantic units. And/or get a better programming language - one that lets you operate on code as AST.


I think you're making good points, but please let me know when semantic editors are available for Go, Rust, JavaScript and Python.

One other advantage of directly manipulating AST - it's very easily converted into any language runtime you want. It won't matter if you are targeting the JVM, V8 or native bytecode; you can do it all from the same AST. This same thing is possible with plain text code, but not quite as common.


> I think you're making good points, but please let me know when semantic editors are available for Go, Rust, JavaScript and Python.

I think there are ports of paredit-like features to those languages in Emacs too, and all the other semantic features of Emacs itself work with those. As long as the language's major mode properly defines what is e.g. a function, a symbol, etc. you can use semantic navigation and editing.

> One other advantage of directly manipulating AST - it's very easily converted into any language runtime you want. It won't matter if you are targeting the JVM, V8 or native bytecode; you can do it all from the same AST. This same thing is possible with plain text code, but not quite as common.

I don't think this is something that an AST gives you. An AST is just a more machine-friendly representation of what you typed in the source code. Portability between different platforms depends on what bytecode/machine code gets generated from that AST. And since the AST is generated from the source anyway as one of the first steps of compilation, getting it to emit the right set of platform-specific instructions means you can compile the original source there too.
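To make that concrete (a minimal, purely illustrative sketch using Python's standard ast module - not tied to any of the languages mentioned above): the parse step gives you the tree, and what actually runs is whatever a separate back-end step emits from it.

    import ast

    source = "print(1 + 2)"

    # Parsing is one of the first steps of compilation: text -> AST.
    tree = ast.parse(source)
    print(ast.dump(tree, indent=2))  # the machine-friendly representation

    # Emitting runnable code from that AST is a separate, platform-specific step;
    # here CPython produces its own bytecode, another target would produce its own.
    code = compile(tree, filename="<example>", mode="exec")
    exec(code)  # prints 3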

And AST doesn't solve the problem of calling platform-specific functions and libraries anyway.


Sure, there are many (excellent) AST based editors. However, an AST editor that is based on a keyboard and requires you to learn to type at 160 WPM won't help most tablets be good code creation devices.

Data structures are shapes. A shape is better drawn than described in text.


My point is - there are AST-based editors and languages (e.g. Emacs with Paredit and Common Lisp) and you can see that even in that mode of "thinking" about code, you can't beat the speed, efficiency and flexibility of the keyboard.

> Data structures are shapes. A shape is better drawn than described in text.

Draw me a linked list. Tell me how much faster it is than typing:

   (list 1 2 (foobar) (make-hash-table) (list "a" "b" "c") 6)

Even on a visual keyboard on a tablet, it's faster to type than to draw data structures. A flat sheet of glass maybe gives us the ability to get the (x, y) coordinates of a touched point more easily and with more precision, but it sacrifices many other important aspects - like tactile feedback and the ability to feel shapes. With a physical keyboard, you're employing more of the features your body and mind have, and that's why it's faster than a touchscreen.

Unless you can find a completely different way of designing UX, then a tablet won't be a suitable device for creation. None of the currently existing solutions come close to beating a physical keyboard and a mouse.


> Draw me a linked list

I don't normally use linked lists, but here's an array:

"list joe (subtle gesture) mary (subtle gesture) dave end

If I wanted to delete dave from the list I could grab it and slide it away or say "list delete last".

> Tell me how much faster it is than typing

Everyone in the room I'm in now can talk at 200 words per minute and use their hands. Very few of them could type that fast.


> "list joe (subtle gesture) mary (subtle gesture) dave end

How will you go about drawing "joe" and "mary"? Is it faster than typing? Note that you can't always select stuff from dropdowns - you often have to create new symbols and values.

> Everyone in the room I'm in now can naturally talk at 200 words per minute.

How fast they can track back and correct a mistake made three words before? Or take the last sentence and make it a subnode of the one before that? Speech is not flexible enough for the task unless you go full AI and have software that understands what you mean.


>> You could easily build an AST with gestures and speech

> How will you go about drawing "joe" and "mary"?

I'll just say it, it's easier. As I said at the top of the thread, gestures and speech.

> How fast they can track back and correct a mistake made three words before?

I gave an example of opening an existing structure and modifying it in the comment you're replying to.

> Or take the last sentence and make it a subnode of the one before that?

Like in a DOM? Easily: grab it and move it, just like you do it in DevTools today, except with your hands rather than a mouse.


> I gave an example of opening an existing structure and modifying it in the comment you're replying to.

Sorry, I misunderstood what you meant by "subtle gesture" there.

Anyway, in the original comment you said:

> Data structures are shapes. A shape is better drawn than described in text.

I'll grant you that speaking + gestures may not be a bad way of entering and manipulating small data structures and performing simple operations. But until we have a technology that can recognize speech and gestures reliably and accurately (and tablets with OSes that don't lag and hang up for half a second at random), physical keyboards will still be much faster and much less annoying.

But I still doubt you could extend that to more complex editing and navigating tasks. Take a brief look at the things you can do in Paredit:

http://pub.gajendra.net/src/paredit-refcard.pdf

Consider the last three or four subsections and ask yourself how to solve them with touch, gestures and speech. Are you going to drag some kind of symbolic representation of "tree node" to move a bunch of elements into a sublevel? How about splitting a node into two at a particular point? Joining them together? Repeating this (or a more complex transformation) action 20 times in a row (that's what a decent editor has keyboard macros for)? Searching in code for a particular substring?

Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction. There are stories on the Internet of blind programmers using Emacs who can achieve comparable speed to sighted ones. This usually involves using voice pitch and style as a modifier, and also using short sounds for more complex operations. Like "ugh" for "function" and "barph" for "public class", etc. So yeah, with enough trickery it can be done. But the question is - unless you can't use the screen and the keyboard, why do it?

> Like in a DOM? Easily: grab it and move it, just like you do it in DevTools today, except with your hands rather than a mouse.

DevTools are a bad example for this task. Using the keyboard is much faster and more convenient than the mouse. Cf. Paredit.


> But until we have a technology that can recognize speech and gestures reliably and accurately (and tablets with OSes that don't lag and hang up for half a second at random)

Totally agreed. Theoretically, you should just be able to gesture a list with your hands and say "joe mary dave" and the software knows from your tone that's three items and not one.

I don't know that much about Lisp and s-expressions, aside from the fact that it can edit its own AST. That's not a way of avoiding the question, it's just my own lack of experience.

> Are you going to drag some kind of symbolic representation of "tree node" to move a bunch elements into a sublevel?

Yes, I already think of a tree of blocks/scopes when editing code with a keyboard, visualising that seems reasonable.

> Repeating this (or a more complex transformation) action 20 times in a row (that's what a decent editor has keyboard macros for).

Here's the kind of stuff I use an AST for: finding function declarations and making them function expressions. I imagine that would be (something to switch modes) "find function declarations and make them function expressions". Likewise "rename all instances of 'res' to 'result'" with either tone or placement to indicate the variable names. More complex operations on the doc would be very similar to complex operations in the doc.
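(Purely as an illustration of what that kind of rename looks like as an operation on AST nodes rather than on text - a rough sketch using Python's standard ast module, not the voice interface being imagined here:)

    import ast

    class Rename(ast.NodeTransformer):
        """Rename every use of one identifier - an edit on nodes, not a textual search-and-replace."""
        def __init__(self, old, new):
            self.old, self.new = old, new

        def visit_Name(self, node):
            if node.id == self.old:
                node.id = self.new
            return node

    tree = ast.parse("res = fetch()\ntotal = res + res")
    tree = Rename("res", "result").visit(tree)
    print(ast.unparse(tree))  # 'res' becomes 'result' wherever it's used as a name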

> Searching in code for a particular substring?

Easy. Have a gesture or tone that makes 'search' a word for operating on the document, not in it.

> Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction.

Yep, I don't think it would still resemble normal speech and interaction either, the same way reading code aloud doesn't. It would however be easier to learn, removing the need to type efficiently as well as the (somewhat orthogonal) current unnecessary ability to create syntax errors.

> DevTools are a bad example for this task. Using keyboard is much faster and more convenient than mouse. C.f. Paredit.

Not sure if I'm reading you correctly here: typing DOM methods on a keyboard in devtools is obviously slower than a single drag and drop operation. Using hands to do it directly is obviously even faster than with the mouse.

Stepping back a little: I guess some people assume speech and gestures won't get significantly better, I assume they will.


That's great if you just want the strings joe and mary. What happens if you want a list of People?


Off the top of my head:

favouritePeople is Person list, name Joe age 32, Mary 23, Steve 64, end

Using tone to separate entries, but you could use a secondary gesture for that instead. Also some pattern matching.


> I will bet you £100 that we won't be programming by speech and gestures in even 25 years time as the disadvantages are enormous.

Unless AI advances considerably. For years I've imagined myself talking to the small specialized AI living in my computer, giving it instructions that it would translate to code...


Natural language is a terrible way to specify software.

Writing software is about telling a blazingly fast, literal, moron what to do. The ambiguity inherent in natural language is not a good way of telling such a thing what to do.


>> AI advances considerably

> blazingly fast, literal, moron

I think I have discovered the source of your disagreement.


I suddenly envision a "The Feeling of Power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power)-type scenario, where one programmer discovers that he or she can understand and create binary patterns without relying on the AI.


And if we _are_ Ima start buying Spotify ads that just shout out "semicolon exec open bracket mail space dash s space passwords space owned at gmail dot com space less than space slash etc slash passwd close bracket semicolon" at top volume.


Actually, AST editing with a touch interface has been experimented with by MS Research in TouchDevelop (https://www.touchdevelop.com/). In their editor you just insert/combine AST parts instead of typing them.


Came here to say this. It works surprisingly well on phones and tablets, primarily for making single file scripts.


It depends. If you spend some time writing in Lisp, you'll learn what it's like to write in an AST, including navigating and editing it as a tree and not as strings of characters. And you'll see that the keyboard is still the most convenient interface we have for that. Touch, gestures and speech lack both the speed and precision to be effective at this job.


You couldn't really code on an Atari 2600 or a Super Nintendo either, but all of us somehow turned out OK. I wouldn't sweat it.


> are computers doomed to be an expensive tool for a few "nerds"?

No. Because of the Glorious PC Master Race - mods, trainers, hacks, overlays etc - these all need dev and root access.

Btw - game modding, cracking, save game editing etc. are the best gateway drugs towards a full-blown IT career.


Word. I remember using a hex editor to alter a saved game. It would be decades before I learned exactly how hex works.
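(For the curious, a tiny Python illustration of what those hex digits encode - the save-game values here are made up:)

    # Each byte is two hex digits; multi-byte values are just adjacent bytes.
    save = bytes([0x00, 0xFF, 0x03, 0xE7])       # made-up save-game data
    print(save.hex())                            # '00ff03e7'
    print(int.from_bytes(save[2:4], "big"))      # 0x03E7 == 999, e.g. a gold count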


Well, since we are on an Ubuntu thread, I will link you these just for fun ;)

http://www.ubuntu.com/tablet/developers https://plus.google.com/u/0/105864202742705090915/posts/jNvZ...


Have you met an average user? The mandatory updates, lack of permissions and sandboxing are only a good thing for a user with typical computer literacy level.

Hell, even the lack of window management in iOS/Android is making the UX much easier to understand for the majority of users I know. My granddad, who was an excellent mechanical engineer, has been using computers for the last 20 years, and he still struggles with the click/double-click distinction.


> and he still struggles with click/double-click distinction.

Have you tried teaching him that? I highly doubt an old person, especially one with an engineering background, will have trouble understanding the distinction if someone bothers explaining it to them.

Or in general - it's surprising how much non-tech people can understand about technology if someone bothers to sit down with them and explain the concepts to them. Usually the reason they don't learn this stuff themselves is the typical human impulse of "if I haven't figured it out in 3 seconds flat, it's too difficult and I won't understand it".


So many times I've lost count.


> The mandatory updates, lack of permissions and sandboxing are only a good thing for a user with typical computer literacy level.

Only if you want to keep them illiterate, which companies are more than happy to do since it means they can be more easily persuaded and dependent consumers.


People have had 25 years (Wild guess) to become literate, and they haven't. What makes you think that's going to change?


General (human language) literacy took centuries.


It probably isn't, if the attitudes in the present IT world continue. But it doesn't have to be this way - about the only thing needed to fix this situation is to create an expectation that yes, you have to sit down and spend 5 minutes learning before you can use this stuff effectively.

Somehow nobody complains that cars or microwave ovens are too complicated. Everybody knows they have to learn how to use them - either through a training course or just by reading a manual.


Most people really do not care enough to learn past a 'just use it' detail.

Are my parents or family interested in password managers? Heck no... why should they be, when the browser will remember stuff for them?

Permissions? You have to be joking... they want to read their email or draw a picture.

Computers are there to make life easy - they're convenience tools (for the mass market). If people have to understand them beyond switching them on and pressing a few buttons, they've failed.

It's not the IT world... for years, we were outcast as geeks and nerds (they were insults in the past). It's that the average person doesn't want (or need) to know about this.

How many people service their own car?


> Most people really do not care enough to learn past a 'just use it' detail.

True, but there is still some learning to do. The only way you can reduce it (barring solving general AI and making a system that actually knows what you mean) is by reducing the things a device/piece of software can do. That's what the industry is doing - cutting out features, turning software into shiny toys. Because from the market perspective, it is enough that people sign up / buy the product - it doesn't have to be actually useful.

That's why software for professionals looks complicated - because there the company actually has to make a useful tool. This state of things is sadly a big loss for humanity - if the only way to make stuff "sexy" is to make it barely useful, then the general population is in fact missing out on all the amazing things technology could allow.

(And the tech people are missing out too, because they're too small a niche. It's more profitable to target the masses instead. That's why all mobile devices are getting dumber.)

> It's not the IT world... for years, we were outcast as geeks and nerds (they were insults in the past). It's that the average person doesn't want (or need) to know about this.

Oh but it is the IT world. We've been invaded by the "normal people" and we've lost the battle. Most programmers employed nowadays are not much different from your average non-tech person, and have nowhere near the technical expertise you'd associate with the "geek and nerds" of the past.

> How many people service their own car ?

I'm not talking about servicing, but about driving. You have to spend 30+ hours in training to be allowed to drive on a public road. Nobody complains, because people understand that to use the car well, you have to learn how to do it.


> either through a training course or just by reading a manual.

If I had to read a manual to operate my microwave, toaster, coffee machine, sandwich maker, oven, games console, etc etc, I'd just get rid of them.


You probably were taught how to use most of those by your parents, either directly or by observing. I find it hard to believe that any time you're dealing with a new class of appliances for the first time, you don't even peek at the manual or some tutorials.

I say class, because most toasters work the same, most microwaves work the same, most smartphones work the same and most 3D modelling programs work the same too. But you have to get that first little bit of knowledge about a class of tools from somewhere, even if from your own experimentation. Humans aren't born with the knowledge of how to use technology.


> Only if you want to keep them illiterate

You sound like a guy who teaches his kid to swim by throwing him in the stormy sea.


> Have you met an average user?

I don't think anyone ever has.


What do you mean? I pick up my Android phone and I've got an app that gives me a Python shell, "Terminal IDE" which includes tons of CLI developer tools like a C compiler and various editors, and a full Debian install I use for more secure SSH (using real OpenSSH), for development, and even for some operations on various servers. There are even full Java IDEs you can install on Android.

So here's just a few ways you can code on Android:

QPython: https://play.google.com/store/apps/details?id=com.hipipal.qp...

AIDE (Java): https://play.google.com/store/apps/details?id=com.aide.ui&hl...

Terminal IDE: https://play.google.com/store/apps/details?id=com.spartacusr...

If all else fails, just deploy debian with Linux Deploy: https://play.google.com/store/apps/details?id=ru.meefik.linu...

If desktops become more expensive, it'll just mean people are more motivated to make tools like this. Android phones and tablets are basically treated as cheap commodities and there's an extremely competitive market for them, if anything, the entry price has gone down.

Now, admittedly I'm not sure how this situation is on iOS, but maybe someone could link similar tools on there?


Sure, but how productive are you when coding on your phone vs on your desktop?


There's definitely a productivity hit, but it's also not the kind of thing that isolates people. For the cost of a cheap BT keyboard you can be fairly productive using just a tablet, even a phone maybe. If you have a TV and use casting, a phone could do quite well.


The contention wasn't that coding on a phone or tablet isn't productive, it was that you can't do it. I love Pythonista on my iPad, and the latest version makes coding on my phone surprisingly feasible. I wrote a version of snake on my 6+ with my kids.


I always find that comments like this are doom and gloom and never celebratory that we might reach a point where computers are finally stable and secure enough to be treated like appliances. The first automobiles required dozens of steps just to start the engine, did people back then lament the difference between "cars" and "mechanics' cars"?


Here's my blogpost on this from 5.5 years ago http://drupal4hu.com/future/freedom.html


I was a teen in the mid 1980s when computers were too expensive. The situation today is orders of magnitude better, it isn't really comparable at all.

For one thing, a Raspberry Pi is more powerful than the Sinclair ZX-81, Apple IIe, or Atari 400/800 I had access to back then, and much cheaper.


I think this split is going to get worse, especially in the Apple ecosystem. Their apparent desire is that the iPad and iPad Pro become the computer replacement, but there isn't (nor will there be anytime soon) a way to create applications for those platforms from that (iOS) environment. Their, admittedly market-speak, statements on stage hint that they would like to see tablets/phones replace desktops for the larger userbase. Odd times.



This. It paints an unsettling picture for the future of general purpose computing.


What do you mean you "cannot" code on tablets and smartphones? There are nice interpreters and compilers in the official app stores for major mobiles OS, aren't there? I've used Python on iOS, Android and Windows Phone. Also J, Ocaml, some dialects of Lisp, C# and Ruby, that I can remember now (each language on at least one of those OSes, sometimes more than one). Not to mention these devices all come with web browsers which means at the very least you can use JavaScript (I've done at least one Project Euler question on an iPod Touch in CoffeeScript standing in line at the bank.)

The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)


Those are "second-class" or even "third-class" citizens in the ecosystem. Can you use those language interpreters and compilers to write apps that can interact with the system and exchange data with the other apps? That's what makes the traditional, document-centric, PC ecosystem so powerful.

While being able to play around with Project Euler can be fun, it amounts to "I can run a Turing-machine simulator" and doesn't represent anything more than a tiny fraction of what people want to do with computers when they say they want to "code". You may as well be playing one of the numerous puzzle games that involve much of the same concepts.

To use your iPod Touch as an example, if it were more like a traditional desktop computer, you would also be able to do things like write an app to manage your music playlists.

> The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)

Not surprising if it's a Windows tablet based on the PC architecture - those are far closer to the traditional desktop than iDevices and Androids. If by C# compiler you're referring to the one that comes with the .NET framework, that's been there since the first versions; pity it's not so well known with MS trying to push VS as hard as possible...


> Can you use those language interpreters and compilers to write apps that can interact with the system and exchange data with the other apps?

Yes, you can. See https://play.google.com/store/apps/details?id=com.aide.ui


You can code on a tablet, same way as you can stand in line at the bank. Neither is very efficient, and most of us would rather not.


Apparently, you can't really do anything on a tablet or phone. Or at least I can't. My phone app chose this particular comment as a nice time to play up and do a double posting. Hence the copy further down, which I am not allowed to delete.


Well, of course you can technically code on a smartphone or tablet; I simply meant that it is absolutely not practical.


I tried to code on a smartphone; never again. I am x times more productive on a desktop.


I've recently started using Termux on my phone with a bluetooth keyboard - I'm as productive as I would be doing dev over SSH. All the tools I'd use on a server are there (node, git, nano, etc). I've written a small API server with it and it wasn't a disaster. Admittedly I'm more productive when I'm on my laptop with Atom and a couple of monitors, but if that isn't an option I can still do work. It's a bonus rather than an alternative.


Absolutely! Programming is much more comfortable with a physical keyboard. I wasn't suggesting coding on a phone is ideal, just that it is possible.


Edit: Just realized you might have meant _on_ the phone as in 'using the on screen keyboard'. Ug. That would be truly awful.

During one weekend in which my only options were android devices, I was pleasantly surprised by the packages available in termux. With tmux, git, and ssh installed, I mounted the tablet at the right height and connected a quality keyboard via usb. I actually forgot that I was coding on a tablet!

The phone experience was far more sensitive to maintaining good posture throughout, but being strongly incentivized to keep good posture actually made the experience more pleasant in a way. However, this particular phone was around 1280x720 I believe - seeing individual pixels again, and being pixel-limited (not physical size limited) in the use of panes in tmux were the only facets I found truly unpleasant.

I'm eager to try coding with a high res VR headset.


All of this will disappear within the next 5 years, as the distinction between programming and consumption narrows down.

It seems like a vast majority of software developers, consciously or not, do not wish for software development to improve beyond a certain point as they fear it would become too accessible and therefore lower the value of their skills. The truth is that we actively make programming as difficult as possible, and everybody loses. I can understand that writing code as text would make sense 50 years ago, but there is no excuse for this today.

Consumer UI is now reaching the 3rd dimension with AR and VR, while software development is stuck in the 1st dimension. A long linear piece of string. It is difficult to believe that those who have the power to create great consumer UX are completely blind to improving their own. Software development has some of the worst UX ever.

The solution to all of those issues has been known for a while, and is dead simple to understand. We need to create a new communication platform, powered by ideas from logic programming and the semantic web. Think of it as 2 huge semantic knowledge graphs, the first describing the real state of the world, the second describing the ideal state of the world. Build a UI on top of it (which should feel more like a graph-oriented Excel than RDF/Prolog) to let people, agents and IoT devices communicate "what is" and "what should be". Then, all it takes is an inference algorithm that can match providers with seekers, get them to commit to some set of world changes (through some sort of contract), and let people manage and track the commitments/tasks they're expected to get done. That's it, that replaces 80% of software needs. Thank you very much.

Knowledge Graph -> Semantic Marketplace -> Smart Contracts -> Task Management
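A toy sketch of the matching step, just to make the idea concrete (all the names and data here are hypothetical, and a real system would work over semantic graphs rather than Python dicts):

    # "What is": capabilities providers assert about the world.
    facts = {
        "alice": {"provides": "bike repair"},
        "bob":   {"provides": "tax advice"},
    }

    # "What should be": changes seekers want made to the world.
    goals = {
        "carol": {"needs": "bike repair"},
        "dave":  {"needs": "piano lessons"},
    }

    def match(facts, goals):
        """Naive inference step: pair each seeker with a provider of what they need."""
        contracts = []
        for seeker, goal in goals.items():
            for provider, fact in facts.items():
                if fact["provides"] == goal["needs"]:
                    contracts.append({"seeker": seeker, "provider": provider,
                                      "what": goal["needs"]})
        return contracts

    print(match(facts, goals))
    # [{'seeker': 'carol', 'provider': 'alice', 'what': 'bike repair'}]
    # dave's goal stays unmatched - it becomes an open task to track.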


Interesting ideas (even though your prediction regarding the next five years seems rather... bold). Where is this vision sketched in some more detail? Any links?


Half of what I ever said online is about this. Somehow, I never got to write a detailed description of the vision.

Perhaps I should take this opportunity to make that happen.



