I'm finishing up a 350+ page family history book that I wrote in LaTeX; I've got two or three more to go, and I wouldn't consider using anything else. It's got its quirks, but it was so easy to work with page formatting, footnotes/endnotes, and sections that it made up for all the fiddly issues I ran into with images. I have also been using the genealogytree[0] package (which relies on TikZ, coincidentally used by OP) to draw diagrams programmatically. In addition to satisfying my scripting itch, the output looks beautiful, even to my unlearned eye for graphic design.
Related: I wrote a resume in LaTeX a few years ago. After an in-person interview, the manager took me to meet the team: when we walked in, they were crowded around a monitor trying to figure out how I'd made such a cool-looking CV. It probably didn't actually look all that good, but anything that isn't Word stands out nowadays.
For longer documents in TeX, I wrote a collection of TeX macros to make cross-referencing easy. The collection was a little tricky to write but is fairly easy to use and works great. The package is based on logical names that serve as, say, pointers: in one place you define such a name, and in other places you refer to it and get the page number, etc. inserted where the reference is made. Given some simple examples, it's really easy to use. I have a simple macro for my favorite text editor that will create a new such logical name, the next according to one naming scheme. Having an easy way to do cross-references is nice.
I also have something similar but simpler for bibliographic references.
TeX, some TeX macros, a good text editor with a good macro language (e.g., KEdit), and a good spell checker (e.g., ASpell with the last TeX distribution I got) are super good writing tools to have.
They work nicely for me. I haven't looked at the code in years, so there may be some dependencies, likely minor, maybe just in documentation, on some of my other macros.
There may be some subtle bugs, but I haven't found any. If you find a bug and know just what usage encounters the bug, then don't do that usage again! Or fix the bug!
There is enough documentation so that you can see the ideas -- actually they are all quite simple. The macros were a good TeX exercise.
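For a quick sense of the flow, here is a minimal usage sketch (XREF001.TEX is the file below; the tag name \SNTagA is just one I picked, and the theorem text is filler):

    \input XREF001.TEX
    \SNUAdvanceChapter        % start chapter 1
    {\bf \SNU{\SNTagA} Theorem:} Some theorem statement.
    ...
    See \SNUGetChapter\SNTagA.\SNUGetUnit\SNTagA Theorem
    on page \SNUGetPage\SNTagA.
    \bye

Run TeX at least twice, until the file \jobname.XRF quits changing. The macros themselves: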
\newread\XREFileIn
% \message{\string\XREFileIn = \the\XREFileIn}
\newwrite\XREFileOut
%
\newcount\SNChapter
\newcount\SNUnit
\newcount\SNTable
\newcount\SNFigure
%
\SNChapter=0
\SNTable=0
\SNFigure=0
%
\def\SNChapterPrefix{}
%
\def\TrimTag#1 {#1} % For trimming trailing blanks.
\def\First#1 #2 #3 {#1} % Parsing first token.
\def\Second#1 #2 #3 {#2} % " second "
\def\Third#1 #2 #3 {#3} % " third "
\def\SNUNone{CH UN PG} % Value for tags before there is an XRF file.
\def\SNTNone{TB JUNK PG} % Value for tags before there is an XRF file.
\def\SNFNone{FG JUNK PG} % Value for tags before there is an XRF file.
\def\SNPNone{PT JUNK PG} % Value for tags before there is an XRF file.
%
\def\SNUAdvanceChapter{\advance\SNChapter by1
\SNUnit=0}
%
% Define a 'chapter' tag:
%
% Need a \global on \advance because might have the \SNC from
% within a group, e.g., {\bf \SNC ...}.
%
\def\SNC#1{{\global\SNUnit=0
\write\XREFileOut{\string#1}%
\def\JUNKA{\SNChapterPrefix\the\SNChapter\space JUNK}%
{\edef\JUNKB{\write\XREFileOut{\JUNKA}}\JUNKB}%
\write\XREFileOut{\the\count0}}}%
%
% Define a 'unit' tag:
%
% To be more sure page number is correct, add text to list
% BEFORE writing page number to XRF file. In principle could
% do the same for \SNT and \SNF but from how these are used in
% practice a page number error would be nearly impossible.
%
\def\SNU#1{{\global\advance\SNUnit by1
\write\XREFileOut{\string#1}%
\SNChapterPrefix\the\SNChapter.\the\SNUnit
\def\JUNKA{\SNChapterPrefix\the\SNChapter\space\the\SNUnit}%
{\edef\JUNKB{\write\XREFileOut{\JUNKA}}\JUNKB}%
\write\XREFileOut{\the\count0}}}%
%
% BEGIN Modified at 04:16:31 on Tuesday, December 29th, 2015.
%
% In this collection, we now have a new
% macro
%
% \SNP -- Sequentially Number Pointer
%
% This macro intended for cross
% referencing to a place in text, that
% is not specifically to a 'unit'
% (definition, theorem), table, or
% figure.
%
% In short, can have
%
% Note\SNP{\SNTagCU}
% that, for $a, b \in R$ and
%
% which will define tag \SNTagCU but
% insert nothing into the document.
%
% Then elsewhere can have
%
% Since as on page
% \SNCGetPage{\SNTagCU}
% an inner product is bilinear,
%
% which will insert into the document
% the page number of the page where the
% macro \SNTagCU was defined. Many
% more details below:
%
% The macro \SNP is part of this
% package of cross referencing and
% sequential numbering but does not
% actually 'number' or 'sequentially
% number' anything. We start the name
% of \SNP with 'SN' just to regard
% macro names of the form SNx as
% 'reserved' in our own TeX usage.
%
% Then elsewhere in the document, can
% refer to that place by its page
% number, chapter number (apparently
% with its chapter prefix), etc.
%
% E.g., if give a little discussion of,
% say, bilinear, can type, say,
%
% Note\SNP{\SNTagCU}
% that, for $a, b \in R$ and
%
% and that will just 'define' tag
% SNTagCU. Of course the tag SNTagCU
% was likely from running my own KEdit
% macro isntag to find the new tag name
% SNTagCU and insert it in the argument
% of the macro \SNP. That is, in KEdit
% we would have line
%
% Note\SNP{}
%
% current, run KEdit macro isntag, and
% get a new tag found and inserted to
% have something like
%
% Note\SNP{\SNTagCU}
%
% Then elsewhere in the document can
% write, say,
%
% Since as on page
% \SNCGetPage{\SNTagCU}
% an inner product is bilinear,
%
% and get the page number of the page
% in the document with
%
% Note\SNP{\SNTagCU}
%
% that is, where tag \SNTagCU was
% defined.
%
% So, the lines
%
% Note\SNP{\SNTagCU}
% that, for $a, b \in R$ and
%
% write to the XRF file the standard
% three lines that would be written by,
% say, macro \SNU. With those three
% lines, the first has the tag
%
% \SNTagCU
%
% the next line has chapter and unit
% and the third line has the page
% number, all for what was the case for
% that part of that page when the macro
% \SNP was run.
%
% Then, macros
%
% \SNUGetChapter\SNTagCU
% \SNUGetUnit\SNTagCU
% \SNUGetPage\SNTagCU
%
% will all work fine to extract the
% data on, respectively, chapter, unit,
% and page and insert it into the
% document. That is, there is no
% reason to have separate extraction
% macros just for macro \SNP.
%
% Really the macro \SNP is just like
% macro \SNU except does not add 1 to
% SNUnit and does not insert into the
% document the chapter prefix, chapter
% number, and unit number. So, where
% macro \SNP is used, nothing is
% inserted into the document; this is
% in contrast with, say, macro \SNU and
% is why we need the new macro \SNP.
%
% XREF001.TEX --
%
% Created at 02:29:35 on Thursday, April 13th, 2006.
% ======================================================================
%
% Modified at 01:03:52 on Thursday, April 13th, 2006.
%
% Macros for sequential numbering and cross-references
%
% Macros \SNUAdvanceChapter, \SNU#1, \SNT#1, \SNF#1,
% \GetSN, \SNC#1, \SNCGetChapter#1, \SNCGetPage#1,
% \SNUGetChapter#1, \SNUGetUnit#1, \SNUGetPage#1,
% \SNTGetTable#1, \SNTGetPage#1, \SNFGetFigure#1,
% \SNFGetPage#1, \SNChapterPrefix
%
% Counters \SNChapter, \SNUnit, \SNTable, \SNFigure,
%
% Here SN abbreviates 'sequential numbering'.
%
% Here we have a start on a fairly general collection of
% macros for sequential numbering and cross-referencing.
%
% For example, maybe in some chapter, that at present is
% chapter 3, we want to sequentially number definitions,
% theorems, remarks, and examples -- call them all 'units'
% -- as in:
%
% 3.1 Definition:
%
% 3.2 Theorem:
%
% 3.3 Definition:
%
% 3.4 Remark:
%
% 3.5 Example:
%
% Then with this package we would select some tags, say,
% starting with SNTag, and type
%
% \SNU{\SNTagA} Definition:
%
% \SNU{\SNTagB} Theorem:
%
% \SNU{\SNTagC} Definition:
%
% \SNU{\SNTagD} Remark:
%
% \SNU{\SNTagE} Example:
%
% So, macro \SNU abbreviates 'sequential numbering with
% units'. So, with macro \SNU we get a case of automatic
% sequential numbering.
%
% Suppose 3.1 Definition: appeared on page 43. Then
% elsewhere we could write
%
% See \SNUGetChapter\SNTagA.\SNUGetUnit\SNTagA Theorem
% on page \SNUGetPage\SNTagA.
%
% and get
%
% See 3.1 Theorem on page 43.
%
% So, we get automatic cross-referencing.
%
% Of course, to do cross-referencing in such a general way,
% need to run TeX at least twice, once to find, for each
% sequentially numbered 'unit', its page and write this
% data to a file, and once to read this file and insert the
% sequential numbering and page numbers where desired in
% cross-references.
%
% The file is \jobname.XRF.
%
% To use this sequential numbering with units,
%
% o At each chapter, after the \eject, if there is
% one, and just before the \hOne, have
%
% \SNUAdvanceChapter
%
% to increment by one the counter \SNChapter and
% set the counter \SNUnit to 0.
%
% o Otherwise use macros
%
% \SNU\SNTagA
% \SNUGetChapter\SNTagA
% \SNUGetUnit\SNTagA
% \SNUGetPage\SNTagA
%
% as illustrated.
%
% o Then run TeX at least twice, basically until
% the file \jobname.XRF quits changing.
%
% So, we get sequential numbering and cross-referencing with
% 'units'. The macros and counters particular to 'units'
% all begin with \SNU.
%
% We let
%
% \newcount\SNUnit
%
% keep track of the number of the unit. So, when we use
%
% \def\SNUAdvanceChapter{\advance\SNChapter by1
% \SNUnit=0}
%
% to increment
%
% \newcount\SNChapter
%
% we also set \SNUnit=0 for the new chapter.
%
% With the \SNU macros as illustrated, will get file
% \jobname.XRF like
%
% \SNTagA
% 1 1
% 1
% \SNTagB
% 1 2
% 1
% \SNTagC
% 1 3
% 1
% \SNTagD
% 1 4
% 1
% \SNTagE
% 1 5
% 1
% \SNTagF
% 1 6
% 1
% \SNTagG
% 1 7
% 1
% \SNTagH
% 1 8
% 1
% \SNTagI
% 1 9
% 1
%
% So, get three lines for each invocation of \SNU. The
% first line has the tag. The second line has data
% particular to 'units' macros. And the third line has the
% page number.
%
% For example, in Appendix I, may want units to go
%
% A.I.1.2
%
% that is, to have a 'prefix' of 'A.I.'. To this end there
% is macro
%
%     \SNChapterPrefix
%
% which is default empty. Setting this macro to
%
% \def\SNChapterPrefix{A.I.}
%
% will give the prefix 'A.I.'. But, such a prefix should
% have no blanks else the parsing of the macros that get
% cross-referencing information will get confused!
%
% But we may also want sequential numbering and
% cross-referencing for tables, figures, equations, etc.
%
% Then the idea is to have more sequential numbering
% macros. But, do not want a proliferation of files. So,
% these other macros should also use the file \jobname.XRF.
% For compatibility, each of these other macros should also
% write three lines to file \jobname.XRF. The main freedom
% is just in the second line.
%
% Each tag, e.g., \SNTagA, becomes a macro. So, each tag
% should be spelled like a TeX macro and have spelling
% different from all other TeX macros in use.
%
% During the first pass of TeX, macros
%
% \SNUGetChapter\SNTagA
%
% \SNUGetUnit\SNTagA
%
% \SNUGetPage\SNTagA
%
% will notice that \SNTagA is not defined and will define
% it as the macro \SNUNone from
%
% \def\SNUNone{CH UN PG}
%
% the three tokens of which are intended to abbreviate,
% respectively, chapter, unit, and page.
%
% Then each of the three macros
%
% \SNUGetChapter\SNTagA
%
% \SNUGetUnit\SNTagA
%
% \SNUGetPage\SNTagA
%
% will return 'CH', 'UN', and 'PG', respectively, as
% temporary place holders and a way to get page breaking
% approximately correct before another pass of TeX that
% will read and use the XRF file written on the previous
% pass.
%
% We also have macros for tables and figures. For tables,
% the names begin with SNT; figures, SNF.
%
% The main challenge was in macro SNU (SNT, SNF) that has
% to write both the sequential numbering and the page
% number. The cause of the challenge was how, really,
% when, TeX actually does page breaking. So, once TeX
% starts a new page, it keeps adding lines until it clearly
% has enough, and maybe too many, lines for that page.
% Then TeX decides where to break the page, uses any left
% over lines for the next page, and continues. So, when a
% macro in the text executes, TeX does not yet know the
% page number but the macro does know the chapter number
% and, say, the unit number.
%
% When TeX sees a \write, TeX temporarily puts the \write
% in a 'whatsit' until the page breaking and then executes
% the \write. 'Whatsits' are discussed starting at line
% 14,007 of the file TEXBOOK.TEX.
%
% The trick, then, is to have
%
% \def\JUNKA{\the\SNChapter\space\the\SNUnit}
% {\edef\JUNKB{\write\XREFileOut{\JUNKA}}\JUNKB}
% \write\XREFileOut{\the\count0}
%
% In the first line we define \JUNKA that has the data to
% be written. The second line has a group with an \edef
% (define a macro expanded immediately) of a temporary
% macro \JUNKB with the \write and with the data to be
% written and, then, with an invocation of \JUNKB. Due to
% the \edef, the whatsit for the \write has only constants
% and the right constants. Those constants are from
% counters
%
% \SNChapter
% \SNUnit
%
% which might well change by the time the \write is
% executed; but, the constants will still have the values
% we need to have written.
%
% On the third line, we use \write to write the page
% number, and the page number in \count0 is expanded at the
% time of the page breaking and, thus, has the correct page
% number.
%
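% For example, here is a minimal, self-contained sketch of
% the difference (demo.txt, \DemoOut, and \DemoN are names
% made up just for this illustration):
%
%     \newwrite\DemoOut
%     \openout\DemoOut=demo.txt
%     \newcount\DemoN
%     \DemoN=1
%     \write\DemoOut{\the\DemoN}
%     \DemoN=2
%
% Here \the\DemoN is expanded only at page shipout, so the
% file can get 2 even though \DemoN was 1 at the \write.
% With the \edef trick,
%
%     \DemoN=1
%     {\edef\JUNKB{\write\DemoOut{\the\DemoN}}\JUNKB}
%     \DemoN=2
%
% \the\DemoN is expanded at the \edef, so the whatsit holds
% the constant 1 and the file gets 1 regardless of later
% changes to \DemoN.
%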
% Another challenge is how to handle the data read from
% file \jobname.XRF: As each line is read, it has a
% trailing blank. So, have to use and/or parse the blank.
%
\def\SNP#1{{\write\XREFileOut{\string#1}%
\def\JUNKA{\SNChapterPrefix\the\SNChapter\space\the\SNUnit}%
{\edef\JUNKB{\write\XREFileOut{\JUNKA}}\JUNKB}%
\write\XREFileOut{\the\count0}}}%
%
% END Modified at 04:16:31 on Tuesday, December 29th, 2015.
%
% Define a table tag:
%
\def\SNT#1{{\global\advance\SNTable by1
\write\XREFileOut{\string#1}%
\def\JUNKA{\the\SNTable\space JUNK}%
{\edef\JUNKB{\write\XREFileOut{\JUNKA}}\JUNKB}%
\write\XREFileOut{\the\count0}%
\the\SNTable}}
%
% Define a figure tag:
%
\def\SNF#1{{\global\advance\SNFigure by1
\write\XREFileOut{\string#1}%
\def\JUNKA{\the\SNFigure\space JUNK}%
{\edef\JUNKB{\write\XREFileOut{\JUNKA}}\JUNKB}%
\write\XREFileOut{\the\count0}%
\the\SNFigure}}
%
% Get all the sequence number data from file \jobname.XRF:
%
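% Note: \pushCount is not defined in this file; it is from
% my other macro collections (one of the minor dependencies
% mentioned above) and, roughly, makes a count register
% with the given name available for local use here.
%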
\def\GetSN{{\openin\XREFileIn=\jobname.XRF
\pushCount\InChapter
\pushCount\InUnit
\pushCount\InPage
\pushCount\MoreFlag
\ifeof\XREFileIn\MoreFlag=0 \else\MoreFlag=1 \fi\relax % \jobname.XRF exists?
\ifnum\MoreFlag=1 \relax
\loop
\read\XREFileIn to\LineIn
\ifeof\XREFileIn\MoreFlag=0 \else\MoreFlag=1 \fi
\ifnum\MoreFlag=1
\let\InTag=\LineIn
\read\XREFileIn to\LineIn
\edef\InValues{\LineIn}
\read\XREFileIn to\LineIn
\InPage=\LineIn
\edef\JUNK{\InValues\the\InPage}%
\global\expandafter\let\InTag=\JUNK
\repeat
\fi
\closein\XREFileIn
}}
%
% Get cross-references on 'chapters':
%
\def\SNCGetChapter#1{\ifx#1\undefined
\edef#1{\SNUNone}\else\fi
\expandafter\First#1 }
%
\def\SNCGetPage#1{\ifx#1\undefined
\edef#1{\SNUNone}\else\fi
\expandafter\Third#1 }
%
% Get cross-references on 'units':
%
\def\SNUGetChapter#1{\ifx#1\undefined
\edef#1{\SNUNone}\else\fi
\expandafter\First#1 }
%
\def\SNUGetUnit#1{\ifx#1\undefined
\edef#1{\SNUNone}\else\fi
\expandafter\Second#1 }
%
\def\SNUGetPage#1{\ifx#1\undefined
\edef#1{\SNUNone}\else\fi
\expandafter\Third#1 }
%
% Get cross-references on tables:
%
\def\SNTGetTable#1{\ifx#1\undefined
\edef#1{\SNTNone}\else\fi
\expandafter\First#1 }
%
\def\SNTGetPage#1{\ifx#1\undefined
\edef#1{\SNTNone}\else\fi
\expandafter\Third#1 }
%
% Get cross-references on figures:
%
\def\SNFGetFigure#1{\ifx#1\undefined
\edef#1{\SNFNone}\else\fi
\expandafter\First#1 }
%
\def\SNFGetPage#1{\ifx#1\undefined
\edef#1{\SNFNone}\else\fi
\expandafter\Third#1 }
%
% Read file \jobname.XRF here, now, before anything else.
% Get this reading done before any hint of a need to write
% to that file.
%
% Modified at 08:13:46 on Monday, January 4th, 2016.
%
% If a TeX file has
%
% {\bf \SNU{} Theorem:}\ \
%
% instead of, say,
%
% {\bf \SNU{\SNTagDA} Theorem:}\ \
%
% then TeX can die here:
%
\GetSN
%
% Important: First read file \jobname.XRF. Then open it
% for writing. This open will give the file length 0 and,
% thus, destroy the data, if any, just read. If
% execute
%
% \write\XREFileOut{\string#1}
%
% etc. without doing an open, then output will go to
% console.
%
% From some fairly careful experiments, commands
%
% \newread\XREFileIn
% \newwrite\XREFileOut
% \openout\XREFileOut=\jobname.XRF
% \closein\XREFileIn
%
% seem to ignore TeX block nesting and to be fully 'global'
% across blocks.
%
\openout\XREFileOut=\jobname.XRF
%
% ======================================================================
I've also published multiple books for myself and my friends and family using LaTeX, use LaTeX whenever I want to print out and post or distribute an article that looks crappy in the original, and I've long written my resumes in LaTeX.
In my experience there's a lot of layout that's been done in Illustrator. I suspect it's because of how long Illustrator has been around with pretty good layout support, especially compared with the early years of Aldus PageMaker, QuarkXPress, and other tools. Illustrator was needed for a lot of graphics work that those other tools didn't support, so it can make sense to do as much as you can in a single tool. If you can get good results doing everything you need in terms of text handling in Illustrator, why take the time to use a DTP application when you're likely going to need Illustrator anyway for the graphics work?
For example, I wrote "iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift"[1][2] using LaTeX, and I think it came out rather well (Pearson has some pretty amazing LaTeX compositors that took my rough ramblings and turned them into something beautiful).
Quite a while ago, I also used TeX (not LaTeX, IIRC) as part of the typesetting backend of a database publishing tool for the international ISBN agency, to publish the PID (Publisher's International Directory). This was a challenging project. IIRC, each of the directories (there were several) was >1000 pages, 4 column text in about a 4 point font. Without chapter breaks. My colleagues tried FrameMaker first on a subset, they let it run overnight and by morning it had kernel-panicked the NeXTStation we were running it on. The box had run out of swap.
TeX was great: it just chugged away at around 1-4 pages per second and never missed a beat. The customer was very happy. The most difficult part was getting TeX not to try so hard to get a "good layout", which wasn't possible given the constraints and, for these types of entries, just made everything look worse.
Yes. I am a lawyer with mild dyslexia, and I often got into trouble for missing stuff in my letters or rearranging the letters of things like the other parties' names or case numbers. So I use the LaTeX letter document class and import the other parties' information from an adr file. That way I only have to type the darn thing once. I think I can automate more stuff on documents where I often make errors, but I'm just getting started with LaTeX. It's been a big stress reliever for me.
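In case it helps anyone else starting out, the pattern is roughly this (a minimal sketch; parties.tex and the macro names are made up for illustration -- my real setup pulls the information from the adr file):

    % parties.tex -- type the other party's details once
    \newcommand{\OpposingParty}{Smith Holdings, LLC\\100 Main St.\\Anytown, ST 00000}
    \newcommand{\CaseNumber}{2019-CV-01234}

    % letter.tex
    \documentclass{letter}
    \input{parties.tex}
    \signature{A. Lawyer}
    \address{My Firm\\1 Court Sq.\\Anytown, ST 00000}
    \begin{document}
    \begin{letter}{\OpposingParty}
    \opening{Dear Counsel:}
    Re: Case No.~\CaseNumber, this letter confirms...
    \closing{Sincerely,}
    \end{letter}
    \end{document}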
Interesting. I always instruct my lawyers to reference all counterparty information only once, either in the header or preferably in the signature block, and then make everything else reference an impersonal defined term. I hate transaction-specific information being littered randomly around a document.
I work on supply chain optimization at Target, and we write lots of internal documents in TeX. Admittedly the team has a relatively academic background—many PhDs in math, CS and OR—but it's still industry use :).
Some Word/PowerPoint creeps in as well, but I highly prefer the TeX documents. They look better, are easier to edit/reuse and slot neatly into Git. I really wish more people used it (or Markdown or whatever) instead of Office nonsense.
Completely agreed (happy LaTeX user in industry). However, I have recently been using org-mode for emacs and treating LaTeX as one possible output format. Being able to output to HTML/markdown is really useful for automating import into more commonly used tools.
For anything with non-trivial maths or structure, I'd still use LaTeX, but it's very nice to have an (easier, more portable) choice of output formats.
I use it when I need to write documents (user guides, etc.). If the alternative is OpenOffice or some other "WYSIWYG" editor, I'll take LaTeX any time.
I don't really love it, though; I find myself using org-mode more and more for that purpose, then letting emacs generate the LaTeX code for my doc. It's not nearly as flexible, but it works well, and your source is a lot more readable.
I also use LaTeX for my resume; if you find a good theme, they end up looking pretty sharp and professional with little work.
I do EECS research in the aerospace industry. Every journal we ever submit to either requires or accepts LaTeX. I also used it regularly to write reports as an undergrad. At least on the EECS side of the engineering field, LaTeX is alive and well.
Well, this was in academia and it was a long time ago, but when I was in college at the turn of the century I used it. Once I noticed that the exact same paper got a whole extra grade in LaTeX, I never went back to word processors.
Unlike what most LaTeX users may think, LaTeX is actually not even widely adopted in academia, with less than 20% of scholarly articles published every year written using LaTeX (https://www.authorea.com/107393-how-many-scholarly-articles-...). That said, it is the only powerful option for professionally typesetting mathematical notation, and for that reason it is used by a few in some non-academic research fields (military, gov, pharma, tech, HN readers).
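Math is where it shines: even a small display like the following (an illustrative example, not from the linked article) is effortless in LaTeX and painful almost anywhere else:

    \[
      \hat{\beta} = (X^{\mathsf{T}} X)^{-1} X^{\mathsf{T}} y,
      \qquad
      \mathrm{Var}(\hat{\beta}) = \sigma^{2} (X^{\mathsf{T}} X)^{-1}
    \]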
Yes. I just wrote a guidebook with it. It's not that great if you want to do a full-color glossy thing, but the results are way better than using something like Word.
I work in scientific instrumentation, on the product development side. I've never seen LaTeX output (unless it's been well concealed) in a report or paper.
What I'm seeing is that the use of typesetting, and even its cousin, word processing, is generally going downhill. My employer has largely given up on print advertising. If we write papers, they're for the more commercially oriented journals or trade mags, which don't use TeX for their own typesetting.
More and more, I see stuff that's just written in the e-mail editor, with graphics copy-pasta'd from screen captures, or directly into web based services such as Office 365. Or, PowerPoint (with all of the pitfalls described by Edward Tufte).
Word processing has practically been relegated to documents that nobody reads, such as functional procedures and announcements from HR.
I realize this all sounds kind of cynical, but I hope it's taken in good humor. The gist, however, is that writing and correspondence are becoming increasingly informal.
As a sysadmin I use it, though to be fair most often in the form of emacs org-mode export to LaTeX, because I like the way it makes stuff look (so I'm not editing in LaTeX unless I need to for some reason, which you can do in snippets inside org-mode). I also use it for resumes and anything I want to be formal, such as letters and invitations.
FWIW, I prefer to use it (LaTex with Beamer) to make technical presentations at conferences. This[1] was my most recent output.
Though I spent more time than I care to admit making those, especially tweaking the TikZ 'diagrams' (first time I tried them) that can be seen later in the slides, I find the result quite convincing.
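For anyone who wants to try the combination, the skeleton is small (a minimal sketch; the theme defaults, node names, and labels are placeholders rather than anything from my actual slides):

    \documentclass{beamer}
    \usepackage{tikz}
    \begin{document}
    \begin{frame}{A first diagram}
      \begin{tikzpicture}[node distance=3cm, auto]
        \node[draw] (a) {Input};
        \node[draw, right of=a] (b) {Process};
        \node[draw, right of=b] (c) {Output};
        \draw[->] (a) -- (b);
        \draw[->] (b) -- (c);
      \end{tikzpicture}
    \end{frame}
    \end{document}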
That's definitely not the case in computer science and electrical engineering at least. LaTeX definitely isn't great, but I don't see a viable alternative.
There's always InDesign, but given its time to proficiency, I think the best case for similar purposes might still be creating LaTeX/DocBook source that ports well to InDesign.
In my current experience with States-side academia: draw a line from the most math/theoretical departments through physics to the engineering departments. The tendency to use TeX/LaTeX follows that line, from a relatively high probability down to zero on arrival at engineering. In European academia it stays reasonably high throughout.
It's not perfect; there's a lot of historical weight and quite a few quirks. But for many academic purposes, LaTeX is still the best tool in my opinion, and the results look great if a little effort is spent. :)
I honestly miss LaTeX a lot, especially the quality of documents I could create with it (when I was in academia). Hopefully I'll get a chance to do a white paper later this year and bust out the skills.
But I am totally willing to accept my role as an outlier... I even wrote a small TeX package at one point, so I'm aware it's not a "fun" system. :P
I'm not sure why you were downvoted. Most of the academics I know hate it. They use it because certain journals require it, or their advisor makes them use it.
I use TeX. LaTeX also works, but the books are longer and less well written than Knuth's original TeXBook! :-)!
I love TeX -- it's one of my favorite and most important tools.
I have a Ph.D. in applied math, and IMHO TeX (or LaTeX) is just essential, call that more than ESSENTIAL for my work.
E.g., now I'm a "solo founder" of a startup, a Web site. The crucial core of the work is some original applied math I derived. So, yup, I typed it into TeX. Then, as I wrote the corresponding software, I referred back to the TeX output of my core math -- worked great!
Without the math, the software would be impossible; one would just look at the screen and wonder what the heck to type. With the math, the software was just routine, essentially just trivial.
For typing material with a lot of math, I see no reasonable alternative to TeX or LaTeX.
I wrote my Ph.D. dissertation with word processing (thankfully!) but without TeX. What a pain. I could have included more math in the dissertation if I'd had TeX to do the word whacking. More generally, at one point in my career, I could easily have written and published a lot of original and tutorial applied math but didn't because of the difficulty of the math word whacking before TeX.
The last paper I published, some rather wild mathematical statistics, was a good test for TeX -- some of my mathematical expressions in subscripts were a bit much, but TeX worked flawlessly!
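To give the flavor, the expressions were of the general shape (this shape is just illustrative, not the actual notation from the paper)

    $$ E \left[ X_{\tau_n \wedge t} \mid {\cal F}_{\tau_{n - 1}} \right] $$

with the subscripts themselves carrying subscripts, and TeX set it all without complaint.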
If anyone is typing a lot of mathematical material and objects to TeX, then just encourage them to do the typing without TeX and see if they like that world better!
Computing is changing the world in major ways, some just astounding and/or astoundingly good; math is helping, a lot now and will help more in the future; and TeX is just crucial for getting the math word whacking done. But Knuth knew that and did a great job.
So far, for the near and distant future of computing, TeX is one of the stronger pillars of civilization.
If it doesn't compromise your work, can you speak more of the path you took from a Ph.D. to startups/tech, and how your research allowed you to go down that path?
I'm a Ph.D. student in applied math as well, currently.
I tried a grad math department and didn't like it. (1) In a course in real analysis, early on the prof discussed some set theory. The summer before, I'd had an NSF thing in axiomatic set theory: Suppes, von Neumann, an appendix in Kelley, etc. His first test had a problem, and at the last minute I saw a solution and wrote it down. He called me on the carpet -- nasty guy. I apologized for using little omega for its usual meaning without defining it, and then he saw that my solution was better than his and I was off the carpet. Bummer. He was too quick to cut me off at the knees.
(2) The course was in Kelley, General Topology. As an undergrad senior, I'd lectured to a prof once a week and covered all of it except the last chapter on compactness, when I cut out to finish my honors paper. [The typing was so hard that from rolling the carriage a half step my left arm hurt for a year!] But the course in grad school, same book, was beneath me. I turned in a stack of solved exercises and was a nice guy -- I didn't submit any I'd done as an undergrad. Waste of time.
(3) There was an abstract algebra course from Herstein's book -- by then nearly all beneath me. E.g., my undergrad honors paper had been on group representation theory, which is heavy linear algebra and abstract algebra stuff. I solved some exercise in ring theory and got sent to a full prof. The only thing new in the course for me was Galois theory, so I studied that some weekend and took an oral exam for the course. Waste of time.
I wanted the math for math-physics but didn't see how to get that there. Certainly not Galois theory. There were some good ways, but not with the courses they put me in. The specs for the q-exams were a disaster -- the faculty committee had a political train wreck. Bummer.
I got recruited by the NBS&T in DC. Getting to DC then was the land of milk and honey for applied math. I got married, and she went for her Ph.D. We had a great time: good French cheese, some quite good French wine, lots of plays, concerts, trips to Shenandoah, etc.
I got into descriptive statistics, multi-variate statistics, statistical hypothesis testing, numerical linear algebra, curve fitting, the fast Fourier transform, second order stationary stochastic processes, extrapolation and power spectral estimation, optimization, and the Navier-Stokes equations; did a lot of catch-up reading in the basics, a lot more in linear algebra and multi-variate calculus, e.g., exterior algebra; and more. Kept busy. Had a great time. Also got into computing in a fairly big way. Got some nice items, e.g., two new cars, etc.
My favorite book on my bookshelf, including for applied math, is J. Neveu, Mathematical Foundations of the Calculus of Probability.
Worked in industry and saw some problems in combinatorial optimization, deterministic optimal control, and stochastic optimal control; identified a problem in stochastic optimal control and found an intuitive solution; and applied to grad school in applied math. Got into Cornell, Brown, Princeton, and more. Independently, in my first summer, I did the research for my dissertation in stochastic optimal control. Had lots of delays having to do with my wife and, then, our budgeting. In a rush, I wrote some corresponding software in two months, much of it over Xmas at my wife's family farm, wrote and typed in the final dissertation in six weeks, stood for orals, and got my Ph.D.
During the Ph.D., I did work in military systems analysis, some optimization, statistics, and Monte Carlo -- and wrote the corresponding software.
The day my wife got her Ph.D., she was in a clinical depression from the stress. To help her get better, I took a job I didn't want as a B-school prof in applied math (I also played a leadership role in campus computing and did some consulting) because it was near her family's farm, which I hoped would help her. It didn't. I took a job in AI at IBM's Watson lab and did some optimization, mathematical statistics, and AI. My wife never recovered from her illness and died.
Then I became an entrepreneur.
I did some interesting work in two cases of optimization; that is, I found good solutions to customers' problems that they believed could not be solved, and that I solved the problems scared them off. One solution turned out to be just linear programming on networks -- I was coding up the W. Cunningham variation when the customer ran away. The other problem was just some Lagrangian relaxation: I got a feasible solution within 0.025% of optimality, in 500 primal-dual iterations in 900 seconds on a slow PC, to a problem in 0-1 integer linear programming with 40,000 constraints and 600,000 variables -- scared the pants off the two top people in the customer's company. They had tried simulated annealing, failed, and concluded that no one could solve their problem; that I found a good solution, both the math and the software, scared them off.
I looked into lots of stuff that didn't work out.
Lesson: US national security, especially around DC, was, and maybe still is, really eager for a lot in applied math -- optimization, stochastic processes, etc. In wildly strong contrast, I've seen no interest in business at all comparable, not even in Silicon Valley. The US DoD is often quite good at exploiting applied math; in comparison, business, in a word, sucks. The flip side of that suckage is, in some cases, an opportunity.
Lesson: Business is still organized like a Henry Ford factory, where the supervisor knows more and the subordinates are there to add routine muscle to the thinking of the supervisor. Sooooo, US business just HATES anyone who knows more than any of the supervisors about anything relevant to the business, and one can about count all the good cases of applied mathematics in business without taking shoes off. Business CAN make good use of specialized expertise and does with lawyers, licensed engineers, and medical doctors. Each of these, however, is usually outside the usual organization-chart pecking order -- often from an outside service, in a research division, in a staff slot off the C-suite, etc. Each of these has a profession that is crucial; applied math doesn't. Bummer.
In business, an applied mathematician who shows the company how to save 15% of the operating costs is a lose-lose to the C-suite: if the project flops, then it was a waste, and anyone in the C-suite who signed off on the budget has a black mark; if the project is successful, everyone in the C-suite feels that their job is at risk from the guy who did the good project. So the C-suite sees any such project as a lose-lose situation. Nearly no one in US business ever got promoted for doing an applied math project successfully or got fired for not trying one.
So, sure, to make money with applied math, go into business -- your own business, as your own CEO -- and own the business.
Now some of the opportunities are closely related to the Internet: take in data, manipulate the data with some applied math, maybe somewhat original and novel, and spit out valuable results. Then monetize the results whatever way looks best. Use the math as a crucial, core, powerful technological advantage -- secret sauce. Don't expect the customers/users to see anything about the math; just get them results they will like a lot. Do the other usual things when you can: viral growth, network effects, lock-in, good publicity, owning the data, etc.
My software now is 100,000 lines of typing, with about 25,000 programming-language statements and the rest voluminous comments. About 80,000 of the 100,000 lines are for on-line use, and the rest are for off-line, occasional batch runs for some of the data manipulations. There is some light usage of SQL Server.
I am basing it on Windows and the .NET Framework. For the Web site, that is Microsoft's IIS (Internet Information Server -- it handles the TCP/IP Web-site communications and much more, leaving a nice environment for my code for the actual Web pages) and ASP.NET for the Web-page controls (single-line text boxes, links, check boxes, radio buttons, etc.). My Web pages and my code for the pages are just dirt simple -- Microsoft writes a little JavaScript for me, and I have yet to write a single line of it. There's no use of Ajax, no pull-downs, pop-ups, roll-overs, overlays, icons, etc. The Web site is also dirt simple -- "a seven-year-old who knows no English and has only a cheap smart phone" dirt simple.
I wrote a little C code, am using some open-source C code, and otherwise wrote all the code in Visual Basic .NET -- seems fine to me. The important stuff is the math I derived; given the math, the code is routine, and Visual Basic .NET is well up to the work. Since I don't like C syntax, I don't like the syntax of C#. Maybe someday I will convert to C#, but in an important sense Visual Basic .NET to C# is just converting to a different flavor of syntactic sugar -- indeed, IIRC, there is a translator.
So, I'm an entrepreneur working to sell the results of some math I derived.
So, to do such a thing: think of a problem and a solution, write the code, and sell the results. Of course, problem selection is a biggie. You want a problem that you can solve, with your own applied math, with a better solution than is available otherwise; you want the software to be not too much to write; you want the computing resources within what is reasonable now or soon (possibly considering the cloud); and you want the results to be a must-have instead of just a nice-to-have for enough users/customers, times money per each, to make some big bucks.
If you are a solo founder, then you get to avoid a common cause of failure -- founder disputes. As a founder, you SHOULD understand all the work, so if you are a solo founder, you will!
Don't expect any equity funders to write you a check until you have revenue, significant and growing rapidly. Thus, as a solo founder with revenue significant and growing rapidly, you won't accept their check. No one in equity funding has yet seen even 10 cents from applied math research; you won't get back even laughs.
Ph.D. academics is really good at work that is "new, correct, and significant". Business just HATES anything really new or significant and has no idea how to check "correct". E.g., the information-technology VCs keep looking for simplistic, empirical patterns and have no idea how to evaluate anything new. Really, their looking for such patterns is likely also just a publicity scam; instead, they want to invest money in a business where accountants working for their limited partners -- who, if that is possible, know even less about math -- can say that they made a good investment. In an analogy, they want to buy a ticket on a plane that has already reached a nice altitude and is climbing quickly. Maybe the startup will take their money if there are five founders, all exhausted, all with all credit cards maxed out, and each with a pregnant wife.
For a good applied mathematician -- with some original, powerful, valuable work, good at software, with a business with significant revenue growing rapidly -- to report to a BoD of business people, essentially anyone else in business, is a bummer. E.g., at an early BoD meeting you will outline an applied math project for some nice progress in the business, and about the time you get to sufficient statistics, an ergodic assumption, completeness of Hilbert space, the polar decomposition, something NP-complete, or a martingale, the board members will soil their clothes, leave a smelly trail to the rest room, and then run screaming from the building. They will meet off-site, fire you, put the business up for sale for the cash in the bank, and be glad you are gone. Bummer.
So, go into business for yourself. Or don't expect anyone in business -- who no doubt knows next to nothing about math and doesn't even remember sin' = cos -- to create a job suitable for your talents, training, and business value in applied math.
Heck, at one time in business, I saved a major company just by formulating and solving

    y'(t) = k y(t) (b - y(t))

The BoD was thrilled, but I scared the socks off the C-suite.
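(For anyone curious: that's just the logistic equation. With initial value $y(0) = y_0$, $0 < y_0 < b$, the standard closed-form solution is, in TeX of course,

    $$ y(t) = {b \, y_0 \over y_0 + (b - y_0) e^{-kbt}} $$

so $y(t)$ grows from $y_0$ and levels off at $b$ -- the textbook result, nothing proprietary from that company.)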
That was the third time. The first time, I wrote some software that pleased the BoD and saved the company. The second time, I beat everyone in the C-suite at Nim -- I'd read the algorithm in Courant and Robbins. Scared the socks off the C-suite.
Applied math in business is a wide-open field -- nearly no one is there now. So you will be alone. You can trust the solid math you know, the solid new proofs you write, and a lot in software, but no one will do anything but laugh until you have the big bucks in the bank; then, since you did something valuable that they don't understand and know they could not have done, they will fear you and hate you; they will all agree and may gang up on you; they may attack you. The laughing is not nearly new: just read the Mother Goose "The Little Red Hen"; that's still the case.
There's a lot of good, foundational applied-math code out there for optimization, statistics, etc. that you might be able to exploit. In computing, operating systems, .NET etc., and SQL etc. are astounding and range from free to usually quite cheap.
Nearly no one in business can identify, formulate, and solve even a problem that is basically just linear programming -- the competence in applied math in US business is, well, they forgot plane geometry. To expect them to derive some simple Lagrangian relaxation is asking for hen's teeth.
As an applied mathematician in business, you will be essentially alone out there, in the nearly empty intersection of math and business. If you are successful, then you will necessarily also be exceptional, and, necessarily, nearly everyone who is exceptional is alone.
My first server is an AMD FX-8350: 64-bit addressing, 8 cores, 4.0 GHz processor clock, 32 GB ECC main memory, etc. That's a lot of computing power for the money. Fill that up doing something valuable, buy 20 more, fill those up, and sell out for $1 billion or so. It's a heck of an opportunity.
Thank you so very much, graycat. Incredibly helpful, and I appreciate the time you put into the discussion. I feel as if I need to read your post 3-4 times to absorb it all.
I am working in numerical methods for PDEs, so some of this was far afield, but it does make sense that those are the areas ripe for opportunity.