I have been working on this as a hobby project for a few weeks now, and it's starting to look decent. It is barely an early alpha yet, so please forgive any catastrophic data loss etc...
It is a tool for breaking down projects into manageable pieces, and to estimate the total time of the project.
Pretty useful. I've been dogfooding it during development, and it is a lot better than my usual method of indented lists in a text file.
* Click the button to create a new project.
* Type in your project title.
* Click the "+"-button to add a task.
* Hover over a task to see the button for adding subtasks.
* Drag & drop tasks to reorder them or move them up/down the hierarchy.
* Open the edit-dialog on a task to set an estimated time.
* When you enter the first estimate in a task, you get projected estimates for all other tasks.
It is desktop-only at the moment. Some of it works on mobile, but don't count on it: none of the drag & drop works, for example, and the UI relies on mouse hover.
- The big red button where there's normally an "accept" button is very dangerous while editing a task. Delete actions should be small-ish so they can't be triggered by accident. I suggest replacing that button with an "Accept" one, and putting delete in an extra row or somewhere not so easy to click.
- Clickable items are better with `cursor: pointer` on those elements. That way, you can tell more easily where you can click.
- It's a productivity tool, so make some things dead obvious. For example, the fact that you can drag tasks around should not have to be discovered by accident. You might want to add an icon or a bit of text at the bottom of the screen to make it obvious.
The hover-to-reveal sub-task add button doesn't work well on tablets, as you end up opening the task editor first, and only then can you tap the now-visible plus sign.
As @Tommi mentioned, hover isn't a great concept in general, as UI elements are hidden and the user has to discover them. Though this is less of an issue in an app the user will be using constantly.
* I think hovering to change the task is fine when you are adding smaller things, but if you are putting in a bunch of tasks it would be nice if you could just keep them there (like an edit mode).
* I think the time entry part could use some work, obviously alpha here, but I could see things like a default time for each task, and a button to add/remove increments of time in a quick way.
* Having some sort of bubbling surrounding or highlighting for each task group might be nice from a visual standpoint.
> * I think hovering to change the task is fine when you are adding smaller things, but if you are putting in a bunch of tasks it would be nice if you could just keep them there
You mean like having the description and estimated time editable from the tree-view? It adds clutter...
> * Having some sort of bubbling surrounding or highlighting for each task group might be nice from a visual standpoint.
Hmm. It can't be there the whole time, since the bubbles would add borders-around-borders-around-borders for as deep as your tree is. The clutter adds up fast.
I worked on something similar[1] and was totally hamstrung by front-end engineering.
I can't tell if this is a straight-up adaptation of PERT 3-point estimates or not. That's essentially what I was trying to develop. It's taught in project management courses, but there are no simple tools for it.
I'm wary of projecting from a few tasks to all tasks. The strength of the PERT 3-point method is that it can provide a projected cumulative distribution of outcomes based on expert opinion.
There's a lot of PM and ops research literature about the PERT 3-point technique. My suspicion, though, is that it actually works less because of the quasi-statistical side of things, and more because of what psychologists call "the unpacking effect".
The unpacking effect is simple: ask someone to estimate a task ("how long will it take to prepare for a flight across the country?") and the estimate will probably be low. If, instead, you ask them to enumerate subtasks -- to unpack the task -- their subsequent estimates become more accurate.
I use this at work when estimating story points. See a story like "as a user, I can add a phone number to my profile", then speculate aloud what will be required. "We need a database field, to update the model, controller and view, we need some styling and documentation; does this need to be part of the analytics backend too?"
>PERT 3-point estimates...It's taught in project management courses but there're no simple tools for it.
Thanks for mentioning some theory behind this. What is lost by doing a simple best/worst/likely time estimate of all sub tasks, as compared to doing the full blown statistical analysis? The simple approach can be achieved relatively easily with a spreadsheet. For small-ish projects, does the full blown approach provide any significant benefit?
I work mostly on small projects, implementing new server/network infrastructure for small/medium businesses. Time estimation is always a crap shoot, and the only approach I know to combat it is to pad the estimate with generic line items like "testing" and "troubleshooting". If the statistical analysis would help create a better estimate of time, I would spend time developing or contributing to an estimating tool.
> What is lost by doing a simple best/worst/likely time estimate of all sub tasks, as compared to doing the full blown statistical analysis?
I'm not sure I understand you, but I'm just going to talk at you anyway.
In the 3-point estimate you break the estimate down into smaller parts, then have experts assign best/likely/worst case values to each part. Then they are rolled up into a single CDF.
The thing is that the formulae used are basically made up. They give a "triangular distribution", which superficially resembles a normal distribution at low precision. But lots of things don't resemble a normal distribution. You can hope that through the central limit theorem your estimate will improve towards resembling a normal distribution ... except that humans persistently underestimate everything.
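As a concrete illustration, the standard beta-approximation formulas taught with PERT are E = (O + 4M + P) / 6 and sigma = (P - O) / 6 per task, with task variances summed under an independence assumption. A minimal sketch (the task numbers below are invented):

```python
import math

def pert_estimate(tasks):
    """tasks: list of (optimistic, most_likely, pessimistic) tuples."""
    expected = sum((o + 4 * m + p) / 6 for o, m, p in tasks)
    # Variances add for independent tasks; standard deviations do not.
    variance = sum(((p - o) / 6) ** 2 for o, m, p in tasks)
    return expected, math.sqrt(variance)

tasks = [(2, 4, 10), (1, 2, 3), (4, 6, 14)]  # invented example estimates, in hours
mean, sigma = pert_estimate(tasks)
print(f"expected {mean:.1f}h, std dev {sigma:.2f}h")
```

Whether that summed sigma means anything is exactly the question: it leans on the independence and normality hopes described above.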
A lot of the Ops Research / PM literature is about fiddling with the formulae, or introducing adjustments based on historical data, which all helps. But by itself the 3-point method improves outcomes simply because you bother to enumerate stuff.
The other secret is that if you do it a few times, you tend to look at previous estimates and you begin to remember things that are frequently forgotten. A common cause of underestimation is leaving off common tasks. For example, software developers can often give a reasonable estimate for the core work they're doing (code and tests), but tend to forget to account for everything else that's needed before a feature can be considered done done: merging, discussion with others, integration, deployments, documentation and so on.
Thanks. So the main benefits are from simply enumerating sub tasks, assigning best/likely/worst case values, and repeating the exercise to improve sub task coverage. It sounds like the actual statistical analysis is only an incremental improvement beyond that, and probably does not add significant value to small projects.
> I worked on something similar[1] and was totally hamstrung by front-end engineering.
Well, frontend is my thing, so... The estimation is a total hack, though. But it can easily be ripped out.
> I can't tell if this is a straight-up adaptation of PERT 3-point estimates or not.
It's probably not. I skimmed through some papers I found, but only to confirm the basic idea I had to begin with. There is lots of room for improvement.
> I'm wary of projecting from a few tasks to all tasks.
Yes. This is intentionally just bullshitting. You are often very unsure of how big a task is going to get, and this bullshat number is sometimes as good as any number I can think of.
And, sometimes the best way of getting the right answer is to give a wrong one.
> My suspicion though is that it actually works (...) more because of what psychologists called "the unpacking effect".
This is what I have noticed at work, and what I have (tried to) optimize the UI for.
* The automagical guessing of missing estimates. Propagating the estimate for a subtask to other subtasks at the same level makes a lot of sense for a first guess, if I'm any good at task breakdown. The propagating up to the "aunt/uncle" tasks (i.e. siblings of the parent) is a bit stranger - seems like it should dramatically underestimate most top-level tasks, and I'd worry about the anchoring effect. Still, it makes it very convenient and quick.
Have you thought about visually distinguishing autoguessed estimates from manually entered ones? At least for leaf tasks, I'd like a reminder of what I have left to estimate. (You could propagate up the "degree of guessedness" to parent nodes, but you probably wouldn't want to, or it would undermine the usefulness of guessing in the first place.)
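For what it's worth, here's a minimal sketch of how such propagation might work (I'm guessing at the app's actual rule; in this hypothetical version, unestimated leaf siblings inherit the average of their estimated siblings, and a parent's estimate is the sum of its children):

```python
def fill_estimates(node):
    """node: {'estimate': float|None, 'children': [nodes]}; mutates in place."""
    kids = node['children']
    if not kids:
        return  # leaf: keep whatever estimate it has (possibly None)
    for k in kids:
        fill_estimates(k)
    known = [k['estimate'] for k in kids if k['estimate'] is not None]
    if known:
        guess = sum(known) / len(known)
        for k in kids:
            if k['estimate'] is None:
                k['estimate'] = guess  # propagate a first guess to siblings
        node['estimate'] = sum(k['estimate'] for k in kids)  # parent = sum of children

tree = {'estimate': None, 'children': [
    {'estimate': 4.0, 'children': []},
    {'estimate': None, 'children': []},  # inherits the sibling's 4.0 as a guess
]}
fill_estimates(tree)
print(tree['estimate'])  # 8.0
```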
* The visual tree structure is nice. I like how the horizontal arrangement puts a natural limit on how deeply you can nest, and therefore discourages overplanning :)
Yeah, I could see this functioning well if you had some option to set all tasks on the same level. Atm it feels kind of confusing, though, especially the "bubbling". I made this structure, then added an estimation to subtask 2-1-1, and all the other estimations were filled in. At first I didn't understand what was going on at all and thought I had hit a bug.
Same thing confused me too. I estimated one subtask, and all the other tasks and subtasks of other tasks got estimates. This seems like a case of premature overengineering.
You're basically competing with Excel here. One of Excel's strengths is its excellent keyboard support: easy entry of new cells and navigating between them using only the keyboard.
Your application's support for keyboard is nonexistent. If I have to enter lines of text and I have to continuously click (+), I will close the browser pretty fast.
For some useful theory for estimation and planning of projects, Eli Goldratt wrote amazing content in the book "Critical Chain" which is an easy to read novel.
Should you find yourself developing this tool further, this book will give you an edge that most other tools lack.
The visual hierarchical structure is very helpful, both for breaking a project down and for understanding it. However, I was worried for a moment that there was no way to have multiple subtasks at the same level, probably because there is no immediate feedback while dragging. The lack of feedback forces you to guess what's going to happen when you drop the task.
The icons distinguishing guessed from manual estimates are rather meaningless to me. I suppose the manual icon is supposed to represent a set "target"? The guessed estimate could be something with a question mark in it to represent the guess.
As mentioned already, the "Delete" button is where a "Save" should be, I've found myself automatically moving towards it after editing a task.
The tasks could be slightly smaller, so that more fit on a screen (I can see only 6 in a column at most, though my screen is only 1366x768).
Overall it seems like a great tool and I'd be interested in using it. The front page would certainly benefit from an "email me when it's beta/ready" box, as my email would be in there right now.
> However, I was worried for a moment that there was no way to have multiple subtasks at the same level
Hmm. Any idea how to make it more clear? (Apart from just stating it... I need a manual, obviously.)
> The lack of feedback requires you to guess what's going to happen when you drop the task.
Right! I have plans for that, but it's the kind of thing that takes a lot of work, with relatively low bang-per-buck. At least this early in development.
> The tasks could be slightly smaller
I experimented a bunch with that, and picked readability over space saving. But yes, as the project grows it gets harder to get an overview. I like the [CMD]+[-]... :)
It'd be nice if you could use the enter/return key when inputting task titles to move to a new task. I think the delete button is strangely placed; this would normally be an accept button. Maybe just have a trashcan icon or something on the task itself.
In a nutshell, the key to making such an effort very effective is just really good knowledge of and experience with all the parts of the project to be done. Below, in (1)-(5), I give an overview of parts of the field.
(1) Importance of Experience.

Here are some examples where a lot of experience is crucial: a kitchen/bath remodeling firm can give a good estimate of time and cost because they have already done, say, one such job a week for five years, that is, 250 jobs; so essentially they have already seen it all, that is, all the possibilities of what can go wrong and how much there is to do. Similarly for an auto body shop, a residential HVAC company, a plumber or electrician, a landscaping company, an asphalt paving company, a residential housing general contractor, etc.
(2) Software Projects and Experience.

One means of project planning and estimating for software projects looks at each of the parts of the work and starts with the experience level of the team for that part. For parts of the work where the team has no experience, that is, is doing such work for the first time, no estimates are made! So, the project planning works only for projects where there is good experience for all the parts of the work. Alas, often software projects need to do some things for the first time.
(3) Critical Path Scheduling.

There is classic work on project scheduling, and in particular the concept of a critical path. As I recall, this work can make good use of linear programming, which can identify the critical path.

Intuitively the critical path is a sequence of tasks such that, if any task in that sequence takes one minute longer, then the whole project will take one minute longer. So, the tasks on the critical path have no slack. So, to speed up the whole project, one must at least speed up the tasks on the critical path.

Next, then, of course, is the issue of where to put more resources to speed up the tasks on the critical path. This, then, is a resource allocation problem, another optimization problem.
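A minimal sketch of the critical-path idea: compute the longest duration-weighted chain in a task dependency DAG (the task names and durations here are invented):

```python
from functools import lru_cache

durations = {'spec': 2, 'backend': 5, 'frontend': 4, 'deploy': 1}
deps = {'backend': ['spec'], 'frontend': ['spec'], 'deploy': ['backend', 'frontend']}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # Longest path ending at `task`: a task can start only after
    # all of its dependencies have finished.
    start = max((earliest_finish(d) for d in deps.get(task, [])), default=0)
    return start + durations[task]

project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # spec -> backend -> deploy = 2 + 5 + 1 = 8
```

Any delay to 'spec', 'backend', or 'deploy' here delays the whole project; 'frontend' has one unit of slack.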
(4) Alternatives and Uncertainty.

One of the biggest problems in getting projects done on time and within budget is handling uncertainty, e.g., the effects of unpredictable, outside influences, that is, exogenous random inputs.

And there can be some alternatives in how to execute the project; then during the project one might select different alternatives depending on how the project has gone so far, that is, the then-current state of the project.

State: essentially the situation or status of the project at some particular time. Generally we want enough information in the state (a vector or collection of information) that the past and the future of the project are conditionally independent given the state, that is, we want to satisfy the Markov assumption, so that in planning the future we can use just the current state and forget about all the rest of the past. One such state is always the full past history of the project, but commonly much less information is also sufficient to serve as the state.
Commonly we can be drawn into the applied mathematics of stochastic optimal control, say, at least discrete time stochastic dynamic programming, e.g.:

Stuart E. Dreyfus and Averill M. Law, 'The Art and Theory of Dynamic Programming', ISBN 0-12-221860-4, Academic Press.

Dimitri P. Bertsekas and Steven E. Shreve, 'Stochastic Optimal Control: The Discrete Time Case', ISBN 0-12-093260-1, Academic Press.

Wendell H. Fleming and Raymond W. Rishel, 'Deterministic and Stochastic Optimal Control', ISBN 0-387-90155-8, Springer-Verlag, Berlin, 1979.

E. B. Dynkin and A. A. Yushkevich, 'Controlled Markov Processes', ISBN 0-387-90387-9, Springer-Verlag, Berlin.
So, a lot of quite serious applied math research has been done, and a lot is known. Broadly, stochastic dynamic programming makes the best decisions that can be made using only the information available at the time each decision is to be made. Nicely enough, from the Markov assumption and just a simple application of Fubini's theorem, we can say a little more about being best possible.
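A toy illustration of backward induction for discrete time stochastic dynamic programming (all numbers invented): each day, an unfinished task can be attempted with a cheap action or an expensive, more reliable one, with a penalty for missing the deadline. The optimal choice at each step uses only the current state, here the number of days remaining:

```python
ACTIONS = [(1.0, 0.3), (3.0, 0.8)]  # (cost, probability the task finishes today)
PENALTY = 20.0                      # cost if the task is unfinished at the deadline
HORIZON = 5                         # days until the deadline

def solve():
    value = PENALTY  # expected cost-to-go with 0 days left and the task unfinished
    policy = []
    for _ in range(HORIZON):
        # Bellman step: pick the action minimizing cost now plus the
        # expected cost-to-go if the task does not finish today.
        value, choice = min(
            (cost + (1 - p) * value, i) for i, (cost, p) in enumerate(ACTIONS)
        )
        policy.append(choice)
    return value, policy[::-1]  # policy[0] = action when HORIZON days remain

expected_cost, plan = solve()
print(expected_cost, plan)  # cheap tries early, the expensive action near the deadline
```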
Interesting here, and a way to get some massive speedups of some software, is the concept of certainty equivalence in the case of LQG, that is, linear, quadratic, Gaussian control. Here the system is linear (as in a linear operator or, for an important special case, matrix multiplication); the exogenous random variables are Gaussian; and the quantity to be controlled, say, minimized, is quadratic (which also includes linear).
(5) Spreadsheets.

Sometimes a spreadsheet is used for project planning, and there is an interesting observation to be made here. Suppose we are doing financial planning for five years and develop a spreadsheet with one column for each month, that is, 60 columns, maybe 61, and one row for each variable. Some of the spreadsheet cells represent exogenous random variables, and some cells represent decisions to be made.

(A) We can fix the decisions and recalculate the spreadsheet, say, 500 times and, thus, do Monte Carlo simulation for the random cells and get an empirical distribution and expected value of the money made at the end of the project.

(B) We can fix the values of the exogenous random variables and then use some optimization to find the values of the decisions that maximize the money made at the end of the project. Spreadsheet software has long had some optimization built in, from linear programming or L. Lasdon's generalized reduced gradient software, version 2 (GRG2).
This approach, in terms of football, say, the Super Bowl later today, would be like the quarterback on first and ten calling the next four plays without considering the extra information he will have after each of the plays. Quarterbacks wouldn't do that, and business planners should not either.
(C) But what about making the best decisions with the exogenous random variables not fixed? Then usually we want to maximize the expected value of the money made at the end, but the form of the decisions is special: at each period, the decisions are in terms of the state of the project as of the end of the last period. Such optimization is called discrete time stochastic dynamic programming, Markov decision theory, etc.

So, nicely enough, computer spreadsheet software has enabled some millions of planners to specify, in very precise terms, some nearly complete formulations of problems to be solved with stochastic dynamic programming.
For a reasonably large spreadsheet, such optimization would be a supercomputer application -- nicely, there really is a lot of parallelism that can be exploited. At one time I pursued just this direction at the IBM Watson lab in Yorktown Heights. Had the semi-farsighted management let me continue, by now there should be a good business for cloud sites to do such problems, say, using a few thousand cores. Ah, IBM has missed out on several much larger opportunities!

There are various approaches to making the computations faster, say, using multivariate splines to approximate some of the functions (tables) to be generated, the R. Rockafellar idea of scenario aggregation, and more.

My guess is that the OP has in mind something simpler. As I recall, there is some project planning software; a simple Google search shows, e.g., Microsoft's Project 2013.
> My guess is that the OP has in mind something simpler.
Yes.
Not to dismiss the actual research that has been done, but... Estimates are mostly bullshit anyway.
This tool helps you to bullshit more accurately than nothing, while still being "super simple" to use.
The projection algorithm I use is extremely basic so far. There are a ton of improvements I have in mind already, and I haven't done more than skim the research papers I found yet.
I did have plans for some simple Monte Carlo sampling to get a nice probability distribution graph, though.
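That Monte Carlo idea could be as simple as the following sketch (not the app's actual code; the task numbers are invented): draw each task's duration from a triangular distribution over its best/likely/worst estimate, sum, and repeat to get an empirical distribution of total project time:

```python
import random

tasks = [(2, 4, 10), (1, 2, 3), (4, 6, 14)]  # (best, likely, worst) hours, invented
random.seed(0)  # reproducible runs for this sketch
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    for _ in range(10_000)
)
p50, p90 = totals[len(totals) // 2], totals[int(len(totals) * 0.9)]
print(f"median {p50:.1f}h, 90th percentile {p90:.1f}h")
```

Reporting the distribution, or a couple of its percentiles, instead of a single number makes the "this is a guess" nature of the estimate visible.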
> Not to dismiss the actual research that has been done, but... Estimates are mostly bullshit anyway.
Right, but in some cases you can know fairly well or quite well. E.g., say you are planning, and one of the exogenous random variables is the number of people, or customers, who arrive during the plan. Then with just a few assumptions, which can mostly be checked intuitively, that number of people will have a Poisson distribution, and all you have to estimate is the arrival rate parameter. One of the random variables might have to do with weather, but there is a lot of data on the probability distribution of what the weather can do.
But generally, yes, that applied math is a lot of theorems and proofs about what the heck to do if you had a lot more data than you do have. Or, in the South Pacific, it is an airline service from island A to island B: terrific if you are at A and want to get to B, but no good if there's no chance of getting to island A.
As I recall, in some major US DoD projects, keeping track of the critical paths and adding resources to those was long regarded as doing a lot of good.
A related part of project planning is the subject, often pursued in B-schools, of materials requirements planning -- in practice there can be a lot to that. And closely related there is supply chain optimization, that is, when the heck will the critical parts arrive for our poor project?
Also related is constraint logic programming, or how the heck to load all those 18-wheel trucks, each with 40,000 pounds of boxes of fresh pork, from our only three loading docks, where each truck gets what its scheduled deliveries need and the boxes are ready from the kill-and-cut operation when the truck is parked at the loading dock? Real problem. Such problems are commonly looking for just a feasible solution, that is, not necessarily an optimal solution, to some constraints that might have come from an optimization problem. Well, in such optimization, just finding a first feasible solution is in principle as hard as finding an optimal solution given a feasible solution, so constraint logic programming gets into optimization. At one time, SAP, CPLEX, etc. got heavily involved.
Another planning problem is dial-a-ride bus scheduling -- one of my Ph.D. dissertation advisors tried to get me to pursue that problem as a dissertation, but I avoided it like the Big Muddy Swamp full of huge alligators and poisonous snakes and picked another problem, right, in stochastic optimal control, a problem I could actually get some clean results in.
Did I mention, project planning is a big field?
Your software looks like it has a user interface a lot of people will like a lot, but with enough usage some users will still encounter some of the challenging aspects of project planning.