
98.6 Degrees Fahrenheit Isn’t the Average Anymore


Jo Craven McGinty, reporting for The Wall Street Journal:

Nearly 150 years ago, a German physician analyzed a million temperatures from 25,000 patients and concluded that normal human-body temperature is 98.6 degrees Fahrenheit. That standard has been published in numerous medical texts and helped generations of parents judge the gravity of a child’s illness. But at least two dozen modern studies have concluded the number is too high.

The findings have prompted speculation that the pioneering analysis published in 1869 by Carl Reinhold August Wunderlich was flawed.

Or was it?

In a new study, researchers from Stanford University argue that Wunderlich’s number was correct at the time but is no longer accurate because the human body has changed. Today, they say, the average normal human-body temperature is closer to 97.5 degrees Fahrenheit.

Reader comments on the story included:

  • Is this about better nutrition?
  • A response to climate change, perhaps?
  • The fact that doctors don’t know this is a very real problem.
  • That’s been my base temperature for as long as I can remember. Don’t know if it’s a general change in people today or maybe that a sample of 25,000 patients from 150 years ago would have a lot more people fighting infections that we now vaccinate folks for.
  • Nice. I thought it was just me.

How do committees invent?


How do committees invent?, Conway, Datamation magazine 1968

With thanks to Chris Frost for recommending this paper – another great example of a case where we all know the law (Conway’s law in this case), but many of us have not actually read the original ideas behind it.

We’re back in 1968, a time when it was taken for granted that before building a system, it was necessary to design it. The systems under discussion are not restricted to computer systems either, by the way – one example of a system given is "the public transport network." Designs are produced by people, and the set of people working on a design are part of a design organisation.

The definition of design itself is quite interesting:

That kind of intellectual activity which creates a whole from its diverse parts may be called the design of a system.

When I think about design, I more naturally think about it the other way around: how to decompose the whole into a set of parts that will work together to accomplish the system goals. But of course Conway is right that those parts do have to fit together to produce the intended whole again.

There are two things we need at the very beginning of the process:

  • An (initial) understanding of the system boundaries (and any boundaries on the design and development process too) – what’s in scope and what’s out of scope.
  • A preliminary notion of how the system will be organised. Without this, we can’t begin to break down the design work.

Given a preliminary understanding of the system, it’s possible to begin organising the design team. Decisions taken at this early stage, with limited information, can have long-lasting consequences:

…the very act of organizing a design team means that certain design decisions have already been made, explicitly or otherwise. Given any design team organisation, there is a class of design alternatives which cannot be effectively pursued by such an organization because the necessary communication paths do not exist. Therefore, there is no such thing as a design group which is both organized and unbiased.

These days it’s less likely that you’ll have a dedicated design team – even the seemingly obvious claim that it makes sense to (at least partially) design something before building it can feel controversial! But of course we do undertake design activities all the time, perhaps informally and implicitly, sometimes more explicitly. We’ve just learned to take smaller steps with each design increment before getting feedback. If it helps, then in the software context, if you mentally substitute ‘design and development’ every time Conway talks about design and the design organisation, I don’t think you’ll go too far wrong.

Once you’ve got your initial organisation of the design (and development) team done, delegation of tasks can begin. Each delegation represents a narrowing of someone’s scope, with a commensurate narrowing of the class of design alternatives that can be considered. Along with the narrowing of individual scopes, we also create a coordination problem:

Coordination among task groups, although it appears to lower the productivity of the individual in the small group, provides the only possibility that the separate task groups will be able to consolidate their efforts into a unified system design.

It’s a rare team that re-organises in the light of newly discovered information, even though it might suggest a better alternative.

This point of view has produced the observation that there’s never enough time to do something right, but there’s always enough time to do it over.

The two most fundamental tools in a designer’s toolbox are decomposition and composition. The system as a whole is decomposed into smaller subsystems which are interconnected (composed). Each of these subsystems may in turn be further decomposed into, and then composed out of, parts. Eventually we reach a level that is simple enough to be understood without further subdivision. Of course therefore the most important decisions a designer can make involve the criteria for decomposing a system into modules, but that’s another story!

The different subsystems talk to each other through interfaces (a newly emerging term in 1968). Now, if we think about systems composed of subsystems interacting via interfaces, we will find a parallel in the organisation by making the following substitutions:

  • Replace ‘system’ by ‘committee’
  • Replace ‘subsystem’ by ‘subcommittee’
  • Replace ‘interface’ by ‘coordinator’

Or to put that in more modern terms, I think you can also:

  • Replace ‘system’ by ‘group’
  • Replace ‘subsystem’ by ‘team’
  • Replace ‘interface’ by ‘team leader’
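To make the parallel concrete, here's a toy sketch (my own illustration – the team names, module names, and the isHomomorphism helper are invented for the example, not taken from Conway's paper). It checks that a mapping from design groups to subsystems preserves edges: every pair of groups that communicates maps to a pair of subsystems joined by an interface.

```javascript
// Communication paths in the design organisation (who talks to whom).
var orgEdges = [["frontendTeam", "apiTeam"], ["apiTeam", "dbTeam"]];

// Which subsystem each group is responsible for.
var assignment = { frontendTeam: "ui", apiTeam: "service", dbTeam: "storage" };

// Interfaces in the designed system (which subsystems talk to each other).
var systemEdges = [["ui", "service"], ["service", "storage"]];

// The mapping is a (graph) homomorphism if every organisational
// communication path corresponds to an interface in the system.
var isHomomorphism = function (edges, map, targetEdges) {
    var connected = function (a, b) {
        return targetEdges.some(function (e) {
            return (e[0] === a && e[1] === b) || (e[0] === b && e[1] === a);
        });
    };
    return edges.every(function (e) {
        return connected(map[e[0]], map[e[1]]);
    });
};

isHomomorphism(orgEdges, assignment, systemEdges); // true
```

Remove the service–storage interface from systemEdges and the check fails: the apiTeam–dbTeam conversation has no counterpart in the system, which is the flavour of constraint Conway describes.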

We are now in a position to address the fundamental question of this article. Is there any predictable relationship between the graph structure of a design organization and the graph structure of the system it designs? The answer is: Yes, the relationship is so simple that in some cases it is an identity…. This kind of structure-preserving relationship between two sets of things is called a homomorphism.

By far my favourite part of the paper is the second half, where the implications of this homomorphism are unpacked. It was Fred Brooks who actually coined the term ‘Conway’s Law’ in The Mythical Man-Month when referring to this paper. The mythical thing about the man-month, of course, is the illusion that person-months are fungible commodities – a very tempting idea from the management perspective, but utterly wrong! Conway shows us why. The ‘resource units’ viewpoint would say that two people working for a year, or one hundred people working for a week, are of equal value…

Assuming that two men and one hundred men cannot work in the same organizational structure (this is intuitively evident and will be discussed below) our homomorphism says that they will not design similar systems; therefore the value of their efforts may not even be comparable. From experience we know that the two men, if they are well chosen and survive the experience, will give us a better system. Assumptions which may be adequate for peeling potatoes and erecting brick walls fail for designing systems.

We all understand this at some level, but it’s easy to forget. Plus, there are organisational forces that work against us:

  1. We come to the early realisation that the system will be large, with the implication that it’s going to take more time than we’d like to design with the current team size. Organisational pressures then kick in to "make irresistible the temptation to assign too many people to a design effort".
  2. As we add people, and apply conventional management structures[^1] to their organisation, the organisational communication structure begins to disintegrate.
  3. The homomorphism then ensures that the structure of the system will reflect the disintegration which has occurred in the design organisation.

The critical moment comes when the complexity has not yet been tamed, and the skills of the initial designer are being tested to the maximum:

It is a natural temptation of the initial designer – the one whose preliminary design concepts influence the organisation of the design effort – to delegate tasks when the apparent complexity of the system approaches his limits of comprehension. This is the turning point in the course of the design. Either he struggles to reduce the system to comprehensibility and wins, or else he loses control of it.

Once an organisation has been staffed and built, it’s going to be used. Organisations have an incredible propensity for self-preservation.

Probably the greatest single common factor behind many poorly designed systems now in existence has been the availability of a design organisation in need of work.

I’ve always had a preference for smaller teams consisting of highly-skilled people over larger groups. Revisiting Conway’s law as I put this write-up together, the more often overlooked observation that the design organisation structure doesn’t just direct the design, but actually constrains the set of designs that can be contemplated strikes me most forcibly. Every person you add reduces your design options.

Perhaps the most important thing therefore, is to "keep design organisations lean and flexible." Flexibility of organisation is important to effective design, because the design you have now is rarely the best possible for all time.

And so finally, in the 3rd to last paragraph, we find the formulation that has come to be known as Conway’s Law:

The basic formulation of this article is that organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.

[^1]: Conway refers to the military-style organisational structure of each individual having at most one superior and at most approximately seven subordinates – pretty much the rule of thumb we still use today.


Zone of Ceremony


Static typing doesn't have to involve much ceremony.

I seem to get involved in long and passionate debates about static versus dynamic typing on a regular basis. I find myself clearly on the side of static typing, but this article isn't about the virtues of static versus dynamic typing. The purpose is to correct a common misconception about statically typed languages.

Ceremony #

People who favour dynamically typed languages over statically typed languages often emphasise that they find the lack of ceremony productive. That seems reasonable; only, it's a false dichotomy.

Dynamically typed languages do seem to be light on ceremony, but you can't infer from that that statically typed languages have to require lots of ceremony. Unfortunately, all mainstream statically typed languages belong to the same family, and they do involve ceremony. I think that people extrapolate from what they know; they falsely conclude that all statically typed languages must come with the overhead of ceremony.

It looks to me more as though there's an unfortunate Zone of Ceremony:

A conceptual spectrum of typing, from dynamic on the left, to static on the right. There's a zone of ceremony slightly to the right of the middle with the languages C++, C#, and Java.

Such a diagram can never be anything but a simplification, but I hope that it's illuminating. C++, Java, and C# are all languages that involve ceremony. To the right of them are what we could term the trans-ceremonial languages. These include F# and Haskell.

Low ceremony of JavaScript #

Imagine that you're given a list of numbers, as well as a quantity. The quantity is a number to be consumed. You must remove elements from the left until you've consumed at least that quantity. Then return the rest of the list.

> consume ([1,2,3], 1);
[ 2, 3 ]
> consume ([1,2,3], 2);
[ 3 ]
> consume ([1,2,3], 3);
[ 3 ]
> consume ([1,2,3], 4);
[]

The first example consumes only the leading 1, while both the second and the third examples consume both 1 and 2, because the sum of those values is 3 and the requested quantities are 2 and 3, respectively. The fourth example consumes all elements because the requested quantity is 4, and you need 1, 2, and 3 before the sum is large enough. You have to pick strictly from the left, so you can't decide to just take the elements 1 and 3.

In JavaScript, you could implement the consume function like this:

var consume = function (source, quantity) {
    if (!source) {
        return [];
    }

    var accumulator = 0;
    var result = [];
    for (var i = 0; i < source.length; i++) {
        var x = source[i];
        if (quantity <= accumulator) {
            result.push(x);
        }

        accumulator += x;
    }
    return result;
};

I'm a terrible JavaScript programmer, so I'm sure that it could have been done more elegantly, but as far as I can tell, it gets the job done. I wrote some tests, and I have 17 passing test cases. The point isn't about how you write the function, but how much ceremony is required. In JavaScript you don't need to declare any types. Just name the function and its arguments, and you're ready to write code.

High ceremony of C# #

Contrast the JavaScript example with C#. The same function in C# would look like this:

public static class Enumerable
{
    public static IEnumerable<int> Consume(
        this IEnumerable<int> source,
        int quantity)
    {
        if (source is null)
            yield break;

        var accumulator = 0;
        foreach (var i in source)
        {
            if (quantity <= accumulator)
                yield return i;

            accumulator += i;
        }
    }
}
Here you have to declare the type of each method argument, as well as the return type of the method. You also have to put the method in a class. This may not seem like much overhead, but if you later need to change the types, editing is required. This can affect downstream callers, so simple type changes ripple through code bases.

It gets worse, though. The above Consume method only handles int values. What if you need to call the method with long arrays?

You'd have to add an overload:

public static IEnumerable<long> Consume(
    this IEnumerable<long> source,
    long quantity)
{
    if (source is null)
        yield break;

    var accumulator = 0L;
    foreach (var i in source)
    {
        if (quantity <= accumulator)
            yield return i;

        accumulator += i;
    }
}

Do you need support for short? Add an overload. decimal? Add an overload. byte? Add an overload.

No wonder people used to dynamic languages find this awkward.

Low ceremony of F# #

You can write the same functionality in F#:

let inline consume quantity =
    let go (acc, xs) x =
        if quantity <= acc
        then (acc, Seq.append xs (Seq.singleton x))
        else (acc + x, xs)
    Seq.fold go (LanguagePrimitives.GenericZero, Seq.empty) >> snd

There's no type declaration in sight, but nonetheless the function is statically typed. It has this somewhat complicated type:

quantity: ^a -> (seq< ^b> -> seq< ^b>)
  when ( ^a or  ^b) : (static member ( + ) :  ^a *  ^b ->  ^a) and
        ^a : (static member get_Zero : ->  ^a) and  ^a : comparison

While this looks arcane, it means that the function supports sequences of any type that comes with a zero value and supports addition and comparison. You can call it with 32-bit integers, decimals, and so on:

> consume 2 [1;2;3];;
val it : seq<int> = seq [3]

> consume 2m [1m;2m;3m];;
val it : seq<decimal> = seq [3M]

Static typing still means that you can't just call it with any type of value. An expression like consume "foo" [true;false;true] will not compile.

You can explicitly declare types in F# (like you can in C#), but my experience is that if you don't, type changes tend to just propagate throughout your code base. Change a type of a function, and upstream callers generally just 'figure it out'. If you think of functions calling other functions as a graph, you often only have to adjust leaf nodes even when you change the type of something deep in your code base.

Low ceremony of Haskell #

Likewise, you can write the function in Haskell:

consume quantity = reverse . snd . foldl go (0, [])
  where go (acc, ys) x = if quantity <= acc then (acc, x:ys) else (acc + x, ys)

Again, you don't have to explicitly declare any types. The compiler figures them out. You can ask GHCi about the function's type, and it'll tell you:

> :t consume
consume :: (Foldable t, Ord a, Num a) => a -> t a -> [a]

It's more compact than the inferred F# type, but the idea is the same. It'll compile for any Foldable container t and any type a that belongs to the classes of types called Ord and Num. Num supports addition and Ord supports comparison.

There's little ceremony involved with the types in Haskell or F#, yet both languages are statically typed. In fact, their type systems are more powerful than C#'s or Java's. They can express relationships between types that those languages can't.

Summary #

In debates about static versus dynamic typing, contributors often generalise from their experience with C++, Java, or C#. They dislike the amount of ceremony required in these languages, but falsely believe that it means that you can't have static types without ceremony.

The statically typed mainstream languages seem to occupy a Zone of Ceremony.

Static typing without ceremony is possible, as evidenced by languages like F# and Haskell. You could call such languages trans-ceremonial languages. They offer the best of both worlds: compile-time checking and little ceremony.


Tyson Williams

In your initial/int C# example, I think your point is that method arguments and the return type require manifest typing. Then for your example about long (and comments about short, decimal, and byte), I think your point is that C#'s type system is primarily nominal. You then contrast those C# examples with F# and Haskell examples that utilize inferred and structural aspects of their type systems.

I also sometimes get involved in debates about static versus dynamic typing and find myself on the side of static typing. Furthermore, I also typically hear arguments against manifest and nominal typing instead of against static typing. In theory, I agree with those arguments; I also prefer type systems that are inferred and structural instead of those that are manifest and nominal.

I see the tradeoff as being among the users of the programming language, those responsible for writing and maintaining the compiler/interpreter, and what can be said about the correctness of the code. (In the rest of this paragraph, all statements about things being simple or complex are meant to be relative. I will also exaggerate for the sake of simplifying my statements.) For a dynamic language, the interpreter and coding are simple, but there are no guarantees about correctness. For a static, manifest, and nominal language, the compiler is somewhere between simple and complex, the coding is complex, but at least there are some guarantees about correctness. For a static, inferred, structural language, the compiler is complex, coding is simple, and there are some guarantees about correctness.

Contrasting a dynamic language with one that is static, inferred, and structural, I see the tradeoff as being directly between the compiler/interpreter writers and what can be said about the correctness of the code, while the experience of those writing code in the language is mostly unchanged. I think that is your point being made by contrasting the JavaScript example (a dynamic language) with the F# and Haskell examples (which demonstrate the static, inferred, and structural behavior of their type systems).

While we are on the topic, I would like to say something that I think is controversial about duck typing. I think duck typing is "just" a dynamic type system that is also structural. This contradicts the lead of its Wikipedia article (linked above) as well as the subsection about structural type systems. They both imply that nominal vs structural typing is a spectrum that only exists for static languages. I disagree; I think dynamic languages can also exist on that spectrum. It is just that most dynamic languages are also structural. In contrast, I think that the manifest vs inferred spectrum exists for static languages but not for dynamic languages.

Nonetheless, that subsection makes a great observation. For structural languages, the difference between static and dynamic languages is not just some guarantees about correctness. Dynamic languages check for type correctness at the last possible moment. (That is saying more than saying that the type check happens at runtime.) For example, consider a function with dead code that "doesn't type". If the type system were static, then this function cannot be executed, but if the type system were dynamic, then it could be executed. More practically, suppose the function is a simple if-else statement with code in the else branch that "doesn't type" and that the corresponding Boolean expression always evaluates to true. If the type system were static, then this function cannot be executed, but if the type system were dynamic, then it could be executed.
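That dead-branch behaviour is easy to demonstrate in JavaScript (a hypothetical example of mine, not from the article): the else branch below would be rejected by any static checker, but a dynamic checker only complains if the branch actually runs.

```javascript
var f = function (flag) {
    if (flag) {
        return 1;
    } else {
        // Ill-typed: null has no properties. A static type system would
        // reject the whole function; a dynamic one fails only on execution.
        return null.length;
    }
};

f(true); // → 1; the ill-typed branch is never reached
```

Calling f(false) throws a TypeError at runtime – the "last possible moment" described above.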

In my experience, the typical solution of a functional programmer would be to strengthen the input types so that the else branch can be proved by the compiler to be dead code, and then delete the dead code. This approach makes this one function simpler, and I generally am in favor of this. However, there is a sense in which we can't always repeat this for the calling function. Otherwise, we would end up with a program that is provably correct, which is impossible for a Turing-complete language. Instead, I think the practical solution is to (at some appropriate level) short-circuit the computation when given input that is not known to be good and either do nothing or report back to the user that the input wasn't accepted.

2019-12-16 17:12 UTC


Figures From Classical Paintings Experience Contemporary Life in Collages by Alexey Kondakov


Ukrainian artist Alexey Kondakov (previously) lifts figures out of classical paintings and drops them into modern-day photographs. Elegantly posed in dynamic lighting, his figures commute on public transit, dance in nightclubs, and peek around corners in otherwise mundane digital collages. The juxtaposition of the two worlds is humorous and at times seamless in its execution.

Through placement and shadows, Kondakov’s images sell the idea that the classical figures are three-dimensional objects photographed in a three-dimensional world. An image from an upcoming nightlife series depicts a mostly nude woman in a unique pose that, in context, can be read as dancing. Other images from his ongoing “Daily Life of Gods” use architecture and landscapes to ground the painted figures in an alternate reality.

To see more of his period-blending collages, give Alexey Kondakov a follow on Instagram.


Download More Than 300 Art Books From the Getty Museum’s Virtual Library


“The Rosebud Garden of Girls” by Julia Cameron. Virtual Library title: “Julia Margaret Cameron: The Complete Photographs” by Julian Cox, Colin Ford, Joanne Lukitsh, and Philippa Wright

Over the last five years, the Los Angeles-based Getty Museum has developed a program to share more than three hundred books in its Virtual Library. Each unabridged volume, drawn from the Getty Publications Archive, has been cleared for copyright issues and is available for free download. Greg Albers, Digital Publications Manager for Getty Publications, shared with Hyperallergic that books in the Virtual Library have been downloaded 398,058 times to date. The initiative is a way to keep compelling and historically important books available even if they have, literally, gone out of print. Topics in the Virtual Library collection range from fine and decorative art genres to features on specific artists. Dive into diverse titles including “Art and Eternity: The Nefertari Wall Paintings Conservation Project 1986 – 1992” and “Julia Margaret Cameron: The Complete Photographs”—among dozens and dozens of others on the Virtual Library Website. (via Hyperallergic)

“Pilgrim Flask and Cover with Marine Scenes” (circa 1565–1570), Workshop of Orazio Fontana, tin-glazed earthenware. Virtual Library title: “Italian Ceramics: Catalogue of the J. Paul Getty Museum Collection” by Catherine Hess


Waterfall managed to put a man on the moon, Agile managed to collect more likes


This was my provocative answer in one of the numerous flame wars over Agile vs the waterfall model. As usual, these fights start because people diminish the waterfall model with the excuse that we have to be more Agile. I am inclined to give my 0.02E in a more constructive manner (and also try to resurrect my zombie blog). One main assumption is that what I discuss does not focus on small or one-off software.

All models are wrong, but some models are useful. I strongly believe this quote, which is attributed to George Box, so let me state beforehand that I do not believe that blindly following any model is the best way, but we can use models as guidance to explore ideas and also shape the way we work. The waterfall model has been out of fashion for quite some time, for understandable reasons, but it seems that it always remains one of the favorite punch bags for people embracing the “Agile methods”. Interestingly, most people who bash the waterfall model and embrace Agile methods are the ones advocating Scrum, which, as many say, is not Agile. ¯\_(ツ)_/¯

Let’s have a look at the waterfall model. It is a sequence of stages – Requirements, Design, Implementation, Verification, Maintenance – that are followed linearly, and backtracking is frowned upon. In theory, people do excellent work at each stage, so everything falls into place like water flowing down a beautiful waterfall. Well… in theory. In practice, life is never so nice, and this is the point that rivals use to discard the whole model. And this is exactly where they throw the baby out with the bath water.

“Waterfall does not permit changes”, they say. “The design and requirements are set in stone”, they say. “Like a waterfall there is only one direction, never up”, they say. Well, I can start with the last part to disagree, because obviously they have not seen a real waterfall. First of all, waterfalls are not created just anywhere, but only where the right conditions exist. They are not created spontaneously; it takes time for the water to find and create the path, smooth it, and achieve the nice waterfall image that everybody has in mind. Even during the creation, there are stones, rocks, rough edges. If you zoom in closely you can see that as the water of the waterfall falls down on them, it is repelled, and you can even see it go back upwards in order to find a way past the obstacle. Is this not the definition of agility? After time (a lot, I admit), when the rough edges have been smoothed down, the water can flow unobstructed along the path. The image it creates is the same mental image that people in Agile methods envision when they claim to always move forward in small steps.

Leaving aside the picturesque image I (ab)used, one can also check the papers that introduced the waterfall model, mostly Royce (1970) and Bell and Thayer. Nowhere in those documents will you find any advocacy of such a rigid model. There are a number of feedback loops that break this supposed rigidity, where they show that as you progress in more depth you gain more insight, and you have to come back and fix the misinformation or omissions in the previous stages.

So what does the waterfall model dictate?

  1. Have requirements. Which basically means, before doing anything understand what you have to solve, what is the actual problem.
  2. Have a design. Do not simply start running around like a headless chicken, but try to frame the solution
  3. Solve it. Not much to say there.
  4. Test it. Is the real problem solved?
  5. The sooner you find bugs, the easier it is to solve them. I think this is common knowledge: an issue that forces you to change your whole design costs more to fix than a simple code bug. (I am not talking about the costs of the consequences of the bug, like a security bug that leaks all your data.)
  6. If you build something useful, people will use it, therefore you have to maintain it. If you also run it (if it is a service), then you almost get to devops 🙂
  7. Documentation is important. This is a hidden part of the model. At each step you have to produce something, which in many cases is documentation to be used by another step, either in a higher or lower layer. And guess what the most frequent complaint of software engineers is? Lack of documentation.
  8. There is no mention of how compressed these steps can be. A design document can be just one paragraph for something very simple.
What are the alternatives that the people who believe they represent the Agile camp are offering?

  1. Don’t waste time gathering requirements. Just solve the current problem that has shown up. If something else comes up, you just add it as another issue.
  2. Don’t waste time on design. Just solve the current issue as best you can. It will eventually evolve.
  3. Go fast to the implementation, since this is a small task.
  4. Test it.
  5. You do not find bugs, because you were solving the current problem.
  6. Code is more important than documentation and design.

Most of the above statements have different flaws that can be boiled down to a few points:

  • What if you do not see the big picture and you just solve a problem, but not the actual problem? As you start building, you acquire cruft, and this cruft will make you much slower in the future if a more fundamental change turns out to be needed.
  • It is a fallacy to believe that if you do not see the big picture and just go solve all the small problems, it will lead you to the global minimum. Except in trivial or straightforward cases, it will lead you to some local minimum, and you will be trapped. By trying to design and pre-think some issues, you acquire insight that can help you recognize when you are in a local minimum.
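The local-minimum trap can be sketched with a toy example (my own illustration; the "terrain" of design costs is entirely made up): a purely greedy walk stops in the first valley it reaches, even though a deeper one exists further away.

```javascript
// Design "cost" at each position; we start at index 0.
var terrain = [5, 3, 4, 6, 2, 1, 2];

// Greedily step to a cheaper neighbour until none exists.
var greedyDescend = function (i) {
    for (;;) {
        var best = i;
        if (i > 0 && terrain[i - 1] < terrain[best]) best = i - 1;
        if (i < terrain.length - 1 && terrain[i + 1] < terrain[best]) best = i + 1;
        if (best === i) return i; // no neighbour is cheaper: stuck
        i = best;
    }
};

greedyDescend(0); // → 1, a local minimum (cost 3)
// The global minimum sits at index 5 (cost 1), unreachable by local steps:
// getting there requires first climbing over the cost-6 hump at index 3.
```

Some up-front design is the analogue of looking at the whole terrain before deciding where to dig.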

I have numerous times referred to such people as people who “believe that they represent the Agile method”. I phrased it that way because I do not believe that these people really understand the Agile methodology. I have observed that most of them are programmers (or ex-programmers) who want to cut out the boring parts and get straight to the good stuff (the coding part). Therefore, they consider everything else an overhead or burden to be eliminated so that they are free to perform their art. I cannot disagree that writing code can feel much more entertaining, but I doubt it is the optimal way of building something.

Given the original idea behind the waterfall model, and the Agile manifesto, I can draw more similarities between them than differences. What I believe is that the Agile method was an attempt to remove all the cruft that the abuse and misuse of the waterfall method had accumulated over all those previous years, and to come back to a more natural way of engineering software. Agile has design and planning, and waterfall has feedback loops and flexibility.

After writing all this I also found this post, which explains some of the misinterpretations of the waterfall model better than I do. But since I have already written this, I will just publish it.
