Monday, May 31, 2010

A "knowledge economy" for New Zealand?

Andrew D Atkin

At a casual glance, you would think that the countries hosting all those expensive factories that push out high-tech products must be the richest countries. However, it doesn't quite work that way, because mass-production must still bow down to the laws of competition and market saturation.

If you build a mass-production plant for, say, $100m and it provides an impressive yield of, say, 30%, then other capitalists are going to get excited and build their own production plants to compete with you - they want a piece of that pie too. So in time, your yield as a ratio of your original capital investment will come back to a more 'normal' 10%, as soon as you and your competitors saturate the market - which will happen.
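To make the arithmetic concrete, here is a toy sketch in Python. Every number in it is an invented assumption (a fixed annual profit pool shared evenly among identical plants), not real data; it just shows how new entrants compress each investor's yield from 30% down to 10%:

```python
# Toy model of yield compression as competitors enter a saturating market.
# All figures are illustrative assumptions, not real data.

capital_per_plant = 100e6    # $100m to build one plant
annual_profit_pool = 30e6    # total profit the market can support per year

for plants in range(1, 4):
    # Assume the profit pool is shared evenly among identical plants.
    yield_per_plant = annual_profit_pool / (plants * capital_per_plant)
    print(f"{plants} plant(s): {yield_per_plant:.0%} return on capital each")
```

With one plant the pioneer earns the full 30%; by the time a third plant is running, each owner is back to a 'normal' 10%.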

The effect? Goods dependent on intensive mass-production become 'just another industry' serving the market. The great riches come only to the early adopters, and only for a temporary time (that was their reward for taking a risk with something new). And in turn, in time, you might be no better off investing in coffee shops than in high-tech products - or anything!

My point is that New Zealand should not try to pick winners with respect to "the next big thing", at least not by looking at what other countries are doing. (If you're playing catch-up, then you're probably already too late!).

Market saturation always brings new products and services into line with existing ones, in time. This is why New Zealand should not be talking about a "knowledge economy" as such - the world has PLENTY of knowledge, especially emerging from the human-capital giants, India and China.

New Zealand should instead focus on the products and services where it has a natural edge over other countries. In a global economy, I think the future for New Zealand will (or should) mostly be in tourism and lifestyle property development, in conjunction with its well-established agricultural industry. This is a good and highly secure place for New Zealand to focus its capital.

7-3-13: A good article relating to my point.

--------------------------------------------------------------------

Note: Outsourcing and free trade:

Having free-trade links to China is seductive because Chinese labour is so cheap, giving us cheap products to import. The problem is that with unprotected outsourcing you can find yourself outsourcing the very thing that made your country rich in the first place - your industrial base. So for the short-term pay-off of cheap imported goods, you can end up with a serious long-term decay in the real wages (purchasing power) of the citizenry.

Capital located in a rich country must, substantially, share the fruits of its profits with its workers. Indeed, the real (nationally relevant) profit of an organisation can be measured not only in final business profits, but also in the wages it pays. Free trade can undermine that process: the capital owners can (and do) share less of their fruits with the workers. Outsourcing to cheap labour effectively threatens to concentrate nearly all the profits from capital output in the hands of the capital owners - progressing towards lower wages and bigger business profits. Import tariffs naturally protect against this process by making it less feasible for capital to relocate overseas where labour is cheap, because the tariff cancels out the savings obtained from exploiting that cheap labour, and therefore undermines the advantage of outsourcing.
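As a minimal sketch of that mechanism, here is a toy comparison in Python. All the figures (unit costs, the tariff rate) are invented for illustration:

```python
# Hypothetical numbers: a firm compares producing at home vs offshore.
# The tariff is modelled as a flat per-unit charge on imported output.

DOMESTIC_UNIT_COST = 10.0   # includes higher local wages
OFFSHORE_UNIT_COST = 4.0    # cheap offshore labour

def landed_cost(offshore: bool, tariff_per_unit: float) -> float:
    """Cost per unit as delivered to the home market."""
    if offshore:
        return OFFSHORE_UNIT_COST + tariff_per_unit
    return DOMESTIC_UNIT_COST

print("Offshore, no tariff :", landed_cost(True, 0.0))   # 4.0  - relocating wins easily
print("Offshore, 7.0 tariff:", landed_cost(True, 7.0))   # 11.0 - dearer than producing at home
print("Domestic            :", landed_cost(False, 0.0))  # 10.0
```

With the tariff in place, the labour-cost saving no longer pays for relocating the capital - which is exactly the protective effect described above.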

So outsourcing at its worst can lead to diminishing returns. You can develop into a society made up of very rich capital owners but with a universally poorer working and middle class - that is, the wage-earning classes.

Luckily, New Zealand is somewhat immune to (potentially) catastrophic outsourcing, because the country's capital is literally locked to the nation's ground: New Zealand is heavily invested in agriculture and tourism. However, because New Zealand is so openly linked with China, it should continue to focus its capital investment on these "protected" areas so as to ensure that its capital stays in the country, and likewise translates into sustained gains for the wage-earning class.

Sunday, May 30, 2010

Some fundamental thoughts on human intelligence

Andrew Atkin:

Some people argue that there are real and substantial genetic differences in intellectual capacity between different people. On a base level, I doubt a substantial difference exists.

Explaining:

The human brain is an expensive appendage. It consumes about 20% of our metabolic energy. Hence, there must be significant evolutionary pressure to stop it from growing unnecessarily large. However, the brain itself is only the hardware. The software of the brain is the in-built architecture that also dictates our intellectual capacity. We can assume that the software (in terms of in-built architecture) is nearly perfect amongst most people simply because good software does not come at an adaptive cost. Better software should be heavily reinforced (in evolutionary terms) because it provides only 'advantage' and virtually no 'disadvantage' in itself. So in terms of architecture, the human brain should be essentially optimised for all specimens - notable imperfections in architecture should have wiped themselves out long ago.

So, considering that the human brain is virtually the same size (as a ratio to the brain-stem) in all human animals, and considering that all human brains are made of the same neurological matter, and considering that there is no reason to believe that there would be any substantial difference in the quality of the neurological architecture amongst different humans, I would in turn question any claim of a fundamental genetic superiority of any individual or human group over another. That assertion simply does not seem rational.

So where are the differences?

Obviously there is a difference between intellectual types (people possessing different programmes designed to process different things). The most commonly observed differences are between the sexes.

I would also guess that there could be random differences amongst individuals as well. The human animal is a group animal, and it makes sense for evolution to exploit our capacity to specialise; so evolution may have provided some in-group variability, to enhance specialisation, that goes beyond the differences seen between the sexes.

There are also bound to be at least some differences between races that have partly evolved in substantially different environments - that is, a different collection of intellectual strengths and weaknesses.

However, I think the biggest differences in mental ability that we see manifest amongst different people come from environmental influences, and maybe especially epigenetic influences (the epigenetic has often been confused with the genetic - we now know that the womb environment switches all kinds of genes on and off, depending on its status). And almost certainly the overwhelming factor leading to a compromised brain is deprivation. [See Understanding Mental Sickness in my June index.]

How do we measure?

There is no substantial way to measure the difference between different brains. The only thing we really know that an IQ test tests for is the individual's ability to perform on it. Though, of course, serious retardation is obvious through casual observation.

Culture:

Our definition of what intelligence is comes mainly from what our current society requires. For example, if we lived in a world where 99% of occupational demand was for computer programmers, then we would tend to define intelligence through a test that indicates an individual's ability to perform in that specific role.

Cultural requirements lead to labelling - and the labelling is no doubt just a part of the incentivising process which encourages people to conform to their society's needs.

However, as I have tried to show, the labelling of different people as 'bright' or 'dumb' is probably quite erroneous in terms of its ultimate accuracy. Mislabelling can also have the effect of undermining people's confidence on false grounds, leaving people with otherwise ample capacity demotivated by a false belief in their inherent lack of ability. Obviously I think this is something we should be careful about.

---------------------------------------------------------------------

Addition: 2-3-11:

How do you make an IQ test?

John Taylor Gatto, the famous educational critic, has commented that the IQ-test does not validate the intelligence bell-curve theory, but claims that the IQ-test was only derived from it.

Meaning: the presumption that differences in intelligence within a given population fall along a bell-curve is first taken as a given; then the IQ-test is (or has been) developed by chopping and changing the questions within the test until trial administrations eventuate in a bell-curve result. In other words, it's one object of b.s. measured up against another object of (probable) b.s. to "prove" what is almost certainly, at base, total b.s.
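For what it's worth, the calibration process Gatto describes can be sketched as a simple hill-climbing loop. Everything in the toy Python below - the simulated population, the pass/fail item model, the moment-based measure of 'bell-shapedness' - is my own invented stand-in, not a description of how any real test was built:

```python
# Toy simulation of "chop and change the questions until the scores
# come out bell-shaped". All modelling choices here are invented.

import random, statistics

random.seed(0)
population = [random.random() for _ in range(300)]   # latent "ability", deliberately uniform

def total_score(ability, difficulties):
    # Pass/fail items: you score a point on each question easier than your ability.
    return sum(1 for d in difficulties if ability > d)

def distance_from_normal(scores):
    # Crude moment-based fit: a normal distribution has zero skew
    # and zero excess kurtosis.
    m, s = statistics.mean(scores), statistics.stdev(scores)
    z = [(x - m) / s for x in scores]
    skew = sum(v ** 3 for v in z) / len(z)
    kurt = sum(v ** 4 for v in z) / len(z) - 3
    return skew ** 2 + kurt ** 2

items = [random.random() for _ in range(40)]         # initial question difficulties

def fit(difficulties):
    return distance_from_normal([total_score(a, difficulties) for a in population])

for _ in range(200):
    candidate = list(items)
    candidate[random.randrange(len(candidate))] = random.random()  # swap one question
    if fit(candidate) < fit(items):
        items = candidate        # keep any swap that makes the histogram more bell-shaped
```

The point of the toy: the loop never measures anything about the population itself; it only reshapes the test until the output matches the presumed curve.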

Is Gatto right on that? I would say he is bound to be, because there is in fact no way to measure intelligence from a position that is not ultimately subjective. And there is no way to validate the pure presumption that human intelligence follows a bell-curve structure in terms of social differences.

In my view, for example, the differences in intelligence between various healthy human minds will only be marginal, and what we recognise as "genius" is just the effect of a relatively intense developmental specialisation (most obvious in idiot-savants). So am I right? You, me and everybody else can only guess. Nothing in this territory can be measured without a subjectively derived measuring stick.

However: You will almost certainly find broad correlations between how individuals perform on IQ-tests, and how they perform in the real world with respect to professional and academic success. But this will only be because you can expect a correlation between intelligence and real-world performance on any test requiring some mental aptitude. It is true that intelligent people will tend to do better on IQ-tests (or any test requiring some essential intelligence) in terms of averages.

So, IQ tests may have a place for broad studies of populations*, but they should never be taken too seriously as a measuring stick for isolated individuals. An individual with a high IQ can still be rather stupid, and vice versa.

*Even then, they must be taken with a pinch of salt due to cultural differences, differences in developmental histories, etc. For example, a brilliant mind will never be good at maths if it has never actually done any. You can't test in isolation from history.

---------------------------------------------------------------------

Addition: 4-6-11:

What actually is intelligence?

At base, I don't think it's anything special. Intelligence is ultimately just an internalisation (through memory), and finally a simulation, of the external environment. Example: I remember a while back reversing my car into a park. Before I did it I naturally simulated the event in my mind, so I then did it efficiently in practice.

I think it's a safe assumption that this is how advanced predators operate, and why they tend to be more intelligent than their prey. They simulate what they are about to do before they do it so they can be fast and precise in practice, as most of their neurological processing for the attack has been taken care of in advance.

The more developed and powerful the simulator (brain), the further we can look into the future. I would say that human intelligence evolved from pressure for our species to do exactly that. Humans, obviously, have incredibly long-range perception (simulation) and to a point where we can 'experiment'. Much (maybe all?) thinking is just experimenting with different simulated scenarios. And we do this of course in relation to both the social and physical worlds.

As humans simulate different but common scenarios we obviously get more efficient at it, as our brains learn to 'crop back' on neurological processes (data compression) that are not required for an accurate simulation of the real world. Repetition will facilitate cropping and therefore efficiency.

However, the potential for accurate simulation will of course be dependent on receiving good original information. For example, if a mother doesn't interact with her baby in the earliest months of its life, then that child will never be able to develop a proper "social simulator" and will always struggle to relate to (simulate) other people as an adult.

However, what I am talking about is the computer. So what about the "person" behind it? Does the consciousness that views the memory-sequence have an interactive role to play whereby it works with the simulations on some unknown/unknowable level? Who knows.

Wednesday, May 19, 2010

Planned obsolescence - get rid of it!

Andrew D Atkin:

One of the reasons why we have all those mass-production plants churning out all those goods is that those goods have been carefully designed with strategic weaknesses which ensure they will fail after a certain amount of use, forcing the consumer to repurchase the product. A classic example is the light bulb: make the filament just slightly thicker and it will last vastly longer.

Why on earth do we tolerate this crazy waste? Where are the durability regulations?

Building products to last is not hard. All you have to do is throw in a little more material mass at the (otherwise) weak points, and your product could last you a virtual lifetime. And I can assure you - that little bit of (comparative) over-building adds next to nothing to the overall construction cost of the device.

Okay, some products must wear out in any circumstance, in particular where moving parts interface and therefore induce mechanical wear. However, products can easily be designed so that mechanical interfaces (and other must-perish components) can be easily replaced. What's more, YouTube videos can be created to show anyone how to perform a specific replacement for any given product component (think of a TV cooking show - monkey see, monkey do!). Furthermore, with the internet, the exact component you require can be effortlessly purchased online - you could just dial up a schematic of your product by typing its model number into a product-based search engine, and then click on the specific component that you need. It should be as simple as that.
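To make the idea concrete, the lookup service imagined above might work something like the following Python sketch. The catalogue, the model number, and the part records are all hypothetical - no existing service or schema is being described:

```python
# Toy sketch of a product-schematic parts lookup.
# Model numbers, components and prices are all made up.

SCHEMATICS = {
    "DW-4500": {                          # a hypothetical dishwasher model
        "door-seal":  {"part_no": "DW4500-DS1", "price": 12.50},
        "pump-motor": {"part_no": "DW4500-PM2", "price": 48.00},
    },
}

def find_part(model_number: str, component: str) -> dict:
    """Return the orderable part record for one component of one model."""
    try:
        return SCHEMATICS[model_number][component]
    except KeyError:
        raise LookupError(f"No part '{component}' listed for model '{model_number}'")

print(find_part("DW-4500", "door-seal"))
# -> {'part_no': 'DW4500-DS1', 'price': 12.5}
```

Type in the model number, click the component, order the part - the whole "repair instead of replace" workflow reduces to a simple lookup.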

We shouldn't have to throw away an entire product over one or two worn-out components, in the same way that we don't throw away our cars just because the tyres have gone bald. Today more than ever, there is no excuse for planned obsolescence. There is no need for a "throw away" society like the one that exists today. It is, or could be, simply too easy to maintain our stuff.

Motive for planned obsolescence:

The motive for planned obsolescence is clear enough. A lack of planned obsolescence can quickly lead to a glut of the product the manufacturer is selling, and that glut will instantly threaten the sale-price of any more products coming onto the market. This is a big deal if you've already sunk, say, $50m into your production plant. You don't want to destroy your own demand.

Effect of removing planned obsolescence:

What would happen if we managed to get rid of planned obsolescence (probably via government regulations, and publicly advertised durability ratings)?

With minimal population growth, our existing production plants would be forced to slow down to meet the reduced demand. Yes, there would be a reduction in economic activity, but only for the right reasons - we just wouldn't need to produce and consume so much to achieve a given living standard. In the end, we would work less but be just as rich.
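A back-of-envelope calculation shows the scale of the effect. The household count and lifetimes below are invented for illustration; the point is only that steady-state replacement demand is roughly stock divided by product lifetime:

```python
# Steady state (stable population, saturated market): annual replacement
# demand is roughly stock / product lifetime. Figures are illustrative only.

HOUSEHOLDS = 1_500_000           # assumed appliance-owning households

for lifetime_years in (5, 15):   # with vs without planned obsolescence
    units_per_year = HOUSEHOLDS / lifetime_years
    print(f"{lifetime_years}-year product: ~{units_per_year:,.0f} units/year")

# Tripling the lifetime cuts required production (and the work behind it)
# to a third, for the same living standard - "work less but be just as rich".
```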

There is the argument that planned obsolescence facilitates progress by creating an enduring high demand for new products, and therefore creates the opportunity for products to be further developed. I don't buy this argument at all (no pun intended), as I will explain:

Modern computers are made obsolete not because they wear out, but because much better computers can be purchased to replace them. This is the kind of progress that we would see with the removal of planned obsolescence. There would be less incremental change (over time) with the products that we buy, but when the change does occur we will see much more substantial advancements - improvements substantial enough to allow a significant portion of the market to sell off their perfectly well-functioning dishwasher (or whatever) for the state-of-the-art new model.

So progress will happen in fewer but bigger steps, rather than many small steps. Indeed, you could assume that progress will happen more aggressively, because the pressure will be on engineers to make truly significant improvements to their products, and society will also have more resources to facilitate that progress via the improved overall efficiency that comes from the removal of waste.

However, with the removal of planned obsolescence you will get a small reduction in economies of scale - production plants will tend to be geared for smaller production runs, and that means less investment in extreme, high-speed automation in the production process. So yes, this will make products a little more expensive to buy outright, but not by much, because the global demand for products in general is still going to be massive in any circumstance (we're 6+ billion and growing!). So, in turn, you would still tend to see highly efficient mass-production invested in good-quality products.

Regardless, even if those new products are a bit more expensive, you would of course be purchasing them less often, and you could still pick up anything you need second-hand with confidence that it will last for a very long time.

Conclusion:

Getting rid of planned obsolescence is a winner on every count. My conclusion is that we, as a society, should stop it via political pressure as soon as we possibly can. It really is just crazy.