Saturday, 14 November 2009

Google Go! on Mac OS-X

Google have just released a new programming language called Go.

I decided to install it and learn it.

This is what I did on Leopard 10.5.8. It took 5 easy steps (thanks to Kelvin Wong).

Step 1.

I installed Mercurial (the Google instructions to use easy_install did not work for me).
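
If easy_install fails for you too, one alternative (assuming you have MacPorts installed - a suggestion, not what the official instructions say) is:
sudo port install mercurial
Fink users may find a mercurial package there instead, and there are also OS-X packages on the Mercurial site.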

Step 2.

I added these lines to ~/.profile (Kelvin added his to ~/.bash_profile).

To determine what file to edit or create requires you to follow a procedure because bash only reads in the first one it finds:

If you have a ~/.bash_profile file, edit it.
Else if you have a ~/.bash_login file, edit it.
Else if you have a ~/.profile file, edit it.
Else if you have a ~/.bashrc file, edit it.
Else you may create one of the above.

I already had a ~/.profile file since I use fink and it creates one. Others have used ~/.bash_profile or ~/.bashrc.
export GOROOT=$HOME/go
export GOARCH=386
export GOOS=darwin
export GOBIN=$HOME/bin
export PATH=$GOBIN:$PATH
I then closed and re-opened the terminal session.

I checked that it worked with
env | grep '^GO'
Step 3.

I manually created ~/bin and followed Kelvin's recommendation to set its permissions with
mkdir ~/bin
chmod 755 ~/bin
Step 4.

I downloaded the Go repository using mercurial.
hg clone -r release https://go.googlecode.com/hg/ $GOROOT
This printed the following:
requesting all changes
adding changesets
adding manifests
adding file changes
added 4016 changesets with 16888 changes to 2931 files
updating working directory
1640 files updated, 0 files merged, 0 files removed, 0 files unresolved
Step 5.

I built the Go compiler.
cd $GOROOT/src
./all.bash
During this step, OS-X asked me to allow an incoming connection to an application called 8g.

This didn't seem to work as I got this error:
FAIL: http.TestClient
Get http://www.google.com/robots.txt: dial tcp www.google.com:http: lookup www.google.com. on 10.x.x.x:53: no answer from server
This group discussion on the error suggests that Go is actually built but that the Go DNS resolver may not work on OS-X just yet.

It will work if 'Allow all incoming connections' is selected in the Firewall section of the Security preferences.
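
Once the build finishes (even with that test failure), it is worth checking that the toolchain works end to end. Here is a minimal sketch, assuming the 386 toolchain built above (8g and 8l are the compiler and linker for GOARCH=386). Put this in hello.go:

package main

import "fmt"

func main() {
	fmt.Println("Hello from Go on OS-X")
}

Then compile, link and run it:
8g hello.go
8l hello.8
./8.out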

Sunday, 25 October 2009

Google's ChromeOS

In case you missed the news, Google has announced that it will build a secure operating system for, well, any device: desktops, netbooks, tablets, phones.

Basically, it is
"Google Chrome running within a new windowing system on top of a Linux kernel"
and for security they are
"completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates."
As you might imagine, this has stirred quite some commentary from magazines and bloggers: Gizmodo and TechCrunch have covered it, and Wikipedia always has something to offer.

But I think most have missed the other key ingredients - everyone but ToxProX at least.

Google is not just working on a new OS. It is also working on at least two other related projects: Native Client and O3D.

O3D

I'll cover O3D first since I don't have much to write. The web site suggests that it stands for 'Open web standard for 3D graphics'.

It is a browser plugin that allows web developers to add 3D graphics to their application. I think the short video demo describes it best.

Basically, GPU accelerated 3D graphics for your browser.

Native Client

Native Client, also known as NaCl (but never as Salt), is a way to run 'normal' programs safely.

A normal program is a word processor; a game; a photo editor; a VoIP client; a movie maker; or a 3D earth browser. Most of the software you use consists of normal programs, which are usually compiled to machine code for your CPU type and your operating system.

What is Native?

A native program runs directly on the CPU, not inside another program that decodes the instructions and then performs the operations.

That's not very clear so I will try a metaphor:

It is like reading a book. A book written in English is easily read by someone who understands or natively speaks English. Give them a book in German, and a German-English dictionary, and they could also read the book - but much slower.

For each word that they have not seen before, they would have to look up the word in the German-English dictionary, read the English meaning and then decode the meaning of the German sentence. They would have to do this for every sentence.

Initially they would be slow, but as their German vocabulary grows their sentence translation speed would improve because they are optimising the process of looking up the English meaning of a German word by memorizing it. But they will never be as fast as a native speaker since they are always translating.

Many programs written in Java, JavaScript, Python, Ruby, C#, Lisp, Perl and Flash work this way. The program is written in a non-native language and another piece of software does the translation. Fast CPUs and clever optimizing techniques allow them to run quickly, but a native program that did the same thing would run at least twice as fast, and more often 10 times faster.

(A side note here is that CPUs are not getting much faster any more, which is why language developers are working on making their interpreters faster or getting their compilers to generate faster, and often smaller, code.)

So, why aren't all programs written to run natively? The answer is portability. Interpreted programs can generally run on any operating system and on any CPU. Speed is traded for portability. It also means that we often lose the benefits of hardware-accelerated graphics.

Native Client CPU Support

Most home and business computers use just 2 types of CPUs: x86-based and ARM-based.

Intel, AMD and some other manufacturers make x86 CPUs which are generally used in servers, desktops and more recently netbook computers.

ARM licenses their designs to many manufacturers, which integrate various modules and produce very low-powered chips for use in mobile devices such as iPods and mobile phones.

NaCl is being built for these two CPU architectures. This doesn't prevent future support for other CPUs like IBM's PowerPC or Sun's Sparc or Sony's Cell processor.

Native Client Security

Back to Native Client. This could be part of web 3.0: All applications loaded from the web, running securely in the browser at native application speeds? Maybe web 2.5?

NaCl tackles security in a new way. The programs are run in a sandbox. The sandbox disassembles the program, enforces memory access rules, rejects code that doesn't obey strict rules designed to prevent it jumping out of the sandbox, and only allows interaction with the real world through a limited API back to the browser or perhaps ChromeOS.

This solution has some interesting benefits:
  • There is no need to enforce a trusted development chain so developers don't need a special, trusted compiler and developer certificate. Signed applications are unnecessary.
  • Buggy code cannot crash the OS, nor can it do any damage, since it is running in a sandbox which does not have access to hardware or the OS.
  • Malware can't make use of bugs to gain privileged access to the OS, and the sandbox ensures all code stays in the sand. So malware cannot spread itself, access files on the OS or leave the sandbox (or is it a salt box?). Could this be the end of Malware as we know it? I think Google thinks so.
Write Once, Run Anywhere

An unrealized dream of software architects is to write a program once, and to be able to run it on anything. Native Client makes this a reality. It will be a new program format that allows an application to run in any web browser (with a NaCl plugin, I guess) on any operating system and on any hardware.

It also means that software will no longer need to be installed.

Enter Google Gears

Google Gears allows you to run your Google applications while off-line. It provides a database for local caching of data, HTML, images, JavaScript and perhaps NaCl programs as well.

Once you are back on the net, your locally generated data will synchronise with your on-line data.

Say someone creates a game for Native Client. You would agree to the license (if any), make any payment and run the game. The game would be cached locally so you won't need to download it each time you want to play, and this cache will allow you to use it when your computer is not connected to the web.

Now if the developer fixes a bug, or adds a new feature, the browser will check to see if the cached version is up-to-date. If not, it will automatically download and cache the latest version.

If this is how it might work, Google have just disrupted the whole universe of content distribution. There is no need for software installation. No need for update services. No need for fancy package management like Debian's APT. It will be like a universal version of iPhone applications.

So you can look forward to Google Earth, Picasa and Google Office applications all running from the web without having to install them and keep them up-to-date.

So, if programs run fast, run anywhere, are secure and don't crash the browser or OS, why wouldn't you use it for other parts of an operating system? Did I mention that NaCl also supports POSIX threading and IO? Now it just needs a hardware layer, device drivers and a GUI.

We already know that ChromeOS will be based on Linux so just the GUI remains to be built. Or does it?

Chrome is the GUI

What if Chrome (the browser) IS the GUI? It already has a window manager - we call them tabs. They can be pulled out of the browser, minimised and resized - just like a window manager.

It already has a scripting language - JavaScript, and Chrome's V8 engine is fast.

It already supports HTML5, which covers off video, audio and virtually everything that Adobe's Flash does.

And, it is getting hardware accelerated 3D graphics in the form of O3D.

A version of the Chrome browser was released for Chrome OS recently. It was a .deb package, so it is most probably designed for Ubuntu. It is not much different from the standard Chrome browser, but it does have a clock display, a battery indicator and a hint of network settings.

My guess - there is no window manager apart from the Chrome browser.

In Summary

Native Client promises much:
  • Programs will run fast.
  • Any compiler can be modified to produce NaCl code.
  • The compiler will make a program that will run on ARM and x86 (for now) CPUs.
  • The program will run in a sandbox that restricts what it can do.
  • Buggy programs can do no damage.
  • Malicious programs like viruses and worms can do no damage, nor can they spread or modify files on your OS.
  • It may be the enabling technology for ChromeOS.

Sunday, 6 September 2009

Running an SNMP Agent on OS-X Leopard 10.5.x

I have been writing an awk script to parse SNMP data. The data is collected using snmpbulkwalk and piped through the awk script; the output can be saved to a file or piped through some XML processor.

e.g.
snmpbulkwalk -v2c -cpublic -OXf some.ip.address > machine.snmpwalk
snmpbulkwalk -v2c -cpublic -OXf some.ip.address | awk -f script.awk | xmllint --format - > machine.snmpwalk.xml
To test it I wanted an SNMP agent running on OS-X. I found this web page helpful, so I thought I would document my experience here.

Unix Alert

This post assumes that you understand most of the jargon and Unix commands that come with OS-X. I also use programs from the fink project.

Edit /etc/hostconfig

I used joe (from fink), but you could use nano or any other text editor.
  1. sudo joe /etc/hostconfig
  2. Change SNMPSERVER=-NO- to -YES-
NB: I haven't rebooted yet so I don't know if the SNMP agent starts automatically. For me this is not important since I am only testing and do not actually want to collect SNMP data.
# This file is going away

AFPSERVER=-NO-
AUTHSERVER=-NO-
AUTOMOUNT=-YES-
NFSLOCKS=-AUTOMATIC-
NISDOMAIN=-NO-
TIMESYNC=-YES-
QTSSERVER=-NO-
WEBSERVER=-NO-
SMBSERVER=-NO-
SNMPSERVER=-YES-

Edit /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist

To make the SNMP agent start at boot time I found this post which explains that you need to edit /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist and to change the element following the Disabled key from true to false.
<key>Disabled</key>
<false/>
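
Alternatively - a hedged suggestion, since I stuck with editing the plist - launchctl's -w flag overrides the Disabled key and starts the daemon in one step:
sudo launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist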
Creating the SNMP Agent Configuration Files

You could do it manually, but why would you?
sudo /usr/bin/snmpconf -i
Notes
  • It asked me to merge in an existing file, /etc/snmp/snmpd.conf, which I did.
  • I only created the snmpd.conf file.
  • I only changed the Access Control Setup (but you could do this manually if you prefer).
  • I later manually edited the file to change the location and contact details.

Here is a shortened log:
fox:pc-snmp2xml phil$ sudo /usr/bin/snmpconf -i

The following installed configuration files were found:

1: /etc/snmp/snmpd.conf

Would you like me to read them in? Their content will be merged with the
output files created by this session.

Valid answer examples: "all", "none","3","1,2,5"

Read in which (default = all):

I can create the following types of configuration files for you.
Select the file type you wish to create:
(you can create more than one as you run this program)

1: snmpd.conf
2: snmptrapd.conf
3: snmp.conf

Other options: quit

Select File: 1

The configuration information which can be put into snmpd.conf is divided
into sections. Select a configuration section for snmpd.conf
that you wish to create:

1: Access Control Setup
2: Extending the Agent
3: Monitor Various Aspects of the Running Host
4: Agent Operating Mode
5: System Information Setup
6: Trap Destinations

Other options: finished

Select section: 1

Section: Access Control Setup
Description:
This section defines who is allowed to talk to your running
snmp agent.

Select from:

1: a SNMPv3 read-write user
2: a SNMPv3 read-only user
3: a SNMPv1/SNMPv2c read-only access community name
4: a SNMPv1/SNMPv2c read-write access community name

Other options: finished, list

Select section: 3

Configuring: rocommunity
Description:
a SNMPv1/SNMPv2c read-only access community name
arguments: community [default|hostname|network/bits] [oid]

The community name to add read-only access for: public
The hostname or network address to accept this community name from [RETURN for all]:
The OID that this community should be restricted to [RETURN for no-restriction]:

Finished Output: rocommunity public

Section: Access Control Setup
Description:
This section defines who is allowed to talk to your running
snmp agent.

Select from:

1: a SNMPv3 read-write user
2: a SNMPv3 read-only user
3: a SNMPv1/SNMPv2c read-only access community name
4: a SNMPv1/SNMPv2c read-write access community name

Other options: finished, list

Select section: finished

The configuration information which can be put into snmpd.conf is divided
into sections. Select a configuration section for snmpd.conf
that you wish to create:

1: Access Control Setup
2: Extending the Agent
3: Monitor Various Aspects of the Running Host
4: Agent Operating Mode
5: System Information Setup
6: Trap Destinations

Other options: finished

Select section: finished

I can create the following types of configuration files for you.
Select the file type you wish to create:
(you can create more than one as you run this program)

1: snmpd.conf
2: snmptrapd.conf
3: snmp.conf

Other options: quit

Select File: quit


The following files were created:

snmpd.conf installed in /usr/share/snmp
Manually Editing the SNMP Agent Configuration File

I used joe (from fink), but you could use nano or any other text editor.
sudo joe /usr/share/snmp/snmpd.conf
Starting the SNMP Agent

Note: This is also required to start the agent after a reboot.
sudo /usr/sbin/snmpd
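
To confirm that the agent is running, this should list the snmpd process (the bracketed pattern is a trick that stops grep matching its own command line):
ps ax | grep '[s]nmpd'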
Testing the SNMP Agent
snmpbulkwalk -v2c -cpublic 127.0.0.1
You should see something like this:
SNMPv2-MIB::sysDescr.0 = STRING: Darwin fox.local 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.255
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (173086) 0:28:50.86
SNMPv2-MIB::sysContact.0 = STRING: bill
SNMPv2-MIB::sysName.0 = STRING: fox.local
SNMPv2-MIB::sysLocation.0 = STRING: Redmond

SNMP to XML

In a later post, I will publish my SNMP to XML script.

UPDATE: Instead of doing this, I started an Open Source project which can be found here.

Saturday, 29 August 2009

Apache Tomcat 5.5 Installation on Mac OS-X Leopard 10.5.x


I needed to get Apache Tomcat 5.5 running on my Mac to experiment with Orbeon.

This was the most helpful information that I found, but I needed some extra steps.

1. Download Apache Tomcat 5.5 from here. Use the zip file since, for some reason, the README says that the tar shipped with OS-X is no good.

2. Move the downloaded zip file to where you want to unpack it. I chose my home directory, which is /Users/phil

3. Open it and it should expand into a directory like /Users/phil/apache-tomcat-5.5.28

4. Browse to /Users/phil/apache-tomcat-5.5.28/bin

5. Open startup.sh with TextEdit.app

6. Add the following lines to the file after the EXECUTABLE line:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.5/Home
export CATALINA_HOME=/Users/phil/apache-tomcat-5.5.28

You will need to use the location where you placed the tomcat directory rather than mine.

7. I had to do three more things at this point to make all the shell (.sh) files executable.
1. Open Terminal.app
2. cd /Users/phil/apache-tomcat-5.5.28/bin
3. chmod u+x *.sh

8. Start tomcat this way:
./startup.sh

I had another problem at this point. I already had an eXist server running on port 8080. eXist is included in Orbeon as well.

To change the port I needed to open /Users/phil/apache-tomcat-5.5.28/conf/server.xml and edit the line containing
Connector port="8080" maxHttpHeaderSize="8192
I changed the 8080 to 8180. You can choose almost any other port if that suits your situation better.

I still couldn't get tomcat to run. I found that it was already running so I had to kill it and then restart it.

I used the following command in Terminal to locate the program:
ps | grep tomcat
This generated the following lines. Ignore the 'grep tomcat' one and note the first number on the second line - 838 in my case.
3821 ttys000 0:00.00 grep tomcat
838 ttys007 0:10.71 /System/Library/Frameworks/JavaVM.framework/Versions/1.5/Home/bin/java -Djava.util.logging.config.file=/Users/phil/apache-tomcat-5.5.28/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/Users/phil/apache-tomcat-5.5.28/common/endorsed -classpath /Users/phil/apache-tomcat-5.5.28/bin/bootstrap.jar -Dcatalina.base=/Users/phil/apache-tomcat-5.5.28 -Dcatalina.home=/Users/phil/apache-tomcat-5.5.28 -Djava.io.tmpdir=/Users/phil/apache-tomcat-5.5.28/temp org.apache.catalina.startup.Bootstrap start
Now I had to kill the server with this command:
kill 838
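
In hindsight, a cleaner way to stop Tomcat is the shutdown script that ships alongside startup.sh (the chmod in step 7 should have made it executable):
cd /Users/phil/apache-tomcat-5.5.28/bin
./shutdown.sh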
Now I could restart tomcat and see the default home page.

I then pointed my browser to http://localhost:8180 and the default page was displayed.

Wednesday, 12 August 2009

QR Codes

You may have noticed that I have added a QR code for mobile phones. It doesn't do much, other than allow your QR-enabled mobile phone to open this blog.

What is a QR Code?

I'll quote from Google's Chart API page:

QR Codes are a popular type of two-dimensional barcode, which are also known as hardlinks or physical world hyperlinks. QR Codes store text, which can be a URL, contact information, telephone number, even whole verses of poems!

QR codes can be read by any device that has the appropriate software installed. Such devices range from dedicated QR code readers to mobile phones.

QR code is trademarked by Denso Wave, Inc. As you'd expect, the Denso Wave website includes a lot of useful information about QR codes.

So, anyone can turn almost any smallish textual content into a QR code.

Making Them

You can make your own QR codes easily with this online tool.

Google also provides a handy service to create your own. You can add the URL to any web page as an image, like I did on this blog.
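
For example, you could fetch the QR code for this blog from the command line like this (a sketch; chs is the image size in pixels and chl is the URL-encoded content):
curl -o qrcode.png 'http://chart.apis.google.com/chart?cht=qr&chs=150x150&chl=http%3A%2F%2Fphilatwarrimoo.blogspot.com%2F'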

Here are some more examples

A link to a web page:

http://chart.apis.google.com/chart?cht=qr&chs=350x350&chl=http%3A%2F%2Fphilatwarrimoo.blogspot.com%2F

Send an email to Bill:

http://chart.apis.google.com/chart?cht=qr&chs=350x350&chl=mailto%3Abillyg@microsoft.com

Phone Bill:

http://chart.apis.google.com/chart?cht=qr&chs=350x350&chl=tel%3A%2B611555123456

Add Bill's contact details:

http://chart.apis.google.com/chart?cht=qr&chs=350x350&chl=MECARD%3AN%3ABilly+G%3BORG%3AMicrosoft%3BTEL%3A%2B611555123456%3BURL%3Awww.closedsource.com%3BEMAIL%3Abillyg%40microsoft.com%3BADR%3A1+Wayto+Dr.%3BADR%3A1+Wayto+Dr.+Meekrae+ZE%3BNOTE%3AOnly+phone+on+Tuesdays%3B%3B

And don't forget Bill's birthday:

http://chart.apis.google.com/chart?cht=qr&chs=350x350&chl=BEGIN%3AVEVENT%0D%0ASUMMARY%3ABill's+Birthday%0D%0ADTSTART%3A20090930%0D%0ADTEND%3A20090930%0D%0AEND%3AVEVENT%0D%0A

Go here for other formats.

Monday, 10 August 2009

Energy Consumption of a new Washing Machine

About 6 months ago we bought a new front-loading washing machine. It is an Electrolux EWF1282.

I have been measuring its power consumption for over a month and today I looked at the results.

I collected data over 43 days.
It used 1.7 kWh. This is about 40Wh per day.

A friend has suggested that this figure looks too good so I will measure it again. The cold-wash cycle has been tested to consume about 300Wh per wash for an 8kg load, so a figure of 40Wh per day seems too good to be true.

It uses less than 70L/wash (varies depending on load). We wash, on average, at least one load per day in cold water.

The machine has run for nearly 48 hours over this 43 day period which means that it averages about 35.5W when it is running.
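
For anyone checking my arithmetic (bc is the standard Unix calculator; scale sets the number of decimal places):
echo 'scale=1; 1700/43' | bc   # Wh per day: 39.5, i.e. about 40Wh
echo 'scale=1; 1700/48' | bc   # average watts while running: 35.4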

We think that the clothes wash just as well as they did in our old top-loader and we hope that it is gentler on the clothes. By the amount of lint in the filter so far, this is probably right.

So the washing machine uses less energy when running than an Apple MacBook Pro 15"; 3 times as much as our Netgear DG834GV ADSL router; and about as much as our 2 kitchen compact fluros.

Our old washing machine consumed 240 Wh/day running at about 250W on average.
The old machine also used about 140L/wash.

So the new machine is much better: half the water consumption, and about one-sixth of the energy consumption of the old one.

Saturday, 25 July 2009

Our First Solar Energy Credit


At the end of March we installed 1500W of solar panels and a 2kW grid-connected inverter. As of today it has supplied nearly 500kWh of energy.

Yesterday we received our first electricity bill that included a credit for the solar-generated electricity that we have sold to the grid. We have generated about 3.8kWh per day since the feed-in meter was installed. This is about two-thirds of our daily consumption.

We also we notified that electricity and service charges have been increased 21.4% as of July. We are now paying about 20c per kWh including GST.

Saturday, 27 June 2009

Cloud Computing

Cloud computing is all about running a service or a server using a pool of computers. The computers could be your own or you could lease time on a commercial cloud.

Who's Who


Some of the big names in this space are Amazon, Google, GoGrid and ElasticHosts. Apart from Google, they allow you to run a complete operating system and whatever other software you like on their infrastructure - which is why it is called Infrastructure as a Service (IaaS). Google's is a bit different: it is a Platform as a Service (PaaS). Google also offers applications like Google Docs (word processor, spreadsheet, etc.), which is known as Software as a Service (SaaS).

Infrastructure as a Service

Infrastructure as a Service interests me at the moment. It promises to disrupt current practice. No longer does a company need to buy servers for their business. They can lease time from one or more providers without having to outlay any capital. Nor do they need to maintain or upgrade any hardware.

How does it work?


An example may help.
Your IT department wants to upgrade one third of your servers - a normal request that you might get each year. Instead of investing in new machines they suggest that the company leases time on the ElasticHosts (EH) server cloud, with Amazon's EC2 as a backup site.

The IT people would set up accounts, request a certain number of virtual servers and copy the disks of the current servers to EH and EC2. Each virtual server would be configured with the necessary number of CPUs, RAM and network bandwidth. The IT department would then administer the servers from your offices just as if they were real servers: they can start them, pause them and stop them just like a real server.

But, they can also upgrade them in an instant. And, they tell you, it costs 25c per hour for 1GHz CPU, 1GB RAM, 1000GB disk and 100GB network traffic - $180/month. But they say that at night they can turn half the servers off so it would cost about $120/month on average. They can do the same on weekends and on public holidays too.

And if the company had a busy period, they could order more servers or upgrade the existing ones almost instantly and afterwards they would downgrade them.
Private Clouds

It sounds like magic. But these clouds can also be established using your existing infrastructure and if you need additional capacity you can lease it by the hour.

Making a Virtual Server


I wondered how hard it might be to make a disk image to run on a cloud. It turns out to be rather easy.

I have an Ubuntu Linux virtual machine running in VMWare Fusion. It allows me to run Linux on my Mac and it works well (I should mention Sun's VirtualBox here as well, which does the same job and is free and Open Source).

Ubuntu provide a program that makes the creation of virtual machines an almost trivial task.

sudo ubuntu-vm-builder kvm jaunty

'sudo' allows the program to run as an administrator.
'kvm' specifies that I want a virtual machine that will run on a kvm based cloud such as ElasticHosts. I could have used EC2 to make an image that would run on Amazon's EC2.
'jaunty' specifies the version of Ubuntu server that I wanted it to be.


(I used some additional command line options that are not needed.)
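
For the record, the extra options looked something like this (a sketch from memory - flag names vary between vmbuilder releases, so check ubuntu-vm-builder --help; the hostname, user and password values here are made up):
sudo ubuntu-vm-builder kvm jaunty --mem 256 --rootsize 4096 --hostname testvm --user phil --pass changeme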

A little while later, I had a directory containing a disk image and the command necessary to run it. With some other configuration changes it is possible to create a VM in a few minutes.

Since my Ubuntu machine is itself a virtual machine running on a Mac, I did not think that I could test the new VM. But I thought I would try to run it to see what it might do. It began by complaining that it could not find KVM support and then ran the VM in an emulator (QEMU).

The machine booted like a real PC and eventually gave me a login prompt.


Appliances


Some companies are now offering their applications or operating systems as a VM to download - ready to go. VMWare has a large selection of pre-built VMs.

The Future


It looks interesting. Some other work in this area focuses on standardising the management interface for a cloud of VMs, standardising the VMs so that any VM can run on any public or private cloud, and schedulers so that an administrator can prioritise VMs and schedule the start-up and shutdown of any VM. Read more about OpenNebula and Haizea.

Sunday, 21 June 2009

Browser Benchmarks - too many variables

Laptop Battery Benchmark

Slashdot has a link to an article which presents a claim by AMD that the MobileMark 2007 battery benchmarking specification does not represent typical laptop use - in fact, AMD claims that the test basically runs the laptop at idle with the screen dimmed and wifi turned off.

I know I don't get anything near what Apple claimed for my MacBook Pro 2008 - and I run with a dimmed screen, Bluetooth off and with the under-volted processor tweaked down to 600MHz on idle. My laptop typically runs between 15 and 20 degrees C above ambient. For example it is 18 degrees inside and the CPU is running at 34 degrees C.

Whether the claims are true or not, I wonder how the current browser benchmark tests relate to typical use? Is there such a thing as typical use? And how do a laptop's or desktop's energy settings affect the result?

Browser Benchmark Tests

There are a number of browser JavaScript benchmark tests online: V8, SunSpider and Dromaeo. Dromaeo takes too long to run so I have only used V8 and SunSpider.

The V8 benchmark is maintained by Google; there are currently 4 versions of the test and it runs very quickly. SunSpider is built for testing WebKit. WebKit is a branch of KHTML, which Konqueror was built from; Apple bases Safari on WebKit. Dromaeo is built by Mozilla.

I like to experiment. I use Shiretoko (Firefox beta) for Mac mostly. I also have Firefox 3, Safari 4, and a suite of development or experimental browsers: WebKit, Stainless, Chrome and Chromium. All but Firefox are based on WebKit but Google has their own V8 JavaScript engine. I have most of these running on an old PowerBook as well.

I've been interested in how their JavaScript engines are performing, so I occasionally download the latest nightly build and run a quick test. It occurred to me that the results are affected by what else the laptop is doing and how the operating system has throttled the CPU. So I started to fix my CPU speed and shut down most applications before I ran the tests. But all sorts of updating and backup utilities run in the background, and if they decided to start up, the performance test results would be poorer.

I have not read about anyone else fixing their CPU speed before running the test. Perhaps it is not important, perhaps the CPU and throttling techniques know not to adjust CPU frequencies while benchmark tests are running, but somehow I doubt it.

I think we need benchmark tests that ignore other tasks, garbage collection and somehow ensure that CPU frequency and caching does not affect the result - ideally, every time the test is run, the result should be the same. Otherwise there are too many uncontrolled variables that prevent any useful comparison.

How about we put some scientific method back into Computer Science and Software Engineering?

My Results


For those who are interested, here are my rounded results and some graphs to visualize the data. Given the lack of accuracy in the measurements, I suggest allowing for an error of around 20%.

You can see that, for me, Firefox is not performing as well on the tests as the other browsers. This doesn't mean that Firefox is not usable - I use it more than the others combined. It does mean that Mozilla can do better. All the WebKit-based browsers do well. They are more than 10x faster than Firefox 3 on the V8 tests and take 1/5th the time on the SunSpider tests.

The dual core MacBook Pro 2.4GHz is about 10x faster than the PowerBook G4 1.67GHz. This is probably due to the work being done in optimizing the JavaScript compilers for the Intel instruction set - it seems that the PowerPC is not a high priority.

All browsers (except Chrome) run well on the PowerBook which is our main machine.


Postscript

On reflection, if we are to be more scientific then we should have some predictions as well.

Perhaps we can predict the optimum performance of an algorithm or test case running on a particular CPU. We would then have a target for our JavaScript compilers to aspire to. Of course we need to take language overheads, if any, into account.

Friday, 12 June 2009

Google Wave: Killer App

I read an article a few days ago. It said: blah blah Google Wave blah ...

'What is Google Wave' I thought.
I watched this video and noted my thoughts which I edited later:
Different.
OK, nice.
Hmmm. Slick email. Bit scary (the idea of having the message on a server)

OK. I see, it is email and instant messaging (IM).
No wait. It is email, IM and blogging.... and blog feedback as well...

Hang on, it is now a document editor... but others can edit at the same time... and they can discuss points in the document... and it's got version control...

Wow! a context spell checker too.
OK, you can publish docs and update them later.

With some sort of meeting acceptance thingy.
And it has multi-party games.

Spreadsheets and other content in the future.

I'm not surprised now when they add maps and video.

Why forms?
Nice, it can link to other social networking services.

Where are the ads?

I wonder what back-end XML database server they are using?

Stunning! Dynamic translation! They just got the whole world interested.
Ray Ozzie from Microsoft even had some things to say (which I re-state):

Ray starts out by praising 'those' that took it on... it's nice. I don't think Ray used the word Google at all.

He thinks it is anti-web: that complexity is the enemy of the web. If something is complex - many roles, interconnections - then you need Open Source to have many instances, since no one will be able to do an independent implementation.

Fundamental to the web is decomposing things to be simple so you don't need Open Source.

Ray says that the web is about open protocols, open data formats, no opaque packages and payloads being tunnelled. It is simple and out-there.

Later he says that Google Wave and Microsoft Groove are basically the same thing. That Mesh is based on Groove and that Mesh will not do all the things that Wave does or that Groove does but it will be sustainable.

Ray Ozzie built Lotus Notes. In its early days, it was beautifully simple. I liked it and I still do. Notes was way ahead of its time. It was ahead of the web. It was strongly security-minded and client-server based, and Notes supported every relevant open standard that came along. At work we built an Operations Support System (1995 and onwards) with it and it is still supporting the job and fault management systems today.

Notes was great. Ray and his team did a great job. I think Google wave is what Ray would have liked Notes/Domino to be today.

Notes had a back-end database that seems to be like XML in structure and separated the presentation from the data. Wave has the advantage of virtually real-time synchronisation, whereas Notes, due to bandwidth limitations, was 'document' based and asynchronous. This is why Notes had to deal with replication conflicts - something they worked around nicely - but Google, it would appear, doesn't have this problem since it is synchronising at a very low level.

Wave seems to have the following attributes:
  • Hierarchical database in XML.
  • Fine-grained time stamps.
  • Nothing deleted (partially solves replication conflicts and allows playback).
  • Remove edit history by publishing. Published docs retain links to the source Wave and have their own edit/update history.
  • Version control within document (allows playback).
  • Allows extensions (an XML data structure instead of a Blip - the content part of a Wave).
  • Each Wavelet has an Access Control List (ACL) of people and robots that can read the Blips within it.
  • I suspect that the Blips are signed with the author's private key and that other people in the ACL can read the Blip with the author's public key.
  • For security, during transmission, each fraction of a Blip would/could be encrypted with the reader's public key and decrypted with their private key - using TLS or SSL.
  • I think it would be possible to have no ACL so that the document would become a public document, but I would hope that Wave uses a white-list.
  • It seems that a reader is also an editor - no distinction. It would be simple enough to have reader and editor roles.
  • The GUI takes the Wave and formats it for display.
  • Spelling and translation seem to be in the GUI.
  • The Back-end manages replication and updates.
  • The scope of conflicts is eliminated by date-stamped single-character transactions and no actual deletes.
  • Front-end extensions can display other content.

I am wondering.

Has Google hit upon a partial solution for internationalisation (I18N)? Can a Wave, web and native applications use the Google translation service for window titles, menu items and help pages to eliminate bundling languages with an application?

Has Google enabled global collaboration of source code where, say, English comments and strings are translated into Chinese based on the browser user agent language setting?

Is Google Wave the beginning of the end of SPAM?

Monday, 25 May 2009

How to make memorable but secure passwords

Some people, perhaps most, have a system for making passwords. Some systems
involve using the same password everywhere - easy to remember, but if it is
discovered their whole online life is easily accessed. Others have different
passwords and write them down.

My system gives me long, virtually unique passwords which I never need
to commit to paper or an electronic note.

My goals are:

* at least 8 characters
* use uppercase, lowercase, digits and symbols/punctuation
* the discovery of the system should not compromise my passwords
* no need to record any password
* be able to quickly work-out my password for any site

The System

* Make up a memorable code with preferably uppercase, lowercase, numbers and
symbols/punctuation.
* For each site, consistently use some aspect of the site such as 3 or 4
letters/numbers of the site URL - modified in some systematic way - and add
it to your memorable code. Add it using any rule you like.

There is a problem with this system: sometimes sites change their name,
which, for me, has happened once. In that case I did not need to change my
password. And since most sites will send your password to you should you
forget it, you can easily have your old password recovered and then change
it - it doesn't happen often.

Examples

Assume your memorable code is Ab19#z.

Example 1: Use the first, second, second-last and last characters of the
site, added in reverse order, first and last capitalized, insert after the
4th character of your memorable code.

So a password for google.com would be Ab19EloG#z.

And for ibm.com it could be Ab19MbbI#z. (You should have some way to handle
site names that 'fail' your system or require longer passwords than that of
your system).

Example 2: Insert the memorable code into the first and last characters of
the site name.

So the password for google.com would be gAb19#ze.

It goes without saying (hopefully) that you should make up your own system
and you should probably not use my examples.
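
To make Example 1 concrete, here is a minimal bash sketch (a hypothetical
helper for illustration only - again, invent your own rule rather than
reusing this one):

#!/bin/bash
# Usage: ./pw.sh google   ->   Ab19EloG#z
code='Ab19#z'                      # the memorable code
site="$1"                          # e.g. 'google' for google.com
f=${site:0:1}; s=${site:1:1}       # first and second characters
p=${site: -2:1}; l=${site: -1}     # second-last and last characters
# reverse the order and capitalise the (new) first and last characters
tag="$(tr a-z A-Z <<< "$l")${p}${s}$(tr a-z A-Z <<< "$f")"
# insert the result after the 4th character of the memorable code
echo "${code:0:4}${tag}${code:4}"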

Ideas

* Consider using the organisation type or country code.
* Consider using multiple systems. One for important sites and a simpler
system for ad-hoc, single-use and other sites not containing personal data
* Consider a version of the system for your home PC accounts

You should assume that your system could be discovered, so you need to
choose a memorable code that is secure by itself.

If you want to document your system, do so with care. You should not write
it down verbatim - try to obscure it ;-)

Saturday, 23 May 2009

So, you don't use open source software because it is not well supported?

But what support do you get from software you pay for?

Let's start with Vista support.

If your Vista install has a bug or doesn't run some of your purchased applications or crashes, how will Microsoft help?

Their End User License Agreement (EULA) for Vista has the following:

(If you are interested, here is a simple commentary on Windows XP Home)

Length of Warranty: Basically 1 year as I read it.
B. TERM OF WARRANTY; WARRANTY RECIPIENT; LENGTH OF ANY IMPLIED WARRANTIES.
The limited warranty covers the software for one year after acquired by the first user. If you receive supplements, updates, or replacement software during that year, they will be covered for the remainder of the warranty or 30 days, whichever is longer.
Repair: Microsoft will repair or replace it or give you a refund.
D. REMEDY FOR BREACH OF WARRANTY. Microsoft will repair or replace the software at no charge. If Microsoft cannot repair or replace it, Microsoft will refund the amount shown on your receipt for the software. It will also repair or replace supplements, updates and replacement software at no charge. If Microsoft cannot repair or replace them, it will refund the amount you paid for them, if any. You must uninstall the software and return any media and other associated materials to Microsoft with proof of purchase to obtain a refund. These are your only remedies for breach of the limited warranty.
What they warrant it for: Nothing it seems. Microsoft don't warrant that Vista is fit for any task.
G. NO OTHER WARRANTIES. The limited warranty is the only direct warranty from Microsoft. Microsoft gives no other express warranties, guarantees or conditions. Where allowed by your local laws, Microsoft excludes implied warranties of merchantability, fitness for a particular purpose and non-infringement.
So... I hope you kept your receipt showing the amount you paid for Vista, otherwise you will not get any refund, nor will Microsoft need to fix anything, since you have no proof of purchase.

But you actually use more open source software than you think

Web sites

According to NetCraft, about 70% of the million busiest web sites/servers run open source software - Apache Web Server. Of all active web sites the figure is about 50%.

Google and Yahoo use mostly Open Source software to develop and run their services.

Operating Systems

If you run Linux then your Operating System is Open Source.
If you own an Apple Mac then the core of your Operating System (Darwin) is Open Source.
Many companies and web service providers run Solaris. Solaris is now Open Source.

Mobile Phones

If you have a Nokia Symbian mobile phone - your phone's OS is being made Open Source.
If you have an iPhone, the core of the OS (Darwin) is Open Source.
If you have an Android mobile phone, the OS is Open Source.

In fact, your mobile phone service provider is probably running equipment based on the ATCA standardized hardware platform running Carrier grade Linux (CGL) and other Open Source software.

For more information, see the IEEE SCOPE site.

Web Browsers

Firefox, Safari, Chrome and WebKit are Open Source web browsers.

Routers

Some ADSL routers, e.g. Netgear, Linksys and Huawei models, run Linux. Linux is Open Source.

Last but not least...

Almost every PC, mobile phone or PDA runs some version of Java, which is estimated to be installed on 5.4 billion devices. Most of Java is Open Source.

So what do you have to worry about?

The universe runs on Open Source - Your work probably uses it - You already use it, so why not try it out on your current PC, or for that next work project or when you buy your next PC or laptop?

... and you will probably get more support than you do right now.

Real Support

The following companies and organisations develop, support or have donated commercial products as Open Source:
Google
Cisco/Linksys
Apple
IBM
Nokia
Yahoo
Sun (now Oracle)
Sony
Red Hat
Pixar
JBoss
Dell
LG
Samsung
Novell
Mozilla
HP
Intel
NVidia
HTC
Motorola
Texas Instruments
EMC (VMWare)
Microsoft - yes, they are helping as well!
Most (all?) universities
Perhaps it would be easier to list companies not supporting Open Source software.
Want more Open Source software?

Try here.

Some links to quality and popular Open Source software


What application do you want?

Anti virus? Try ClamAV for unix/linux or ClamXAV for Mac OS-X or ClamWin for Windows.

Word processor, Spreadsheet, Presentation etc.? Try OpenOffice

Web browser? Try Firefox, Webkit, Chrome (from Google), Safari (from apple - based on WebKit), Stainless

Mozilla Firefox runs on Windows, Mac OS-X and Linux PCs.
WebKit runs on Windows and Mac OS-X PCs; Safari is probably a better version for most people.
Google Chrome runs on just Windows for now. You can get a beta Mac OS-X version here.
Apple Safari runs on Windows and Mac OS-X PCs.
Stainless (closed source?) runs on Mac OS-X. It is tiny, very fast and very simple. I added it because it is an interesting project.


Graphics editor? Try the GIMP

Media player? Try VLC

Bit Torrent client? Try Transmission (open source).

Sound editor? Try Audacity

Email client? Try Mozilla Thunderbird.

Virtual PC environment? Try Virtual Box

This allows you to run other operating systems (guests) on your current OS (host). For example you might like to run Linux on your Windows PC. Linux would run in a window or full-screen if you like and at the same time you can run all your other Windows applications.

Virtual Box allows you to easily install multiple copies of Linux and even other Windows versions on your current Windows PC. It also works on your Mac so you can run Windows and Linux on your Mac - this is what I do. You can start them up and shut them down just like real PCs.

Friday, 15 May 2009

Sending email to multiple recipients - a better way

Current Practice

Generally when people send email they list the recipients in the TO field. This means that each recipient gets a copy of the email and a list of all the other recipients.

Nothing wrong with this, but if the email is forwarded, a copy of all the other email addresses is generally forwarded as well. Now if the email message is really interesting, it may be forwarded on with an ever-growing list of email addresses to people who may not know who any of the other addresses belong to.

In a perfect world this may be fine, but should this email with lots of email addresses fall into the wrong hands it could end up on SPAM lists or worse: someone could use the chain of email addresses to establish relationships between people in order to launch a more believable attack.

For example, if A sends an email to B, C and D, then a SPAMer could send SPAM to B, C and D and make it appear that the email came from A (and vice-versa). Since B, C and D already know A, the email may get past their SPAM filters and be opened - the SPAMer is now only one click away from launching an attack on their computer.

A Better Practice

Instead of using the TO field, simply use the BCC field and never use the TO or CC fields.

How Does This Help?


Addresses in the BCC field all get a copy of the email but they do not get the list of other people's email addresses - they only see their own email address.

Should they forward the email on, they only forward on their own email address.

If people begin to adopt this practice, there will be fewer email addresses falling into the hands of the SPAMers.
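
The same practice applies if you script your email. With BSD mailx (the /usr/bin/mail that ships with OS-X), the -b flag carries the blind carbon copy list - a sketch with made-up addresses:
echo 'Worth a read' | mail -s 'Interesting link' -b 'b@example.com,c@example.com,d@example.com' me@example.com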

Sunday, 3 May 2009

Another Warrimoo Power Station Online



Our solar photovoltaic system was connected to the grid via a feed-in meter last week.

Initially I was told that they would need to replace my existing 3 phase meters with a single poly-phase meter. But all the contractor did was install another meter.

Integral Energy, the regional electricity provider, installs the feed-in meter in a gross-feed-in configuration. This means that the feed-in meter counts all the energy that we generate and not just the excess energy at any point in time. So if feed-in rates are increased we will be credited for all the energy we produce.

Presently the feed-in rate is roughly the same as the usage rate: 14.62 c/kWh (excl. GST).

http://www.integral.com.au/wps/wcm/connect/8377a4804925a5499a279ff738d2752c/Sunpower+Interconnection+Agreement.pdf

http://www.integral.com.au/wps/wcm/connect/integralenergy/NSW/NSW+Homepage/forHomesNav/Sunpower/

Saturday, 2 May 2009

Online Documentation Conversion Services


I needed to convert an MS Publisher document into a PDF recently. I stumbled upon this site:

FreePDFConvert

It seemed to do a reasonable job.

A friend gave me two other sites that might be useful as well:

DocMorph

Media-Convert

Thursday, 30 April 2009

Costly Enterprise Junk Mail

While I was reading a weekly staff newsletter I thought: what is the productivity lost if all staff actually read this and all the other newsletters and corporate emails each year?

I did the calculation: 13000 staff; 3300 words worth reading per week; 200 words per minute; 8 hours a day; and an average pay of $AU250/day.

If all staff read the corporate junk mail, it would cost $AU5.4M each year!
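
Here is the working (with one assumption of mine: 48 working weeks per year):
echo 'scale=4; 3300/200/60*250/8' | bc           # dollars per staff member per week: about 8.59
echo 'scale=4; 3300/200/60*250/8*48*13000' | bc  # dollars per year for all staff: about 5.36 million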

But not everyone reads them and some just skim them. So if only 50% were actually read, and if only half of those were completely read on average then it adds up to $AU1.4M — still a large number.

I also read from one source that the average reader only comprehends 60% of what they read. So why do we bother if only 15% of the information is getting through?

So, tell management to keep it small and publish less often.

But we all send vast quantities of emails and we craft huge project plans and reports and strategies and requirements specifications and business cases and ... so keep it short for all our sakes.

Was that too long?

Wednesday, 22 April 2009

Light Wells


There is some construction work taking place on and under the ramp at Sydney Terminal Station. The work is, I am told, to refurbish rooms under the ramp for the Railway fire department.

Work above these rooms on the ramp revealed Light Wells that had been covered for years - probably over 80 years. These Light Wells consist of a metal frame filled with blocks of coloured glass which allow light to pass through into the rear of the voids under the ramp.


Some are square and others round.


Unfortunately I was too slow to get photos from above before they were covered up.

The work included the removal of the road and foot-path surface, excavation, water-proofing of the ceiling to the Light Wells, and the replacement of the footpath and road surface.

The Light Wells were covered with concrete and once again hidden from view.


Old photos of the ramp don't seem to show the Light Wells so they have been dark for a long time.

Other photos can be found here.

Friday, 3 April 2009

Warrimoo has a new power station

Our photovoltaic solar panels and inverter were installed last week (March 27, 2009).

We eventually selected a 1.5kW, 9 panel system. Our roof faces North and is generally not shaded by trees.

A 2kW inverter was installed in the garage to convert the DC power into AC. The output of the inverter is connected to one of our three-phase power circuits.

The inverter (I am told) switches off during a blackout for safety reasons. Otherwise if there is enough power from the panels it supplies power to our house and any excess is 'pumped' into the grid to supply our neighbors.

The unit seems to start working even if there is just 20W available.

Our household electric energy consumption is about 6 kWh per day. So this system should generate enough energy, on average, to supply most of our electricity requirements.

Currently in NSW the energy fed into the grid is purchased by the electricity retailer at the same rate that they charge us so, in theory, every kWh we generate will reduce our bill by about 17c. I believe that this will rise to over 20c in July 2009.

It is hoped that NSW will establish a higher feed-in rate as some other states have done in Australia. Most pay at least 44c per kWh for excess generated capacity. This is called a Net feed-in tariff. The ACT pays 50c for every kWh generated which is called a Gross feed-in tariff.

The metering has not been done yet, but we have one meter that generally runs backwards (the digits count down), so although the feed-in meter is not installed we are benefiting from our excess generated energy - if they take it into account. Our existing meters will be replaced by a poly-phase power meter and an additional feed-in meter will be installed.


Sunday, 15 March 2009

Ubuntu 8.10 (Intrepid) on a MacBook Pro (4,1)

As the title suggests, I decided to get Ubuntu 8.10 working on my MacBook Pro.

My aim, for a long time, has been to get linux running well on a Mac.


I think that the Ubuntu Team have packaged a brilliant distribution that supports the MacBook Pro very well, but a few tweaks are required.

These mods are documented here.

The process is long and not suitable for beginners. I thought that these changes could be turned into a script that would work for most users who are dual/multi-booting their Mac.

The scripts can be found in an Ubuntu post that I just made.

Tuesday, 3 March 2009

Dual boot Windows XP/linux fails to boot Windows XP

I had a problem: my dual-boot PC would boot into linux (Ubuntu) fine, but I could not boot into Windows XP.



The Windows boot process would stop after loading hpdskflt.sys (in my case) and then it would restart.

It turns out to be my fault: I changed the hard drive configuration in BIOS to SATA rather than IDE. Switching it back to 'SATA Native=Disable' in my case allows Windows XP to boot normally.

Other BIOSs are likely to have a similar setting such as SATA Mode=IDE or SATA or RAID etc.

A friend, who had the same problem, later found this site which explains the problem in detail.

Monday, 2 March 2009

Fixing a Dick Smith G7659 (DTR7100) Digital Set Top Box

I offered to have a look at a friend's digital TV set top box that was faulty. It was a Dick Smith branded unit (G7659?) that appears to be a DTR7100 made by Pacific Satellite.



A web search suggested that there were problems with the power supply. I ignored this information at first. Instead, I noticed that the power supply voltages were printed on the main board. I checked them and found that they were all present, but not quite close enough to the stated voltages. I then examined the power supply board more carefully and noticed 3 failed capacitors next to a heat sink. I could tell that they were faulty because their tops were domed rather than flat.

I replaced them and the box now works fine.

Sunday, 25 January 2009

HP 2710 All-In-One Easter Egg

My HP 2710 is complaining about the ink cartridges. I have not fixed it but I did discover an Easter Egg.

Press the * and # buttons at the same time.
Then enter 62637.

The printer will begin to print a page of photos of, probably, the HP staff that worked on the printer or its software.

Wednesday, 21 January 2009

DIY Super-Efficient Fridge Uses 0.1 kWH a Day - Can it really be that good?

I was skeptical that a chest freezer could be modified to perform the function of a refrigerator and use only 0.1 kWh per day. The home site for this (2005?) modification is here, now on Wayback here.

Firstly, the chest freezer selected is a very efficient one to begin with. The VestFrost SE255 is rated as a 5-star, 247L chest freezer that has been tested to AS/NZS4474 to use 237kWh per year. AS/NZS4474, as I understand it, operates the freezer at -15C in an ambient temperature of 32C. It probably cost Tom around $1500. So this chest freezer uses, according to the standard test, 237kWh/365, or 649Wh per day. Its dimensions are 1260 mm W x 850 mm H x 600 mm D, displacing a volume of 643L. Allowing for a compressor void of 600 mm x 300 mm x 300 mm, the thickness of the sides and door must be about 135mm. So this freezer does have some serious insulation to keep it cool.

The basic conversion is to add another thermostat device that keeps the inside of the freezer somewhere above freezing, so it becomes a fridge. Tom, the author, set the internal temperature to about 5.5C on average.

Tom is of the opinion that this idea works because chest freezers have better insulation and because, when the door is opened, cold air does not rush out. Tom is right, but the real reason that his fridge consumes so little power is the temperatures he is operating it in. Chest freezers, especially the SE255, do have thicker insulation, and vertical refrigerators do lose cold air when the door is opened. But even if this chest freezer were stood on end so the cold air could drain away, the fridge would only use a small fraction of 1Wh of energy to cool the incoming ambient air back down to 5.5C.

If the freezer is empty and all the air is replaced, then the freezer needs to cool it down by 12.5C in Tom's case. 247L of air weighs about 0.32 kg. The heat capacity of air is about 1000J/kg/K. So the fridge needs to remove 4000J to cool this air down to 5.5C. Now, if I am right with my physics, the electrical energy required to pump heat out of a fridge is:

E = Heat * ( T.hot - T.cold ) / ( T.cold * M.eff )

If the motor efficiency (M.eff) is 90%, Heat is 4000 J, T.hot is 291.15K and T.cold is 278.65K, then the electrical energy required is 199J, or 55mWh (that is, 55 milli-watt-hours) - a small amount. Others have noted this small amount also. The warm air may cool a little upon entering the fridge and this cooled air may also drain away, but since the 'energy' lost is so low it makes little difference.

The real reason for the low energy use is the operating temperatures: an average ambient of 18C and an internal temperature of 5.5C on average. The heat leaking in through the insulation is roughly proportional to the temperature difference (T.hot - T.cold), and by the formula above the electrical energy needed to pump each joule of it back out is proportional to (T.hot - T.cold)/T.cold, so consumption should scale with (T.hot - T.cold)^2 / T.cold. By my calculations, and I am no expert here, the freezer in Tom's conditions would consume just (12.5^2/278.65) / (47^2/258.15) = 6.6% of the energy it does when tested to AS/NZS4474. So it should use 0.066 * 649 = 43Wh per day. Tom stated that the fridge used 103Wh on the first day and that 30% of this was during the stocking/re-arranging period, so my maths does not seem to be too far out.

What would happen if we operated an energy-efficient vertical fridge at Tom's temperatures? Let's pick an Electrolux ERM4307. It is a 6-star 400L fridge (actually it is better than that, but the star scale only goes to 6) that is tested to consume 250kWh per year, or 685Wh per day, when operated (closed) at 32C with an internal temperature of 3C. Its volume is about 887L. Again allowing 63L for the compressor, the insulation is probably about 120mm thick. By my calculations it would use just 18.4% of 685Wh, which is 126Wh per day, and the fridge is about 70% larger inside than the SE255. So the chest freezer is still better, but a good vertical fridge can use very little energy in Tom's environment too.

Electrolux also make a vertical freezer, the EFM3607, which is tested to use 403kWh per year, or 1.1kWh per day. Operated as a freezer (still -15C inside) in Tom's 18C ambient, the scaling factor is (33^2/258.15) / (47^2/258.15), or about 49%, so it should use roughly 540Wh per day. This fridge/freezer pair will set you back roughly $3600.

So, to reduce your fridge/freezer running costs you need to start with an efficient fridge and freezer. Then keep them in a cool place with good ventilation. If you want to, you can raise the internal set point to further reduce running costs - I would stay under 8C and definitely under 10C.

To get an idea of the energy saving that you can make by changing the operating temperatures of your fridge or freezer, I have made this graph that hopefully makes it easier to get an approximate energy reduction factor: select the inside temperature line, draw a line up from the ambient temperature axis to the internal temperature line, and read off the energy factor (as a percentage). To use it, just multiply your fridge's daily/yearly energy consumption by this percentage.
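
For the record, here is the scaling argument written out (a sketch under my assumptions: heat leaks in proportionally to the temperature difference, and the heat pump runs an ideal cycle at fixed motor efficiency):

\[
E = \frac{Q \, (T_\mathrm{hot} - T_\mathrm{cold})}{T_\mathrm{cold} \, \eta},
\qquad
Q \propto (T_\mathrm{hot} - T_\mathrm{cold})
\;\Rightarrow\;
\frac{E_\mathrm{new}}{E_\mathrm{test}}
= \frac{\Delta T_\mathrm{new}^{2} / T_\mathrm{cold,new}}{\Delta T_\mathrm{test}^{2} / T_\mathrm{cold,test}}
= \frac{12.5^{2}/278.65}{47^{2}/258.15} \approx 0.066
\]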