Thursday, 31 March 2016

Shoehorning Duallies Revisited



After a fortuitous eBay acquisition of a SuperMicro H8DCL, I felt it was time to revisit my compact duallie build. 

Case: CoolerMaster Elite 360
Motherboard: SuperMicro H8DCL-I
Processors: 2x AMD Opteron 4122 (Dirt cheap and good for testing, BIOS updating. Will replace later)
Coolers: AMD Stock quad-heatpipe - unfortunately the less good updated ones!
RAM: 4x Samsung 8GB PC10666 DDR3 ECC Reg Low Power
PSU: Corsair CX430M
Fans: 2x CoolerMaster 120mm (top outlet and side inlet)
Fans: 2x Akasa Paxfan 80x80x25mm 3-pin (rear outlets)
GPU: PowerColor Radeon 6570 1GB with Arctic Accelero L2 Plus cooler
SATA backplane: StarTech SATABAY425BK 4x2.5
Fan bus: Phanteks PWM Fan Hub from Overclockers.co.uk
SATA Cables: Scavenged from Supermicro 0316L cable sets from Boston

PSU Bracket is held on by one screw


Sadly, the Elite 360 case and its marginally larger brother, the 361, seem to be increasingly scarce. However, the Powercool 3060 looks suspiciously similar, with a possibly slightly better layout. First things first - we need to remove the plastic PSU mounting bracket, which does nothing other than push the PSU further into the case so there's room for the rotating CoolerMaster logo - and ditch the logo too. The CoolerMaster 120mm fans are the stock ones that come with the 360 case and aren't too annoying at slow speeds.

Compared to last time, the H8DCL poses a few problems. Although at 12"x10" it is technically a little shorter than the K8N-DL, it mounts the two CPUs front to back rather than side by side, which means the coolers impinge on the optical drive bay. Additionally, it requires two 8-pin EPS power connectors, which are generally only found on larger PSUs that certainly won't fit nicely into an Elite 360 case. It also only supports PWM fans, and I have a caseload of 3-pin ones.

A quick calculation shows that the system above isn't going to draw more than about 350W at peak. As a home server the 6570 is frankly overkill, but I had it lying around and its idle power draw (under 10W) isn't much different from the passively cooled 4350 I used with the K8N-DL build. As such, the Corsair CX430M has ample power capacity, with the added advantages, for a tight build, of being modular and 140mm deep. This is 15-20mm shorter than most other modular power supplies, which makes a big difference. The modular connector panel has an 8-pin socket labelled 4+4CPU/PCI-E but, of course, the unit only comes with a PCI-E cable for it (there is a non-modular 8-pin EPS cable already).
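For the curious, the back-of-envelope sum goes roughly like this - ballpark ratings rather than measurements, so treat the figures as illustrative:

2x Opteron 4122        ~190W  (assuming ~95W apiece flat out)
4x registered DIMMs     ~20W
Radeon 6570             ~60W  (being generous)
Drives, fans, board     ~50W
                       -----
Total                  ~320W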

Franken-PSU
However, one of my old XClio GreatPowers had finally gone to meet its maker after seven years of near-continuous operation driving dual Opteron rigs, so I had a pile of modular cables left over. As it was a CWT-based power supply, like the Corsair, it had the same modular connectors - among which was an 8-pin EPS cable. I had a look at TechPowerUp's teardown of the CX600M to see how the modular board was wired up, which showed that while the XClio wiring was straight through, the Corsair's was a crossover, swapping 12V and ground. Several minutes with a pin extractor and a pair of pliers made the necessary changes, and I became the proud possessor of a 430W dual EPS12V 8-pin power supply. At the same time, I cut one of the XClio's hard drive Molex cables down to two connectors (no point hacking the nice new Corsair cables) and discovered that for those cables I needed to swap the 12V and 5V wires.

The 4-pin cable is routed under the 120mm fan

On with the build. Sequencing is everything since this is a tight fit. First, I get the rear and top fans into the case and wind the cables round a screwdriver shaft to make them coil up - this stops them dangling around in the way. The Phanteks PWM hub allows you to drive up to six 3-pin fans at variable speed from a single PWM header. It does have an additional SATA power connection in case the fans draw too much current for the single 4-pin header, but this is a Supermicro board so I'm not bothering - Supermicro specs these fan headers to drive the 7K+ rpm banshees in 1U server cases.





Now it's time to drop the motherboard into the case. It goes in quite easily, except that it doesn't have holes in quite the right places. I end up removing 3 standoffs (leaving 7 in place) and using a couple of plastic board supports, left over from the days of Baby-AT boards, in the holes that don't match the ATX spec. There's a fan header top right which is very convenient for the PWM hub connection. Just to check there are no shorts, I connect a PSU to the motherboard and fire it up. All seems well until I notice that only half the RAM is showing, which could mean a problem with the second CPU or maybe damage to the underside of the board. After a small panic and a bit of poking around, I realise that the two sticks of RAM in the top bank are in the black slots while the bottom bank has them in the blue - moving them so they're all in the blue slots gets everything working properly.

All the cabling hidden behind the front panel!
Time to slide the PSU in. This is a tight fit, so it's best to remove the RAM from the lower bank to give more room to manoeuvre. Fold all the PSU cables (including the modular ones) under the PSU and feed them out through the slot where the front panel cables come out. Then route them round and feed them back into the case through the FDD slot. The EPS12V and ATX connectors will come out with just enough length to reach the connectors on the motherboard (although my XClio franken-cable is only just long enough).

Now I can install the SATA backplane. I've noticed that the pegs on the sides of the mounting rails stick a little too far into the casing and can snag on the HDD trays, so I trim a millimetre off the end of each one with a knife. The Molex power cable is just long enough for the second connector to reach the back of the backplane - there was a reason I cut it down to two connectors even though I only need one!


Nearly there. Supermicro slim SATA cable sets are really useful here. They are very similar to Silverstone's CP11 ultra-slim SATA cables but a lot cheaper - and the two cables are neatly sleeved together. You can get a set of four for the same price as one Silverstone cable, or less. They don't have Silverstone's fancy side-entry connectors, and they are a little odd in that the right-angle connectors are top-entry. The pic shows 4 SATA cables from the backplane running along the top edge of the PSU - it is very compact. I've also stuck the video card in now.

Finally, the side/top panel needs a bit of work. The 120mm fan at the top blows down onto the CPUs, and I coil its cable so it doesn't snag in the other fans when it gets plugged into the PWM hub. The bottom vents let air in for the video card and the PSU. I've stuck fine stainless steel mesh over both vents. Cut the mesh 2cm larger than the vent all round, then fold 1cm over and fix it with double-sided tape to get the sharp ends of the wire out of the way. Use a strip of car-trim double-sided fixing tape to fix it to the case (it's designed to be weather- and vibration-proof so should stay put).

Monday, 9 March 2015

Re-installing Windows 95 on the Omnibook 800CT

Having sold off some of my clutch of Omnibooks, I decided to rebuild the 800CT that remains as a Windows machine for running old software. There are still a lot of OB fans out there, so they all (two 800CT/133s and a 600C) found good homes.

For starters, I pop an 8GB Lexar Platinum CF card into a CF-to-PATA adapter to do duty as a hard drive. Even a relatively slow CF card is faster than the 2.5-inch hard drives of the time and considerably lower in power consumption. All my Omnibooks run off CF cards, even the 300 now that it has a BIOS 1.01 upgrade card.

I plug in the floppy drive and CDROM drive and boot up the machine. The Omnibook gets part way through loading MSDOS and then hangs so hard that I need to prod the reset nubbin on the side to get it to reboot. The restore floppy disk is an original, getting on for 20 years old, so it's probably corrupt - let's try creating a new one. Same problem. Fair enough, the floppy drive is getting on for the same age, so let's try another one of those. Still crashes. It could be the cable - but I don't have a spare.

Let's try another boot disk - I have Slackware floppies from previous experiments. These appear to work fine. Curiouser and curiouser. Time to sleep on it.

Next day, I extract the CF card and make it DOS bootable in another machine. I intend to copy the contents of the Omnibook CD onto it along with the image restore software from the Omnibook boot floppy. The Omnibook floppy checks that it is running on an Omnibook and then restores from an encrypted Windows image so it has to be run on the actual machine. Clever, but a real pain right now.
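For reference, making the card bootable on the donor machine is nothing exotic - the plan was roughly the following from a DOS prompt, assuming the CF card shows up as D: and the CD-ROM as E: (drive letters will obviously vary):

rem create a primary DOS partition on the card and mark it active
fdisk
rem format it and copy the system files across so it will boot
format d: /s
rem copy the contents of the Omnibook CD onto the card
xcopy e:\*.* d:\ /s /e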

Absentmindedly, I fire up the Omnibook without the CF card in and it boots just fine from the FDD. The penny drops - all my other machines had Transcend CF cards in, but this one has a Lexar. I put a Transcend CF card in and everything works just fine - the Lexar's True IDE mode is obviously not DOS-compatible (though it was fine with Slackware, as I found out earlier).

So, now that I have a newly built Windows 95 OSR2 install, what do I put on it?

First of all some further system bits:
  1. erpdude8's Unofficial Windows 95 Service pack 1.05
  2. WinZip 8.0 (OldApps is your friend - I can't find it on their website). Not strictly a system component, but it makes installing later stuff simpler. You'll need an old licence key as well; I don't know if a new one will work.
  3. CPUIdle 5.8c to improve power usage - I should get an update but I've long since lost the download key and the new features are for things way newer than the 800CT.
  4. The ACITS LPR Client - lets you print to a network-attached printer, just like Windows XP can. The link is for Columbia U since the Texas U links are broken. It's free for non-commercial use.
  5. Lexmark Universal Printer Driver 1.X - I have a Lexmark colour laser, and they provide Linux and Win 9X drivers, which puts them way ahead of other vendors in my book. The 9X driver is a bit generic, so you have to manually configure things like duplex and colour.
  6. Drivers for my Xircom CEM56-100 - Well done Intel for keeping them online.
Then onto some applications:
  1. Netscape 6.23 - You can haul it off somewhere like OldApps but, thanks to its AOL parentage, it tries to install a load of guff as well, so go for a custom install and skip AOL, RealPlayer and Mail. WinAmp is OK though. Also remember to remove the registration nag by deleting/renaming C:\Program Files\Netscape\Netscape 6\components\activation.dll
  2. MS Office 97 - But remember to expunge the Fast Find feature since it will clobber your battery life (remove it from the Startup folder). I imagine it will nag me to register with some now-defunct mechanism in due course.
  3. Acroread 4.05 - OldApps again
  4. ACDSee 2.3 - All the later versions gained a lot of extra functionality and cruft (and expense) which I really don't need - just a simple image viewer is all I want.

Tuesday, 24 February 2015

SteamOS Fun

For a while I've been meaning to build a SteamOS box and I finally got round to getting a spare machine with a reasonably recent video card to try it out on.

Having read that SteamOS only installs on UEFI systems, and noting that my motherboard has a good old BIOS, I elected to download a VaporOS image onto my ZM-VE200, plug it into a USB slot and fire up the PC. The machine booted from the ZM fine and the VaporOS install proceeded happily until it expired, almost at completion, with a GRUB installation error. I tried playing around with the HDD configuration a bit without any progress.

OK, maybe the VaporOS image was borked somehow - let's try downloading a vanilla SteamOS ISO and see how we go. Surprisingly, the SteamOS installer fires up just fine and works OK - they've added support for regular BIOS machines while I wasn't looking, which is nice. However, the GRUB installation still fails.

After considerable head-scratching, I eventually burn a SteamOS installation CDROM and load it into the machine's internal drive rather than using the Zalman. Lo and behold, the installation works just fine. That's good, but I am still intrigued as to why the Zalman route failed, since it emulates a CDROM drive perfectly in my experience.

Some more investigation reveals that the GRUB install fails because it references /dev/cdrom0, so you can only install from the first CDROM drive on a machine. As my box already had a drive, the Zalman became /dev/cdrom1, and although the rest of the installation was fine, the GRUB installation scripts failed. Subsequently, unplugging the internal drive proved that an install from the Zalman works fine provided it is the first drive.
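For anyone hitting the same thing, it's worth checking how the drives have been enumerated before starting an install: switch to a console on the installer (CTRL-ALT-F2 on Debian-based installers) and have a look at the optical device nodes. A rough sketch - the exact names depend on how udev has set things up:

ls -l /dev/cdrom* /dev/sr*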

Tuesday, 18 February 2014

Owncloud Client Wrinkles

Just updated the Owncloud Client on one of my OpenSuSE boxes to 1.5.1 and got the dreaded "failed to initialize sync journal" error when starting up.

A look online reveals nothing for OpenSuSE specifically, but shows that the error has occurred before, for example on a MacOSX build. However, an actual solution is thin on the ground, though there are hints of dependency problems in the build.

Sure enough, if I go into YaST Software Management and install libqt4-sql-sqlite then everything starts to work fine.
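For the command-line-inclined, the same fix from a root shell should just be:

zypper install libqt4-sql-sqlite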

Monday, 9 September 2013

Fun with OpenSuse, VMWare and Firefox

Just rebuilt one of my VMWare Workstation machines: AMD FX 8120, 16GB RAM, dual SSD boot drives etc. with OpenSuse 12.2. I always lag an OS version with VMWare, as Workstation host OS support takes a little while to "bed in", in my experience.

All is well until I fire up Firefox and start to experience huge lag spikes where the whole UI seizes up for 20-30 seconds at a time. I can CTRL-ALT-Fn to another console and text mode is fine but X/KDE is locked solid.

Looking at top shows that the VMs, Firefox, kwin and khugepaged have pegged their respective CPU cores (or 4 cores in the case of the VMs) with little or no disk, swap or RAM activity. Killing Firefox drops everything back to normal, so I start to look online for reports of weird interactions between Firefox (or Flash/Java within Firefox), OpenSuse and VMWare. Nothing.

Typing khugepaged into Google, however, was a bit of a revelation: lots of reports of CPU stalls, 100% utilisation and the like with high core/RAM counts. I wouldn't have called 8 cores/16GB high in this day and age - at work I use 48-core/256GB VM hosts and they're getting to the end of their support lifetime already. However, my previous 4-core/12GB box did not have this problem.

To cut a long story short, it appears to be a problem with khugepaged attempting to defrag RAM to make space for the huge pages. For now I have just disabled defragging with:

# stop khugepaged from compacting memory in the background to build huge pages
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
# stop allocations from synchronously compacting memory at fault time
echo never > /sys/kernel/mm/transparent_hugepage/defrag

...why do they take completely different parameters when they have the same name? Logic, please.

...it seems that later kernel versions have this fixed, so I may have been bitten by my "lag an OS version" principle above. Ho hum.
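In the meantime, if you want the workaround to survive a reboot, dropping the same two lines into a boot-time script should do the trick - on OpenSuse something like /etc/init.d/boot.local (I haven't bothered yet, since a kernel update should make it moot):

echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo never > /sys/kernel/mm/transparent_hugepage/defrag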

Saturday, 7 July 2012

Representing knowledge – metadata, data and linked data

Reposted Op-Ed from Wikipedia Signpost

This piece examines a key question that new Wikimedia projects such as Wikidata are concerned with: how to properly represent knowledge digitally at the most basic level. There is a real danger that an inflexible, proscriptive approach to data will severely limit the scope, capabilities and ultimate utility of the resulting service.

At one level, the textual representation of information and knowledge in books and online can be viewed as simply another serialisation and packaging format for information and knowledge, optimised for human rather than machine consumption. Within the Wikipedia community – Wikidata and elsewhere – there is a perceived utility in using more structured, machine-friendly formats to enable better information sharing and computer-assisted analysis and research. However, there remains a lot of debate about the best approach, to which I will contribute the views I have developed over nearly a decade of research and development projects at the Bodleian Library[1] and before that, through my involvement with knowledge management in the commercial domain.

My first point is that metadata and data are really different aspects of a continuum. In the majority of cases, data acquires much of its meaning only in connection with its context, which is largely contained within so-called metadata. This is especially true for numerical data streams, but holds even for data in the form of text and images: when and where a text was written are often critical elements in understanding the meaning.[2] Data and metadata should be considered not as distinct entities but as complementary facets of a greater whole.

Secondly, there will be no single unifying metadata "standard" (or even a few such standards), so deal with it! For example, biosharing.org lists just under 200 metadata standards for experimental biosciences alone. The notion of a single standard that led to the development of MARC, and latterly RDA, in the library sphere is simply not applicable to the way in which metadata is now used within the field of academic enquiry. This means that any solution to handling digital objects must have a mechanism for handling a multiplicity of standards, and ideally within an individual object – for example, bibliographic, rights and preservation metadata may quite reasonably be encoded using different standards.[3] The corollary of this is that if we have such a mechanism there is no need to abandon existing standards prematurely. This avoidance of over-proscribing and premature decision-making will be familiar to Agile developers. Consequently, Wikidata developers would be ill-advised to aim for a rigid, unitary metadata model – even at a basic level, representing knowledge is too complex and variable for such an approach.

So how do we balance this proliferation of standards with the desire for sharing and interoperability? We can find several key areas in which a consensus view is emerging, not through explicit standard-setting activities but through experience and necessity. This gives us a good indication that these are sensible points on which to base longer-term interoperability.
  1. An emergent data/object model. Besides the bibliographic entities, such as digitised texts, images and data, a number of key types of "context-object" recur when we start to try to build more complex systems for handling digital information. This can be seen in such diverse areas as the specifications for TEI, Freebase, CERIF and schema.org. The most important of these elements are people, places, vocabularies/ontologies and the notion of time dependency. Indeed, for many projects in the humanities, these objects actually form the basis for expressing ideas and framing discourse using the conventional bibliographic objects to provide an evidentiary base.
  2. Aggregations as a key organising tool for this expanded universe of digital objects. In many cases, these aggregations are also objects in their own right, representing content collections, organisations, geopolitical entities and even projects – each potentially with a history and other attributes. An essential characteristic of aggregations is that they need not be hierarchical, but rather a graph capable of capturing the more unstructured, web-like way people have of organising themselves and their knowledge.[4]
  3. Agreement on essential common properties. For each object type there is usually a general consensus on a minimal set of properties that are sufficient to both uniquely identify an object and provide enough information to a human reader that the object is the one that they are interested in. Often, the latter is actually a less strict requirement as a person can use circumstantial evidence such as the context in which an object occurs for disambiguation. While it is desirable to try to capture contextual information systematically, we have to accept that this is frequently not done. Sources for this common baseline could include Dublin Core (or dcterms to be explicit) records, DataCite records, gazetteers, and name authority lists, for example.
These common properties are obviously very amenable to storage and manipulation in a relational database. Indeed, for large-scale data ingestion, with the ensuing clean-up, de-duplication and merging of records/objects, this is likely to be the best tool for the job. However, once this task has been completed and we delve into the more varied elements of the objects, the advantages of a purely relational database approach are less clear-cut.

Instead, we can treat each object as an independent, web-addressable entity – which in practice is desirable in its own right as a mode of publication and dissemination. In particular, we can use search engines to index across heterogeneous fields – Apache Solr excels at faceting and grouping, while ElasticSearch can index arbitrary XML without schemas (i.e. all of the varied domain-specific metadata). These tools give users ways into the material that are much easier to use and more intuitive.

The objects alone are only a part of the picture – the relationships between objects are critical to the structure of the overall collection. In fact, in many cases (especially in the humanities) a significant proportion of research activity actually involves discovering, analysing and documenting such relationships. The Semantic Web or, more precisely, the ideas behind the Resource Description Framework (RDF) and linked data, provide a mechanism for expressing these relationships in a way that is structured, through the use of defined vocabularies, but also flexible and extensible, through the ability to use multiple vocabularies. While theoretically it is possible to express all metadata in RDF, this is not practical for performance[5] and usability[6] reasons, and is unnecessary.

This model of linked data, combining a mix of standardised fields and less-structured textual content, should not be entirely unfamiliar to people used to working with Semantic MediaWiki, sharing their metadata on Wikidata, or using data boxes in Wikipedia! However, when applying this model to practical research projects it emerges that a critical element is still lacking. Although we can describe relationships between objects using RDF, we are limited to making assertions of the form [subject][predicate/relationship][object] (the RDF "triple"). In practice, relatively few statements of this form can be considered universally and absolutely true. For example: a person may live at a particular address but only for a certain period of time; the copyright on a book may last for 50 years, but only in a particular country. Essentially, what is needed is a mechanism to define the circumstances under which a relationship can be considered valid. A number of possible mechanisms could do this – replacing RDF triples with "quads" that include a context object, or annotating relationships using OAC.
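To make that concrete, here is a rough sketch of the difference in TriG notation - the names and the ex: vocabulary are invented purely for illustration:

@prefix ex: <http://example.org/> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# A bare triple: an unqualified, context-free assertion
ex:jones ex:livesAt ex:merton-street .

# The same assertion wrapped in a named graph - the "context object" of a quad...
ex:assertion-17 {
    ex:jones ex:livesAt ex:merton-street .
}

# ...which can then itself be qualified: who asserted it, when it holds, on what evidence
ex:assertion-17
    dcterms:creator  ex:some-scholar ;
    dcterms:temporal "1720/1735" ;
    ex:basedOn       ex:letter-0042 .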

These examples are really just special cases of a more general requirement that is of great interest to scholars. This is the ability to qualify a relationship or assertion to capture an element of provenance. Specifically, we need to know who made an assertion, when, on the basis of what evidence, and under which circumstances it holds. This may be manifested in several ways:
  • Differences of scholarly opinion – it should be possible for there to be contradictory assertions in the data relating to an object, provided we can supply the evidence for each point of view.
  • Quality of the evidence – information can be incomplete, or just unclear if we are dealing with digitised materials. In this case we want to capture the assumptions under which an assertion is made.
  • Proximity of evidence – we may have an undated document but if we know the biography of the author we can place some limits on probable dates. This evidence is not intrinsic to the object but can be derived from its context.
  • Omissions – collections are usually incomplete for various reasons. It is important to distinguish the absence of material as a result of inactivity or specific omission from subsequent failures in collection building.
These qualifications become especially important when we try to use computational tools such as analytics and visualisation. Indeed, projects such as Mapping the Republic of Letters (Stanford University) are expending significant effort to find ways of representing uncertainty and omission in visualisations.

I believe there needs to be a subtle change in the mindset when creating reference resources for scholarly purposes (and, arguably, more generally). Rather than always aiming for objective statements of truth we need to realise that a large amount of knowledge is derived via inference from a limited and imperfect evidence base, especially in the humanities. Thus we should aim to accurately represent the state of knowledge about a topic, including omissions, uncertainty and differences of opinion.


Notes

  1. In particular, Cultures of Knowledge.
  2. Usefully, most books come with a reasonable amount of metadata (author, publisher, date, version etc.) encapsulated in the format, but this represents something of an anomaly. Before the advent of the book and, more recently, in online materials, metadata tends to be scarcer.
  3. However, I concede that it is not unreasonable to expect that things are generally encoded in XML with a defined schema.
  4. Our own experience of trying to model the organisational structure of the University of Oxford (notionally hierarchical) convinced us that this was essential.
  5. RDF databases (triple stores) currently scale to the order of billions of triples – this limit can be reached quite easily when you consider that the information in a MARC record for a book in a library may have well over 100 fields.
  6. RDF is a very verbose format. Existing domain-specific XML formats can be much easier to read and manipulate.