Monday, 16 July 2018

Mobile Device Table

A summary, purely for general interest, of the portable computing devices that I have used for work over the years. There are others (such as the Omnibook 600C) that I have owned and toyed with but never used in anger.

The heaviest device was the AST Ascentia J30 at 2650g, a large chunk of which was battery as far as I can remember. The largest screen was the Dell XPS 13 Developer Edition which, with Linux out of the box, was a real delight apart from the trackpad.

I don't get on with trackpads of any sort (yes, I've tried Macs and I find theirs even more annoying than PC ones). The Omnibook popout mouse was a great idea, trackpoints are fine, and touchscreens are OK provided they have stylus support. The Samsung Tab S is interesting - it has a capacitive screen but works with a special Samsung stylus with a narrow tip. I don't know how they do it, since the stylus won't work on other screens.

Mfg | Model | Year | CPU | RAM | Storage | Screen Size | Screen Resolution
Casio | FX-700P | 1983 | Hitachi HD61913A01 | 2K | 12K ROM | ~2 inch | 12 chars
HP | Omnibook 300 | 1993 | Intel 386SXLV 16MHz | 4M | ~12M ROM + 10M SSD | 9 inch | 640x480 (16 grey levels)
AST | Ascentia J30 | 1996 | Intel Pentium 133MHz | 40M | 800MB HDD | 10.4 inch | 800x600 DSTN (256 colours)
HP | Omnibook 800CT | 1996 | Intel Pentium MMX 166MHz | 80M | 2G HDD | 10.4 inch | 800x600 TFT (16-bit colour)
Fujitsu | Lifebook B2154 | 2000 | Intel Mobile Celeron 450MHz | 192M | 2G HDD | 10.4 inch | 800x600 TFT (16-bit colour)
Sharp | Zaurus SL-C1000 | 2005 | XScale ARM 416MHz | 64M | 128M SSD | 3.7 inch | 640x480 ICZ
Fujitsu | Lifebook U810 | 2007 | Intel A110 800MHz | 1G | 60G HDD | 5.6 inch | 1024x600 TFT
Toshiba | NB100 | 2009 | Intel Atom N270 1.6GHz (Hyper-Threading) | 2G | 120G HDD | 8.9 inch | 1024x600 TFT
Dell | XPS 13 L322X | 2013 | Intel Core i7-3537U 2GHz (3.1GHz turbo), dual core + HT | 8G | 256GB SSD | 13.3 inch | 1920x1080 IPS
Samsung | Galaxy Tab S 8.4 | 2014 | Exynos 5420 Octa | 3G | 32GB + 128GB MicroSDXC | 8.4 inch | 2560x1600 OLED
GPD | Pocket | 2017 | Intel Atom x7-Z8750 1.6GHz (2.56GHz turbo), quad core | 8G | 128GB SSD + 256GB MicroSDXC | 7 inch | 1920x1200 IPS

Mfg | Model | Year | HxWxD (mm) | Mass (g) | Touchscreen | Trackpoint | Convertible | Notes
Casio | FX-700P | 1983 | 71x165x10 | 116 | | | | BASIC programmable calculator
HP | Omnibook 300 | 1993 | 163x282x36 | 1315 | | | | MS-DOS 3.3, Windows 3.1, MS Office in ROM; popout mouse
AST | Ascentia J30 | 1996 | 289x228x47 | 2650 | | X | | Win 95
HP | Omnibook 800CT | 1996 | 185x282x40 | 1770 | | | | Win 95, popout mouse
Fujitsu | Lifebook B2154 | 2000 | 308x274x40 | 1400 | X | X | | Win 98
Sharp | Zaurus SL-C1000 | 2005 | 128x87x24 | 298 | X | | X | Cacko Linux, D-pad
Fujitsu | Lifebook U810 | 2007 | 150x168x33 | 712 | X | X | X | Win Vista (replaced with OpenSuse)
Toshiba | NB100 | 2009 | 225x191x33 | 1000 | | | | Win XP (replaced with OpenSuse)
Dell | XPS 13 L322X | 2013 | 205x316x18 | 1360 | | | | Ubuntu preloaded (replaced with Kubuntu)
Samsung | Galaxy Tab S 8.4 | 2014 | 214x142x8 (inc. kbd) | 647 | X | | X | Android, removable Bluetooth keyboard
GPD | Pocket | 2017 | 180x106x18.5 | 480 | X | X | | Win 10

Sunday, 15 July 2018

Old Soundblaster Live Card and Windows 10

One of my Windows 10 boxes has an old no-name soundcard based on the C-Media CMI8738SX, which only has a Windows 7 driver. When I allowed Windows 10 to auto-upgrade the preceding Windows 7 installation, somehow it kept on working. However, at some point recently the Windows installation became borked to the point that it would no longer update, and no amount of repairing would fix it. There was nothing left but to re-install Windows from scratch (upgrade and repair installs both failed). Fortunately, everything of import is kept on my Nextcloud server, so I could resync my data afterwards quite easily.

However, I could not persuade the C-Media card to work nicely with Windows 10 (it being 64-bit didn't help). I cast around and found that I had a SoundBlaster Live 5.1 PCI card sat in one of my Linux boxes, mainly because it worked with the 3.3V PCI slot in an H8DCL motherboard. Having worked out previously that C-Media chipsets support 3.3V PCI, and can be converted merely by filing a suitable notch in the PCI connector, I duly did a swap with the Linux box (Linux supports old C-Media chipsets just fine).

The SoundBlaster is such a standard that surely Windows 10 will support it...? Nope. A visit to the Creative site confirms that there is only a W7 driver. Do I sense a conspiracy to sell new kit when the old stuff works fine? Anyway, a search online turns up the KX Project and, more importantly, their GitHub site, which should ensure things hang around for a bit. Under Windows 10 64-bit the driver installs fine. Ignore the bit about using the KX Mixer, which didn't work for me; the W10 mixer seems to do the job fine.

Unfortunately, they are no longer accepting donations - but a big thank you from me.

Tuesday, 10 July 2018

Fun and Games with the SuperMicro X9SRI

Scored a Supermicro X9SRI off eBay as a useful way of using up my DDR3 memory once I start decommissioning some of the older machines in the house. It has 8 slots, so I can get a tidy 64GB into it when fully loaded. It turns out that this board - or maybe Intel's server chipsets - is a little temperamental.

I paired it with a Xeon E5-1650 v2, which is slightly faster than an i7-4930K and is still a quite respectable CPU. That's round about Ryzen 2600 level in modern terms and a lot cheaper, especially when you factor in the price difference between DDR3 and DDR4 RAM. And there my problems started...

I reset the BIOS, loaded the defaults and proceeded to install Linux. Everything went fine for a while, and then I started getting random hard freezes - not associated with any particular activity. After swapping out RAM, video cards and everything else I was still no nearer a solution. In desperation, I acquired another Xeon (an E5-2609 this time - cheap enough for a quick test), dropped it in, and sure enough everything worked fine. So, it's the CPU, I thought, but I was puzzled by the behaviour - working fine under heavy load (the Phoronix CPU benchmark suite) but freezing at idle didn't seem like any other failure I'd come across. So I worked my way through the BIOS options and discovered that the culprit was in the CPU Power Management settings. If I set it to Power Saving (the default) then I get freezes, but if I set it to Performance then all is well. Interestingly, Linux still seems to do power saving, dropping the clock speed and voltage of the CPU so it runs quite cool at idle. So I have no idea what the setting does other than break v2 Xeons!
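For what it's worth, a quick way to confirm that Linux frequency scaling is still active is to read the standard cpufreq sysfs files. This is a minimal sketch assuming the usual cpufreq interface is present (paths can vary by kernel and driver):

```python
# Minimal check of Linux CPU frequency scaling via the cpufreq sysfs
# interface; assumes /sys/devices/system/cpu/cpu0/cpufreq exists.
from pathlib import Path

cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")
governor = (cpufreq / "scaling_governor").read_text().strip()
cur_khz = int((cpufreq / "scaling_cur_freq").read_text())
print(f"governor={governor}, current clock={cur_khz / 1000:.0f} MHz")
```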

I dropped an old GT 710 card in so I didn't have to live with the 1280x1024 that the onboard video can do. I hesitate to call that a GPU, but it's only really meant for IPMI redirection so I'm being a little unfair. The GT 710 works fine with nouveau, but I installed the proprietary nVidia drivers, which have better power management.

Finally, I dropped in an Intel quad-port PRO/1000 PT to give me a few more ethernet ports to play with, and the board refused to boot, giving me beep sequences instead. A few more card swaps and it transpired that the Intel card really hates being in the PCI-E 3.0 slots and will only start up in the middle, PCI-E 2.0, slot. I was hoping to put it in the slot further away from the video card, since the PT gets quite warm, but there you are. Reminds me of the old DOS days, fiddling around with interrupt combinations to get all your peripherals to work.

Friday, 19 January 2018

Delegating ORCID Tokens

This is the final blog posting arising from the ORCID Delegation workshop held on the 10th October at Jisc in London (with representatives from Oxford, Imperial, Leeds, LSE, ORCID and Jisc). The previous blogs (Delegating ORCID Tokens – background, Revoking ORCID tokens) covered other incidental outcomes, so I will now get to the main topic – the fact that institutional ORCID users needed to explicitly grant access to third party suppliers in addition to their own institution. This behaviour has a number of undesirable side effects (repeating myself somewhat):
  • Communicating this to users can be difficult since they are not always aware of these third parties
  • Getting consistent takeup across multiple systems can be difficult (user loses interest) which makes downstream integration more awkward than necessary
  • Institution has little visibility of these third party interactions – which can cause problems when suppliers are dropped or other issues arise
  • The only way currently round this is to let a supplier use the institutional key – which then grants them *ALL* the rights and access that the institution has
ORCID uses the OAuth(2) mechanism for authentication and authorization. OAuth already provides the functionality that will allow an institution to generate secondary tokens for third parties and subsequently revoke them:
  • When an ORCID owner authorises institutional access, the institution receives two digital tokens:
    • The access token acts as a “key” that is used to access ORCID data via the ORCID API
    • The refresh token can be used to request additional tokens via the OAuth API
  • There are two primary ways that the refresh token can be used:
    • To request a new access token when it nears expiry, invalidating the current access token – this has been discussed in the previous posting on revocation
    • To request an additional access token, which may have reduced permissions and/or a different expiry, leaving the current access token intact (see the sketch after this list)
  • Multiple additional tokens can be requested
  • Additional tokens can be invalidated in a similar way to regular access tokens
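As a concrete illustration of the additional-token request, here is a minimal sketch (Python with the requests library) of a standard OAuth2 refresh_token grant against ORCID's public token endpoint. The credentials are placeholders, and the exact ORCID options for controlling the scope and expiry of the additional token should be checked against the member API documentation.

```python
# Sketch: request an additional access token from an existing refresh token
# (standard OAuth2 refresh_token grant, RFC 6749 section 6).  Credentials
# and refresh token are placeholders; ORCID-specific options for scope and
# expiry should be confirmed against the ORCID member API docs.
import requests

resp = requests.post(
    "https://orcid.org/oauth/token",
    headers={"Accept": "application/json"},
    data={
        "client_id": "APP-XXXXXXXXXXXXXXXX",
        "client_secret": "institutional-client-secret",
        "grant_type": "refresh_token",
        "refresh_token": "refresh-token-from-the-original-user-grant",
        # Optionally ask for a narrower scope for the delegated token
        "scope": "/read-limited",
    },
    timeout=30,
)
resp.raise_for_status()
additional = resp.json()
print(additional["access_token"], additional.get("expires_in"))
```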
Obviously, this additional token mechanism is exactly what is required to allow delegated access to third parties. The issue is that there is no standard mechanism for a third party to know that they should go to an institution to get a token instead of ORCID themselves, nor is there a standard way for an institution to deliver that token securely. There are several approaches to solving this issue:
  • ORCID keeps track of additional tokens requested and who they are intended for, and issues them when requested.
    • ORCID will need API enhancements to allow institutions to indicate which tokens are for which supplier (and for a supplier to indicate which institution’s token they require)
    • These enhancements are likely to be ORCID specific rather than vanilla OAuth2 so existing code libraries will not exist
    • Institutions will need to implement and use this API, which may involve rather more information release than is desirable
    • Supplier will need to implement and use this API
    • It is unclear how a person with multiple institutional affiliations might be handled without adding significant complexity
    • Short lived tokens would require a three-way interaction between supplier, institution and ORCID – this will be complex and is unlikely to scale effectively
  • Institutions provide an endpoint which the third party always contacts when it needs an access token.
    • This could be implemented as an OAuth2 proxy type of service, which could forward token requests to ORCID on behalf of the institution if an additional token does not already exist, or return an institutional token if the institution in question was not interested in delegation (see the sketch after this list). If developed as a standardised, packaged software appliance suitable for running on virtual or cloud infrastructures, this could be deployed at an institutional level with minimal effort.
    • Institutions could automatically request and issue short lived tokens relatively easily
    • Institutions retain control over third party interactions
    • Institutions would need a secure (SSL, VPN etc.) network channel to the supplier. This is quite likely to exist already, in practice
  • A third party could maintain a shared proxy-type service for use by multiple organisations
    • This would add another party that needs to be trusted into the interaction, adding effectively a second level of delegation.
    • Cost saving against a locally deployed appliance (as described above) may be minimal when the additional complexity of management is taken into account
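To make the proxy option more concrete, below is a minimal, purely hypothetical sketch (Python/Flask) of an institutional endpoint that a supplier could call to obtain a delegated token. All names are illustrative; a real service would authenticate the calling supplier, run over a secure channel (SSL/VPN) and store tokens properly rather than in memory.

```python
# Hypothetical institutional token proxy - an illustrative sketch only.
import requests
from flask import Flask, abort, jsonify

app = Flask(__name__)

CLIENT_ID = "APP-XXXXXXXXXXXXXXXX"            # placeholder institutional credentials
CLIENT_SECRET = "institutional-client-secret"

# Refresh tokens captured when each researcher authorised the institution
REFRESH_TOKENS = {"0000-0002-1825-0097": "stored-refresh-token"}

@app.route("/delegated-token/<orcid_id>")
def delegated_token(orcid_id):
    refresh_token = REFRESH_TOKENS.get(orcid_id)
    if refresh_token is None:
        abort(404)
    # Ask ORCID for an additional, ideally short-lived, access token on the
    # researcher's behalf (standard OAuth2 refresh grant; ORCID-specific
    # options for expiry/scope should be checked against their API docs).
    resp = requests.post(
        "https://orcid.org/oauth/token",
        headers={"Accept": "application/json"},
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "scope": "/read-limited",
        },
        timeout=30,
    )
    resp.raise_for_status()
    token = resp.json()
    return jsonify({"access_token": token["access_token"],
                    "expires_in": token.get("expires_in")})
```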
Ideally, to make the system robust and workable, ORCID owners should have a verified institutional affiliation (or more than one if necessary) attached to their account so that a third party supplier knows who to contact for a delegation token (either directly or via ORCID). Otherwise, another mechanism would need to be found to indicate which institutions can issue tokens for a particular ORCID owner, which would be somewhat redundant. The affiliation would also need to be de-asserted promptly when the relationship ends. Any of these approaches will require some investment of effort by both ORCID and member institutions for development and ongoing maintenance. The key questions for the community are thus:
  1. Primarily, whether there is sufficient interest and desire for a solution to the delegation problem for such an investment to be worthwhile
  2. Secondarily, which route is the preferred one. On initial analysis on the day of the workshop, the feeling of the group was that an institutional proxy appliance seemed to be the most attractive option.
As a footnote, Will Simpson of ORCID noted during the meeting that using additional tokens internally for different systems within the institution was also a valid use case in that it allowed ORCID to separate requests from different sources in their logs. This greatly simplifies troubleshooting problems with interactions with multiple institutional systems.

This posting originally appeared on the UK ORCID Consortium blog.

Thursday, 16 November 2017

Retconning this Blog

I've got a load of bits and pieces lying around the internet that I've posted over the years. I'm going to collect them all here with their original dates. I would like to claim it's for preservation but, frankly, it's just that I can't keep track of the stuff.

Wednesday, 15 November 2017

Thoughts on Fixity Checking in Digital Preservation Systems

I would like to query the rationale for actually doing periodic fixity checking in isolation. This has bugged me for a bit so I am going to unload.

As far as I can see, the main reasons would be undetected corruption on storage and tampering that doesn’t hijack the chain of custody.

All storage media now have built-in error detection and correction using Reed-Solomon, Hamming or something similar, which is generally capable of dealing with small multi-bit errors. In modern environments this gives unrecoverable read error rates of at worst around 1 in 10^14 bits (roughly one error per 12TB read) and generally several orders of magnitude better. Write errors are less frequent – they do occur but can be detected by device firmware and retried elsewhere on the medium. These are absolute worst-case figures and result in *detectable* failure long before we even get to computing fixity. The chance of bit flips occurring in such a pattern as to defeat error correction coding is several orders of magnitude less – it is similar to bit flips resulting in an unchanged MD5 hash. Interestingly, in most cases the mere act of reading data allows devices to detect and correct future errors as the storage medium becomes marginal, so there is value in doing that.
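As a quick back-of-the-envelope check of that figure:

```python
# 1 unrecoverable error in ~10^14 bits read, expressed in terabytes read
bits_per_error = 1e14
bytes_per_error = bits_per_error / 8           # 1.25e13 bytes
print(f"~{bytes_per_error / 1e12:.1f} TB read per unrecoverable error")  # ~12.5 TB
```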

Undetected corruption is therefore most likely when data moves from the error-corrected environment of the medium to less robust environments. At the interconnect level, protocols such as SCSI, SATA, Ethernet and FC are all error corrected, as is the PCI-E bus itself. The most likely failure points are a curator’s PC or software. How many curators work on true workstation-grade systems with error-corrected RAM and error-corrected CPU caches? How well tested are your hashing implementations (MD5 had a bug not so long ago)? How about all the scripts that tie everything together? How about every tool in your preservation toolchain? How many of these fail properly when an unrecoverable media error is encountered?

If we consider malicious activity then, again, we have to ask whether it is easier to attack the storage (which may require targeting several geographically dispersed and reasonably secure targets) or the curation workflow, which is localised, generally in a less secure location than a machine room, and can legitimise changes. A robust digital signature environment is the way to deal with this – and fixity hashes *can* be used to make this more efficient (sign the hash rather than the whole object).  
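To illustrate that last point, here is a minimal sketch of signing a fixity hash rather than the whole object, using Python's hashlib and the cryptography package with Ed25519 as an arbitrary example algorithm (key handling is deliberately simplistic):

```python
# Sketch: sign the object's digest rather than the object itself, so large
# files only need hashing once and the signature stays small.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def fixity_digest(path: str) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.digest()

# Illustrative key handling only - real signing keys belong in an HSM/keystore
key = ed25519.Ed25519PrivateKey.generate()
digest = fixity_digest("object.bin")
signature = key.sign(digest)                # signature covers the hash only
key.public_key().verify(signature, digest)  # raises InvalidSignature on tamper
```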

Locally computed hashes can be very useful as a bandwidth efficient way of comparing multiple copies of an object (rsync has done this for ages) to ensure that they are in sync.

So there are reasons to compute hashes, when needed, but fixity is not necessarily a compelling reason given the way modern systems are engineered.

In practice, these checks do detect failures, but they are almost exclusively transmission errors resulting from uncontrolled (and unauditable) activities - often by sysadmins or third party suppliers not well versed in digital preservation. In these cases, the wrong data is actually written to storage, so there is no fixity to lose. Periodic "fixity" checking can catch these cases, but ideally you want visibility of these processes and a check immediately after they complete. If the errors are in automated processes, waiting for a periodic check to come round may allow significant damage to occur.

Originally posted to the PASIG mailing list, with updates as a result of discussion with Kyle Rimkus (University of Illinois at Urbana-Champaign).

...also now posted on the DPC blog.

Monday, 6 November 2017

ORCID Token Revocation

At the last Cultivating ORCIDs Meeting in Birmingham in June 2017, I ran a working group looking at different approaches to implementing ORCID IDs. One of the outcomes was the identification of a common issue when it came to ORCID implementations and third party suppliers, namely, that institutional users needed to explicitly grant access to third party suppliers in addition to their own institution. This behaviour has a number of undesirable side effects:
  • Communicating this to users can be difficult since they are not always aware of these third parties
  • Getting consistent takeup across multiple systems can be difficult (user loses interest) which makes downstream integration more awkward than necessary
  • Institution has little visibility of these third party interactions – which can cause problems when suppliers are dropped or other issues arise
  • The only way currently round this is to let a supplier use the institutional key – which then grants them *ALL* the rights and access that the institution has
On the 10th October a small group of interested parties (with representatives from Oxford, Imperial, Leeds, LSE, ORCID and Jisc) were hosted by Jisc in a small gathering to look at this issue and identify a possible route forward for consideration by the UK ORCID Consortium.

At the meeting, Will Simpson of ORCID presented a very useful non-technical overview of how authentication and ORCID/OAuth tokens worked in terms of managing access permissions. Discussion then moved on to the main topic of how ORCID permissions might be delegated to third party providers and, in particular, how to handle the termination of third party arrangements. During these discussions, Will indicated that support for the optional OAuth functionality for token revocation was being considered by ORCID. OAuth is the technology/standard that ORCID uses for authorisation/access control. At the moment, tokens are granted by default for 20 years, or for 1 hour for effectively single, short-term use. Naturally, neither of these matches the typical duration of a scholar’s relationship with an institution. Minimising the number of active tokens would be good from both a security and “data hygiene” standpoint, so the ability for an institution to relinquish its token when a scholar leaves would be useful in its own right. Scholars can revoke their tokens manually when they leave, but it is unrealistic to rely on them to remember to do so.

At the moment, it is possible to work around this situation by making creative use of the OAuth token refresh facility. This functionality is important since it is what will allow an institution to grant tokens to a third party on behalf of an individual researcher (which will be explored in the next posting), but, in this context, it does provide a slightly unorthodox method for effectively relinquishing a token. Intended for use when an existing token nears expiry, a replacement token may be requested with a new expiry date which then invalidates the previous token. However, this can *actually* be done at any time and a 20-year token *can* be replaced by a 1 hour token which can simply be allowed to expire, resulting in no active tokens.
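As a sketch of that workaround in practice (Python/requests, placeholder credentials): run the refresh grant one last time, ask for a short-lived replacement, and then simply discard it. Note that the way a 1-hour expiry is requested is ORCID-specific; the expires_in parameter below is an assumption and should be checked against the ORCID documentation.

```python
# Sketch of the "creative" revocation workaround: replace the 20-year token
# with a short-lived one (invalidating the original) and let it expire.
import requests

resp = requests.post(
    "https://orcid.org/oauth/token",
    headers={"Accept": "application/json"},
    data={
        "client_id": "APP-XXXXXXXXXXXXXXXX",      # placeholder credentials
        "client_secret": "institutional-client-secret",
        "grant_type": "refresh_token",
        "refresh_token": "refresh-token-for-departing-researcher",
        # How the short expiry is requested is ORCID-specific; "expires_in"
        # here is an assumption - check the member API documentation.
        "expires_in": 3600,
    },
    timeout=30,
)
resp.raise_for_status()
# Discard the returned short-lived token; once it expires, no active
# institutional token remains for this researcher.
```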

This is a concatenation of two articles posted on the UK ORCID Consortium blog.