{{Short description|Defunct UK government agency based in Norwich, England}}
{{EngvarB|date=June 2017}}
{{Use dmy dates|date=May 2024}}
{{Infobox organization
|name = Central Computer and Telecommunications Agency
|abbreviation = CCTA
|formation = 1957 (as the TSU)
|dissolved = 2000 (subsumed into the [[Office of Government Commerce|OGC]])
|status = Defunct executive government agency
|purpose = New telecommunications and computer technology for the UK government
|location = [[Norwich]], England
|region_served = UK
|membership = Electronics and computer engineers
|parent_organization = [[HM Treasury]]
|website = www.ccta.gov.uk
}}

The Central Computer and Telecommunications Agency (CCTA) was a UK government agency providing computer and telecoms support to government departments.
== History ==
=== Formation ===

In 1957, the UK government formed the Central Computer Agency (CCA) Technical Support Unit (TSU) within [[HM Treasury]] to evaluate and advise on computers, initially staffed by engineers from the telecommunications service. As the unit evolved it became the Central Computer and Telecommunications Agency, which also acted as a central procurement body for [[United Kingdom Government]] technological equipment and, later, for centrally funded [[University]] and [[Research Council]] systems.

=== Archived records ===
CCTA records are held by [[The National Archives (United Kingdom)|The National Archives]].<ref name="CCTA Archived Information">{{cite web |title=CCTA Archive |url=https://discovery.nationalarchives.gov.uk/results/r?_q=Central+computer+and+Telecommunications+Agency&_sd=&_ed=&_hb= |website=National Archives |access-date=21 June 2024}}</ref>
=== Technical services ===

Nearly all of the names and authors quoted or referenced in this section were CCTA engineers or scientists.
The first external technical publication was a 1960 paper by J. W. Freebody and J. W. Heron, “Some engineering factors of importance in relation to the reliability of government A.D.P. systems”; nearly 30 computer systems had been installed at that time.<ref>{{cite journal |last1=Freebody |first1=J.W. |last2=Heron |first2=K.M. |title=Some engineering factors of importance in relation to the reliability of government A.D.P. systems |journal=Institution of Electrical Engineers and the British Computer Society Ltd |date=January 1960}}</ref> It concluded that reliability was the single most important factor, and identified areas and activities that required investigation by the new organisation.
A later career review confirmed that John Freebody was promoted to Staff Engineer and given the task of founding the Technical Support Unit.<ref>{{cite journal |last1=Freebody |first1=J.W. |title=Notes and Comments |journal=The Post Office Electrical Engineers Journal |date=July 1966 |page=135 |url=http://www.samhallas.co.uk/repository/journals/POEEJ/POEEJ%20Vol%2059%20Pt%202%20July%201966.pdf |access-date=31 May 2024}}</ref>
In 1965, responsibility for the TSU was transferred from [[HM Treasury]] to the [[Ministry of Technology]]. At that time the telecommunications engineering staff comprised 8 dealing with Systems Evaluations, 6 with Peripheral Equipment and 10 in the areas of Accommodation,<ref>{{cite journal |last1=Stephenson |first1=M. |last2=Fiddes |first2=R.G. |title=Air-Conditioning in Computer Accommodation |journal=The Post Office Electrical Engineers Journal |date=April 1964 |url=https://www.blunham.com/Radar/SignalsMuseum/PDFs/PostOfficeJournals/POjournalV57Pt1.pdf |access-date=19 May 2024}}</ref> Testing, and Maintenance. Details of names, grades, qualifications, salary and relevant experience can be found in [[Hansard]], Volume 717 (debated on Tuesday 27 July 1965).<ref>{{cite web |title=Hansard 1 |url=https://hansard.parliament.uk/Commons/1965-07-27/debates/84c84c27-0fdb-4abd-aab6-04b94b21bed8/Industries(TechnicalAndEconomicStudies) |website=UK Parliament |access-date=19 May 2024}}</ref>
=== Technical services reliability and acceptance trials ===
Procurement contracts included guaranteed service levels which, at least in the early days, were monitored by TSU engineers, to whom all fault incidents and system availability levels were reported monthly. The contracts also included requirements to run on-site, and sometimes pre-delivery, acceptance trials of a specified format, designed and supervised by engineering staff.
The [[acceptance tests]] comprised a series of demonstrations to verify that everything had been delivered and appeared to function, followed by stress testing of up to 40 hours spread over a few days, depending on system size. For the stress tests, engineering test programs were used, along with available user applications, and the criterion of success was to achieve a given level of uptime. In 1968, new procedures were introduced, particularly for stress testing, where each main test was intended to run for 15 minutes, with the criteria that, besides a maximum overall time limit, each test had to run failure-free six times in succession.
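The 1968 pass rule lends itself to a simple illustration. The following is a minimal sketch, not CCTA code: run_test() is a hypothetical stand-in for one 15-minute test execution, and the overall time limit is modelled as a maximum number of attempts.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for one 15-minute test execution; here it
   simply "fails" at random about 10% of the time. */
static bool run_test(void)
{
    return rand() % 10 != 0;
}

/* A test passes when it runs failure-free six times in succession,
   within an overall limit modelled here as a maximum attempt count. */
static bool acceptance_pass(int max_attempts)
{
    int consecutive = 0;
    for (int attempt = 0; attempt < max_attempts; attempt++) {
        if (run_test())
            consecutive++;      /* extend the failure-free sequence */
        else
            consecutive = 0;    /* any failure restarts the count */
        if (consecutive == 6)
            return true;        /* six in succession: criterion met */
    }
    return false;               /* overall time limit reached first */
}

int main(void)
{
    printf("%s\n", acceptance_pass(20) ? "PASS" : "FAIL");
    return 0;
}
</syntaxhighlight>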
During this period, on invitation, five CCTA engineers presented papers on acceptance testing at the Institution of Electrical Engineers.<ref name="IEE71">{{cite news |last1=Iles |first1=S.H. |last2=Longbottom |first2=R. |last3=Thomson |first3=A.M.M. |last4=Skinner |first4=P.J. |last5=Clarke |first5=J. |title=Colloquium on the specification of acceptance testing of control computers for on-line applications |date=December 1971 }}</ref>
At this stage, concern was raised over how to test computers with the new [[multiprogramming]] [[operating system]]s. The problem was addressed by Roy Longbottom who, at various promotion levels between 1968 and 1982, was responsible for designing and supervising acceptance trials of the larger scientific systems. He produced 17 programs, written in the [[FORTRAN]] programming language: 5 for CPUs, 4 for disk drives, 3 for magnetic tape units and others for printers and for card and paper tape punches and readers. Program code listings are included in Appendix 1 of the book ''Computer System Reliability''.<ref name="RoyBook">{{cite book |last1=Longbottom |first1=Roy |title=Computer System Reliability |date=1980 |publisher=Wiley |isbn=0-471-27634-0 }}</ref>
By 1972, 800 acceptance tests of computer systems and enhancements had been carried out, including 500 for complete systems, as reported in The Post Office Electrical Engineers Journal.<ref name="poeJtrials">{{cite journal |last1=Longbottom |first1=R. |last2=Stoate |first2=K.W. |title=Acceptance Trials of Digital Computer Systems |journal=The Post Office Electrical Engineers Journal |date=July 1972 |page=91 |url=https://www.worldradiohistory.com/UK/POEEJ/70s/Post-Office-Electrical-Engineers-Journal-1972-07.pdf |access-date=20 May 2024}}</ref>
These included 100 tests using the new procedures, on systems from 11 different contractors. The first candidate was an [[IBM 360]] Model 65 at [[University College London]] in 1971, followed in 1972 by trials on all [[mainframes]], [[minicomputers]] and [[supercomputers]] covered by CCTA contracts. Later that year, the top-end systems tested were the $5 million scalar supercomputers: a [[CDC 7600]] at the [[University of London Computer Centre]] and an IBM [[360/195]] at the UK [[Meteorological Office]].
Not included in these 100, but significant, were 1973 trials of the [[Atlas (computer)|Atlas]] at [[Cambridge University]], a latter-day version of the 1962 UK supercomputer. During the 100 trials, 23 systems failed to meet the specified criteria at the first attempt.
By 1979, more than 1600 acceptance tests of computer systems and enhancements had been carried out. For the latest 400 system tests, carried out under the new procedures, 14% were recorded as failures and 24% as having a conditional pass. Up to three attempts were allowed, with none being completely rejected, albeit some accepted with penalty conditions (see Chapter 10 of the Longbottom book).<ref name="RoyBook"/>
Detailed analysis of fault returns, hands-on observations during acceptance trials and system appraisal activities led to a deeper understanding of reliability issues, published in a 1972 [[Radio and Electronic Engineer]] journal paper, “Analysis of Computer System Reliability and Maintainability”, with probability considerations.<ref name="ref9">{{cite journal |last1=Longbottom |first1=Roy |title=Analysis of Computer System Reliability and Maintainability |journal=Radio and Electronic Engineer |date=December 1972 |volume=42 |issue=12 |page=537 |doi=10.1049/ree.1972.0092 |url=https://digital-library.theiet.org/content/journals/ree/42/12 |access-date=20 May 2024}}</ref>
Later came a conference paper, “Reliability of Computer Systems”,<ref name="ref11">{{cite conference |last1=Longbottom |first1=R |title=Reliability of Computer Systems |conference=ECOMA-10; Munich 1982 |url=https://www.computinghistory.org.uk/det/6518/Box-219-ECOMA-Documents-Others/ |access-date=20 May 2024}}</ref> and the Longbottom book,<ref name="RoyBook"/> which particularly acknowledges input provided by Ian Thomson on computer system maintainability and Trevor Jones on environmental aspects.
Trials in 1979 included the first [[Cray 1]] vector [[supercomputer]] to be delivered to the UK, at the [[Atomic Weapons Research Establishment]], and, by 1982, the [[CDC Cyber 205]] for the UK [[Meteorological Office]], where total system costs could be $10 million. Both systems had pre-delivery trials in the USA. For these, Roy Longbottom converted the [[Scalar (computing)|scalar]] CPU programs to fully exploit the capabilities of the new [[vector processors]]. Results of the converted Vector [[Whetstone (benchmark)|Whetstone benchmark]] were included in the paper “Performance of Multi-User Supercomputing Facilities”, presented at the 1989 Fourth International Conference on Supercomputing, Santa Clara.<ref name="GSRoy">{{cite web |last1=Longbottom |first1=Roy |title=Google Scholar References |date=1989 |volume=1 |page=30 |url=https://scholar.google.com/citations?view_op=view_citation&hl=en&user=n3zTPscAAAAJ&citation_for_view=n3zTPscAAAAJ:d1gkVwhDpl0C |access-date=18 May 2024}}</ref><ref name="4thsupercomputing">{{cite book |title=Proceedings, Fourth International Conference on Supercomputing and Third World Supercomputer Exhibition |date=April 1989 |publisher=International Supercomputing Institute |location=Santa Clara Convention Center, Santa Clara, CA, USA |url=https://books.google.com/books?id=vAgSAQAAMAAJ |access-date=18 May 2024}}</ref>
Details were also included in the June 1990 Advanced Computing Seminar at [[Natural Environment Research Council]] Wallingford. This led to the [[Council for the Central Laboratory of the Research Councils]] Distributed Computing Support collecting results from running “on a variety of machines, including vector supercomputers, minisupers, super-workstations and workstations, together with that obtained on a number of vector CPUs and on single nodes of various MPP machines”. More than 200 results, up to 2006, are included in the report available on the [[Wayback Machine]] archive, in entries up to at least the year 2007 section.<ref>{{cite web |title=The CCLRC Vector Whetstone Benchmark Results Up To 2006 |url=https://web.archive.org/web/20070101000000*/http://www.cse.clrc.ac.uk:80/disco/Benchmarks/whetstone.shtml |access-date=10 June 2024}}</ref>
For the systems identified as supercomputers, there were nine acceptance testing sessions, two of which were failures: one due to excessive CPU problems and the other due to design issues in the I/O subsystem. Both failures were induced by the CCTA [[stress testing]] programs.
=== Technical services system appraisal ===
During the early days there were considerations of future technology, including telecommunications in the 1970 book “Data transmission - the future : the development of data transmission to meet future users' needs”,<ref>{{cite book |last1=Thomson |first1=A.M.M. |last2=Fiddes |first2=R.G. |title=Data transmission - the future : the development of data transmission to meet future users' needs |date=1970 |location=National Library of Australia |url=https://catalogue.nla.gov.au/catalog/169638 |access-date=22 May 2024}}</ref> found in National Library of Australia catalogue 169638. The main emphasis, however, was appraisal of the latest computer system hardware and software. Initially this involved collecting information on all appropriate new products, followed by more detailed investigation when a product was being considered for a new project, including a tour of the production factory and discussions with senior engineering, design and quality control staff.
The National Archives CCTA records<ref name="CCTA Archived Information"/> include technical appraisal reports up to 1986 (at the time of writing; search the archive for the quoted reports). The first, in what eventually became a standard format, were “System Summary Notes” (numbered 5000 to 6999), starting in 1967 with such systems as early [[IBM 360]] mainframes and the [[Digital Equipment Corporation]] [[PDP 8]] minicomputer, up to the last issue in 1980. These are based on standard forms with numerous entries. Other reports identified in the Archives are “Technical Notes” between 1975 and 1986, “Internal Technical Memoranda” 1973 to 1986 and “Technical Memoranda” 1975 to 1986. The total number of reports cannot easily be determined from the catalogue data.
=== Computer system performance ===
Before across-the-board standard benchmarks became available, the average speed rating of a computer was based on calculations for a mix of instructions, with the result given in Kilo Instructions Per Second (KIPS). The most famous was the [[Gibson Mix]] for scientific computing. This was included in CCTA calculations, along with an ADP Mix and a Process Control Mix, in CCTA Technical Note 3806 Issue 5, with 212 sets of results from 18 manufacturers, pre-1960 to 1971. Later results were included in 1977 in CCTA Technical Memorandum 1163 (both via the archived records<ref name="CCTA Archived Information"/>). All those results are also available in a 2017 PDF file.<ref name="mixes">{{cite journal |last1=Longbottom |first1=Roy |title=Computer Speeds From Instruction Mixes pre-1960 to 1971 |url=https://www.researchgate.net/publication/318793156_Computer_Speeds_From_Instruction_Mixes_pre-1960_to_1971.pdf |website=researchgate.net |date=2017 |access-date=23 April 2024 |doi=10.13140/RG.2.2.14182.93765}}</ref>
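The underlying mix calculation is a weighted average of instruction times. The sketch below is illustrative only; the instruction classes, weights and timings are invented examples, not the published Gibson Mix figures.

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    /* Invented example mix: fraction of each instruction class
       (summing to 1.0) and its execution time in microseconds. */
    double weight[]  = { 0.45, 0.30, 0.15, 0.10 };  /* load/store, add, multiply, branch */
    double time_us[] = { 2.0,  3.0,  8.0,  1.5  };

    double avg_us = 0.0;
    for (int i = 0; i < 4; i++)
        avg_us += weight[i] * time_us[i];           /* weighted mean instruction time */

    /* One instruction per avg_us microseconds = 1000/avg_us KIPS. */
    printf("Average speed: %.0f KIPS\n", 1000.0 / avg_us);
    return 0;
}
</syntaxhighlight>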
In 1972 Harold Curnow wrote the [[Whetstone (benchmark)|Whetstone Benchmark]] in the [[FORTRAN]] programming language, based on the work of Brian Wichmann of the [[National Physical Laboratory (United Kingdom)|National Physical Laboratory]].<ref>{{cite journal |last1=Curnow |first1=H.J. |last2=Wichmann |first2=B.A. |title=A Synthetic Benchmark |journal=The Computer Journal |date=1976 |volume=19 |pages=43–49 |url=https://www.researchgate.net/publication/318755620 |doi=10.1093/comjnl/19.1.43 |access-date=24 May 2024}}</ref>
This executes 8 test functions, 5 of which involve floating point calculations that dominate running time. Overall performance was calculated in thousands of Whetstone instructions per second (KWIPS). The program became the first general purpose benchmark to set industry standards of computer system performance. Enhancements by Roy Longbottom provided self-timing arrangements and calibration to run for a predetermined time on present and future systems, along with the performance of each of the 8 tests. The calibrated time was mainly 10 seconds and is still applicable after 50 years.
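The calibration idea can be sketched as follows. This is a minimal illustration, not the benchmark's actual code: a stand-in workload is timed with the standard C clock() routine and its repeat count grown until a trial run fills the 10-second target, so the same source runs for roughly the same time on machines of any speed.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <time.h>

/* Stand-in workload: one "pass" repeated n times. */
static void run_passes(long n)
{
    volatile double x = 1.0;
    for (long i = 0; i < n; i++)
        x = (x + 1.000001) * 0.999999;
}

/* Grow the pass count until a trial run reaches the target time. */
static long calibrate(double target_seconds)
{
    long n = 1000;
    for (;;) {
        clock_t start = clock();
        run_passes(n);
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
        if (elapsed >= target_seconds)
            return n;            /* n now fills the target time */
        /* Scale up towards the target; multiply by 10 while the
           trial is too short to time reliably. */
        n = (elapsed > 0.01) ? (long)(n * target_seconds / elapsed) : n * 10;
    }
}

int main(void)
{
    long n = calibrate(10.0);    /* about a 10-second calibrated run */
    printf("Calibrated pass count: %ld\n", n);
    return 0;
}
</syntaxhighlight>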
In 1978, Roy Longbottom, who inherited the role of design authority of the benchmark, also produced a version to exploit supercomputer processing hardware, covered in the reports “Performance of Multi-User Supercomputing Facilities”<ref name="GSRoy"/> and “Whither Whetstone? The synthetic benchmark after 15 years”,<ref>{{cite book |last1=Curnow |first1=H.J. |title=Whither Whetstone? The synthetic benchmark after 15 years |date=September 1990 |pages=260–266 |publisher=Chapman & Hall |isbn=978-0-442-31198-8 |url=https://dl.acm.org/doi/abs/10.5555/102486.102510 |access-date=26 May 2024}}</ref> the latter in a book on evaluating supercomputers.<ref>{{cite book |title=Evaluating supercomputers: strategies for exploiting, evaluating and benchmarking computers with advanced architectures |date=1990 |publisher=Chapman & Hall, Ltd. United Kingdom |id={{ASIN|0412378604|country=uk}} }}</ref>
Original Whetstone Benchmark results are in the 1985 CCTA Technical Memorandum 1182 (via the Archive<ref name="CCTA Archived Information"/>), where overall speed is shown as MWIPS (millions of Whetstone instructions per second). This contains more than 1000 results for 244 computers from 32 manufacturers.
On achieving 1 MWIPS, the [[Digital Equipment Corporation]] [[VAX-11/780]] [[minicomputer]] became accepted as the first commercially available 32-bit computer to demonstrate 1 MIPS (Millions of Instructions Per Second), as noted by [[CERN]],<ref>{{cite web |title=CERN-OBJ-IT-025 Computing and computers Model of the VAX-11/780 |url=https://cds.cern.ch/record/2273210?ln=en%20WHETS%20DHry |access-date=26 May 2024}}</ref> although a benchmark dependent on floating point speed is not really appropriate for such a rating. This had an impact on the [[Dhrystone]] Benchmark, the second accepted general purpose computer performance measurement program, which has no floating point calculations. It produced a result of 1757 Dhrystones Per Second on the VAX-11/780, leading to a revised measurement of 1 DMIPS (also known as VAX MIPS), obtained by dividing the original result by 1757.
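The conversion is a single division. As a worked example, with an invented score: a machine measuring 26,355 Dhrystones per second would be rated

<math display="block">\frac{26355}{1757} = 15\ \text{DMIPS},</math>

that is, fifteen times the speed of the VAX-11/780.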
The Whetstone Benchmark also had high visibility concerning the floating point performance of Intel CPUs and PCs, starting with the 1980 [[Intel 8087]] coprocessor. This was reported in the 1986 Intel Application Report “High Speed Numerics with the 80186/80188 and 8087”.<ref>{{cite web |title=High Speed Numerics with the 80186/80188 and 8087 |url=https://0x04.net/~mwk/doc/intel/23159001.pdf |access-date=10 July 2024 |date=1986}}</ref>
The 8087 includes hardware functions for exponential, logarithmic and trigonometric calculations, as used in two of the eight Whetstone Benchmark tests, where these can dominate running time. Only two other benchmarks were included in the Intel procedures, and all three programs showed huge gains over the earlier software-based routines.
Later tests evaluated SSMEC (State Micro) Intel 80486-compatible CPU chips using a Universal Chip Analyzer.<ref>{{cite web |title=Investigating SSMEC's (State Micro) 486s with the UCA |date=25 June 2024 |url=https://x86.fr/category/electronics/universal-chip-analyzer/ |access-date=10 July 2024}}</ref>
Considering the two floating point benchmarks used by Intel in the above report, the testers preferred Whetstone, stating “Whetstone utilizes the complete set of instructions available on early x87 FPUs”. This might suggest that the Whetstone Benchmark influenced the hardware instruction set.
CCTA also influenced the programming code for the [[Linpack]] and [[Livermore loops]] floating point benchmarks, initially for PC versions, where the original programs were unsuitable, particularly due to the PC's low resolution timer. The new versions, in the [[C programming]] language, included the new CCTA automatic calibration function to run for a specified finite time, still applicable 50 years later. [[Netlib]] accepted the former, renaming it linpack-pc.c.<ref>{{cite web |title=Linpack 100 Benchmark for PC Systems |url=https://www.netlib.org/benchmark/linpack-pc.c |publisher=Netlib |access-date=8 June 2024}}</ref>
For the Livermore benchmark, [[C programming]] code was available for executing the loops, but the extensive background code, for such things as data generation, timing parameters and numeric results validation, was in [[FORTRAN]]; this was converted to C. At least one other organisation has published a claimed completely rewritten C version that incorporates the unique CCTA background code, with no attribution.
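To illustrate what the loops look like, below is the first Livermore kernel (the “hydro fragment”) in its familiar C form; the loop count, coefficients and data values are placeholders rather than settings from any CCTA driver.

<syntaxhighlight lang="c">
#include <stdio.h>

#define N 1001   /* illustrative loop count */

static double x[N], y[N], z[N + 11];

/* Livermore Kernel 1, the "hydro fragment". */
static void kernel1(double q, double r, double t)
{
    for (int k = 0; k < N; k++)
        x[k] = q + y[k] * (r * z[k + 10] + t * z[k + 11]);
}

int main(void)
{
    for (int k = 0; k < N + 11; k++)
        z[k] = 0.5 * (k + 1);           /* placeholder data */
    for (int k = 0; k < N; k++)
        y[k] = 1.0 / (k + 1);
    kernel1(1.0, 0.5, 0.25);            /* placeholder coefficients */
    printf("x[0] = %f\n", x[0]);
    return 0;
}
</syntaxhighlight>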
CCTA test programs used in acceptance trials had parameters to control running times, enabling valid comparisons of the CPU performance of all systems tested. Following a request for information, these and Whetstone Benchmark results were included in the external publication “A Guide to the Processing Speeds of Computers”, covering over 100 different computers with more than 700 results.<ref name="CPUGuide">{{cite web |last1=Nott |first1=C.W. |last2=Wichmann |first2=B.A. |title=A Guide to the Processing Speeds of Computers |url=https://www.chilton-computing.org.uk/acl/literature/othermanuals/wichmann/overview.htm |access-date=27 May 2024 |date=1977}}</ref>
This included the acknowledgment “The authors would like to thank colleagues from the Central Computer Agency, namely Mr G Brownlee, Mr H J Curnow and Mr R Longbottom who have helped to collect much of the data making this system possible”.
From 1980, Roy Longbottom spent most of his time providing performance consultancy services to Departments and Universities. The latter included attending meetings of the Computer Board for Universities and Research Councils.<ref>{{cite web |title=The Computer Board for Universities and Research Councils |url=https://discovery.nationalarchives.gov.uk/details/r/C830 |publisher=National Archives |access-date=27 May 2024}}</ref>
He became a member of the Technical Subgroup of the National Policy Committee on Advanced Research Computers and of the Universities’ Benchmark Options Group. The latter involved leading a party to the USA, including discussions with Jack Dongarra and Frank McMahon, respectively the authors of the Linpack and Livermore Loops benchmarks, the key benchmarks of the day for scientific applications.
In 1992, the [[Science and Engineering Research Council]] requested CCTA to provide independent observation and reporting on benchmarking a new supercomputer for the [[University of London Computer Centre]], comprising a large sample of typical user applications. Roy Longbottom covered [[Fujitsu]] and [[NEC]] computers in Japan, with Rob Whetnall overseeing [[Cray]] and [[Convex Computer Corporation]] systems in the USA. The CCTA scalar and vector Whetstone Benchmarks were also run. A combination of the two can help in evaluating the performance of multi-user supercomputing operation,<ref name="GSRoy"/> where the system that demonstrates superior performance on specific applications is not necessarily the best choice, and the level of vectorisation and the number of scalar processors can be more important. In this case, calculations from the results of the CCTA programs indicated the same choice of system as the university's benchmark.
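The point about vectorisation can be restated in general terms (this is the standard Amdahl's law argument, not a formula quoted from the CCTA reports): if a fraction <math>f</math> of a workload vectorises and the vector unit runs <math>k</math> times faster than scalar code, the overall speed-up is

<math display="block">S = \frac{1}{(1 - f) + f/k},</math>

so with a modest <math>f</math> the gain stays small however large <math>k</math> becomes, which is why the level of vectorisation could matter more than peak vector speed.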
The aforementioned performance consultancy covered more than 45 projects between 1990 and 1993, mainly for data processing applications, with systems from 18 manufacturers, including mainframes, minicomputers and PCs. Activities included detailed sizing, modelling, user application based benchmarking, general advice and troubleshooting. CCTA's work was publicised at various conferences, starting with one on in-house software for benchmarking and capacity planning at ECOMA 12 in Munich, 1984,<ref name="ref12">{{cite conference |last1=Longbottom |first1=R |title=Performance Test Harness for Benchmarking and Capacity Planning of On-Line Systems |conference=ECOMA-12; Munich 1984 }}</ref> then one on benchmarking and workload characterisation at [[Edinburgh University]] in 1986 (page 5).<ref name="ref13">{{cite conference |title=Benchmarking and Workload Characterisation |last1=Longbottom |first1=R |conference=Second Computer and Telecommunications Engineering Workshop. University of Edinburgh. September 1986 |doi=10.1145/32100 |url=https://dl.acm.org/action/showFmPdf?doi=10.1145/32100 |access-date=28 May 2024}}</ref>
The next, on Database System Benchmarks and Performance Testing, was at a Conference on Parallel Processors at [[National Physical Laboratory (United Kingdom)|NPL]] in 1992, providing a warning of the dangers for the supercomputer community, and was published in a later book.<ref>{{cite book |last1=Dongarra |first1=J.J. |last2=Gentzsch |first2=W. |title=Computer Benchmarks |date=1993 |publisher=Elsevier Science Publishers |pages=339 |url=https://scholar.google.com/citations?view_op=view_citation&hl=en&user=n3zTPscAAAAJ&citation_for_view=n3zTPscAAAAJ:u-x6o8ySG0sC |access-date=28 May 2024}}</ref>
Finally, a new approach to performance management was suggested, based on the assumption that initial sizing estimates would be incorrect and that corrective actions should be considered at each stage of procurement, presented at the [[UKCMG]] Conference in Brighton in 1992.<ref name="ref16">{{cite conference |title=A Performance Management Methodology for IS Procurement |last1=Longbottom |first1=R |conference=UKCMG International Conference; Management and Performance Evaluation of Computer Systems, May 1992 |location=Brighton}}</ref> It was proposed following performance issues on a number of new small systems using the [[UNIX]] operating system. In this case, the reasons were identified by measuring the CPU, input/output, communications and memory utilisation of a number of transactions, using the UNIX SAR performance monitor. The first problem was mainly transactions using too much CPU time, requiring more efficient code or a CPU upgrade. The second was the single disk drive which, despite having adequate capacity, was unable to handle the high random access rate; the solution was to spread the data over more than one drive. To help in identifying solutions and “what if” considerations, a sizing model, "A Spreadsheet Computer Performance Queuing Model for Finite User Populations", was produced to indicate instantly the likely impact of changes on response times, throughput and hardware utilisation.<ref>{{cite journal |last1=Longbottom |first1=Roy |title=A Spreadsheet Computer Performance Queuing Model for Finite User Populations |url=https://www.researchgate.net/publication/320402569_A_Spreadsheet_Computer_Performance_Queuing_Model_for_Finite_User_Populations.pdf |website=researchgate.net |date=October 2017 |access-date=3 May 2024 |doi=10.13140/RG.2.2.18376.01280}}</ref>
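The spreadsheet model itself is not reproduced here, but the classic finite-population queuing calculation that such models build on, exact mean value analysis for a single service centre, is short enough to sketch. All parameter values below are invented.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Exact mean value analysis for a closed, single-queue system:
   N users each "think" for Z seconds between requests needing S
   seconds of service. A standard textbook calculation, shown only
   to illustrate the kind of model the CCTA spreadsheet provided. */
int main(void)
{
    const int    N = 50;      /* user population (invented)         */
    const double Z = 10.0;    /* think time, seconds (invented)     */
    const double S = 0.15;    /* service demand, seconds (invented) */

    double q = 0.0;           /* mean queue length                  */
    double r = 0.0, x = 0.0;  /* response time and throughput       */

    for (int n = 1; n <= N; n++) {
        r = S * (1.0 + q);    /* response time with n users         */
        x = n / (r + Z);      /* throughput, by Little's law        */
        q = x * r;            /* queue length for next iteration    */
    }

    printf("Users %d: response %.3f s, throughput %.2f/s, utilisation %.0f%%\n",
           N, r, x, 100.0 * x * S);
    return 0;
}
</syntaxhighlight>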
Other data processing benchmarks produced by CCTA Performance Branch included one measuring the performance of mixes of processor-bound activities, written in the [[COBOL]] programming language; 129 sets of results for computers from 22 different manufacturers are in Internal Memo 5219. A second was the Medium System Benchmark, with limited results in Internal Memo 5365 covering 35 systems from 8 manufacturers. This also indicates the Technical Memoranda numbers of reports containing full results, in the range 15047 to 15247 (example [[International Computers Limited|ICL]] reports are 15147/1 to 15147/14); see the archived information for the quoted reports.<ref name="CCTA Archived Information"/> The benchmark comprised six representative real programs with disk and magnetic tape input/output, covering updates, sorting, compiling and multi-stream operation, measuring CPU and elapsed times and the number of data transfers.
=== CCTA computer benchmarking and testing legacy ===
After retirement, Roy Longbottom, as the latter-day design authority of the Whetstone Benchmark, converted the latest FORTRAN code into the [[C programming]] language, also creating a new series of benchmarks and stress testing programs based on previous CCTA activities. These were freely available, produced in conjunction with the Compuserve Benchmarks and Standards Forum (see the [[Wayback Machine]] archive),<ref>{{cite web |url=http://community.compuserve.com/n/pfx/forum.aspx?folderId=9&listMode=13&nav=messages&webtag=ws-pchardware |title=Compuserve Benchmarks and Standards Forum |publisher=Wayback Machine Archive |archive-url=https://web.archive.org/web/20081206173739/http://community.compuserve.com/n/pfx/forum.aspx?folderId=9&listMode=13&nav=messages&webtag=ws-pchardware |access-date=4 June 2024 |archive-date=6 December 2008 }}</ref> covering PC hardware from 1997 to 2008.
Later, with further development, programs and results were made freely available on a dedicated website (which will have a limited lifetime). Historic details from 2008 onwards are in the Wayback Machine archive,<ref name="rpywebsite">{{cite web |title=Archive roylongbottom.org.uk |url=https://web.archive.org/web/20030315000000*/http://roylongbottom.org.uk |publisher=Wayback Machine Archive |access-date=4 June 2024}}</ref> where all files appear to be downloadable from most impressions. From 2017 onwards, the details were made available at [[ResearchGate]] in more referenceable PDF files. In 2024 there were 40 of these reports to read or download, with a total of more than 76,000 reads and 79 citations reported. Brief descriptions of all the files are included in an indexing file.<ref>{{cite web |last1=Longbottom |first1=Roy |title=Computer Benchmarks and Stress Tests and Performance History Index |url=https://www.researchgate.net/publication/367532477_Computer_Benchmarks_and_Stress_Tests_and_Performance_History_Index.pdf |access-date=4 June 2024}}</ref>
The PDF files include 12 for [[Raspberry Pi]] computers, for which Roy Longbottom had been recruited by the [[Raspberry Pi Foundation]] as a voluntary member of the Raspberry Pi pre-release alpha testing team from 2019.
By the 1990s the Whetstone Benchmark and its results had become relatively popular. A notable quotation, in the 1985 paper “A portable seismic computing benchmark” in the European Association of Geoscientists and Engineers journal, was "The only commonly used benchmark to my knowledge is the venerable Whetstone benchmark, designed many years ago to test floating point operations".<ref>{{cite journal |last1=Hatton |first1=H. |title=A portable seismic computing benchmark |journal=European Association of Geoscientists and Engineers Journal |date=1 Aug 1985 |volume=3 |issue=8 |doi=10.3997/1365-2397.1985016 |url=https://www.earthdoc.org/content/journals/10.3997/1365-2397.1985016 |access-date=5 June 2024}}</ref>
Then there was great interest in historic performance. Unlike the other Classic Benchmarks ([[Dhrystone]], [[Linpack]] and [[Livermore loops]]), Whetstone result tables were not available in the public domain; this was rectified from 2017 (in honour of CCTA, for this and other publications). The first new report was “Computer Speeds From Instruction Mixes pre-1960 to 1971”.<ref name="mixes"/> As with the following one, the identified year of first delivery and purchase prices were added.
The second was “Whetstone Benchmark History and Results”,<ref>{{cite journal |last1=Longbottom |first1=Roy |title=Whetstone Benchmark History and Results |url=https://www.researchgate.net/publication/318755466_Whetstone_Benchmark_History_and_Results.pdf |website=researchgate.net |access-date=5 June 2024 |doi=10.13140/RG.2.2.26267.77603 |date=July 2017}}</ref> with more detail and added results, particularly for PCs, up to 2013, and double the number of computers covered. The most notable citation, for this and the Gibson Mix report, was by Tony Voellm, then Google Cloud Performance Engineering Manager, in “Cloud Benchmarking: Fight the black hole”.<ref>{{cite web |last1=Voellm |first1=A.F. |title=Cloud Benchmarking: Fight the black hole |url=http://hpts.ws/papers/2013/CloudBenchmarkingFighttheblackhole.pdf |access-date=5 June 2024 |date=September 2013}}</ref> This considered available benchmarks and performance over time with detailed graphs, including those from the Mix and Whetstone reports.
The first of the other reports, attributable to earlier CCTA-gained knowledge but not previously published, is “Computer Speed Claims 1980 to 1996”.<ref>{{cite journal |last1=Longbottom |first1=Roy |title=Computer Speed Claims 1980 to 1996 |url=https://www.researchgate.net/publication/318793086_Computer_Speed_Claims_1980_to_1996.pdf |website=researchgate.net |access-date=6 June 2024 |doi=10.13140/RG.2.2.22571.54569 |date=July 2017}}</ref> This covers more than 2000 mainframes, minicomputers, supercomputers and workstations from around 120 suppliers, with main speeds in Millions of Instructions Per Second (MIPS), Millions of Floating Point Operations Per Second (MFLOPS) and CPU clock speed in MHz. Cost and production year are also included, when available.
Next, based on programming in [[Intel 8086]] [[assembly code]] learned earlier, is “PC CPUID 1994 to 2013, plus Measured Maximum Speeds Via Assembler Code”.<ref>{{cite journal |last1=Longbottom |first1=Roy |title=PC CPUID 1994 to 2013, plus Measured Maximum Speeds Via Assembler Code |url=https://_PC_CPUID_1994_to_2013_plus_Measured_Maximum_Speeds_Via_Assembler_Code.pdf |website=researchgate.net |access-date=6 June 2024 |doi=10.13140/RG.2.2.12519.14247 |date=July 2017}}</ref> This contains 27 pages of PC CPU identification numbers, operating speeds, model ranges and cache sizes, by year, followed by the performance of more than 30 types of processor over 12 CPU and memory benchmarks. Separate performance comparison tables are provided for data handled by the CPU, caches and RAM. The diversity of results demonstrates the uselessness of general performance comparisons based on a single number.
The following reports highlight earlier unique CCTA experiences, without which they could not have been produced. The first is “Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets”.<ref>{{cite web |last1=Longbottom |first1=Roy |title=Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets |url=https://www.researchgate.net/publication/359171179_Cray_1_Supercomputer_Performance_Comparisons_With_Home_Computers_Phones_and_Tablets.pdf |access-date=6 June 2024 |doi=10.13140/RG.2.2.27437.56804 |date=March 2022}}</ref>
Results are initially based on the Classic Benchmarks, the first programs to set standards of performance for scientific computing: the 1970 Livermore Loops, the 1972 Whetstone and the 1979 Linpack 100 benchmarks. Further results cover the 1979 Vector Whetstone performance, high speed floating point calculations and multiprocessing. The report includes the following comparison with the first version of the Raspberry Pi computer, based on average Livermore Loops speeds, as this benchmark was used to verify the performance of the first Cray 1.
"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1". |
|||
The later Pi 400 PC is shown to be 78.8 times faster and that could increase up to four times, using all CPU cores. |
|||
That quotation was reproduced in numerous Internet posts, some including a reference to the author having worked for “the UK Government Central Computer Agency”, as quoted in the report. A total of more than 60 posts were found across [[LinkedIn]], [[X (Twitter)]] and [[Facebook]], with more than 30 thousand views. These were based on an HTML version of the comparisons on the author's website (archive copy),<ref>{{cite web |last1=Longbottom |first1=Roy |title=Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets |url=http://www.roylongbottom.org.uk/Cray%201%20Supercomputer%20Performance%20Comparisons%20With%20Home%20Computers%20Phones%20and%20Tablets.htm |website=Wayback Machine |publisher=Wayback Machine Archive |access-date=16 June 2024 |date=March 2022 |archive-url=https://web.archive.org/web/20240407003942/http://www.roylongbottom.org.uk/Cray%201%20Supercomputer%20Performance%20Comparisons%20With%20Home%20Computers%20Phones%20and%20Tablets.htm |archive-date=7 April 2024 }}</ref> where site analytics registered almost 190,000 HTML file views between December 2023 and January 2024, nearly 90% of them for the Cray report. Accesses were from North America 47%, Europe 37%, Asia 11%, Oceania 3% and Other 2%, showing interest in the Agency's work spread around the world.
CCTA influence is also highlighted in “Celebrating 50 years of computer benchmarking and stress testing”.<ref>{{cite web |last1=Longbottom |first1=Roy |title=Celebrating 50 years of computer benchmarking and stress testing |url=https://www.researchgate.net/publication/363539552_Celebrating_50_years_of_computer_benchmarking_and_stress_testing.pdf |website=researchgate.net |access-date=16 June 2024 |date=September 2022}}</ref>
=== IS/IT strategies ===
In this area, CCTA's work during the 1970s, 1980s and 1990s was primarily to (a) develop central government IT professionalism, (b) create a body of knowledge and experience in the successful development and implementation of IS/IT within UK central government, (c) brief Government Ministers on the opportunities for use of IS/IT to support policy initiatives (e.g. "Citizen's Charter" / "e-government") and (d) encourage and assist UK private sector companies to develop and offer products and services aligned to government needs.
Over the three decades, CCTA's focus shifted from hardware to a business-oriented systems approach with strong emphasis on business-led IS/IT Strategies which crossed Departmental (Ministry) boundaries, encompassing several "Departments" (e.g. CCCJS – Computerisation of the Central Criminal Justice System). This inter-departmental approach (first mooted in the mid to late 1980s) was revolutionary and met considerable political and departmental opposition.
In October 1994, MI5 took over its work on protecting the government's network (usually the Treasury's) from hacking. In November 1994, CCTA launched its website. In February 1998 it built and ran the government's secure intranet; the MoD was connected to a separate network. In December 1998, the [[Department for Education and Skills (United Kingdom)|DfEE]] moved its server from CCTA at Norwich to NISS ([[National Information Services and Systems]]) in Bath when it relaunched its website.<ref>{{Cite web |title=BBC News {{!}} Education {{!}} The virtual education department |url=http://news.bbc.co.uk/2/hi/uk_news/education/226689.stm |access-date=13 October 2023 |website=BBC News}}</ref>
Between 1989 and 1992, CCTA's "Strategic Programmes" Division undertook research on exploiting Information Systems as a medium for improving the relationship between citizens, businesses and government. This paralleled the launch of the "[[Citizen's Charter]]" by the then Prime Minister, John Major, and the creation within the Cabinet Office of the "Citizen's Charter Unit" (CCTA had at this point been moved from HM Treasury to the Cabinet Office). The research and work focused on identifying ways of simplifying the interaction between citizens and government through the use of IS/IT. Two major TV documentaries were produced by CCTA, "Information and the Citizen" and "Hymns Ancient and Modern", which explored the business and political issues associated with what was to become "e-government". These were aimed at widening the understanding of senior civil servants (the Whitehall Mandarins) of the significant impact of the "Information Age" and identifying the wider social and economic issues likely to arise from e-government.{{Citation needed|date=December 2023}}
=== Merger ===
* The CCTA Risk Analysis and Management Method (CRAMM),<ref>[http://www.cramm.com/overview/history.htm History of CRAMM<!-- Bot generated title -->] {{webarchive |url=https://web.archive.org/web/20080428215903/http://www.cramm.com/overview/history.htm |date=28 April 2008 }}</ref> developed at the request of the Cabinet Office in 1985
The CCTA Security Group created the first UK Government National Information Security Policy, and developed the early approaches to structured [[information security]] for commercial organisations which saw wider use in the DTI Security Code of Practice, [[BS 7799]] and eventually [[ISO/IEC 27000]].
CCTA also promoted the use of emerging IT standards in UK government and in the EU, such as OSI and BS5750 (Quality Management), which led to the publishing of the Quality Management Library and the inception of the [[TickIT]] assessment scheme with DTI, MOD and the participation of software development companies.
In addition to the development of methodologies, CCTA produced a comprehensive set of managerial guidance covering the development of Information Systems under 5 major headings: A. – Management and Planning of IS; B. – Systems Development; C. – Service Management; D. – Office Users; E. – IS Services Industry. The guidance consisted of 27 individual guides and was published commercially as "The Information Systems Guides" ({{ISBN|0-471-92556-X}}) by John Wiley and Sons. The publication is no longer available. This guidance was developed from the practical experience and lessons learned from many UK Government Departments in planning, designing, implementing and monitoring Information Systems and was highly regarded as "best practice". Some parts were translated into other European languages and adopted as national standards.
It also was involved in technical developments, for instance as the sponsor of ''Project SPACE'' in the mid-1980s. Under ''Project SPACE'', the ICL Defence Technology Centre (DTC), working closely with technical staff from CCTA and key security-intensive projects in the Ministry of Defence (such as [[OPCON CCIS]]) and in other sensitive departments, developed an enhanced security variant of [[ICL VME|VME]].
It managed (i.e. ran the servers for) UK national government websites, including the Royal Family's and www.open.gov.uk.
Latest revision as of 11:10, 10 October 2024
Abbreviation | CCTA |
---|---|
Formation | 1957 (as the TSU) |
Dissolved | 2000 (subsumed into the OGC) |
Legal status | Defunct executive government agency |
Purpose | New telecommunications and computer technology for the UK government |
Location |
|
Region served | UK |
Membership | Electronics and computer engineers |
Parent organization | HM Treasury |
Website | www.ccta.gov.uk |
The Central Computer and Telecommunications Agency (CCTA) was a UK government agency providing computer and telecoms support to government departments.
History
[edit]Formation
[edit]Archived records
[edit]CCTA records are held by The National Archives.[1]
In 1957, the UK government formed the Central Computer Agency (CCA) Technical Support Unit (TSU) within HM Treasury to evaluate and advise on computers, initially based around engineers from the telecommunications service. As this unit evolved, it morphed into the Central Computer and Telecommunications Agency, which also had responsibilities for procurement of United Kingdom Government technological equipment, and later, that centrally funded for University and Research Council systems.
Technical services
[edit]Note that nearly all names and authors, quoted or referenced in this section, were CCTA engineers or scientists.
The first external technical publication was in 1960 by J. W. Freebody and J. W. Heron, as “Some engineering factors of importance in relation to the reliability of government A.D.P. systems”. Nearly 30 computer systems had been installed at that time.[2] The conclusion was that reliability was the most important single factor, identifying areas and activities that required investigation by the new organisation. A later career review confirmed that John Freebody was promoted to Staff Engineer and set the task of founding the Technical Support Unit.[3]
In 1965 responsibility for TSU was transferred from HM Treasury to the Ministry of Technology. At that time telecommunications engineering staff comprised 8 dealing with Systems Evaluations, 6 with Peripheral Equipment and 10 in the areas of Accommodation,[4] Testing, and Maintenance. Details of names, grades, qualifications, salary and relevant experience can be found in Hansard Volume 717: debated on Tuesday 27 July 1965.[5]
Technical services reliability and acceptance trials
[edit]Procurement contracts included guaranteed service levels where, at least in the early days, was monitored by TSU engineers, to whom all fault incident occurrences and system availability levels were submitted on a monthly basis. The contracts also included requirements to run on-site and sometimes predelivery acceptance trials of a specified format, designed and supervised by engineering staff.
The acceptance tests comprised a series of demonstrations to verify that everything had been delivered and appeared to function, followed by stress testing of up to 40 hours, over a few days, depending on system size. For the latter, engineering test programs were included and available user applications. Then, the criterion of success was to achieve a given level of uptime. In 1968, new procedures were introduced, particularly involving stress testing, where each main tests were aimed to run for 15 minutes, with criteria that, besides a maximum time limit, each test was required to run failure free six times in succession.
During this period, on invitation, five CCTA engineers presented papers on acceptance testing at the Institution of Electrical Engineers.[6]
At this stage, concern was raised regarding how to test computers with the new Multiprogramming Operating Systems. The problem was solved by Roy Longbottom who, at various promotion levels between 1968 and 1982, was responsible for designing and supervising acceptance trials of the larger scientific systems. He produced 17 programs, written in the FORTRAN programming language, 5 for CPUs, 4 for disk drives, 3 for magnetic tape units and others for printers, card and paper tape punchers and readers. Program code listings are included in the book *Computer System Reliability* (Appendix 1).[7]
By 1972, 800 acceptance tests of computers systems and enhancements had been carried out including 500 for complete systems, reported in The Post Office Electrical Engineers Journal.[8] The latter tests included 100 using the new procedures from 11 different contractors. The first candidate was an IBM 360 Model 65 at University College London in 1971, then in 1972 by trials on all mainframes, minicomputers and supercomputers covered by CCTA contracts. Later that year, top end systems tested were the $5 million scalar supercomputers CDC 7600 at University of London Computer Centre and IBM 360/195 at UK Meteorological Office. Not included in these 100, but significant, 1973 trials included the Atlas (computer) at Cambridge University, a latter day version of the 1962 UK supercomputer. During the 100 trials, 23 systems failed to meet the specified criteria, at the first attempt.
By 1979 more than 1600 acceptance tests of computers systems and enhancements had been carried out. For the latest 400 system tests, carried out under the new procedures, 14% were recorded as failures and 24% as having a conditional pass. Up to three attempts were allowed with none being completely rejected, albeit some accepted with penalty conditions. See Chapter 10 in the Longbottom book.[7]
Detailed analysis of fault returns, hands on observations during acceptance trials and system appraisal activities lead to a deeper understanding of reliability issues, published in a 1972 Radio and Electronic Engineer Journal, titled “Analysis of Computer System Reliability and Maintainability”, with probability considerations.[9] Later, came a conference paper “Reliability of Computer Systems” (Archive) [10] and the Roy Longbottom book [7] that particularly acknowledges input provided by Ian Thomson on computer system maintainability and Trevor Jones on environmental aspects.
Trials in 1979 included the first Cray 1 vector supercomputer to be delivered to the UK at Atomic Weapons Research Establishment and, by 1982, the CDC Cyber 205 for UK Meteorological Office, where total system costs could be $10 million. Both these systems had pre-delivery trials in the USA. For these, Roy Longbottom converted the scalar CPU programs to fully exploit capabilities of the new vector processors. Results of the converted Vector Whetstone (benchmark) were included in the paper “Performance of Multi-User Supercomputing Facilities” presented in the 1989 Fourth International Conference on Supercomputing, Santa Clara.[11][12]
Details were also included in the June 1990 Advanced Computing Seminar at Natural Environment Research Council Wallingford. This led to Council for the Central Laboratory of the Research Councils Distributed Computing Support collecting results from running “on a variety of machines, including vector supercomputers, minisupers, super-workstations and workstations, together with that obtained on a number of vector CPUs and on single nodes of various MPP machines “. More than 200 results are included, up to 2006, in the report available on the Wayback Machine Archive in entries to at least the year 2007 section.[13]
For the systems identified as supercomputers, there were nine acceptance testing sessions, two of which were failures, one due to excessive CPU problems and the other due to design issues on the I/O subsystem. Both of these were induced by the CCTA stress testing programs.
Technical services system appraisal
[edit]During the early days there were considerations of future technology, including telecommunications in the 1970 book “Data transmission - the future : the development of data transmission to meet future users' needs” [14] found in National Library of Australia catalog 169638. But the main emphasis was appraisal of the latest computer system hardware and software. Initially, this involved collecting information on all appropriate new products, followed by more detailed investigation when being considered for a new project. This included a tour of the production factory and discussions with higher level engineering, design and quality control staff.
The National Archives CCTA records [1] include technical appraisal reports (at the time of writing), up to 1986 (search for quoted reports). The first in a finally standard format was “System Summary Notes” (range 5000 to 6999), starting in 1967, with such as early IBM 360 mainframes and Digital Equipment Corporation PDP 8 minicomputer, up to the last issue in 1980. These are based on standard forms with numerous entries. Other reports identified in the Archives are “Technical Notes” between 1975 and 1986, “Internal Technical Memoranda” 1973 to 1986 and “Technical Memoranda”1975 to 1986”. The number of reports cannot be easily determined from the provided data..
Computer system performance
[edit]Before cross the board standard benchmarks became available, average speed rating of computers was based on calculations for a mix of instructions with the result given in Kilo Instructions Per Second (KIPS). The most famous was the Gibson Mix for scientific computing. This was included in CCTA calculations that included those for an ADP Mix and a Process Control Mix, in CCTA Technical Note 3806 Issue 5 with 212 sets of results from 18 manufacturers, pre- 1960 to 1971. In 1977, later results were included in CCTA Technical Memorandum 1163, (both via [1]). All those results are also available in a 2017 PDF file.[15]
In 1972 Harold Curnow wrote the Whetstone Benchmark in the FORTRAN programming Language, based on the work of Brian Wichmann of the National Physical Laboratory.[16] This executes 8 test functions, 5 of which involve floating point calculations that dominate running time. Overall performance was calculated in thousands of Whetstone instructions per second (KWIPS). The program became the first general purpose benchmark that set industry standards of computer system performance. Enhancements by Roy Longbottom provided self timing arrangements and calibration to run for a predetermined time on present and future systems, also performance of each of the 8 tests. The calibrated time was mainly for 10 seconds and is still applicable after 50 years.
In 1978, Roy Longbottom, who inherited the role of design authority ot the benchmark, also produced a version to exploit supercomputer processing hardware, covered in reports “Performance of Multi-User Supercomputing Facilities” [11] and “Whither Whetstone? The synthetic benchmark after 15 years” [17] in book.[18]
Original Whetstone Benchmark results are in 1985 CCTA Technical Memorandum 1182, (via Archive [1]). where overall speed is shown as MWIPS (Millions). This contains more than 1000 results for 244 computers from 32 manufacturers.
On achieving 1 MWIPS, the Digital Equipment Corporation VAX-11/780 minicomputer became accepted as the first commercially available 32-bit computer to demonstrate 1 MIPS (million instructions per second),[19] although this was not really appropriate for a benchmark dependent on floating point speed. The rating nevertheless influenced the Dhrystone Benchmark, the second accepted general purpose computer performance measurement program, which has no floating point calculations. Dhrystone produced a result of 1757 Dhrystones per second on the VAX-11/780, leading to a revised measurement of 1 DMIPS (also known as VAX MIPS), obtained by dividing the original result by 1757.
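The DMIPS normalisation is a single division against that VAX-11/780 baseline of 1757 Dhrystones per second; the measured figure in this small C fragment is hypothetical.

#include <stdio.h>

#define VAX_DHRYSTONES_PER_SEC 1757.0   /* VAX-11/780 baseline = 1 DMIPS */

int main(void) {
    double measured = 50000.0;          /* hypothetical Dhrystones/s result */
    printf("%.1f DMIPS (VAX MIPS)\n", measured / VAX_DHRYSTONES_PER_SEC);
    return 0;
}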
The Whetstone Benchmark also had high visibility concerning the floating point performance of Intel CPUs and PCs, starting with the 1980 Intel 8087 coprocessor, as reported in the 1986 Intel Application Report "High Speed Numerics with the 80186/80188 and 8087".[20] The 8087 includes hardware functions for exponential, logarithmic and trigonometric calculations, as used in two of the eight Whetstone Benchmark tests, where these can dominate running time. Only two other benchmarks were included in the Intel procedures, and all three programs showed huge gains over the earlier software based routines.
Later tests by an SSMEC laboratory evaluated Intel 80486 compatible CPU chips using their Universal Chip Analyzer.[21] Considering the two floating point benchmarks used by Intel in the above report, they preferred Whetstone, stating "Whetstone utilizes the complete set of instructions available on early x87 FPUs". This might suggest that the Whetstone Benchmark influenced the hardware instruction set.
CCTA also influenced the programming code for the Linpack and Livermore Loops floating point benchmarks, initially for PC versions, where the original programs were unsuitable, particularly because of the PC's low resolution timer. The new versions, in the C programming language, included the new CCTA automatic calibration function to run for a specified finite time, still applicable 50 years later. Netlib accepted the former, renaming it linpack-pc.c.[22] For the Livermore benchmark, C code was available for executing the loops, but extensive background code, for such functions as data generation, timing parameters and numeric results validation, was in FORTRAN; this was converted to C. At least one other organisation has published a claimed completely rewritten C version that incorporates the unique CCTA background code, with no attribution.
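The timer problem was one of granularity: the original IBM PC clock ticked at about 18.2 Hz, roughly 55 ms per tick, so a loop lasting a fraction of a second could not be timed meaningfully. A sketch in C of the kind of granularity check a calibrating port might perform (an illustration, not the CCTA code itself):

#include <stdio.h>
#include <time.h>

/* Estimate timer granularity by spinning until the clock value changes
   twice; a calibrated benchmark must then run for enough ticks that
   quantisation error becomes negligible. */
int main(void) {
    clock_t t = clock(), edge1, edge2;
    while ((edge1 = clock()) == t)
        ;                               /* align to a tick edge */
    while ((edge2 = clock()) == edge1)
        ;                               /* measure one full tick */
    double tick = (double)(edge2 - edge1) / CLOCKS_PER_SEC;
    printf("Timer granularity: about %.4f s\n", tick);
    printf("Run time for under 1%% timing error: %.2f s\n", tick * 100.0);
    return 0;
}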
CCTA test programs used in acceptance trials had parameters to control running times, enabling valid comparisons of the CPU performance of all systems tested. Following a request for information, these and Whetstone Benchmark results were included in the external publication "A Guide to the Processing Speeds of Computers", covering over 100 different computers with more than 700 results.[23] This included the acknowledgment: "The authors would like to thank colleagues from the Central Computer Agency, namely Mr G Brownlee, Mr H J Curnow and Mr R Longbottom who have helped to collect much of the data making this system possible".
From 1980 Roy Longbottom spent most of his time providing performance consultancy services to Departments and Universities. The latter included attending meetings of the Computer Board for Universities and Research Councils (records in the National Archives).[24] He became a member of the Technical Subgroup of the National Policy Committee on Advanced Research Computers and of the Universities' Benchmark Options Group. The latter involved leading a party to the USA, including discussions with Jack Dongarra and Frank McMahon, authors respectively of the Linpack and Livermore Loops, the key benchmarks of the day for scientific applications.
In 1992, the Science and Engineering Research Council asked CCTA to provide independent observation and reporting on the benchmarking of a new supercomputer for the University of London Computer Centre, using a large sample of typical user applications. Roy Longbottom covered Fujitsu and NEC computers in Japan, with Rob Whetnall overseeing Cray and Convex Computer Corporation systems in the USA. The CCTA scalar and vector Whetstone Benchmarks were also run. A combination of the two can help in evaluating the performance of multi-user supercomputing operation,[11] where the system that demonstrates superior performance on specific applications is not necessarily the best choice, and the level of vectorisation and the number of scalar processors can be more important. In this case, calculations from the results of the CCTA programs indicated the same choice of system as the university's benchmark.
The aforementioned performance consultancy covered more than 45 projects between 1990 and 1993, mainly for data processing applications, with systems from 18 manufacturers including mainframes, minicomputers and PCs. Activities included detailed sizing, modelling, user application based benchmarking, general advice and troubleshooting. CCTA's work was publicised at various conferences, starting with one on in-house software for benchmarking and capacity planning at ECOMA 12 in Munich in 1984,[25] followed by benchmarking and workload characterisation at Edinburgh University in 1986 (page 5).[26]
The next, on database system benchmarks and performance testing, was at a conference on parallel processors at NPL in 1992; it provided a warning of the dangers for the supercomputer community and was published in a later book.[27]
Finally, a new approach to performance management was suggested, based on the assumption that initial sizing estimates would be incorrect and that corrective actions should be considered at each stage of procurement; it was presented at the UKCMG Conference in Brighton in 1992.[28] The proposal followed performance issues on a number of new small systems using the UNIX operating system, where the causes were identified by measuring the CPU, input/output, communications and memory utilisation of a number of transactions using the UNIX sar performance monitor. The first problem was mainly transactions using too much CPU time, requiring more efficient code or a CPU upgrade. The second was a single disk drive which, although of adequate capacity, was unable to handle the high random access rate; the solution was to spread the data over more than one drive. To help in identifying solutions and "what if" considerations, a sizing model, "A Spreadsheet Computer Performance Queuing Model for Finite User Populations", was produced to indicate instantly the likely impact of changes on response times, throughput and hardware utilisation.[29]
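The behaviour such a finite-population model captures can be sketched with exact mean value analysis for a single queueing centre plus user think time: response time, throughput and utilisation follow by iterating over the population. A minimal C version, with hypothetical service demand and think time values:

#include <stdio.h>

/* Exact mean value analysis for N users cycling between a think state
   (Z seconds) and a single queueing service centre (S seconds per visit).
   S, Z and N are hypothetical illustration values. */
int main(void) {
    const double S = 0.2;    /* service demand per transaction, seconds */
    const double Z = 10.0;   /* user think time, seconds */
    const int N = 50;        /* user population */
    double q = 0.0;          /* mean queue length at the server */

    printf("users  resp(s)  tput(/s)  util\n");
    for (int n = 1; n <= N; n++) {
        double r = S * (1.0 + q);   /* response time seen by an arrival */
        double x = n / (r + Z);     /* system throughput */
        q = x * r;                  /* Little's law at the server */
        if (n == 1 || n % 10 == 0)
            printf("%5d  %7.3f  %8.3f  %5.2f\n", n, r, x, x * S);
    }
    return 0;
}

Re-running such a model with a smaller S (more efficient code or a faster CPU) or with the demand split across two service centres (data spread over two drives) shows instantly the kind of "what if" impact on response times and utilisation that the spreadsheet model was built to provide.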
Other data processing benchmarks produced by CCTA Performance Branch included one measuring the performance of mixes of processor bound activities, written in the COBOL programming language; a total of 129 sets of results covering computers from 22 different manufacturers are in Internal Memo 5219. A second was the Medium System Benchmark, with limited results in Internal Memo 5365 covering 35 systems from 8 manufacturers. This memo also lists the Technical Memoranda numbers of the reports containing full results, in the range 15047 to 15247 (example ICL reports are 15147/1 to 15147/14); see the archived records for the quoted reports.[1] The benchmark comprised six representative real programs with disk and magnetic tape input/output, covering updates, sorting, compiling and multi-stream operation, measuring CPU and elapsed times and the number of data transfers.
CCTA computer benchmarking and testing legacy
After retirement, Roy Longbottom, as the latter-day design authority of the Whetstone Benchmark, converted the latest FORTRAN code into the C programming language, also creating a new series of benchmarks and stress testing programs based on previous CCTA activities. These were made freely available, produced in conjunction with the Compuserve Benchmarks and Standards Forum (see the Wayback Machine archive),[30] covering PC hardware from 1997 to 2008.
Later, with further development, programs and results were made freely available on a dedicated website (which will have a limited lifetime). Historic details from 2008 onwards are in the Wayback Machine archive,[31] where all files appear to be downloadable from most impressions. From 2017 onwards, the details were made available at ResearchGate in more referenceable PDF files. By 2024 there were 40 of these reports to read or download, when a total of more than 76,000 reads and 79 citations were reported. Brief descriptions of all the files are included in an indexing file[32] (download to open the files). The PDFs include 12 for Raspberry Pi computers; from 2019 Roy Longbottom had been recruited by the Raspberry Pi Foundation as a voluntary member of its pre-release alpha testing team.
By the 1990s the Whetstone Benchmark and its results had become relatively popular. A notable quotation appeared in 1985 in "A portable seismic computing benchmark" in the European Association of Geoscientists and Engineers journal: "The only commonly used benchmark to my knowledge is the venerable Whetstone benchmark, designed many years ago to test floating point operations".[33]
Then there was great interest in historic performance. Unlike those for the other Classic Benchmarks (Dhrystone, Linpack and the Livermore Loops), Whetstone result tables were not available in the public domain; in honour of CCTA, this was rectified from 2017 in this and other publications. The first new report was "Computer Speeds From Instruction Mixes pre-1960 to 1971".[15] As with the report that followed, the identified year of first delivery and purchase prices were added.
The second was "Whetstone Benchmark History and Results",[34] with more detail and added results, particularly for PCs, up to 2013, and double the number of computers covered. The most notable citation of this report, and of the Gibson Mix one, was by Tony Voellm, then Google Cloud Performance Engineering Manager, in "Cloud Benchmarking: Fight the black hole",[35] which considered available benchmarks and performance over time with detailed graphs, including those from the Mix and Whetstone reports.
The first of the other reports, attributable to knowledge gained earlier at CCTA but not previously published, is "Computer Speed Claims 1980 to 1996".[36] This covers more than 2000 mainframes, minicomputers, supercomputers and workstations from around 120 suppliers, with the main speeds given in millions of instructions per second (MIPS), millions of floating point operations per second (MFLOPS) and CPU clock speed in MHz. Cost and production year are also included where available.
Next, based on programming in Intel 8086 assembly code learned earlier, is "PC CPUID 1994 to 2013, plus Measured Maximum Speeds Via Assembler Code".[37] This contains 27 pages of PC CPU identification numbers, operating speeds, model ranges and cache sizes by year, followed by the performance of more than 30 types of processor over 12 CPU and memory benchmarks. Separate performance comparison tables are provided for data handled by the CPU, the caches and RAM. The diversity of the results demonstrates the uselessness of general performance comparisons based on a single number.
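The identification numbers in that report come from the x86 CPUID instruction, introduced on later 486-class processors. A minimal C sketch using the GCC/Clang <cpuid.h> helper, reading the standard leaf-0 vendor string and a simplified leaf-1 family/model/stepping decode (extended fields omitted):

#include <stdio.h>
#include <string.h>
#include <cpuid.h>   /* GCC/Clang wrapper for the x86 CPUID instruction */

int main(void) {
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = { 0 };

    /* Leaf 0: the 12-byte vendor string is returned in EBX, EDX, ECX. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Leaf 1: family, model and stepping packed into EAX
       (simplified: extended family/model bits are ignored). */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    printf("Vendor: %s\n", vendor);
    printf("Family %u, model %u, stepping %u\n",
           (eax >> 8) & 0xF, (eax >> 4) & 0xF, eax & 0xF);
    return 0;
}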
The following reports highlight earlier unique CCTA experiences, without which they could not have been produced. The first is "Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets".[38] Results are initially based on the Classic Benchmarks, the first programs to set standards of performance for scientific computing: the 1970 Livermore Loops, the 1972 Whetstone and the 1979 Linpack 100 benchmarks. Further results cover 1979 Vector Whetstone performance, high speed floating point calculations and multiprocessing. The report includes the following comparison with the first version of the Raspberry Pi computer, based on average Livermore Loops speeds, as this benchmark was used to verify the performance of the first Cray 1.
"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1".
The later Pi 400 PC is shown to be 78.8 times faster, a figure that could increase up to fourfold by using all CPU cores.
That quotation was reproduced in numerous Internet posts, some including a reference to the author having worked for "the UK Government Central Computer Agency", as quoted in the report. More than 60 posts were found across LinkedIn, X (Twitter) and Facebook, with more than 30 thousand views. These were based on an HTML version of the comparisons on the author's website (archive copy),[39] where site analytics registered almost 190,000 HTML file views between December 2023 and January 2024, nearly 90% of them for the Cray report. Accesses were from North America 47%, Europe 37%, Asia 11%, Oceania 3% and other 2%, the Agency's involvement being spread around the world.
CCTA influence is also highlighted in “Celebrating 50 years of computer benchmarking and stress testing”.[40]
IS/IT strategies
In this area, CCTA's work during the 1970s, 1980s and 1990s was primarily to (a) develop central government IT professionalism, (b) create a body of knowledge and experience in the successful development and implementation of IS/IT within UK central government, (c) brief Government Ministers on the opportunities for use of IS/IT to support policy initiatives (e.g. the "Citizen's Charter" and "e-government") and (d) encourage and assist UK private sector companies to develop and offer products and services aligned to government needs.
Over the three decades, CCTA's focus shifted from hardware to a business oriented systems approach, with a strong emphasis on business led IS/IT strategies which crossed Departmental (Ministry) boundaries, encompassing several Departments (e.g. CCCJS, the Computerisation of the Central Criminal Justice System). This inter-departmental approach (first mooted in the mid to late 1980s) was revolutionary and met considerable political and departmental opposition.
In October 1994, MI5 took over CCTA's computer security work on protecting the government's network (usually the Treasury's) from hacking. In November 1994, CCTA launched its website. In February 1998 it built and ran the government's secure intranet, with the MoD connected to a separate network. In December 1998, the DfEE moved its server from CCTA at Norwich to NISS (National Information Services and Systems) in Bath when it relaunched its website.[41]
Between 1989 and 1992, CCTA's "Strategic Programmes" Division undertook research on exploiting information systems as a medium for improving the relationship between citizens, businesses and government. This paralleled the launch of the "Citizen's Charter" by the then Prime Minister, John Major, and the creation within the Cabinet Office of the "Citizen's Charter Unit" (CCTA had at this point been moved from HM Treasury to the Cabinet Office). The research and work focused on identifying ways of simplifying the interaction between citizens and government through the use of IS/IT. Two major TV documentaries were produced by CCTA, "Information and the Citizen" and "Hymns Ancient and Modern", which explored the business and political issues associated with what was to become "e-government". These were aimed at widening the understanding of senior civil servants (the Whitehall mandarins) of the significant impact of the "Information Age" and at identifying the wider social and economic issues likely to arise from e-government.[citation needed]
Merger
During the late 1990s, CCTA's strategic role was eroded by the Cabinet Office's Central IT Unit (CITU, created by Michael Heseltine in November 1995), and in 2000 CCTA was fully subsumed into the Office of Government Commerce (OGC).[42]
Successors
Since then, the non-procurement IT / Telecommunications co-ordination role has remained in the Cabinet Office, under a number of successive guises:
- The Office of the E-Envoy (OeE)
- The eGovernment Unit (eGU)
- The Transformational Government (TG) Group
- The Government Digital Service[43]
Activities
CCTA was the sponsor of a number of methodologies, including:
- Structured Systems Analysis and Design Method (SSADM)
- PRojects IN Controlled Environments (PRINCE, PRINCE2), which is an evolution of PROMPT, a project management method created by Simpact Systems Ltd in 1975 that was adopted by CCTA in 1979 for Government information system projects
- Information Technology Infrastructure Library (ITIL), which has largely evolved through BS 15000 into the ISO/IEC 20000 series
- The CCTA Risk Analysis and Management Method (CRAMM),[44] developed at the request of the Cabinet Office in 1985
The CCTA Security Group created the first UK Government National Information Security Policy, and developed the early approaches to structured information security for commercial organisations, which saw wider use in the DTI Security Code of Practice, BS 7799 and eventually the ISO/IEC 27000 series.
CCTA also promoted the use of emerging IT standards in UK government and in the EU, such as OSI and BS 5750 (quality management), which led to the publishing of the Quality Management Library and the inception of the TickIT assessment scheme with the DTI, the MOD and the participation of software development companies.
In addition to the development of methodologies, CCTA produced a comprehensive set of managerial guidance covering the development of Information Systems under five major headings: A. Management and Planning of IS; B. Systems Development; C. Service Management; D. Office Users; E. IS Services Industry. The guidance consisted of 27 individual guides and was published commercially as "The Information Systems Guides" (ISBN 0-471-92556-X) by John Wiley and Sons; the publication is no longer available. It was developed from the practical experience and lessons learned from many UK Government Departments in planning, designing, implementing and monitoring Information Systems, and was highly regarded as "best practice". Some parts were translated into other European languages and adopted as national standards.
It also was involved in technical developments, for instance as the sponsor of Project SPACE in the mid-1980s. Under Project SPACE, the ICL Defence Technology Centre (DTC), working closely with technical staff from CCTA and key security-intensive projects in the Ministry of Defence (such as OPCON CCIS) and in other sensitive departments, developed an enhanced security variant of VME.
It also ran the servers of UK national government websites, including the Royal Family's and www.open.gov.uk.
Structure
CCTA's headquarters were in London at Riverwalk House, Vauxhall Bridge Road, SW1, since used by the Government Office for London. This housed the main divisions, with a satellite office in Norwich which focused on IS/IT procurement, a function taken over from HMSO (the Stationery Office) when CCTA was formed.
The office in Norwich was in the east of the city, off the former A47 (now A1042), just west of the present A47 interchange near the former St Andrew's Hospital. The site is now used by the OGC.
The HQ in London had four divisions:
- Project support – major IT programmes – software engineering
- Specialist support – evaluation of individual items of hardware and software
- Strategic Planning and Promotion – project management and office technology (hardware and office automation)
- Advance Technology – telecommunications and advanced technology (latest generation of computers)
References
- ^ a b c d e "CCTA Archive". National Archives. Retrieved 21 June 2024.
- ^ Freebody, J.W.; Heron, K.M. (January 1960). "Some engineering factors of importance in relation to the reliability of government A.D.P. systems". Institution of Electrical Engineers and the British Computer Society Ltd.
- ^ Freebody, J.W. (July 1966). "Notes and Comments" (PDF). The Post Office Electrical Engineers Journal: 135. Retrieved 31 May 2024.
- ^ Stephenson, M.; Fiddes, R.G. (April 1964). "Air-Conditioning in Computer Accommodation" (PDF). The Post Office Electrical Engineers Journal. Retrieved 19 May 2024.
- ^ "Hansard 1". UK Parliament. Retrieved 19 May 2024.
- ^ Iles, S.H.; Longbottom, R.; Thomson, A.M.M.; Skinner, P.J.; Clarke, J. (December 1971). "Colloquium on the specification of acceptance testing of control computers for on-line applications".
- ^ a b c Longbottom, Roy (1980). Computer System Reliability. Wiley. ISBN 0-471-27634-0.
- ^ Longbottom, R.; Stoate, K.W. (July 1972). "Acceptance Trials of Digital Computer Systems" (PDF). The Post Office Electrical Engineers Journal: 91. Retrieved 20 May 2024.
- ^ Longbottom, Roy (December 1972). "Analysis of Computer System Reliability and Maintainability". Radio and Electronic Engineer. 42 (12): 537. doi:10.1049/ree.1972.0092. Retrieved 20 May 2024.
- ^ Longbottom, R. Reliability of Computer Systems. ECOMA-10; Munich 1982. Retrieved 20 May 2024.
- ^ a b c Longbottom, Roy (1989). "Google Scholar References". p. 30. Retrieved 18 May 2024.
- ^ Proceedings, Fourth International Conference on Supercomputing and Third World Supercomputer Exhibition. Santa Clara Convention Center, Santa Clara, CA, USA: International Supercomputing Institute. April 1989. Retrieved 18 May 2024.
- ^ "The CCLRC Vector Whetstone Benchmark Results Upt To 2006". Retrieved 10 June 2024.
- ^ Thomson, A.M.M.; Fiddes, R.G. (1970). Data transmission - the future : the development of data transmission to meet future users' needs. National Library of Australia. Retrieved 22 May 2024.
- ^ a b Longbottom, Roy (2017). "Computer Speeds From Instruction Mixes pre-1960 to 1971" (PDF). researchgate.net. doi:10.13140/RG.2.2.14182.93765. Retrieved 23 April 2024.
- ^ Curnow, H.J.; Wichmann, B.A. (1976). "A Synthetic Benchmark". The Computer Journal. 19: 43–49. doi:10.1093/comjnl/19.1.43. Retrieved 24 May 2024.
- ^ Curnow, H.J. (September 1990). Whither Whetstone? The synthetic benchmark after 15 years. Chapman & Hall. pp. 260–266. ISBN 978-0-442-31198-8. Retrieved 26 May 2024.
- ^ Evaluating supercomputers: strategies for exploiting, evaluating and benchmarking computers with advanced architectures. Chapman & Hall, Ltd. United Kingdom. 1990. ASIN 0412378604.
- ^ "CERN-OBJ-IT-025 Computing and computers Model of the VAX-11/780". Retrieved 26 May 2024.
- ^ "High Speed Numerics with the 80186/80188 and 8087" (PDF). 1986. Retrieved 10 July 2024.
- ^ "Investigating SSMEC's (State Micro) 486s with the UCA". 25 June 2024. Retrieved 10 July 2024.
- ^ "Linpack 100 Benchmark for PC Systems". Netlib. Retrieved 8 June 2024.
- ^ Nott, C.W.; Wichmann, B.A. (1977). "A Guide to the Processing Speeds of Computers". Retrieved 27 May 2024.
- ^ "The Computer Board for Universities and Research Councils". National Archives. Retrieved 27 May 2024.
- ^ Longbottom, R. Performance Test Harness for Benchmarking and Capacity Planning of On-Line Systems. ECOMA-12; Munich 1984.
- ^ Longbottom, R. Benchmarking and Workload Characterisation. Second Computer and Telecommunications Engineering Workshop. University of Edinburgh. September 1986. doi:10.1145/32100. Retrieved 28 May 2024.
- ^ Dongarra, J.J.; Gentzsch, W. (1993). Computer Benchmarks. Elsevier Science Publishers. p. 339. Retrieved 28 May 2024.
- ^ Longbottom, R. A Performance Management Methodology for IS Procurement. UKCMG International Conference; Management and Performance Evaluation of Computer Systems, May 1992. Brighton.
- ^ Longbottom, Roy (October 2017). "A Spreadsheet Computer Performance Queuing Model for Finite User Populations.pdf" (PDF). researchgate.net. doi:10.13140/RG.2.2.18376.01280. Retrieved 3 May 2024.
- ^ "Compuserve Benchmarks and Standards Forum". Wayback Machine Archive. Archived from the original on 6 December 2008. Retrieved 4 June 2024.
- ^ "Archive roylongbottom.org.uk". Wayback Machine Archive. Retrieved 4 June 2024.
- ^ Longbottom, Roy. "Computer Benchmarks and Stress Tests and Performance History Index" (PDF). Retrieved 4 June 2024.
- ^ Hatton, H. (1 August 1985). "A portable seismic computing benchmark". European Association of Geoscientists and Engineers Journal. 3 (8). doi:10.3997/1365-2397.1985016. Retrieved 5 June 2024.
- ^ Longbottom, Roy (July 2017). "Whetstone Benchmark History and Results" (PDF). researchgate.net. doi:10.13140/RG.2.2.26267.77603. Retrieved 5 June 2024.
- ^ Voellm, A.F. (September 2013). "Cloud Benchmarking: Fight the black hole" (PDF). Retrieved 5 June 2024.
- ^ Longbottom, Roy (July 2017). "Computer Speed Claims 1980 to 1996" (PDF). researchgate.net. doi:10.13140/RG.2.2.22571.54569. Retrieved 6 June 2024.
- ^ Longbottom, Roy (July 2017). "PC CPUID 1994 to 2013, plus Measured Maximum Speeds Via Assembler Code" (PDF). researchgate.net. doi:10.13140/RG.2.2.12519.14247. Retrieved 6 June 2024.
- ^ Longbottom, Roy (March 2022). "Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets" (PDF). doi:10.13140/RG.2.2.27437.56804. Retrieved 6 June 2024.
- ^ Longbottom, Roy (March 2022). "Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets". Wayback Machine. Wayback Machine Archive. Archived from the original on 7 April 2024. Retrieved 16 June 2024.
- ^ Longbottom, Roy (September 2022). "Celebrating 50 years of computer benchmarking and stress testing" (PDF). researchgate.net. Retrieved 16 June 2024.
- ^ "BBC News | Education | The virtual education department". BBC News. Retrieved 13 October 2023.
- ^ Office of Government Commerce Open for Business – OGC press release. Retrieved 28 August 2007
- ^ Government Digital Service. Retrieved 4 January 2014
- ^ History of CRAMM Archived 28 April 2008 at the Wayback Machine
- Computer science in the United Kingdom
- Computer science organizations
- Defunct executive agencies of the United Kingdom government
- Government agencies established in 1957
- Government agencies disestablished in 2000
- HM Treasury
- Information technology management
- Information technology organisations based in the United Kingdom
- Organisations based in Norwich
- PRINCE2
- Software engineering organizations
- Software engineering researchers
- Science and technology in Norfolk
- Scientific organisations based in the United Kingdom
- Telecommunications organisations in the United Kingdom