Wikimedia servers


Wikipedia and the other Wikimedia projects run on a large number of servers. More information can be found on the [$url1 Wikimedia Foundation's technical blog] and in [$url2 blog posts by Phabricator users].

System architecture

Simplified overview of the MediaWiki software as deployed by the Wikimedia Foundation (as of August 2022). In essence, a complex version of a LAMP stack.

Network topology

The network topology is described in 'Network design' on Wikitech.

Software

  • Our DNS servers run gdnsd. We use geographic DNS to distribute requests among our data centers (see Hosting below) depending on the client's location; the idea is sketched in the first example after this list.
  • We use Linux Virtual Server (LVS) on commodity servers to balance incoming requests. LVS is also used as an internal load balancer to distribute MediaWiki requests. For backend monitoring and failover we have our own system called PyBal.
  • For regular MediaWiki web requests (article/API) we use Varnish and Apache Traffic Server as caching proxy servers in front of Apache HTTP Server.
  • All our servers run Debian GNU/Linux.
  • For distributed object storage we use Swift.
  • Our main web application is MediaWiki, written in PHP (~70%) and JavaScript (~30%).[1]
  • Our structured data has been stored in MariaDB since 2013.[2] We group wikis into clusters, and each cluster is served by several MariaDB servers replicated in a single-master configuration.
  • We use Memcached for caching of database query and computation results; the get-or-compute pattern is shown in the second sketch after this list.
  • For full-text search we use Elasticsearch (Extension:CirrusSearch).
  • https://noc.wikimedia.org/ – Wikimedia configuration files.
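As a first example, here is a minimal Python sketch of the geographic-DNS idea from the list above: answer each client with the data center closest to it. The coordinates and the haversine-based selection are illustrative assumptions made for this page; gdnsd's actual behaviour is driven by its configuration and a GeoIP database, not by code like this.

from math import radians, sin, cos, asin, sqrt

# Approximate coordinates of the sites named in this article
# (illustrative values, not operational data).
DATACENTERS = {
    "eqiad": (39.0, -77.5),    # Ashburn, VA
    "codfw": (32.9, -96.9),    # Carrollton, TX
    "esams": (52.4, 4.8),      # Amsterdam area
    "ulsfo": (37.8, -122.4),   # San Francisco, CA
    "eqsin": (1.35, 103.8),    # Singapore
    "drmrs": (43.3, 5.4),      # Marseille
    "magru": (-23.5, -46.6),   # São Paulo
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_datacenter(client_latlon):
    """Return the name of the geographically closest data center."""
    return min(DATACENTERS, key=lambda dc: haversine_km(client_latlon, DATACENTERS[dc]))

print(nearest_datacenter((48.9, 2.4)))    # Paris -> esams
print(nearest_datacenter((35.7, 139.7)))  # Tokyo -> eqsin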
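As a second sketch, the get-or-compute caching pattern that Memcached enables in front of the databases: look in the cache first, and only run the query on a miss. The key scheme, the run_query callable and the TTL below are hypothetical stand-ins, not MediaWiki's actual cache interface.

import hashlib
import json

from pymemcache.client.base import Client  # third party: pip install pymemcache

cache = Client(("127.0.0.1", 11211))  # a local memcached instance

def cached_query(sql, run_query, ttl=300):
    """Serve a database result from Memcached when possible.

    run_query is a hypothetical callable standing in for a real
    database call; it is only invoked on a cache miss.
    """
    key = "sql:" + hashlib.sha1(sql.encode()).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)  # cache hit: the database is never touched
    result = run_query(sql)     # cache miss: compute the result ...
    cache.set(key, json.dumps(result), expire=ttl)  # ... and remember it
    return result

# Example use with a stand-in query function:
rows = cached_query(
    "SELECT page_title FROM page LIMIT 3",
    run_query=lambda sql: [["Main_Page"], ["Wikipedia"], ["MediaWiki"]],
)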
Wikimedia server racks at CyrusOne

Hosting

As of April 2024, we have the following colocation facilities (each name except magru combines an abbreviation of the facility operator's name with the code of a nearby airport):

eqiad
Application services (primary) at Equinix in Ashburn, Virginia (Washington, DC area).
codfw
Application services (secondary) at CyrusOne in Carrollton, Texas (Dallas-Fort Worth area).
esams
Caching at EvoSwitch in Amsterdam, the Netherlands.[3]
ulsfo
Caching at UnitedLayer in San Francisco.
eqsin
Caching at Equinix in Singapore.
drmrs
Caching at Digital Realty in Marseille.
magru
Caching in São Paulo, Brazil.

History

The backend web and database servers are in Ashburn, with Carrollton standing by to handle emergency fallback in the future. Carrollton was chosen for this role as a result of the 2013 Datacenter RfC. At EvoSwitch, we have a Varnish cache cluster and several miscellaneous servers. The Kennisnet location is now used only for network access and routing.

Ashburn (eqiad) became the primary data center in January 2013, taking over from Tampa (pmtpa and sdtpa), which had been the main data center since 2004. Around April 2014, sdtpa (Equinix, formerly Switch and Data, in Tampa, Florida, which provided networking for pmtpa) was shut down, followed by pmtpa (Hostway, formerly PowerMedium, in Tampa, Florida) in October 2014.

In the past we had other caching locations, such as Seoul (yaseo, Yahoo!) and Paris (lopar, Lost Oasis); the WMF 2010–2015 strategic plan's reach target states: "additional caching centers in key locations to manage increased traffic from Latin America, Asia and the Middle East, as well as to ensure reasonable and consistent load times no matter where a reader is located."

EvoSwitch and Kennisnet are recognised as benefactors for their in-kind donations. See the current list of benefactors.

A list of servers and their functions used to be available at the server roles page; no such list is currently maintained publicly (perhaps the private racktables tool has one). It used to be possible to see a compact table of all servers grouped by type on Icinga, but this is no longer publicly available. However, the Puppet configuration provides a good reference for the software each server runs.

B-roll of servers in Texas in 2015

Status and monitoring

You can check one of the following sites if you want to know whether the Wikimedia servers are overloaded, or simply to see how they are doing.

If you are seeing errors in real time, visit #wikimedia-tech on irc.libera.chat. Check the topic to see whether someone is already looking into the problem you are having. If not, please report your problem to the channel. It helps to report specific symptoms: the exact text of any error messages, what you were doing right before the error occurred, and, if you can tell, which server(s) are generating the error.

Energy consumption

In 2017, the WMF Board of Trustees adopted a resolution committing to minimize the Foundation's overall environmental impact, especially around data centers, through the use of green energy. The community-led Sustainability Initiative, created in 2015, aims to reduce the environmental impact of the servers by calling for renewable energy to power them.

The Wikimedia Foundation's servers are spread across six colocation data centers: Virginia, Texas and San Francisco in the United States, Amsterdam and Marseille in Europe, and Singapore in Asia.

In 2021, the servers drew an average of 358.8 kW, which adds up to about 3.143 GWh (gigawatt-hours) of electrical energy per year (358.8 kW × 8,760 h/yr ≈ 3,143,000 kWh). The total carbon footprint of the servers was 1,073 metric tons CO2-eq in 2021.[4]

Only the few servers in Amsterdam and in Marseille run on renewable energy; the others use various conventional energy mixes. In 2016, just 9% of the Wikimedia Foundation data centers' energy came from renewable sources, with the rest coming mostly from coal, gas and nuclear power (34%, 28%, and 28%, respectively). The bulk of the Wikimedia Foundation's electricity demand is in Virginia and Texas, both of which have very fossil-fuel-heavy grids.

Per-data-center energy details (columns of the original table: server name, data center location, provider, date opened, average energy consumption in kW, energy sources, carbon footprint in CO2 per year, renewable option and cost):

eqiad
Location: Ashburn, VA 20146–20149, USA. Provider: Equinix (website). Opened: February 2011.
Average energy consumption: 130 kW (May 2016), 152 kW (May 2015).
Energy sources (2016): 32% coal, 20% natural gas, 25% nuclear, 17% renewable.
Carbon footprint: 1,040,000 lb = 520 short tons = 470 metric tons CO2/year, computed as
0.32 × 130 kW × 8765.76 hr/yr × 2.1 lb CO2/kWh (coal)
+ 0.20 × 130 kW × 8765.76 hr/yr × 1.22 lb CO2/kWh (natural gas)
+ 0.25 × 130 kW × 8765.76 hr/yr × 0 lb CO2/kWh (nuclear)
+ 0.17 × 130 kW × 8765.76 hr/yr × 0 lb CO2/kWh (renewable).
Renewable option and cost: in 2015, Equinix made "a long-term commitment to use 100 percent clean and renewable energy", a pledge it renewed in 2017.

codfw
Location: Carrollton, TX 75007, USA. Provider: CyrusOne (website). Opened: May 2014.
Average energy consumption: 77 kW (May 2016), 70 kW (May 2015).
Energy sources (2016): 23% coal, 56% natural gas, 6% nuclear, 14% wind, 1% hydro/biomass/solar/other (Oncor/ERCOT).
Carbon footprint: 790,000 lb = 400 short tons = 360 metric tons CO2/year, computed as
0.23 × 77 kW × 8765.76 hr/yr × 2.1 lb CO2/kWh (coal)
+ 0.56 × 77 kW × 8765.76 hr/yr × 1.22 lb CO2/kWh (natural gas)
+ 0.06 × 77 kW × 8765.76 hr/yr × 0 lb CO2/kWh (nuclear)
+ 0.15 × 77 kW × 8765.76 hr/yr × 0 lb CO2/kWh (renewables).
Renewable option and cost: ?

esams
Location: Haarlem, 2031 BE, Netherlands. Provider: EvoSwitch (website). Opened: December 2008.
Average energy consumption: < 10 kW (May 2016), 10 kW (May 2015).
Energy sources: "a combination of wind power, hydro and biomass".
Carbon footprint: 0.
Renewable option and cost: n.a.

ulsfo
Location: San Francisco, CA 94124, USA. Provider: UnitedLayer (website). Opened: June 2012.
Average energy consumption: < 5 kW (May 2016), < 5 kW (May 2015).
Energy sources (2016): 25% natural gas, 23% nuclear, 30% renewable, 6% hydro, 17% not specified (PG&E).
Carbon footprint: 13,000 lb = 6.7 short tons = 6.1 metric tons CO2/year (plus an unspecified share), computed as
0.00 × 5 kW × 8765.76 hr/yr × 2.1 lb CO2/kWh (coal)
+ 0.25 × 5 kW × 8765.76 hr/yr × 1.22 lb CO2/kWh (natural gas)
+ 0.23 × 5 kW × 8765.76 hr/yr × 0 lb CO2/kWh (nuclear)
+ 0.36 × 5 kW × 8765.76 hr/yr × 0 lb CO2/kWh (hydro/renewable)
+ 0.17 × 5 kW × 8765.76 hr/yr × ? lb CO2/kWh (unspecified).
Renewable option and cost: ?

eqsin
Location: Singapore. Provider: Equinix (website). Opened: ?
Average energy consumption: ?. Energy sources: ?. Carbon footprint: ?. Renewable option and cost: ?

drmrs
Location: Marseille. Provider: Digital Realty (website). Opened: ?
Average energy consumption: ?. Energy sources: ?. Carbon footprint: ?. Renewable option and cost: ?
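The per-site carbon figures above all follow the same arithmetic: share of each energy source × average power draw × hours per year × an emission factor in lb CO2/kWh. Below is a minimal Python sketch of that calculation, using the emission factors and the eqiad 2016 mix from the table; the function and variable names are illustrative, not part of any Wikimedia tooling.

# Reproduce the table's footprint arithmetic:
#   share × average draw (kW) × hours/year × emission factor (lb CO2/kWh)
HOURS_PER_YEAR = 8765.76  # 365.24 days, as used in the table

EMISSION_LB_PER_KWH = {
    "coal": 2.1,
    "natural gas": 1.22,
    "nuclear": 0.0,
    "renewable": 0.0,
}

def footprint_lb(avg_kw: float, mix: dict[str, float]) -> float:
    """Annual CO2 footprint in pounds for a site with the given energy mix."""
    kwh_per_year = avg_kw * HOURS_PER_YEAR
    return sum(share * kwh_per_year * EMISSION_LB_PER_KWH[source]
               for source, share in mix.items())

# eqiad, 2016 mix from the table:
lb = footprint_lb(130, {"coal": 0.32, "natural gas": 0.20,
                        "nuclear": 0.25, "renewable": 0.17})
print(f"{lb:,.0f} lb = {lb / 2000:,.0f} short tons = {lb / 2204.62:,.0f} metric tons")
# Prints roughly 1,043,800 lb, which the table rounds to
# 1,040,000 lb = 520 short tons = 470 metric tons.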

See also

More hardware information

  • wikitech:Clusters – technical and usually more up-to-date information on the Wikimedia clusters

Admin logs

Offsite traffic pages

Historical information

References

  1. See MediaWiki analysis, MediaWiki WMF-supported extensions analysis.
  2. "Wikipedia Adopts MariaDB" (text/html). blog.wikimedia.org. Wikimedia Foundation, Inc. 2013-04-22. Retrieved 2014-07-20. 
  3. Suffered a major DoS attack on September 6/7, 2019. See dedicated article on WMF website.
  4. Wikimedia Foundation Environmental Sustainability (Carbon Footprint) Report for 2021