This technical report reviews measurements of mass and volume, including a review of the SI for mass, length, and amount of substance; principles of mass measurement; calibration of masses and glassware; gravimetry; volumetry; and titrimetry. Measurement uncertainty, metrological traceability and aspects of quality assurance are also treated.
In this Technical Report we describe and define measurements of mass or volume that are used in chemistry. We begin by introducing units for mass, volume, and amount of substance in the International System of Units (SI) [VIM 1.16], then describe the operations and requirements to obtain accurate measurements of mass and volume. This will lay the groundwork for describing some specific analytical operations and measurements, mainly volumetric and gravimetric analyses. The text will become the basis for a chapter in the fourth edition of the IUPAC Orange Book .
Terms defined in the International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM)  are given in italics on first use in a section and are referred to as [VIM x.y]. Other terms in italics refer to terms defined within this paper. Please note that:
Terms included without change from previous PAC Recommendations are cited with their original number, e.g. Source:  1.1.01;
Terms that have minor changes will include a note, e.g. Source:  1.1.01 (with minor change); and
Terms with changes based on any Source will note this, e.g. Source: Adapted from .
2 Foundations of quantitative chemical measurements
The importance of Antoine-Laurent de Lavoisier (26 August 1743–8 May 1794) to science in different areas was expressed by Joseph Louis Lagrange, who lamented Lavoisier’s beheading in the political turmoil of the French revolution: “It took them only an instant to cut off this head, and one hundred years might not suffice to reproduce its like” (Il ne leur a fallu qu’un moment pour faire tomber cette tête, et cent années peut-être ne suffiront pas pour en reproduire une semblable).
The concept of conservation of matter was first outlined by Mikhail Lomonosov (1711–1765) in 1748 , but Lavoisier’s chemical research between 1772 and 1778 included the first truly quantitative chemical experiments, through which he became widely acknowledged as the Father of Chemistry and of Analytical Chemistry in particular. He discovered that, although matter may change its form or shape, its mass always remains the same: Nothing is lost, nothing is created, everything is transformed. We note that luminaries such as Albert Einstein (1879–1955) explained that the conservation of mass is not a basic law of nature, but only an excellent approximation of what we observe. However, chemically, Lavoisier is correct.
2.1 The decimal metric system
Near the end of the eighteenth century, in an effort to improve commercial practice, King Louis XVI of France ordered a new system of measurements. The king’s commission recommended what would become the decimal metric system promoted by Lavoisier. For mass measurements a new unit called a grave was proposed and defined as the mass of a litre of water at the ice point.
Then came the French Revolution. The new Republic adopted the metric system with a few changes. Instead of a grave, the gramme (British spelling gram) was defined as the absolute weight of 1 cm3 of water at its ice point . Since a 1-gram artefact made of water was impractical, a solid metal artefact a thousand times more massive, a kilogram, was chosen as the standard for mass instead.
The creation of the decimal metric system and the subsequent deposition of two platinum standards representing the metre and the kilogram in the Archives de la République in Paris on 22 June 1799 can be seen as the first step in the development of the present International System of Units.
In 1832, Carl Friedrich Gauss (1777–1855) strongly promoted the application of this decimal metric system, together with the second as defined in astronomy, as a coherent system of units for the physical sciences. The Metre Convention (Convention du Mètre), signed by delegates from seventeen countries on 20 May 1875, established, in Article 1, the Bureau International des Poids et Mesures (the BIPM) , charged with providing the basis for a single, coherent system of measurements to be used throughout the world. The General Conference of Weights and Measures (CGPM) was also established, and work began on the construction of new international prototypes of the metre and kilogram, sanctioned in 1889. Together with the astronomical second as the unit of time, these units constituted a three-dimensional mechanical unit system similar to the centimetre-gram-second (CGS) system, but with base units metre, kilogram, and second (the MKS system). The system developed and, in 1960 at the 11th CGPM, it was named the International System of Units (Système International d’Unités, SI).
Prefixes are used in order to express the values of quantities that are either much larger than or much smaller than the SI unit used without any prefix. They may be used with any of the base units [VIM 1.10] and with any of the derived units [VIM 1.11] with special names. When prefixes are used, the prefix name and the unit name are combined to form a single word. Similarly, the prefix symbol and the unit symbol are written without any space to form a single symbol, which may itself be raised to any power. When the base units and derived units are used without any prefixes, the resulting set of units is described as being coherent (coherent derived unit [VIM 1.12]). The use of the prefixes is convenient because it avoids the need to use factors of 10x to express the values of very large or very small quantities. For example, the length of a chemical bond is more conveniently given in nanometres, nm, than in metres, m, and the distance from London to Paris is more conveniently given in kilometres, km, than in metres, m. The kilogram, kg, is an exception, because although it is a base unit, the name already includes a prefix, for historical reasons. Multiples and sub-multiples of the kilogram are written by combining prefixes with the gram: thus, we write milligram, mg, not microkilogram, μkg.
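The prefix arithmetic above can be sketched as a small conversion helper. This is an illustrative example only (the helper names and the subset of prefixes are ours, not from the text; "u" stands in for the micro sign μ):

```python
# Resolving SI prefixes to powers of ten (subset of the full prefix table).
SI_PREFIXES = {
    "G": 1e9, "M": 1e6, "k": 1e3,
    "m": 1e-3, "u": 1e-6, "n": 1e-9,  # "u" stands in for the micro sign
}

def to_base_unit(value: float, prefix: str) -> float:
    """Convert a value expressed with an SI prefix to the unprefixed unit."""
    return value * SI_PREFIXES[prefix]

# Examples in the spirit of the text:
bond_m = to_base_unit(0.154, "n")    # a bond length given in nm, in metres
distance_m = to_base_unit(344, "k")  # a city-to-city distance in km, in metres
# Mass prefixes attach to the gram, not the kilogram: 5 mg = 5e-3 g = 5e-6 kg
mass_kg = to_base_unit(5, "m") / 1000
```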
As science advances and methods of measurement are refined, the definitions of the units have to be revised.
2.2 SI units for mass, volume and amount of substance
The kilogram (symbol kg) is the SI base unit for kind of quantity mass. The other SI base units are the metre (kind of quantity: length, symbol m), second (kind of quantity: time, symbol s), ampere (kind of quantity: electric current, symbol A), kelvin (kind of quantity: thermodynamic temperature, symbol K), mole (kind of quantity: amount of substance, symbol mol), and candela (kind of quantity: luminous intensity, symbol cd). The metre is related to the definition of volume. Relevant units related to mass, volume and amount of substance are given in Sections 2.2.1 and 2.2.2 and Table 2.2-1 below.
| Name | Symbol | SI unit | SI unit symbol |
|---|---|---|---|
| Mass | m | kilogram | kg (also gram, g = 10−3 kg) |
| Volume | V | cubic metre | m3 (also allowed: litre, l or L, = 1 dm3 = 10−3 m3) |
| Amount of substance (chemical amount) | n | mole | mol |
2.2.1 Mass: kilogram
Mass, quantity symbol m, dimension symbol M, reflects the amount of matter within a body regardless of its volume or of any forces acting on it.
Mass is not to be confused with weight, which is the measure of the force of gravity acting on that body. There are two principal ways of referring to mass, depending on the law of physics defining it: gravitational mass and inertial mass.
The gravitational mass (weight) of a body is measured by comparing the body on a beam balance with a set of standard masses; in this way the gravitational factor is eliminated (except for air buoyancy corrections for differing densities – see 3.2.2).
Inertial mass is the mass of a body as determined by its resistance to acceleration. It is obtained from the measured acceleration, a, when the object is subjected to a known force, F, by applying Newton’s Second Law, m=F/a.
The International Prototype of the Kilogram (IPK) is the artefact whose mass defines, at present (2017), the SI unit of mass. It is a cylinder with diameter and height of about 39 mm, made of an alloy of 90 % platinum and 10 % iridium. The IPK has been conserved at the BIPM since 1889, when it was sanctioned by the 1st General Conference on Weights and Measures (CGPM). Initially the IPK had two official copies; over the years, one official copy has been replaced and four others have been added, so that there are now six official copies. There are numerous calibrated copies throughout the world. The kilogram is still the only base unit defined by a prototype of limited accessibility, prone to be damaged or destroyed, and which undergoes drift in mass which cannot be measured directly. (In fact, the German National kilogram prototype, which was stored in Berlin, was never located following World War II.) Unlike definitions of the other base units the IPK is not linked to an unvarying property of nature. Because of these concerns the kilogram is now being redefined [7, 8]. Three of the other base units rely on the definition of the kilogram – the ampere, the mole, and the candela.
Present definition: The kilogram, symbol kg, is the unit of mass; it is equal to the mass of the international prototype of the kilogram.
Proposed draft definition : The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.626 070 040×10−34 when expressed in the unit J s, which is equal to kg m2 s−1, where the metre and the second are defined in terms of c and ∆νCs.
c is the speed of light in a vacuum and ∆νCs is the hyperfine splitting frequency of caesium. In the “New SI” all units will be defined in terms of a set of seven reference constants, to be known as the “defining constants of the SI”, namely the caesium hyperfine splitting frequency, the speed of light in vacuum, the Planck constant, the elementary charge (i.e. the charge on a proton), the Boltzmann constant, the Avogadro constant, and the luminous efficacy of a specified monochromatic source. This is known as the explicit-constant formulation. In the new formulation the definition of the kilogram is dependent on the definitions of the metre and the second.
An alternative to explicitly using the Planck constant in this scheme would be to express the kilogram in terms of the Avogadro constant and the mass of a carbon 12 atom ma(12C) .
At the 25th CGPM meeting in November 2014, four criteria were announced for the quality of measured data in the New SI project that would have to be fulfilled before the CGPM could adopt the revised SI. One criterion was a suitably precise value of the Planck constant obtained using watt balances. The name “watt balance” comes from the fact that the weight of the test mass is balanced by an electromagnetic force proportional to the product of a current and a voltage, i.e. a power, which is measured in watts. The watt balance links the Planck constant h to the mass of the IPK by measuring the mass of a cobalt-based superalloy referenced to the “best standard Pt-Ir alloy”, i.e. it measures the ratio h/mIPK. The goal is to attain a measurement uncertainty of 2 parts in 108, that is 20 μg in 1 kg. As of mid-2017 the criteria are fulfilled, with a value of 6.626 069 934×10−34 kg m2 s−1 with relative uncertainty 1.3×10−8 reported by NIST, and it is expected that the New SI will be announced at the 26th CGPM in 2018.
IUPAC has confirmed its support for the redefinition project .
2.2.2 Volume and time: metre and second
Volume, quantity symbol V, dimension symbol L3, of an object measures the amount of space occupied by that object. The unit for the quantity is the cubic metre, m3 = 103 dm3 = 103 litres (l or L), a unit derived from the base unit of length.
Time, quantity symbol t, dimension symbol T, is measured in units of second, symbol s, which is defined as the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium–133 atom at 0 K.
Originally there were two competing approaches to the definition of the metre: the length of a pendulum having a half-period of one second, or one ten-millionth of the length of the earth’s meridian along a quadrant, i.e. one fourth of the circumference of the earth (see Table 2.2-2). (The uncertainties in this table represent the uncertainty of the reported realization, i.e. the range in which the true value is presumed to be.) The latter was chosen because the period of a pendulum is affected by the force of gravity, which varies slightly over the surface of the earth. Thus, the metre was intended to equal 10−7, or one ten-millionth, of the length of the meridian through Paris from the North Pole to the Equator. However, the first prototype was short by 0.2 mm because researchers miscalculated the flattening of the Earth due to its rotation. Still, this length became the standard, the Mètre des Archives, in 1799. In 1889, a new international prototype was made of an alloy of platinum with 10 percent iridium, to be measured to within a fractional uncertainty of 0.0001 at the melting point of ice. In 1927, the metre was more precisely defined as the distance, at 0 °C, between the axes of the two central lines marked on the bar of platinum-iridium kept at the BIPM, declared Prototype of the Metre by the 1st CGPM, this bar being subject to standard atmospheric pressure and supported on two cylinders of at least one-centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other.
| Realization of definition | Date | Uncertainty of realization |
|---|---|---|
| 1/10 000 000 part of one half of a meridian, measured by Delambre and Méchain | 1795 | 0.5–0.1 mm |
| Length of first prototype Mètre des Archives platinum bar standard | 1799 | 0.05–0.01 mm |
| Length of platinum-iridium bar at melting point of ice (1st CGPM) | 1889 | 0.2–0.1 μm |
| Length of platinum-iridium bar at melting point of ice, atmospheric pressure, supported by two rollers (7th CGPM) | 1927 | n.a. |
| Measurement of 1 650 763.73 wavelengths of light from a specified transition in krypton–86 (11th CGPM) | 1960 | 0.01–0.005 μm |
| Length of the path travelled by light in a vacuum in 1/299 792 458 of a second (17th CGPM) | 1983 | 0.1 nm |
The 1889 definition of the metre, based upon the artefact international prototype of platinum-iridium (still kept at the BIPM under the conditions specified in 1889), was replaced by the CGPM in 1960 using a definition based on a wavelength of krypton–86 radiation. This definition was adopted in order to reduce the uncertainty with which the metre may be realised. In turn, to further reduce the uncertainty, in 1983 the CGPM replaced this latter definition with the current definition:
Metre (unit symbol m) is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.
This definition fixes the value for the speed of light in vacuum at exactly 299 792 458 m s−1.
As a consequence of the New SI, the wording of the definition of the metre is proposed  to be:
The metre, symbol m, is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m s−1, where the second is defined in terms of the caesium frequency ∆νCs.
2.2.3 Amount of substance: mole
In 1971, after lengthy discussions between physicists and chemists, the 14th CGPM brought the number of base units to seven by adding the mole as the base unit for amount of substance, quantity symbol n, dimension symbol N. The mole will also be redefined in the new SI, following a campaign to measure the value of the Avogadro constant to the highest possible accuracy . After consultation with its stakeholders, IUPAC  has proposed a new definition of the mole to be submitted for consideration by the Consultative Committee on Amount of Substance (CCQM) .
Present definition: mole, unit symbol mol, is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12.
In addition, “when the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.”
Proposed definition : The mole, symbol mol, is the SI unit of amount of substance. One mole contains exactly 6.022 140 76×1023 elementary entities. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in mol−1, and is called the Avogadro number.
The amount of substance, symbol n, of a system is a measure of the number of specified elementary entities. An elementary entity may be an atom, a molecule, an ion, an electron, any other particle or specified group of particles.
The stipulated Avogadro number is the fixed numerical value of the Avogadro constant, NA=6.022 140 76×1023 mol−1, which is provided by the CODATA Task Group on Fundamental Constants .
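With the Avogadro number fixed by definition, converting between an amount of substance and a number of entities is a pure multiplication. A minimal sketch (the helper names are ours, not from the text):

```python
# Conversion between amount of substance and number of elementary entities,
# using the exact Avogadro number of the revised SI definition.

AVOGADRO = 6.02214076e23  # mol^-1, fixed numerical value in the revised SI

def entities(amount_mol: float) -> float:
    """Number of specified elementary entities in amount_mol moles."""
    return amount_mol * AVOGADRO

def amount(n_entities: float) -> float:
    """Amount of substance (mol) containing n_entities entities."""
    return n_entities / AVOGADRO
```

For example, `entities(0.5)` gives the number of molecules in half a mole of water, 3.011 070 38×10²³.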
3 Measurement of mass
3.1 Precision balances
The measurement of mass is central to the quantification of material substances. A balance measures mass by sensing the weight force with which an object presses down on the balance pan. Weight is the force exerted on a body by the gravitational field of the earth, and is measured in the unit of force, the newton, N. The weight force acting on a 1 kg mass depends on geographic and cosmic factors. However, for mass measurements using mechanical balances, the weight of the unknown object is equilibrated at the same place and time against the weight of an object of known mass (i.e. of a standard). For high-precision measurements, the buoyancy caused by the surrounding air must be taken into consideration. This correction can easily be calculated if the densities of the known and unknown masses and of the air are known. See 3.2.2.
3.1.2 Types of balances
Balances are available with different capacities and sensitivities. The standard balance for many operations in analytical chemistry is the analytical balance. Electronic analytical balances can be purchased with different weighing ranges and readabilities. A standard analytical balance typically has a maximum capacity of 300 g and a readability of 0.1 mg. Semi-microbalances (capacity up to 200 g, readability 0.01 mg), microbalances (capacity up to 50 g, readability 1 μg), and ultra-microbalances (capacity up to 20 g, readability 0.1 μg) are commercially available. Quartz crystal microbalances with a 100 μg range that can detect 1 ng changes are available. These balances utilise a thin quartz crystal disk oscillating at, for example, 10 MHz. The frequency of oscillation changes with any change in mass, and this reading is converted into mass units. See Section 3.3.
Table 3.1-1 shows a classification of the types of balances used in chemistry and medicine  that are based on the OIML standard . The typical maximum capacity is taken from manufacturers’ descriptions of the ranges of balances.
| Type of balance | Typical maximum capacity/g | Number of digits after decimal position (g) | Accuracy class |
|---|---|---|---|
| Technical balance | 10 000 | 0–1 | III |
Electronic balances and their accompanying weight sets are calibrated on the basis of conventional mass. The conventions are: the air density is 1.2 kg/m3 and the density of the weight pieces is 8000 kg/m3 [17, 18]. Details for performing calibrations are given below.
The calibration certificate and the manufacturer’s recommendations on measurement uncertainty estimates (instrumental measurement uncertainty [VIM 4.24]) take into account three contributions: measurement repeatability [VIM 2.21], resolution of a displaying device [VIM 4.15] (also called readability) in relation to the balance scale, and any datum measurement error [VIM 4.27]. (See Section 7.2 and ).
3.2 Calibrating weights and volumetric glassware
3.2.1 Standard weights for calibration
Standards agencies specify tolerances for standard masses. The International Organization of Legal Metrology Recommendation OIML R-111 applies to weights with nominal values of mass from 1 mg to 5000 kg in the E1, E2, F1, F2, M1, M1–2, M2, M2–3, and M3 accuracy classes . The class designation of a weight or weight set meets metrological requirements intended to maintain the mass values within specified limits. The error in a weight used for the verification of a weighing instrument shall not exceed one third of the maximum permissible error for an instrument .
Tolerance limits  (maximum permissible measurement error [VIM 4.26]), T, for weights below 10 g of nominal mass W g are calculated from the equation:
where the unit of T is mg. Commercially available weights for calibration have masses metrologically traceable  to national standards and ultimately the SI.
3.2.2 Mass in air and a vacuum
Determination of mass on a common laboratory balance gives the mass in air. The object displaces its volume in air and is buoyed up by the weight of the air displaced, according to Archimedes’ principle. The density of air is taken as 0.0012 g cm−3. If the density of the object being weighed and the density of the balance weights are the same, they will be buoyed by the same amount, and the recorded mass will be that in a vacuum. The density of a typical weight is about 8000 kg m−3, i.e. 8 g cm−3. If the densities are markedly different, the difference in buoyancy will lead to a small error in the recorded mass. Examples are the weighing of very dense objects (e.g. platinum, 21.4 g cm−3, or mercury, 13.6 g cm−3) or light objects (e.g. water, 1.0 g cm−3). For very careful work, a correction should be made for this error. An example is found in the calibration of glassware by measuring the mass of water or mercury delivered or contained by the glassware (see 3.2.3 below).
The mass of an object in air (mair) can be corrected to its mass in vacuum (mvac) by

mvac = mair × [1 + ρair × (1/ρo − 1/ρw)]

where ρo = density of the object, ρw = density of the standard weights, and ρair = density of air (taken as 1.225 kg m−3, i.e. 0.001225 g cm−3). The density of steel weights is 7.8 g cm−3 (7800 kg m−3).
In many analytical measurements, a correction is not necessary, because buoyancy errors will cancel out in percent composition calculations. Also, the corrected value does not differ significantly from the non-corrected value for typical uses.
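The buoyancy correction described above can be sketched numerically. This is a minimal illustration (helper name is ours) using the first-order formula mvac ≈ mair × (1 + ρair/ρo − ρair/ρw) and the densities quoted in the text:

```python
# Air-buoyancy correction: mass weighed in air -> mass in vacuum.
# Densities follow the text: air 0.001225 g/cm3, steel weights 7.8 g/cm3.

RHO_AIR = 0.001225   # g cm^-3, density of air
RHO_WEIGHTS = 7.8    # g cm^-3, density of steel calibration weights

def mass_in_vacuum(m_air: float, rho_object: float) -> float:
    """Correct a mass weighed in air (grams) to its value in vacuum."""
    return m_air * (1 + RHO_AIR / rho_object - RHO_AIR / RHO_WEIGHTS)

# Weighing 100 g of water (1.0 g/cm3): correction is about +0.1 g
water_corrected = mass_in_vacuum(100.0, 1.0)
# Weighing 100 g of platinum (21.4 g/cm3): correction is much smaller
platinum_corrected = mass_in_vacuum(100.0, 21.4)
```

As the text notes, when object and weights have the same density the two buoyancy terms cancel and no correction results.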
The concept of conventional mass aims to simplify the determination of the mass of weights under conditions in ambient air. It implies that conventional mass does not differ from the physical quantity mass by more than certain acceptable values when determined under specified limits of air density and the material densities of the weights. See [17, 22, 23, 24].
3.2.3 Volumetric glassware tolerances and calibration
EN ISO in Europe, and ASTM and USP (United States Pharmacopeia) in the USA publish tolerances for different volumetric glassware. The highest standard is Class A, with USP usually having the most exacting standards. These are sufficiently accurate for most quantitative measurements. Table 3.2-1 lists tolerances for Class A volumetric glassware.
Tolerance intervals/mL:

| Capacity/mL (less than and including) | Volumetric flasks | Transfer pipettes | Burettes |
|---|---|---|---|
For more accurate measurements, glassware that has been certified by standards agencies may be purchased. Alternatively, one can calibrate glassware to an accuracy generally as good as the standards specifications, correcting for weight in vacuum, glassware expansion or contraction, and water expansion or contraction. Table 3.2-2 lists the calculated volumes for one gram of water in air at atmospheric pressure at sea level for different temperatures, corrected for buoyancy with stainless steel weights of density 7.8 g cm−3 (7800 kg m−3); the value 7.8 g cm−3 is used in the formulas. The glass volumes are also calculated for the standard temperature of 20 °C, with small adjustments for borosilicate glass expansion or contraction with temperature changes. See also [25, 26].
This spreadsheet (Table 3.2-2) may be reproduced to perform calculations. Alternatively, see Supplementary Information for a copy of the spreadsheet. To run, enter the weight of water at temperature T to calculate volume at temperature T and at 20 °C.
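The spreadsheet calculation can be sketched as follows. This is an illustrative reconstruction, not the spreadsheet itself: the water densities are rounded literature values (our assumption), the buoyancy factor uses the text’s air and steel-weight densities, and 1×10−5 per °C is a typical cubic expansion coefficient assumed for borosilicate glass:

```python
# Gravimetric glassware calibration: mass of water weighed in air at T
# -> volume at T, then corrected to the reference temperature 20 degC.

WATER_DENSITY = {15: 0.99910, 20: 0.99821, 25: 0.99705, 30: 0.99565}  # g/cm3 (approx.)
RHO_AIR, RHO_WEIGHTS = 0.0012, 7.8   # g/cm3, as in the text
ALPHA_GLASS = 1.0e-5                 # cubic expansion of borosilicate, per degC (approx.)

def volume_at_T(mass_air_g: float, t_celsius: int) -> float:
    """Volume (mL) contained/delivered at T, from the mass weighed in air."""
    buoyancy = 1 + RHO_AIR / WATER_DENSITY[t_celsius] - RHO_AIR / RHO_WEIGHTS
    return mass_air_g * buoyancy / WATER_DENSITY[t_celsius]

def volume_at_20(mass_air_g: float, t_celsius: int) -> float:
    """Volume (mL) the same glassware would hold at the reference 20 degC."""
    return volume_at_T(mass_air_g, t_celsius) * (1 - ALPHA_GLASS * (t_celsius - 20))

# Example: a nominal 25 mL flask calibrated at 25 degC with 24.95 g of water
v25 = volume_at_T(24.95, 25)
v20 = volume_at_20(24.95, 25)
```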
Tolerance intervals for other types of volumetric measuring apparatus, such as displacement pipettes in the microlitre range, are available from manufacturers. They may be calibrated gravimetrically, as with glassware, or by delivering and measuring a reagent whose concentration is accurately known.
3.3 Piezoelectric measurement of mass
Microgravimetry can be accomplished by means of a quartz crystal microbalance (QCM), which is a piezoelectric mass-sensing device.
A method was developed by G. Sauerbrey in 1959  for correlating changes in the oscillation frequency of a piezoelectric crystal with the mass deposited on it. The method is used as the primary tool in quartz crystal microbalance experiments for the conversion of frequency to mass and is valid in nearly all applications.
The Sauerbrey equation is a linear relation between the change in the resonant frequency, Δf, of a quartz crystal and the mass, Δm, of a thin rigid film added to its surface:

Δf = −[2f0²/(A√(ρQμQ))] × Δm

where f0 is the resonant frequency of the unencumbered (free) crystal, A is the active electrode area, ρQ is the density, μQ is the shear modulus, and νQ = √(μQ/ρQ) is the shear wave velocity of the quartz crystal.
The proportionality factor, known as the mass sensitivity, SQ = 2f0²/√(ρQμQ), is commonly used to define the frequency-to-mass conversion, Δf = −SQ × Δm/A.
Assuming that the relative variation in thickness equals the relative variation in mass of the quartz crystal, the mass-to-frequency correlation is largely independent of electrode geometry, allowing mass determination without calibration.
The basic Sauerbrey equation applies only to systems in which Δf/f0 < 0.02, and was developed for oscillation in air; it thus applies only to rigid masses attached to the crystal. Quartz crystal microbalance measurements can be performed in liquid, in which case a viscosity-related decrease in the resonant frequency is observed.
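A numeric sketch of the Sauerbrey relation, Δf = −2f0²Δm/(A√(ρQμQ)), solved here for the deposited mass. The quartz constants are the standard AT-cut literature values, assumed rather than taken from the text:

```python
import math

# Sauerbrey relation solved for the added rigid mass.
RHO_Q = 2.648      # g cm^-3, density of quartz (literature value)
MU_Q = 2.947e11    # g cm^-1 s^-2, shear modulus of AT-cut quartz (literature value)

def mass_change(delta_f_hz: float, f0_hz: float, area_cm2: float) -> float:
    """Added rigid mass (grams) from an observed frequency shift (Hz)."""
    return -delta_f_hz * area_cm2 * math.sqrt(RHO_Q * MU_Q) / (2 * f0_hz**2)

# A 5 MHz crystal with 1 cm2 active area, observed -100 Hz shift:
dm_grams = mass_change(-100.0, 5e6, 1.0)   # about 1.77 micrograms
```

The example reproduces the commonly quoted sensitivity of roughly 17.7 ng cm−2 Hz−1 for a 5 MHz crystal.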
3.4 Definition of term
3.4.1 conventional mass, mc
Mass of a body that is balanced by a standard mass under conventionally chosen conditions.
Note 1: The International Organization of Legal Metrology (OIML) recommends that the conventionally chosen conditions are: reference temperature, tref = 20 °C; density of air as a reference value, ρ0 = 1.2 kg m−3; reference density of the standard weight, ρc = 8000 kg m−3.
Note 2: The unit of the kind of quantity “conventional mass” is the kilogram.
Source: Adapted from .
The great majority of chemical reactions which are the basis for the chemical analysis of multiple analytes in various material systems take place in solution, mostly aqueous solutions. Solution, solute, and solvent are defined in  and below at 4.2.2. See also . In this section, we concentrate on inorganic reagents used in common analytical operations.
4.1 Preparation of solutions
The choice of equipment, materials, and procedures to prepare a solution of a certain concentration has to be commensurate with the quality required from the result. The level of uncertainty (target measurement uncertainty [VIM 2.34]) associated with the concentration value ought to be fit for the purpose of its use. Reagents must be of the required purity. Exact values demand appropriate balances for mass measurements, and pipettes and volumetric flasks for volume measurements; for approximate values, graduated cylinders, beakers, and reagent bottles may be adequate.
After calculation of the necessary chemical amounts, solutions are prepared by mass (msolute/msolvent, or msolute/Vsolution) or by volume (Vconcentrated solution/Vfinal dilute solution). See also .
4.1.1 Solutions prepared by dilution
Solutions are often prepared by diluting a known volume, Vconc, of a concentrated solution of concentration cconc, by the addition of solvent up to a larger volume, Vdil. Since the amount of solute is the same before and after dilution,

cconc × Vconc = cdil × Vdil

This is only valid for initial concentrated solutions prepared by volume.
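The dilution relation cconc × Vconc = cdil × Vdil can be sketched as a one-line calculation (helper name is ours):

```python
# Dilution calculation: volume of concentrated stock needed to prepare
# a target volume of dilute solution, from c_conc * V_conc = c_dil * V_dil.

def stock_volume(c_conc: float, c_dil: float, v_dil: float) -> float:
    """Volume of concentrated solution to take (same unit as v_dil)."""
    return c_dil * v_dil / c_conc

# Prepare 250 mL of 0.1 mol/L HCl from a 1.0 mol/L stock:
v_needed = stock_volume(1.0, 0.1, 250.0)   # 25 mL of stock, diluted to 250 mL
```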
4.1.2 Preparation of standard solutions
Standard solutions are often prepared by dissolving an accurately measured mass of a solute of certified purity in a known volume of solvent. Such a solution might fulfil the requirements of a primary measurement standard [VIM 5.4], namely that its property value is established by a method that requires no measurement standard [VIM 5.1] for a quantity [VIM 1.1] of the same kind [VIM 1.2] (known as a primary reference measurement procedure [VIM 2.8]). If the concentration of an intended standard solution is obtained by measurement, for example by titration with a standard solution, it is known as a secondary standard [VIM 5.5].
In addition to certified purity, desirable features of a material chosen to prepare a standard solution include:

Stability in the presence of air;

Absence of any water of hydration which might vary with changing humidity and temperature;

Ready solubility, producing stable solutions in the solvent of choice;

A larger rather than smaller molar mass.
Few materials meet all these conditions; anhydrous sodium carbonate, silver nitrate, and potassium hydrogen phthalate are among those that do. Certified reference materials (CRM) [VIM 5.14] for preparation of standard solutions are available from providers including the National Institute of Standards and Technology (NIST) and other National Measurement Institutes. For example, NIST SRM 84L, potassium hydrogen phthalate, is a CRM sold as an acidimetric primary standard. The certified property is “mass fraction of total acid (replaceable H+) expressed as KHP” and is (0.999 934±0.000 076) g g−1, where 0.000 076 g g−1 is a GUM expanded uncertainty for a level of confidence of approximately 95 % .
Some examples of primary standards:
Arsenic trioxide for making sodium arsenite solution for standardisation of sodium periodate solution
Benzoic acid for standardisation of waterless basic solutions: ethanolic sodium and potassium hydroxide, tetrabutyl ammonium hydroxide (TBAH), and alkali methanolates in methanol, isopropanol, or dimethyl formamide (DMF)
Potassium bromate (KBrO3) for standardisation of sodium thiosulfate solutions
Potassium hydrogen phthalate for standardisation of aqueous bases and perchloric acid in acetic acid solutions
Sodium carbonate for standardisation of aqueous acids: hydrochloric, sulfuric, and nitric acid solutions (but not acetic acid solutions)
Sodium chloride for standardisation of silver nitrate solutions
Sulfanilic acid for standardisation of sodium nitrite solutions
Zinc powder, after being dissolved in sulfuric or hydrochloric acid, for standardisation of Na2H2edta solutions
Disodium oxalate for standardising potassium permanganate.
A solid standard is dissolved in a solvent (usually water) and the concentration may be expressed as a mass concentration (mass/volume) or amount concentration (amount/volume). Typical volume units are L, dm3, m3.
Two widely used standardised solutions are those of hydrochloric acid, HCl, and potassium hydrogen phthalate, KHC8H4O4. Three others which have some instability, but with appropriate precautions have proven to be excellent standard solutions, are sodium thiosulfate, Na2S2O3 (light sensitivity, susceptible to bacterial oxidation), silver nitrate, AgNO3 (light sensitivity), and potassium permanganate, KMnO4 (water oxidation catalysed by light, heat, Mn2+, and MnO2).
For a standard solution prepared by dissolving a known mass mA of component A (molar mass MA) of known purity P in a known volume Vsol, the amount concentration of A in the solution, cA, is calculated by

cA = P × mA / (MA × Vsol)
See section 7.4 for a discussion of uncertainties in preparing standard solutions.
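The standard-solution calculation above can be sketched numerically. The mass and volume in the example are hypothetical; the KHP purity is the NIST value quoted earlier, and 204.22 g/mol is the molar mass of potassium hydrogen phthalate (our assumption, a standard literature value):

```python
# Amount concentration of a standard solution: c_A = P * m_A / (M_A * V_sol).

def amount_concentration(mass_g: float, purity: float,
                         molar_mass: float, volume_l: float) -> float:
    """Amount concentration (mol/L) from mass (g), purity (g/g),
    molar mass (g/mol), and solution volume (L)."""
    return purity * mass_g / (molar_mass * volume_l)

# 5.1055 g of KHP (M = 204.22 g/mol, P = 0.999934 g/g) made up to 250.0 mL:
c_khp = amount_concentration(5.1055, 0.999934, 204.22, 0.2500)  # ~0.09999 mol/L
```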
4.1.3 Stoichiometry and equivalence in titrations
Chemical reactions in solution take place between different chemical species according to proportions defined by well-established equations (known as stoichiometry ); for example:

a A + b B → products (8)

where a is the number of entities of species A (the analyte) of molar mass MA in one solution (the sample solution) and b is the number of entities of a species B of molar mass MB contained in a second solution which reacts with the first solution according to equation (8). Stoichiometric numbers (ν) of reacting entities are the coefficients of equation (8) for products and the negatives of the coefficients for reactants, νA = −a and νB = −b, reacting in the theoretically predicted νB/νA proportion.
The concept of ‘equivalence’ between the amounts of reacting substances plays an important role in quantitative chemistry: an amount nA of A is equivalent to an amount nB of B when nA/a = nB/b. When volumes VA and VB of these solutions of concentrations cA and cB are mixed, the reaction is stoichiometric when b × cA × VA = a × cB × VB. Historically, the product of the number of equivalents per mole and the concentration was known as the ‘normality’ of the solution, designating the number of ‘gram equivalents’ per litre. A one-normal solution contained one equivalent per litre and was written ‘1 N’. This use is discontinued and deprecated.
4.2 Definitions of terms
4.2.1 equivalent (in volumetric and gravimetric analysis)
Numerical value of the amount of substance providing one mole of a specified reacting species.
Example 1: 1 mol of sulfuric acid (H2SO4) has 2 mol of H+ to be titrated with a strong base. Therefore, one equivalent of H+ is provided by 0.5 mol of sulfuric acid.
Example 2: 1 mol of potassium permanganate (KMnO4) has 3 equivalents of transferable electrons when reduced to MnO2 in alkaline solution and 5 equivalents of transferable electrons when reduced to Mn2+ in acid solution.
Source:  section 6.2.
4.2.2 solution
Liquid or solid phase containing more than one substance, when for convenience one (or more) substance, which is called the solvent, is treated differently from the other substances, which are called solutes.
Note 1: When, as is often but not necessarily the case, the sum of the amount fractions of solutes is small compared with unity, the solution is called a dilute solution.
Note 2: A superscript ∞ attached to the quantity symbol for a property of a solution denotes the property in the limit of infinite dilution.
Source:  with minor change
4.2.3 standard solution
Solution of accurately known concentration, prepared using a reference measurement standard [VIM 5.6].
Source:  with minor change.
4.2.4 standardisation
Measurement of the concentration of a component of a solution by titration with a measurement standard for the preparation of a secondary measurement standard [VIM 5.5]
4.2.5 stock solution
Solution prepared by weighing an appropriate portion of a solid, or by measuring out an appropriate volume of a liquid, and dissolving it in a weighed mass of solvent or making it up to a given volume with solvent.
Note: A stock solution is usually of a greater concentration than is needed for the chemical purpose and is diluted to give the required concentration before use.
Reference:  Chapter 2.
5 Methods of analysis depending on measurement of mass
5.1 Gravimetric analysis
Historically, gravimetry has formed the basis of analytical chemistry and remains an attractive method when it can be used, because it is considered a primary measurement procedure and is highly accurate, with relative uncertainties of about 0.05 % even for routine measurements. In fact, gravimetric analysis was used to determine the atomic masses of many elements to six-figure accuracy.
5.1.2 Gravimetry by direct weighing of the analyte
In a simple example, gas mixtures are prepared by successively adding each component of a mixture to a gas cylinder, which is weighed first empty and then after each addition. The concentration of each component is usually expressed as a mass fraction or mole fraction.
Total suspended solids (TSS), the mass of filterable solids in 1 L of a water sample, is measured gravimetrically. A known volume of water is filtered, and the collected solids are weighed.
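The TSS calculation is a simple difference weighing per unit volume. The filter masses and sample volume below are illustrative:

```python
# Total suspended solids from a filtration: mass gained by the filter,
# expressed per litre of sample. Masses and volume are illustrative.

def tss_mg_per_L(filter_before_g, filter_after_g, sample_volume_L):
    """TSS as mg of retained solids per litre of sample."""
    return (filter_after_g - filter_before_g) * 1000.0 / sample_volume_L

result = tss_mg_per_L(1.2500, 1.2623, 0.500)  # 12.3 mg retained from 0.500 L
print(f"TSS = {result:.1f} mg/L")
```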
5.1.3 Gravimetry of precipitates
Analysis by precipitation of a highly insoluble solid is typified by the measurement of chloride concentration by the addition of silver nitrate, precipitating silver chloride that is filtered, washed, dried, and weighed. Knowledge of the stoichiometry of the reaction (Cl−+Ag+→AgCl (s)) allows for a calculation of the amount of chloride precipitated and therefore the chloride content of the test solution.
Steps for gravimetric analysis after precipitation:
Prepare the solution. Separate or mask interfering materials. Adjust conditions for low solubility. Adjust pH.
Perform the precipitation. In order to minimise supersaturation and obtain larger crystals, precipitate from dilute solution, add the precipitating agent slowly, with stirring, to a hot solution, and then cool. Avoiding a local excess of precipitating agent keeps Q, the concentration of mixed reagents before precipitation, low, and heating keeps S, the equilibrium solubility of the precipitate, high. See von Weimarn ratio.
Digest the precipitate. This process, also called “Ostwald ripening”, improves the purity and crystallinity of the precipitate by allowing it to stand in contact with the mother liquor for a period of time. Fine particles tend to dissolve and re-precipitate on larger ones.
Filter and wash the precipitate. The wash solution should contain a volatile electrolyte to avoid peptisation, the formation of colloids. The filter is chosen to trap the precipitate; smaller particles are more difficult to filter. Depending on the procedure followed, the filter might be a piece of ashless filter paper in a fluted funnel, or it might be a filter crucible. Filter paper is convenient because it does not typically require cleaning before use; however, filter paper can be chemically attacked by some solutions (such as concentrated acid or base), and may tear during the filtration of large volumes of solution. Following filtration, the filter paper is charred off, leaving the precipitate.
Dry or ignite the precipitate. A precipitate may be ignited to a weighable form, for example, hydrous ferric oxide, Fe2O3·xH2O is ignited to Fe2O3.
Cool and weigh the precipitate. An ignited precipitate is cooled in a desiccator to avoid adsorption of atmospheric water.
Calculate the amount of analyte from the mass of precipitate and the gravimetric factor.
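The final calculation step can be sketched for the chloride/silver chloride example given above; the precipitate and sample masses used here are illustrative:

```python
# Chloride by AgCl precipitation: mass fraction of Cl in the sample via the
# gravimetric factor gF = M(Cl)/M(AgCl) (1:1 stoichiometry, Cl- + Ag+ -> AgCl).
# Precipitate and sample masses are illustrative values.
M_CL, M_AGCL = 35.45, 143.32  # molar masses, g/mol

def chloride_mass_fraction(m_precipitate_g, m_sample_g):
    gf = M_CL / M_AGCL                 # g of Cl per g of AgCl
    return gf * m_precipitate_g / m_sample_g

w = chloride_mass_fraction(0.2513, 0.5000)
print(f"w(Cl) = {w:.4f}")
```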
5.1.4 Gravimetry of vapours
It may also be possible to convert the analyte into a substance in vapour form that can be collected and measured directly, or the sample can be weighed before and after the reaction; the difference between the two masses gives the mass of analyte lost. For example, water in a solid can be vaporised and the loss in mass measured; the water vapour may also be collected on an adsorbent, which is weighed. Alternatively, the analytical precipitate may be decomposed with loss of a gas. Ammonium ions may be determined by precipitating with H2PtCl6 as (NH4)2PtCl6 and igniting the precipitate to platinum metal: (NH4)2PtCl6 → Pt + 2 NH4Cl (g) + 2 Cl2 (g).
5.1.5 Thermogravimetric Analysis
Thermogravimetric analysis (TGA) or thermogravimetry (TG) involves continuously measuring the mass of a sample as a function of its temperature, from which changes in the physical and chemical properties of the material are determined. The TGA instrument continuously weighs a sample as it is heated to temperatures of up to 2000 °C and plots the mass as a function of temperature (a thermogram).
Suitable samples for TGA are solids which either lose a volatile species (mass loss) or react with a gaseous species (mass increase). An example is the dehydration of copper sulfate pentahydrate. There are two points of mass loss, one due to the loss of four water molecules and the second to the loss of one. Such plots can be used to qualitatively identify a compound or to quantify it. The composition of a novel compound may be elucidated.
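The theoretical plateau levels of such a thermogram follow directly from molar masses. The sketch below assumes the two-step dehydration described above (loss of four waters, then the last one):

```python
# Theoretical TGA residual-mass plateaus for CuSO4.5H2O, assuming a
# two-step dehydration: loss of 4 H2O, then loss of the final H2O.
M_H2O = 18.015
M_CUSO4 = 159.61
M_PENTA = M_CUSO4 + 5 * M_H2O              # CuSO4.5H2O, 249.685 g/mol

after_step1 = (M_CUSO4 + M_H2O) / M_PENTA  # monohydrate remains (~71 %)
after_step2 = M_CUSO4 / M_PENTA            # anhydrous CuSO4 (~64 %)
print(f"residual mass fractions: {after_step1:.3f}, then {after_step2:.3f}")
```

Comparing measured plateau heights with such theoretical values is how a thermogram identifies or quantifies a compound.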
Differential thermal analysis (DTA), in which the difference in temperature between a sample and a reference is measured as they are heated, is more widely applicable than TGA, because it is not limited to reactions involving a change in mass.
A comprehensive terminology of thermal-based methods has been published by the International Confederation for Thermal Analysis and Calorimetry and IUPAC .
5.2 Definitions of terms
5.2.1 co-precipitation
Simultaneous precipitation of a normally soluble component with a macro-component from the same solution by the formation of mixed crystals by adsorption, occlusion, or mechanical entrapment.
Note: In gravimetry involving precipitation, co-precipitation of an impurity is usually not desired. Proper washing may remove adsorbed impurities.
5.2.2 electrogravimetry
Gravimetry in which the material to be weighed is obtained by electrochemical reaction.
Note 1: A typical measurement involves the electrodeposition of a metal from its ions in solution.
Note 2: Measurement of current using a silver coulometer is an example of electrogravimetry.
5.2.3 gravimetric analysis
Methods of analysis based on the measurement of mass.
Note 1: Gravimetry may be operated as a primary reference measurement procedure [VIM 2.8], as the mass standard that calibrates a balance is not a mass of analyte.
Note 2: The analyte to be determined is separated from the sample in a weighable form (e.g. by precipitation), and its mass or amount of substance is calculated from the mass of the weighed compound whose stoichiometric composition must be exactly known.
Source: Adapted from 
5.2.4 gravimetric factor, gF
In gravimetry, mass of analyte per unit mass of precipitate:

gF = (vA × MA)/(vP × MP)

where MA and MP are the molar masses of analyte and precipitate, respectively, in g mol−1, and vA and vP are the stoichiometric coefficients of analyte and precipitate in the precipitation reaction.
Note 1: Historically, the gravimetric factor is termed GF, but to conform to the IUPAC convention  it is recommended that quantities should have a single symbol, gF.
Note 2: The gravimetric factor is used to calculate the mass fraction of an analyte in a sample from the mass of precipitate mP and the mass of sample msample by

wA = gF × mP/msample
Example 1: Sulfur trioxide (MSO3 = 80.0640 g mol−1) is precipitated as barium sulfate (MBaSO4 = 233.390 g mol−1), SO3 → BaSO4. Therefore, gF = 80.0640/233.390 = 0.343 05.
Example 2: Silver oxide (MAg2O = 231.736 g mol−1) is dissolved and precipitated as silver chloride (MAgCl = 143.321 g mol−1), Ag2O → 2 AgCl. Therefore, gF = 231.736/(2 × 143.321) = 0.808 45.
Reference for molar masses 
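The two worked examples can be checked numerically from the definition gF = (vA × MA)/(vP × MP), using the molar masses given above:

```python
# Gravimetric factor gF = (vA * MA) / (vP * MP) for the two examples above.

def gravimetric_factor(nu_a, M_a, nu_p, M_p):
    """Mass of analyte per unit mass of precipitate."""
    return (nu_a * M_a) / (nu_p * M_p)

gf_so3 = gravimetric_factor(1, 80.0640, 1, 233.390)   # SO3 -> BaSO4
gf_ag2o = gravimetric_factor(1, 231.736, 2, 143.321)  # Ag2O -> 2 AgCl
print(round(gf_so3, 5), round(gf_ag2o, 5))
```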
5.2.5 homogeneous precipitation
Precipitation in gravimetry in which the precipitating agent is produced from a homogeneously dissolved precursor to minimise local excesses and supersaturation.
5.2.6 nucleation
Initial formation of small crystals of precipitate, from which larger crystals will grow.
Note: In gravimetric analysis involving precipitation, supersaturation should be minimised to avoid large numbers of nuclei and small crystals. See von Weimarn ratio.
Source: Adapted from 
5.2.7 precipitation (in chemistry)
Sedimentation of a solid material (precipitate) from a liquid solution in which the material is present in amounts greater than its solubility in the liquid.
Source:  p 2207 with minor change.
5.2.8 solubility, s
Equilibrium concentration of a component of a saturated solution at a given temperature.
Note: The units of solubility are mol m−3, but may also be expressed in any units corresponding to quantities that denote relative composition, such as mass fraction, amount fraction, molality, volume fraction, etc.
Source:  p84.
5.2.9 supersaturation
State of an unstable system which has a greater concentration of a material in solution than would exist at equilibrium.
Source:  with minor change.
5.2.10 thermogravimetric analysis (TGA)
Technique that monitors the mass of a sample as a function of time or temperature while the sample, in a specified atmosphere, is heated or cooled in a controlled manner.
Note 1: It is common to use the same abbreviation (TGA) for both the technique and a thermogravimetric analyser.
Note 2: Very often, TG is used in combination with DTA, Fourier-transform infrared spectroscopy (FT-IR), gas chromatography (GC), or mass spectrometry (MS) in so-called hyphenated techniques.
5.2.11 von Weimarn ratio
Ratio of the difference between the concentration of mixed reagents before precipitation (Q) and the equilibrium solubility of the precipitate (S) to the equilibrium solubility, (Q − S)/S, known as the relative supersaturation.
Note: The particle size of a precipitate is inversely proportional to the relative supersaturation, so Q should be kept low and S high during gravimetric analysis by precipitation (see section 5.1.3).
Source: Adapted from  p420.
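A minimal numerical sketch of the ratio; the Q and S values below are purely illustrative:

```python
# Von Weimarn (relative supersaturation) ratio (Q - S)/S.
# Q and S are illustrative concentrations in the same units.

def von_weimarn_ratio(Q, S):
    """Relative supersaturation; small values favour larger crystals."""
    return (Q - S) / S

# Dilute, hot conditions (low Q, high S) give a small ratio: here ~4.
print(von_weimarn_ratio(Q=0.010, S=0.002))
```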
6 Methods of analysis depending on measurement of volume
6.1 Volumetry and titrimetry
It is seen from the definitions of the terms volumetry and titrimetry that, although often used interchangeably, they are not synonyms; the amount of titrant delivered during a titration can be measured either volumetrically or by its mass. Examples of titration given here will all refer to volumetry.
The titrant is almost always in a standardised solution (see standardisation). Titrant can also be added by electrolytic generation, as in coulometric titration, which is a primary measurement procedure based on Faraday’s laws of electrolysis. Volumetric methods are by nature relative methods, since the titrant is standardised against another reagent.
In this section we will describe the reacting species in the titration as A and the analyte as B. The general reaction is therefore vA A + vB B → products. The amount of analyte B, nB, is calculated from the end-point volume (VA) and concentration of A (cA) by

nB = (vB/vA) × cA × VA
If the volume of solution containing nB is VB, then the concentration of B (cB) is

cB = nB/VB
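These two formulas can be sketched with illustrative titration data (0.1000 mol/L NaOH as titrant A, H2SO4 as analyte B, so vB/vA = 1/2):

```python
# n_B = (vB/vA) * cA * V_A(end-point), then c_B = n_B / V_B.
# Illustrative data: 25.00 mL H2SO4 aliquot titrated with 0.1000 mol/L NaOH,
# end-point at 18.40 mL; H2SO4 + 2 NaOH -> products, so vB/vA = 1/2.

def titration_concentration(c_titrant, v_endpoint_L, ratio_b_over_a, v_sample_L):
    """Analyte concentration in mol/L from end-point volume of titrant."""
    n_b = ratio_b_over_a * c_titrant * v_endpoint_L
    return n_b / v_sample_L

c_h2so4 = titration_concentration(0.1000, 0.01840, 0.5, 0.02500)
print(f"c(H2SO4) = {c_h2so4:.5f} mol/L")
```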
6.1.1 Kinds of titrations
Titrimetric analysis covers a large group of methods with a long tradition in quantitative analysis. Owing to their advantageous characteristics, they are still used in laboratories as definitive methods, especially when combined with instrumental end-point detection. These include titration methods with electrochemical reagent generation (coulometric titration) and/or electrochemical (potentiometric, amperometric, conductometric), thermoanalytical, optical, or radiochemical detection. See the corresponding sections of [3, 25].
Terms for varieties of titration methods can reflect the nature of the reaction between analyte and reagent. Thus, there are acid-base, precipitation, complexometric, and oxidation-reduction (also redox) titrations (or titrimetries).
Alternatively, the term can reflect the nature of the titrant: acidimetric titration (acidimetry), alkalimetric titration (alkalimetry), and iodometric titrations (iodometry), or, additionally, coulometric titrations, in which the titrant is generated by electrochemical reaction, rather than being added as a standard solution.
6.1.2 Requirements for a titration
There are a number of desirable characteristics of all titrations that are listed below.
The reaction should be of known stoichiometry.
The reaction should be rapid.
The reaction should be specific. Interfering substances must be removed or masked. For example, in acid-base titrations, carbon dioxide in air can be prevented from dissolving in the base by performing the titration under argon. Oxygen in air should also be excluded from some redox titrations.
There should be a marked change in some property of the solution when the reaction is complete, to mark the end-point, the observed completion of the reaction. The end-point should coincide with the equivalence-point, the point at which a stoichiometric amount of titrant is added.
The reaction should essentially go to completion, i.e. the equilibrium of the reaction should be shifted far to the right, towards the products, to obtain a sharp end-point.
6.1.3 Direct and back titration
When the term titration is used without qualification, it usually indicates a direct titration.
A back titration is generally a two-stage analytical technique:
Analyte A of unknown concentration is reacted with excess reagent B of known concentration.
A direct titration is then performed to determine the amount of reagent B remaining (i.e. in excess).
Back titrations are used when:
one of the reactants is volatile, for example ammonia.
an acid or a base is an insoluble salt, for example calcium carbonate
a particular reaction is too slow
direct titration would involve a weak acid – weak base titration
the end-point of the direct titration is difficult to observe
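A back-titration calculation can be sketched as follows; the CaCO3/HCl/NaOH system and the reagent amounts are illustrative:

```python
# Back-titration sketch: CaCO3 (insoluble in water) reacted with a known
# excess of HCl; the unreacted HCl is then titrated with NaOH.
# CaCO3 + 2 HCl -> CaCl2 + CO2 + H2O ;  HCl + NaOH -> NaCl + H2O
M_CACO3 = 100.09  # g/mol

def caco3_mass(n_hcl_added, c_naoh, v_naoh_L):
    """Mass of CaCO3 (g) from the HCl added and the NaOH back-titration."""
    n_hcl_excess = c_naoh * v_naoh_L           # NaOH reacts 1:1 with HCl
    n_hcl_reacted = n_hcl_added - n_hcl_excess
    return (n_hcl_reacted / 2) * M_CACO3       # 2 HCl consumed per CaCO3

m = caco3_mass(n_hcl_added=0.00500, c_naoh=0.1000, v_naoh_L=0.01240)
print(f"m(CaCO3) = {m:.4f} g")
```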
The titrant, containing the reacting species with which a titration is made, is either a prepared standard solution, or it has been standardised, i.e. the concentration of the active agent is either known or measured by titration with a standard solution of accurately known concentration.
Provision must be made for some means of recognising (indicating) the point at which essentially all of A has reacted with B, the equivalence-point or stoichiometric or theoretical end-point. In order to recognise the end-point of the titrations, indicators can be used. These may be visual or instrumental. In the former, the indicator is a substance which participates in the titration reaction so as to give a visual change (colour, fluorescence, precipitate, or turbidity) at or near the equivalence-point of a titration.
6.1.4 Visual indicators
6.1.4.1 General characteristics
Visual indicators (colour indicators) are widely used for end-point detection in titrimetric analyses. The indicators used normally correspond to the titration reaction and have acid-base or complex, precipitate formation or oxidation-reduction properties, respectively. Thus, an acid-base indicator is itself an acid or base which exhibits a visual change on neutralization by the basic or acidic titrant at or near the equivalence-point. There are, however, titrations in which the indicator reaction type is different from that of the titration reaction. For example, some redox indicators can be used for end-point detection in complexometric titrations, some precipitate-forming indicators in oxidation-reduction titrations, and some acid-base indicators in precipitation titrations.
Similar terms apply to complexometry (metallochromic indicator), oxidation-reduction, and precipitation titrimetry. In the last case, substances which are adsorbed or desorbed, with concomitant colour changes at or near the equivalence-point, are termed adsorption indicators. Generally, visual indicators undergo a relatively gradual change over a range of concentrations of the relevant species (H+ ion, metal ion, etc.) involved in the titration, but are perceived to undergo sharp changes because of the very large concentration changes occurring at or near the equivalence-point. This range of concentrations over which the eye is able to perceive change in hue, colour intensity, fluorescence, or other property of a visual indicator arising from the varying ratio of the two relevant forms of the indicator is called its transition interval. It is usually expressed in terms of the negative decadic logarithm of the concentration (e.g. pH, pM) or, for oxidation-reduction titrations, in terms of a potential difference.
Colour indicators are classified as one or two-colour indicators, depending on whether they are colourless on one side of the transition interval or possess a different colour on each side of this range.
A mixed indicator is one containing a supplementary dye selected to heighten the overall colour change.
The spectral characteristics of the indicator used in visual titrations should include the observed colours, wavelengths of absorption maxima, and molar decadic absorption coefficients of all relevant species, i.e. of the hydronated (protonated) species of the indicator and that of its complexes with the metal being titrated. These data are of concern to the visible range only.
Purity of indicator. The indicators may be contaminated by the substances formed or remaining from the synthesis. Other sources of contamination are decomposition or transformation products of the indicator itself, as well as the various isomers formed as by-products in the reagent preparation, added diluents, or surfactants.
The preparation of the indicator should be described, whether as a solution or solid mixture, in cases where the indicator solution is unstable. Depending on the preparation and concentration, the amount of indicator used for titration should be given.
The titration error (indicator error), the difference between the end-point and equivalence-point for a titration, is due to two factors:
Systematic error (end-point error) occurring under the given conditions of the titration, which is the difference between the concentration of the titrant at the equivalence-point and that at the end-point, determined from the colour change of the indicator. For example, phenolphthalein (colour change between pH 8.2–10.0) should not be used for a strong acid-strong base titration with an equivalence-point of pH 7.0.
Systematic error (indicator consumption error) arising from the reaction of the indicator with H+ (in acid-base titrations) or metal ion (in complexometric titrations). This error has a negative value and depends on the amount of indicator present. For indicators exhibiting high colour intensities, which therefore may be used at smaller concentrations, this error decreases. A significant compensation of this error usually takes place, because the standardisation of the titrant is carried out under conditions similar to those of the analysis titration.
6.1.4.2 Acid-base indicators
Indicators which exhibit a visual change on neutralisation by a base or acid at or near the equivalence-point of a titration are given in Table 6.1-1. For characterisation and control of purity of acid-base indicators see .
| Name | Acid colour | pH interval of colour change | Base colour |
|---|---|---|---|
| Alizarin yellow R | Yellow | 10.1–12.0 | Red |
6.1.4.3 Complexometric indicators
The action of indicators in visual complexometric titrations is based on changing a particular optical property (absorption, fluorescence, etc.) of the solution titrated in the conditions where the concentration of the free metal aquo ion approaches a defined borderline concentration level. This borderline concentration level should approach as closely as possible the concentration of the free metal aquo ion at the equivalence-point of a particular titration reaction. The change of the optical property extends over a range of metal aquo ion concentrations, which is often termed the transition range.
The mechanisms of complex indicator reactions are based on several principles:
The indicator forms a coloured complex with the metal ion to be titrated. The uncomplexed indicator may be colourless (one-colour indicators) or coloured in its various protonated forms (two-colour indicators). Such indicators are sometimes called metallochromic indicators.
When the complexation reaction of interest proceeds in another liquid phase (usually organic solvent) in equilibrium with the solution being titrated, the indicators are described as extraction indicators.
When the indicator is influenced by a redox system, whose equilibrium is controlled by removal of the metal ions being titrated, the indicators are called redox indicators. They are usually one-colour indicators.
The most typical complexometric indicators are metallochromic indicators. Because the change (or appearance) of colour is based on complex formation reactions, the behaviour is usually reversible, unless kinetic factors, mainly connected with the nature of the metal ions, are significant.
The reactions in complexometric titrations are mainly based on chelate formation. The most common and favourable case is when the titrant - analyte reaction proceeds in a stoichiometric ratio of 1:1. The formation of complexes with stepwise ligand attachment may give diffuse end-points unless the formation of the intermediate complexes is well separated. The same considerations apply when more than one metal ion may be bound by a multidentate ligand.
pMtrans is the negative decadic logarithm of the metal concentration at the colour transition point. For a simple 1:1 complex M + I ⇌ MI with formation constant K, the colour change will occur near where [MI] = [I], if it can be assumed that the human eye is equally sensitive to the colours of MI and I. Therefore, [M]trans ≈ 1/K, or pMtrans ≈ log10 K. pMtrans provides a measure of the sensitivity of a titration using the indicator; a large value of pMtrans implies a high sensitivity.
Values of pMtrans are used in the calculation of the titration error (see section 6.1.4.1). Because [M] depends on the solution conditions, pMtrans only defines the characteristics of the indicator under those particular conditions that specify the concentrations of all species influencing the free metal concentration. These conditions should be strictly defined, otherwise the values have no significance.
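A minimal sketch of the pMtrans estimate; the conditional formation constant K below is an assumed, illustrative value:

```python
# pM_trans ~ log10(K) for a 1:1 metal-indicator complex (colour change
# occurring near [MI] = [I]). K is an illustrative conditional constant.
import math

K = 2.0e7                      # assumed conditional formation constant of MI
M_trans = 1.0 / K              # free metal concentration at the transition
pM_trans = math.log10(K)       # equivalently, -log10(M_trans)
print(f"pM_trans ~ {pM_trans:.2f}")
```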
6.1.4.4 Redox indicators
The indicator may be classified as reversible when the cycle of reactions in redox titration operations (reduction followed by oxidation) gives a product identical to the initial indicator. The relevant potentiometric titration curve should be, within the limits of experimental error, the same in both directions. A truly reversible indicator should have both forms stable. However, in some instances the reversibility depends on the reagents used for oxidation. Ferroin and related indicators are examples of such indicators.
The indicator may be classified as pseudoreversible when the product from the cycle of reactions (as explained above) is different from the initial compound or when one of the forms is unstable and decomposes during titration but the colour of the product is the same, or nearly the same, as that of the initial product at the concentrations used in the titration. An example of such an indicator is N-phenylanthranilic acid.
The indicator may be classified as irreversible when in the cycle of reactions (as explained above) no reversal to the initial colour is observed. An example of such an indicator is Naphthol Blue Black.
Formal redox potential corresponds to the redox potential in solution at which the analytical concentrations of the reduced and oxidised forms of the indicator are equal. This should not depend on the concentration of the indicator, unless the stoichiometric coefficients are not equal. In such instances the formal redox potential should be replaced by the half-oxidation potential. The formal redox potential is a function of ionic strength and acidity and its value should be given under the specified conditions in which it is used for determinations. The formal redox potential should be given, at least for the acidity range in which the indicator is applicable. The formal redox potential has a precise meaning only for strictly reversible indicators. In the case of other indicators, it should be understood as the potential for half-oxidised indicator. Because of the difficulty of determination of the corresponding activity coefficients, the rigorous definition for formal redox potential based on activities is never used in practice – a more practical term used in parallel is half-oxidation potential.
Transition potential is often given instead of the formal redox potential. It corresponds to the colour change (its appearance or disappearance) at which the end-point is said to occur. It is a function of the formal redox potential, the total concentration of the indicator (especially for one colour indicators), the depth of the colour layer, the minimal observable absorbance (which depends on wavelength and eye sensitivity), and the absorption coefficient. In an ideal two-colour indicator, the “apparent absorption coefficients” of both forms should be equal. Then the transition potential approaches the formal one. This is never the case in one-colour indicators. As for formal redox potential, it should be given, at least for the acidity range of indicator application. The transition potential may be given for pseudo-reversible indicators. Because the transition point is usually different for oxidimetric and reductiometric titrations, it is sometimes useful to distinguish those two values.
Acid dissociation constants of the indicator for both reduced and oxidised forms are useful guides in considering the dependence of the potentials on pH values. The protonation of the oxidised form is sometimes difficult (or impossible) to evaluate because of its instability. This may not be the case for some reversible indicators.
The spectral characteristics of an indicator are important, e.g. the position of the absorbance maximum, the stability of the spectrum (constancy of absorbance with time) expressed as the half-life time of the absorbance decay at the maximum, the effect of acidity, and the presence of differently coloured intermediate or back-reaction products.
Other useful information about a redox indicator includes:
The reaction mechanism (in so far as it gives analytically useful information). Useful analytical information includes the intermediate steps in the oxidation or reduction, decomposition of the reaction product with time, the number of electrons consumed (or formed) per one mole of indicator. Such data are useful in predicting applications of the indicator, factors influencing its blank value, etc.
Purity of indicator, especially when it directly influences the practical utility of the indicator.
Preparation of indicator solutions, i.e. the solvent, desirable and practical useful concentration, the stability of such solutions (effect of oxygen, light, etc.).
The manner of use of the indicator: amount of solution for best colour change, the special conditions in which it works properly (e.g. temperature, pH range).
Systems in which the indicator has been used successfully.
Titration error (indicator error) in redox titrations is due to two factors which influence the accuracy of determination:
End-point error – the systematic error occurring because the equivalence-point potential differs from the end-point potential under the given conditions of titration. The equivalence-point potential depends on the formal potentials of the analyte and titrant and on the number of electrons participating in half-reactions. The end-point potential is the function of the indicator, the molar decadic absorption coefficients of both indicator forms, their concentrations (especially but not exclusively for one-colour indicators), solution layer depth, and the ability of the analyst’s eye to observe the colour appearance or change. When the transition potential, corresponding to the end-point, is close to the equivalence-point potential, the effect of the above-mentioned factors may be diminished.
Indicator consumption error – the systematic error occurring because of the finite consumption of the oxidant during oxidation of the indicator. This amount is easily determined for two-colour reversible indicators, being in those instances strictly proportional to the amount of indicator. This is not the case with irreversible, or even pseudo-reversible indicators, which form intermediate products or whose oxidised form is unstable and decomposes slowly. In such cases, the electrons lost by the indicator at local oxidant excesses will not be fully replaced by reaction with untitrated reductant. With those indicators the correction is always greater than for reversible indicators and depends on factors which are not readily evaluated. These are: the mechanism and rate of indicator oxidation; the rate of oxidant consumption by the analyte; the manner of oxidant addition (increments, rate); and the efficiency of stirring during titration.
See also .
6.1.4.5 Adsorption and precipitation indicators
An adsorption indicator is a dye in ionised form (usually anionic) that is adsorbed on the precipitate near the equivalence-point as a counter ion to the adsorbed titrant primary ion. The adsorbed dye changes colour. For example, fluorescein is adsorbed on the surface of AgCl at the first excess of Ag+ titrant. A precipitation indicator forms a coloured precipitate with the titrant at or near the equivalence-point. An example is the Mohr method for titrating Cl− with Ag+ using K2CrO4 indicator: red Ag2CrO4 forms near the equivalence-point.
6.1.5 Instrumental indicators
6.1.5.1 pH meters and ion selective electrodes
In an acid-base titration there is a rapid change in pH at the equivalence-point, particularly for strong acid – strong base titrations. This is the principle of colour indicators for such reactions. A pH electrode gives a potential which is proportional to pH and so, suitably calibrated, can track the progress of the titration. Differentiation of the pH versus titrant volume curve, followed by a second differentiation, can give an accurate value of the equivalence-point. See Fig. 6.1-1.
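As an illustrative sketch (not part of these recommendations), the end-point can be located numerically from recorded pH/volume data; the readings and function names below are invented for the example:

```python
# Illustrative sketch: locating a titration end-point from pH/volume
# data by numerical differentiation.  The readings are invented values
# for a strong acid titrated with a strong base.

def derivative(v, ph):
    """Finite-difference dpH/dV evaluated at interval midpoints."""
    mids = [(v[i] + v[i + 1]) / 2 for i in range(len(v) - 1)]
    slopes = [(ph[i + 1] - ph[i]) / (v[i + 1] - v[i]) for i in range(len(v) - 1)]
    return mids, slopes

def end_point(v, ph):
    """Estimate the end-point volume as the position of maximum slope."""
    mids, slopes = derivative(v, ph)
    return mids[slopes.index(max(slopes))]

v  = [9.0, 9.5, 9.8, 9.9, 10.0, 10.1, 10.2, 10.5, 11.0]  # titrant volume / mL
ph = [3.0, 3.3, 3.7, 4.0, 7.0, 9.5, 10.3, 10.7, 11.0]    # measured pH

print(round(end_point(v, ph), 2))  # → 9.95
```

In practice a smoothed second derivative (zero crossing) is often preferred, as raw finite differences amplify measurement noise.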
Similarly, potentiometric measurements with ion-selective electrodes as indicator electrodes can be used to detect end-points for titration reactions in which the titrant or analyte concentration is sensed by the electrode, with a logarithmic dependence of potential on concentration. A silver ion selective electrode could be used for titrations with silver nitrate.
6.1.5.2 Measurement of potential in redox titrations
The equivalence-point in a redox titration can be identified with a redox indicator or via an electrical measurement, such as the potential of an electrode that can monitor the relative concentrations of redox-active species against a reference electrode. For example, when Fe(II) is titrated with Ce(IV), equilibrium amounts of Fe(II) and Fe(III) are present before the equivalence-point, at which essentially all Fe(II) has been converted to Fe(III); after the equivalence-point the potential is determined by Ce(IV)/Ce(III). The potential of a platinum wire in the titration solution follows Fe(II)/Fe(III) according to the Nernst equation
E = E0(FeIII/II) + (RT/F) ln{[Fe(III)]/[Fe(II)]}
and then Ce(IV)/Ce(III)
E = E0(CeIV/III) + (RT/F) ln{[Ce(IV)]/[Ce(III)]}
A large difference in the standard electrode potentials of these redox couples (E0(FeIII/II)=+0.68 V; E0(CeIV/III)=+1.44 V) leads to a clear and sharp change in potential at the equivalence-point. See Fig. 6.1-2.
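The shape of such a curve can be sketched from the Nernst equation. The following illustrative calculation (the function name, the 1:1 stoichiometry, and the activity-equals-concentration simplification are assumptions for the example, using the formal potentials quoted above):

```python
# Illustrative sketch: indicator-electrode potential during titration
# of Fe(II) with Ce(IV), from the Nernst equation.
import math

R, T, F = 8.314, 298.15, 96485   # J/(mol K), K, C/mol
E0_FE, E0_CE = 0.68, 1.44        # V; Fe(III)/Fe(II) and Ce(IV)/Ce(III)

def cell_potential(f):
    """Potential (V) vs. reference at titrated fraction f (f = 1 at the
    equivalence-point); assumes 1:1 stoichiometry and activities equal
    to concentrations."""
    if f < 1:
        # Fe(III)/Fe(II) couple controls the potential
        return E0_FE + (R * T / F) * math.log(f / (1 - f))
    # Ce(IV)/Ce(III) couple controls after the equivalence-point
    return E0_CE + (R * T / F) * math.log(f - 1)

for f in (0.5, 0.99, 1.01, 2.0):
    # potential rises sharply through the equivalence-point
    print(f, round(cell_potential(f), 3))
```

At f = 0.5 the potential equals E0(FeIII/II) and at f = 2 it equals E0(CeIV/III); between f = 0.99 and f = 1.01 it jumps by roughly half a volt, which is the sharp break seen in Fig. 6.1-2.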
Some titrations may not exhibit an inflection point at the equivalence-point, for example when the redox reaction is unsymmetrical. An example is the titration of Fe(II) with potassium permanganate:
MnO4− + 5 Fe2+ + 8 H+ → Mn2+ + 5 Fe3+ + 4 H2O
An unsymmetrically-shaped titration curve is produced, with the equivalence-point near the top of the potentiometric break.
6.2.1 Measurement of alkalinity of natural waters
The alkalinity of water is its acid-neutralizing capacity, and is considered the sum of all titratable bases. Two titrations with a strong acid are made to pH 4.5 (methyl orange, or methyl red end-point) and to pH 8.3 (phenolphthalein end-point). For many surface waters, alkalinity values are primarily a function of carbonate, hydrogen carbonate, and hydroxide content. The measured values may also include contributions from borates, phosphates, silicates, or other bases, if these are present. Phenolphthalein end-point alkalinity, AP, attributed to all the hydroxide and carbonate, is also known as composite alkalinity. Methyl red (methyl orange) end-point alkalinity is also known as total alkalinity. Alkalinity is often expressed as an equivalent mass concentration of calcium carbonate, mg/L as CaCO3.
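A minimal sketch of the conversion to “mg/L as CaCO3”, assuming the usual convention (M(CaCO3) = 100.09 g/mol, two moles of H+ neutralised per mole of CaCO3); the function name and numbers are invented for the illustration:

```python
# Illustrative sketch: expressing alkalinity from an acid titration as
# an equivalent mass concentration of CaCO3 (assumed convention:
# 2 mol H+ per mol CaCO3, M(CaCO3) = 100.09 g/mol).
M_CACO3 = 100.09  # g/mol

def alkalinity_mg_per_L(c_acid, v_acid_mL, v_sample_mL):
    """Alkalinity in mg/L as CaCO3 from titration of a water sample
    with a strong acid of concentration c_acid (mol/L)."""
    mol_H = c_acid * v_acid_mL / 1000        # mol H+ consumed
    mol_caco3 = mol_H / 2                    # CaCO3 equivalent
    mg_caco3 = mol_caco3 * M_CACO3 * 1000    # mg
    return mg_caco3 / (v_sample_mL / 1000)   # per litre of sample

# e.g. 10 mL of 0.02 mol/L acid for a 100 mL sample:
print(round(alkalinity_mg_per_L(0.02, 10, 100), 2))  # → 100.09
```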
6.2.2 Mohr method for chloride
Chloride concentration is measured by titration with Ag+ with a small amount of K2CrO4 added to the solution. The end-point is detected by the formation of a reddish-brown precipitate of Ag2CrO4. The pH should be between 6 and 10. Below pH 6, chromate is present as HCrO4− instead of CrO42− and the Ag2CrO4 end-point is delayed. Silver hydroxide is precipitated above pH 10. For titrations at pH ≤ 6, a back titration of the excess Ag+ is recommended (Volhard method). The second titrant is KSCN and the indicator is Fe3+. The end-point is given by the reddish-coloured Fe(SCN)2+ complex.
6.2.3 Measurement of water hardness
Water hardness, the concentration of titratable calcium and magnesium, is measured by complexometric titration using the blue dye Eriochrome Black T (ErioT) as the indicator. When added to water, the indicator reacts with Ca2+ and Mg2+, exhibiting a wine-red colour. Upon reaction with the titrant EDTA (ethylenediaminetetraacetic acid) the colour becomes blue. Water hardness is expressed as an amount concentration of calcium and magnesium or an equivalent mass concentration of calcium carbonate or calcium oxide.
6.2.4 Karl Fischer titration of water
Traces of water may be determined by the Karl Fischer titration, using iodine as a titrant. In the classical Karl Fischer titration, the sample is dissolved in anhydrous methanol and is titrated with the Karl Fischer reagent, which contains iodine, sulfur dioxide, and pyridine dissolved in methanol. The iodine and sulfur dioxide exist as addition compounds with pyridine (C5H5N) and an overall reaction can be written:
H2O + I2 + SO2 + CH3OH + 3 C5H5N → 2 C5H5N·HI + C5H5N·HSO4CH3
Thus, each molecule of iodine is equivalent to one molecule of water.
The end-point corresponds to the first appearance of free iodine. This can be detected visually by the change of the pale yellow of the reaction mixture to a permanent reddish-brown tinge with free iodine, or with the aid of a few drops of methylene blue in methanol. More often, electrometric (amperometric) detection of the iodine is utilised. Commercial coulometric titration instruments are available in which the iodine is generated coulometrically at a constant current, and Coulomb’s law is used to calculate the quantity of iodine generated.
The Karl Fischer titration can be used for the direct determination of water in a wide variety of organic substances, including alcohols, unsaturated hydrocarbons, acids and acid anhydrides, esters, ethers, amines, sulfides, and nitroso and nitro compounds.
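The coulometric calculation can be sketched as follows. This is an illustration only: it assumes the 1:1 I2:H2O stoichiometry of the reaction above and the two electrons required to generate each I2 from iodide; the function name is invented.

```python
# Illustrative sketch: Coulomb's law applied to coulometric Karl
# Fischer titration.  One I2 reacts with one H2O; two electrons are
# needed to generate each I2 from iodide.
F = 96485          # Faraday constant, C/mol
M_H2O = 18.015     # g/mol

def water_mass_mg(current_mA, time_s):
    """Mass of water (mg) titrated by iodine generated at constant current."""
    charge = current_mA / 1000 * time_s   # C
    n_i2 = charge / (2 * F)               # mol I2 generated
    return n_i2 * M_H2O * 1000            # 1:1 with H2O, in mg

# 10 mA for 100 s delivers 1 C of charge:
print(round(water_mass_mg(10.0, 100.0), 4))  # → 0.0934
```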
6.3 Definitions of terms
For historical definitions see .
6.3.1 adsorption indicator
Visual indicator adsorbed or desorbed, with concomitant colour change, at the end-point of the titration.
Example: The yellow dye fluorescein added to chloride titrations, which adsorbs on the silver chloride precipitate at the end-point, giving it a pink colour.
Source:  p 48 (with minor change).
6.3.2 alkalinity, A
Measure of the capacity of aqueous media to react with hydrogen ions.
Note 1: Alkalinity is used to assess the buffering capacity of natural waters and is measured by titration using methyl red (pH 4.5, total alkalinity) or phenolphthalein (pH 8.3, composite alkalinity).
Note 2: Alkalinity is often expressed as an equivalent mass concentration of calcium carbonate, for example alkalinity=3.2 mg L−1 as CaCO3.
Source:  (with minor change).
6.3.3 back titration
Titration of remaining reagent after addition of excess reagent to the analyte.
6.3.4 colour indicator
Visual indicator that changes colour at the end-point of a titration.
Note: A one-colour indicator changes between colourless and a colour (e.g. phenolphthalein and many complexometric indicators). A two-colour indicator changes between two colours (e.g. litmus, red-acidic: blue-alkaline).
Source: Adapted from 
6.3.5 composite alkalinity, AP
Alkalinity measured by titration with phenolphthalein as the visual indicator.
Note: The pH of the end-point is about 10.0 and therefore
Source:  (with minor change).
6.3.6 coulometric titration
Titration in which the reactant is generated by an electrochemical reaction and the time of titration at a fixed electric current is measured.
Note 1: For a redox reaction A + ne− → B, where species B is the reactant in the titration, the amount of B generated by passage of a current I (in A) for a time t (in s) is It/(nF) mol, where F is the Faraday constant.
Note 2: Coulometric titration fulfils the criterion for a primary measurement procedure, as it does not require a standard of the quantity being measured.
Source: Adapted from  p 47.
6.3.7 direct titration
Titration in which the titrant reacts directly with the analyte.
6.3.8 end-point
Indication [VIM 4.1] of the equivalence-point of a titration.
Note 1: The conditions of the titration should be chosen so that the end-point is as close as possible to the equivalence-point.
Note 2: The difference between the end-point and equivalence-point is the titration error.
Source: Adapted from .
6.3.9 end-point error
Difference in the amount of titrant, or the corresponding difference in the amount of substance being titrated, between the end-point value and equivalence-point value.
Note: End-point error is usually expressed as a volume of titrant solution.
6.3.10 equivalence-point
Stage of a titration at which the titrant has completely reacted with the receiving solution according to the stoichiometry of the reaction.
Source: Adapted from  p 47.
6.3.11 visual indicator
Substance which interacts with species in a titration, giving a visual change at the end-point.
Note 1: Visual indicators may be described by the nature of the indication (colour indicator, one-colour indicator, two-colour indicator, adsorption indicator) or by the nature of the reaction (acid-base indicator, redox indicator).
Note 2: If the indicator takes part in the reaction, for example an acid base indicator, it must be added in sufficiently small amount to avoid disrupting the equivalence-point through its interaction with the reacting species.
Note 3: When there is no ambiguity, the term ‘indicator’ may be used for visual indicator.
Example: phenolphthalein changes from colourless to pink in the pH range 8.2–10.0 and so may be used as an indicator for a strong base-weak acid titration.
6.3.12 indicator consumption error
Systematic error arising from the reaction of a visual indicator with a reactant in a titration.
Note 1: The magnitude and sign of this error depends on the amount of indicator used and the nature of the interaction between the indicator and the analyte.
Note 2: A significant compensation of this error usually takes place because the standardization of the titrant is carried out in similar conditions to the analysis titration.
6.3.13 mixed indicator
Visual indicator containing a supplementary dye selected to heighten the overall colour change at the end-point of a titration.
Example: A solution of bromocresol green (1 g/L) and methyl red (0.2 g/L) gives a sharper colour change for acid titrations of the weak base morpholine.
6.3.14 precipitation indicator
Visual indicator precipitated, with concomitant colour change, by reaction with excess titrant at the end-point of a titration.
Example: Mohr method for chloride analysis using chromate. (See 6.2.2)
6.3.15 redox indicator
Visual indicator that reacts with excess titrant or analyte with a colour change at the end-point of a titration.
Example: Titration of iodine by thiosulfate uses a starch solution to produce a strong blue colour. At the end-point, when all iodine is reacted, the solution becomes colourless.
6.3.16 redox titration
Titration depending on the oxidation or reduction of the analyte by the titrant.
Note: The end-point may be observed using a redox indicator.
Examples: Titrations using iodine as an oxidising agent or as the product of oxidation of iodide are known as iodometry.
6.3.17 receiving solution
Reactant solution being titrated.
Source: Adapted from  p 47.
6.3.18 titrant
Reactant solution added in titration.
Source: Adapted from  p 47.
6.3.19 titration
Addition of a reactant A (known as the titrant) in measured increments to a solution containing a second reactant B with provision for some means of recognizing (indicating) the end-point at which essentially all of B has reacted. The stoichiometry of the reaction and the amount of one reactant (A or B) is known, allowing the calculation of the amount of the other reactant.
Note 1: The titrant can be added by volume (volumetry) or by mass.
Note 2: If the titrant reacts directly with the analyte, the process is known as a direct titration. In a back titration a known, excess amount of reactant is added, followed by titration of the remaining reactant.
Note 3: If the volume of the solution containing the analyte is known, the concentration of the analyte solution may be calculated.
Note 4: The term ‘titre’, meaning a measure of the concentration of the solution of known concentration or the titration volume, is considered obsolete, and its further use deprecated.
Source: Adapted from  p 47.
6.3.20 titration error
Sum of end-point error and indicator consumption error.
6.3.21 titrimetric analysis
Methods of analysis employing titration.
Note: The titrant added is usually measured by volume (volumetry) but it can be weighed.
Source: Adapted from  p 47.
6.3.22 total alkalinity, AT
Alkalinity measured by titration with methyl red as the visual indicator.
Note: The pH at the end-point is about 4.8 and therefore
Source:  (with minor change).
6.3.23 transition potential (of a redox indicator)
Potential at which a redox indicator changes colour, marking the end-point of a titration.
Note: The transition potential is a function of the formal redox potential, the total concentration of the indicator (especially for one colour indicators), the depth of the colour layer, the minimal observable absorbance change (which depends on wavelength and eye sensitivity), and the absorption coefficient.
6.3.24 universal indicator
Visual indicator for acid-base titrations composed of a solution of several pH indicators (main components are thymol blue, methyl red, bromothymol blue, and phenolphthalein) that changes colour through a range of pH values, usually from 1 to 14.
Note: Universal indicators are commercially available in the form of a solution or in paper strips accompanied by a colour matching chart.
6.3.25 volumetric titration
Titration in which the titrant is added by volume.
Note: Titrant is usually added from a calibrated burette.
Source: Adapted from .
7 Quality of results
7.1 Aspects of quality
To have value, measurement results must be metrologically traceable  to an appropriate reference, which in the cases treated in this chapter are SI units of mass, volume, and amount of substance. A statement of measurement uncertainty always accompanies a traceable result. Methods must be validated and verified for use by a particular operator at a particular time. Accreditation to an appropriate standard, such as ISO 17025 , is overseen by organisations usually with governmental or quasi-governmental status. Gaining accreditation for a particular method shows that a laboratory is using validated methods by competent personnel, but of course can never guarantee a reliable result . In this section we will review the components of measurement uncertainty of mass and volume measurements and then apply this to the preparation of a standard solution and a typical titration. It is noted that the metrological traceability chain will involve multiple branches , often through amount fraction or mass fraction. For more information, see the chapter on quality assurance in the forthcoming 4th edition of the Orange Book , or in .
7.2 Uncertainty associated with mass measurement
Guides to estimating the uncertainty of measurement are published by EURACHEM/CITAC  and JCGM . The examples given here essentially follow the bottom-up GUM approach , because the measurements are very straightforward in a metrological sense and can be described in sufficient detail to allow uncertainty contributions to be assessed and combined.
In general, in instrumental chemical analysis the uncertainty arising from mass measurements, assuming an appropriate balance has been used, is often negligible compared with other sources of uncertainty.
7.2.1 Uncertainty components
For tared weighing, uncertainty sources are i) repeatability, ii) readability (digital resolution) of the balance scale, and iii) the contribution due to the uncertainty in the calibration function of the scale. The last has two components, the sensitivity of the balance and its linearity. Of these, sensitivity (the slope of the calibration) gives a negligible contribution to uncertainty, because the mass by difference is measured on the same balance over a very narrow range. In this discussion, corrections (and their uncertainties) for the buoyancy of air will also be neglected (see 3.2.2), although even when a buoyancy correction is deemed unnecessary, buoyancy may still add a non-negligible uncertainty .
Standard uncertainty due to repeatability is taken as the standard deviation of repeated measurements (ur=sr) and is established for a given operator as part of their induction or taken from validation data. That the repeatability still applies may be checked from time to time by duplicate weighing and ensuring that the difference in duplicate measurements is within the repeatability limit (r=2√2 sr), which is the 95 % interval about zero for the difference .
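The duplicate check described above can be sketched as follows (an assumed workflow for illustration; the function name and masses are invented):

```python
# Illustrative sketch: checking that balance repeatability still holds
# by comparing duplicate weighings against the repeatability limit
# r = 2*sqrt(2)*s_r (95 % interval about zero for a difference).
import math

def within_repeatability_limit(m1, m2, s_r):
    """True if |m1 - m2| is inside the repeatability limit."""
    return abs(m1 - m2) <= 2 * math.sqrt(2) * s_r

# With s_r = 0.18 mg the limit is about 0.51 mg, so a difference of
# 0.34 mg passes and a difference of 0.60 mg fails:
print(within_repeatability_limit(500.21, 500.55, 0.18))  # → True
print(within_repeatability_limit(500.00, 500.60, 0.18))  # → False
```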
Errors due to linearity are given in the calibration certificate as maximum permissible errors for different ranges of weighing, ±a. This guarantees that weighing a mass m many times will not give a mean result outside m±a, but makes no statement about the probability of values within the range. Therefore, a uniform distribution is assumed and the standard uncertainty is u=a/√3.
The readability of the scale of an electronic balance is ±half the smallest digit. The standard uncertainty is this value divided by √3.
The components are combined in quadrature:
uc(m) = √(ur² + ulin² + uread²)
The standard uncertainty can be multiplied by a coverage factor to obtain an expanded uncertainty with 95 % coverage probability, but as masses (and volumes) are usually used as part of an ongoing calculation, the combined standard uncertainty is often quoted.
Calculated values in these examples are rounded to two significant figures. It is recommended that, when calculating uncertainties, the manipulation of results be done in a spreadsheet that records full precision. Only when reporting results should rounding be applied.
A four-figure tared balance measuring a sample of 0.5 g has a linearity of ±0.15 mg, and the least significant digit is 0.1 mg. Repeated weighing of a standard mass had a standard deviation of 0.18 mg.
The combined standard uncertainty is
uc(m) = √(0.18² + (0.15/√3)² + (0.05/√3)²) mg = 0.20 mg
In this example, repeatability represents about 80 % of the total uncertainty. The relative combined uncertainty is 0.0002/0.5=0.0004 or 0.04 %.
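The calculation in this worked example can be sketched as follows (values taken from the example; the function name is ours):

```python
# Illustrative sketch: combined standard uncertainty of a tared
# weighing, combining repeatability, linearity (uniform distribution,
# a/sqrt(3)) and readability (half a digit, uniform) in quadrature.
import math

def u_mass_mg(s_r, a, d):
    """s_r: repeatability (mg); a: linearity limit (mg); d: smallest digit (mg)."""
    u_lin = a / math.sqrt(3)
    u_read = (d / 2) / math.sqrt(3)
    return math.sqrt(s_r**2 + u_lin**2 + u_read**2)

# Example above: s_r = 0.18 mg, a = 0.15 mg, digit = 0.1 mg
print(round(u_mass_mg(0.18, 0.15, 0.1), 2))  # → 0.2
```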
7.3 Uncertainty associated with volume measurement
Volume measurement has three major components of uncertainty: calibration linearity, repeatability, and temperature. Calibration and repeatability are treated as for mass measurements, except that the volume of a pipette or burette is most likely to be near the nominal value within the quoted maximum permissible error range. In this case a triangular distribution is more appropriate and the standard uncertainty is a/√6; that is, because we are more confident of the nominal value, the uncertainty is less than if we assumed that all values in the range were equally probable.
Subsumed in repeatability is the operator’s ability to accurately fill a pipette or volumetric flask to the mark, or burette or measuring cylinder to the desired volume graduation, and then dispense that volume correctly.
Glassware is calibrated at 20 °C. Temperature changes mainly affect the volume of the liquid in the glassware, rather than the glass itself. The volume expansion coefficient of water (this value is used for any aqueous solution) is 0.00021 °C−1. Thus, for each 1 °C away from 20 °C, water in a 100 mL volumetric flask increases or decreases by 0.021 mL. If the standard uncertainty of the temperature of the laboratory is uT (in °C), then the uncertainty in a volume V is uV = 0.00021×V×uT. The temperature of a laboratory is routinely monitored and a conservative range is taken with a uniform distribution.
In these examples we estimate the uncertainty of delivering an aqueous solution from a Grade A 100 mL volumetric flask. In the first calculation, uncertainty components from temperature changes and from the calibration error of the flask are included. Fill-and-weigh experiments are then used to measure the volume of the flask and to correct the nominal volume. Thirdly, the temperature is measured and the volume corrected back to 20 °C. Each step improves the uncertainty.
A 100 mL Grade A volumetric flask is certified to within ±0.1 mL. The standard uncertainty of this calibration, assuming a triangular distribution, is 0.1 mL/√6 = 0.041 mL. Repeatability standard deviation from ten fill-and-weigh measurements is 0.022 mL and is used directly as a standard uncertainty. If the laboratory temperature is known to vary between 16 and 24 °C, the standard uncertainty of the temperature is 4/√3 °C = 2.3 °C. This gives a standard uncertainty in volume of 0.00021×100×4/√3 = 0.048 mL.
The three contributions are combined to give the combined standard uncertainty uc(V) of a single measurement of the volume V = 100 mL
uc(V) = √(0.041² + 0.048² + 0.022²) mL = 0.067 mL
The relative combined standard uncertainty is 0.067 mL out of 100 mL, or 0.07 %.
Suppose we now take the results from the fill-and-weigh experiment to correct the volume of the flask. The mean of the ten fill-and-weigh experiments is 100.03 mL and the standard deviation, as before, 0.022 mL. To simplify the calculation, we ignore any uncertainty of the weighing. It is now known that the flask delivers 100.03 mL with standard deviation of the mean of 0.022/√10 = 0.0070 mL. This new standard uncertainty of 0.0070 mL replaces the standard uncertainty calculated from the manufacturer’s maximum permissible error (0.041 mL). Now for a single delivery of 100.03 mL, the uncertainty is
u = √(0.0070² + 0.048² + 0.022²) mL = 0.053 mL
By performing a calibration on a particular piece of glassware we have reduced the uncertainty somewhat. However, uncertainty of temperature is now the major component.
We now measure the temperature of the laboratory while we use the volumetric flask as 22.3 °C with a standard uncertainty of 0.5 °C. The volume of 100.03 mL at 22.3 °C may now be corrected back to 20 °C (if this is required for comparison and other calculations)
V(20 °C) = 100.03 × (1 − 0.00021 × 2.3) mL = 99.98 mL
with uncertainty from the temperature of 0.00021×100×0.5 = 0.011 mL. The new combined standard uncertainty for delivery of a single volume is
u = √(0.0070² + 0.011² + 0.022²) mL = 0.026 mL
Having corrected systematic effects, the remaining uncertainty is dominated by repeatability.
The calculated uncertainties for these three examples are shown in Table 7.3-1.
|Component|No correction|Calibrated|Calibrated and temperature corrected|
|ucal|0.041|0.0070|0.0070|
|uT|0.048|0.048|0.011|
|sr|0.022|0.022|0.022|
|uc(V)|0.067|0.053|0.026|
ucal for calibration error (±0.1 mL); uT for temperature range ±4 °C; sr repeatability standard deviation. All values in mL.
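The three cases can be reproduced with a short sketch (illustrative only; component values, in mL, are the rounded figures from the text):

```python
# Illustrative sketch: combined standard uncertainty of a delivered
# volume for the three cases above (all values in mL).
import math

def u_volume(u_cal, u_temp, s_r):
    """Quadrature combination of calibration, temperature and repeatability."""
    return math.sqrt(u_cal**2 + u_temp**2 + s_r**2)

print(round(u_volume(0.041, 0.048, 0.022), 3))  # no correction  → 0.067
print(round(u_volume(0.007, 0.048, 0.022), 3))  # calibrated     → 0.053
print(round(u_volume(0.007, 0.011, 0.022), 3))  # + temperature  → 0.026
```

Each correction removes a systematic component, so the combined uncertainty falls step by step until repeatability dominates.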
7.4 Uncertainty associated with preparing standard solutions
The preparation of a standard solution by dissolving a mass of material of known purity in a volume is the first step in many analytical measurements. The standard solution might be used for titration, or to make a series of reference solutions used for calibration.
Recalling equation (4), uncertainties are combined as the squares of relative uncertainties, which leads to an expression for the combined standard uncertainty of cA
u(cA)/cA = √[(u(PA)/PA)² + (u(mA)/mA)² + (u(MA)/MA)² + (u(Vsol)/Vsol)²]
The uncertainties of mA and Vsol are obtained by the analysis outlined in 7.2 and 7.3, respectively. Uncertainty of molar mass is considered negligible, the relative uncertainty being usually much less than 0.01 %. However, in the following example, we include u(MA) calculated from atomic weight data .
Potassium hydrogen phthalate (KHP) is a commonly used standard for acid-base titrations (see 4.1.2). An approximately 0.1 mol L−1 solution is obtained by dissolving 2.0 g in water and making up to 100 mL. The NIST certificate states that the purity is (99.9934±0.0076) %, where 0.0076 % is the expanded uncertainty with a coverage factor [VIM 2.38], k=2.04. The standard uncertainty of the purity (as a fraction) is therefore 0.000076/2.04=0.000037.
Assume the measured mass is 2.051 g. The calculated concentration is 0.100423 mol L−1. In this example the uncertainties of mass and volume measurements are 0.2 mg and 0.067 mL, respectively, and MKHP is 204.223 with u=0.004. The combined standard uncertainty is 0.000068 mol L−1. Almost all arises from the uncertainty in volume. A spreadsheet calculation is shown in Table 7.4-1. The standard uncertainties are entered in the column ‘u’, converted to relative uncertainties (u/x) by dividing by the value of the quantity (note volume is in litres), squared and summed (Excel function SUMSQ), and then the square root is taken to give the combined relative uncertainty. This is multiplied by the calculated concentration to give the combined standard uncertainty.
|Quantity|Value x|u|u/x|
|PKHP|0.999934|0.000037|0.000037|
|mKHP/g|2.051|0.0002|0.000098|
|Vsol/L|0.100|0.000067|0.00067|
|MKHP/(g mol−1)|204.223|0.004|0.000020|
|cKHP/(mol L−1)|0.100423|0.000068|0.00068|
Note that in this calculation the repeatability contributions are contained within the uncertainties for mass and volume.
The concentration should therefore be reported as 0.10042 mol L−1 with standard uncertainty u=0.00007 mol L−1.
As a further example, the EURACHEM guide Quantifying Uncertainty in Analytical Measurement  gives the example of estimating the uncertainty in the mass concentration of a cadmium standard (see Appendix 1 of ). Here 0.1 g of cadmium is weighed with a relative standard uncertainty of 0.0005, some five times greater than the relative uncertainty in the example of potassium hydrogen phthalate. The combined standard uncertainty now has similar contributions from mass and volume and an overall relative uncertainty of 0.0009.
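The spreadsheet calculation described in 7.4.1 (relative uncertainties squared, summed, and square-rooted) can be sketched in a few lines (values from the example; names are ours):

```python
# Illustrative sketch of the Table 7.4-1 calculation: relative standard
# uncertainties combined in quadrature (values from the KHP example).
import math

quantities = {               # value x, standard uncertainty u
    "purity":        (0.999934, 0.000037),
    "mass/g":        (2.051,    0.0002),
    "volume/L":      (0.100,    0.000067),
    "M/(g mol-1)":   (204.223,  0.004),
}

c = 2.051 * 0.999934 / (204.223 * 0.100)                      # mol/L
rel = math.sqrt(sum((u / x) ** 2 for x, u in quantities.values()))
print(f"{c:.6f} {c * rel:.6f}")  # → 0.100423 0.000068
```

As in the text, the volume term dominates the sum, so improving the balance would barely change the result.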
7.5 Uncertainty associated with titration
Titrating to an end-point with a standardised titrant leads to the concentration of the test solution by equation 10. As the measurement function involves multiplication and division, we sum the squares of the relative standard uncertainties.
Note that the stoichiometric coefficients are exact whole numbers with no uncertainty.
Assuming A is titrated into B, VB is a fixed volume delivered by pipette and VA is the end-point volume, the value of which includes the titration error. Because titration error is a systematic effect, if it is considered significant a correction should be applied, with some assessment of the uncertainty of the correction. Titrations are replicated to obtain the repeatability standard deviation of cB, which is treated as another component, with no further random effects then included in u(VA) and u(VB). The EURACHEM Guide  recommends including temperature effects, because thermal equilibrium is unlikely to be attained in the solutions, so this effect will not cancel.
Using the standard solution of potassium hydrogen phthalate (7.4.1), a sodium hydroxide solution was standardised to be 0.10214 mol L−1 with u=0.00010 mol L−1 (see Example A2 in ). Four 10 mL-aliquots of a solution of hydrochloric acid of unknown concentration were titrated with the standard sodium hydroxide solution giving end-point volumes of 10.91 mL, 10.86 mL, 10.88 mL, and 10.89 mL, with an average of 10.885 mL and standard deviation 0.0208 mL.
The concentration of the hydrochloric acid solution is
c(HCl) = c(NaOH) × V(NaOH)/V(HCl) = 0.10214 × 10.885/10.00 mol L−1 = 0.11118 mol L−1
Taking each term and evaluating the uncertainty:
cNaOH. The standard uncertainty is given as 0.00010 mol L−1.
VNaOH. The maximum permissible calibration error for a 25 mL burette is ±0.03 mL; assuming a triangular distribution, u=0.03/√6=0.0122 mL. If the temperature range in the laboratory is ±4 °C, the standard uncertainty for the mean titration volume is u=0.00021×4/√3×10.885=0.0053 mL (see also 7.3.1). The repeatability standard uncertainty is calculated from the standard deviation of the four replicate titration volumes by assuming a scaled-and-shifted t-distribution
u = (s/√n) × √[(n−1)/(n−3)]
Note that this replaces the assumption of a Gaussian distribution, which would require a Student-t value at (n−1) degrees of freedom. Therefore u=0.01803 mL. Combining these uncertainties gives
u(VNaOH) = √(0.0122² + 0.0053² + 0.0180²) mL = 0.022 mL
Titration error for a strong acid – strong base is considered negligible.
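The scaled-and-shifted t evaluation of the repeatability component can be sketched as follows (illustrative; `statistics.stdev` gives the sample standard deviation, and the formula requires n > 3):

```python
# Illustrative sketch: standard uncertainty of a mean titration volume
# from a scaled-and-shifted t-distribution, u = (s/sqrt(n)) * sqrt((n-1)/(n-3)).
import math
import statistics

def u_mean_t(values):
    """Standard uncertainty of the mean for n > 3 observations."""
    n = len(values)
    s = statistics.stdev(values)
    return s / math.sqrt(n) * math.sqrt((n - 1) / (n - 3))

vols = [10.91, 10.86, 10.88, 10.89]    # end-point volumes, mL
print(round(u_mean_t(vols), 5))        # → 0.01803
```

For n = 4 the factor √[(n−1)/(n−3)] is √3, so the result is noticeably larger than the plain standard deviation of the mean (s/√n).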
VHCl. The maximum permissible calibration error for a 10.00 mL pipette is ±0.02 mL; assuming a triangular distribution, u=0.02/√6=0.0082 mL. By a similar calculation to that of the previous section, the uncertainty arising from temperature is u=0.00021×4/√3×10=0.0049 mL. Random effects are included in u(VNaOH). Therefore,
u(VHCl) = √(0.0082² + 0.0049²) mL = 0.0096 mL
The combined standard uncertainty is, therefore, obtained from
u(cHCl)/cHCl = √[(u(cNaOH)/cNaOH)² + (u(VNaOH)/VNaOH)² + (u(VHCl)/VHCl)²]
The relative combined standard uncertainty is 0.33 %, showing titration is an exceptionally accurate analytical method. Figure 7.5-1 shows the contributions made by the components of the titration, where repeatability of the titration is shown separately from the other components of u(VNaOH).
References to ISO standards that have been adopted by IUPAC, such as the International Vocabulary of Metrology  use the ISO approach for terminology work. Thus, definitions are phrases that can substitute for the term, and have neither initial capital nor final period.
7.6.1 combined standard measurement uncertainty
combined standard uncertainty
standard measurement uncertainty that is obtained using the individual standard measurement uncertainties associated with the input quantities in a measurement model
Note: In case of correlations of input quantities in a measurement model, covariances must also be taken into account when calculating the combined standard measurement uncertainty; see also  9-2.3.4.
Source: [VIM 2.31]
7.6.2 coverage factor
number larger than one by which a combined standard measurement uncertainty is multiplied to obtain an expanded measurement uncertainty
Note: A coverage factor is usually symbolised k (see also  2.3.6).
Source: [VIM 2.38]
7.6.3 expanded measurement uncertainty
product of a combined standard measurement uncertainty and a coverage factor larger than the number one
Note: The coverage factor depends upon the type of probability distribution of the output quantity in a measurement model and on the selected coverage probability.
Source: [VIM 2.35]
7.6.4 maximum permissible error
extreme value of measurement error, with respect to a known reference quantity value, permitted by specifications or regulations for a given measurement, measuring instrument, or measuring system
Source: [VIM 4.26]
7.6.5 measurement uncertainty
uncertainty of measurement
non-negative parameter characterising the dispersion of the quantity values being attributed to a measurand, based on the information used
Note 1: Measurement uncertainty includes contributions arising from systematic effects, such as contributions associated with corrections and the assigned quantity values of measurement standards, as well as the definitional measurement uncertainty. Sometimes estimated systematic effects are not corrected for but, instead, associated measurement uncertainty contributions are incorporated.
Note 2: The parameter may be, for example, a standard deviation termed standard measurement uncertainty (or a specified multiple of it), or the half-width of an interval, having a stated coverage probability.
Note 3: Measurement uncertainty comprises, in general, many contributions. Some of these may be evaluated by Type A evaluation of measurement uncertainty from the statistical distribution of the quantity values from series of measurements and can be characterised by standard deviations. The other contributions, which may be evaluated by Type B evaluation of measurement uncertainty, can also be characterised by standard deviations, evaluated from probability density functions based on experience or other information.
Note 4: In general, for a given set of information, it is understood that the measurement uncertainty is associated with a stated quantity value attributed to the measurand. A modification of this value results in a modification of the associated uncertainty.
Source: [VIM 2.26]
7.6.6 quality
degree to which a set of inherent characteristics of an object fulfils requirements
Note 1: The term “quality” can be used with adjectives such as poor, good, or excellent.
Note 2: “Inherent”, as opposed to “assigned”, means existing in the object.
Source: , 3.6.2
7.6.7 standard measurement uncertainty
standard uncertainty of measurement
measurement uncertainty expressed as a standard deviation
Source: [VIM 2.30]
8 Index of terms
adsorption indicator 6.3.1
alkalinity, A 6.3.2
back titration 6.3.3
colour indicator 6.3.4
combined standard measurement uncertainty 7.6.1
combined standard uncertainty. See combined standard measurement uncertainty 7.6.1
composite alkalinity, AP 6.3.5
conventional mass, mc 3.4.1
coulometric titration 6.3.6
coverage factor 7.6.2
direct titration 6.3.7
end-point error 6.3.9
equivalent (in volumetric and gravimetric analysis) 4.2.1
expanded measurement uncertainty 7.6.3
expanded uncertainty. See expanded measurement uncertainty 7.6.3
gravimetric analysis 5.2.3
gravimetric factor, gF 5.2.4
gravimetry. See gravimetric analysis 5.2.3
homogeneous precipitation 5.2.5
indicator. See visual indicator 6.3.11
indicator consumption error 6.3.12
indicator error. See titration error 6.3.20
maximum permissible error 7.6.4
measurement uncertainty 7.6.5
mixed indicator 6.3.13
precipitation (in chemistry) 5.2.7
precipitation indicator 6.3.14
redox indicator 6.3.15
redox titration 6.3.16
relative supersaturation. See von Weimarn ratio 5.2.11
solubility, s 5.2.8
standard measurement uncertainty 7.6.7
standard solution 4.2.3
standard uncertainty of measurement. See standard measurement uncertainty 7.6.7
standard uncertainty. See standard measurement uncertainty 7.6.7
stock solution 4.2.5
thermogravimetric analysis (TGA) 5.2.10
thermogravimetry (TG). See thermogravimetric analysis (TGA) 5.2.10
titration error 6.3.20
titrimetric analysis 6.3.21
titrimetry. See titrimetric analysis 6.3.21
total alkalinity, AT 6.3.22
transition potential (of a redox indicator) 6.3.23
uncertainty of measurement. See measurement uncertainty 7.6.5
uncertainty. See measurement uncertainty 7.6.5
universal indicator 6.3.24
visual indicator 6.3.11
von Weimarn ratio 5.2.11
9 Index of abbreviations
A. See alkalinity 6.3.2
AP. See composite alkalinity 6.3.5
AT. See total alkalinity 6.3.22
gF. See gravimetric factor 5.2.4
mc. See conventional mass 3.4.1
s. See solubility 5.2.8
TG. See thermogravimetric analysis 5.2.10
TGA. See thermogravimetric analysis 5.2.10
Membership of sponsoring bodies
Membership of IUPAC Analytical Chemistry Division (Division V) at the start of this project, in 2015, was as follows:
President: D. B. Hibbert; Vice-President: J. Labuda; Secretary: Z. Mester; Past President: M. F. Camões; Titular Members: C. Balarew, Y. Chen, A. Felinger, H. Kim, M. C. Magalhães, H. Sirén; Associate Members: R. Apak, P. Bode, D. Craston, Y. H. Lee, T. Maryutina, N. Torto; National Representatives: O. C. Othman, L. Charles, P. De Bièvre, M. Eberlin, A. Fajgelj, K. Grudpan, J. Hanif, D. Mandler, P. Novak, and D. Shaw.
The 2016–2017 membership of Division V was as follows:
President: Jan Labuda; Vice-President: Zoltan Mester; Secretary: Attila Felinger; Past President: D. Brynn Hibbert; Titular Members: Derek Craston, Tatyana Maryutina, Sandra Rondinini, David Shaw, Heli M. M. Sirén, Takae Takeuchi; Associate Members: M. Filomena Camões, Érico M. M. Flores, Hasuck Kim, M. Clara Magalhães, Slavica Ražić; National Representatives: Medhat Al-Ghobashy, Resat Apak, Muhammad Athar, Huan-Tsung Chang, Ales Fajgelj, Wandee Luesaiwong, Stefan Tsakovski, Lea Vilakazi, Earle Waghorne.
This work was prepared under project 2015-028-2-500, “Methods of analysis depending on measurement of mass and volume - revision of the Orange Book Chapter 3”, with membership M. Filomena Camões (Task group chair) and Gary Christian.
International Union of Pure and Applied Chemistry, Funder Id: 10.13039/100006987, Grant Number: 2012-005-1-500.
 D. B. Hibbert, ed. IUPAC Compendium of Terminology in Analytical Chemistry (Fourth edition of the Orange Book), Royal Society of Chemistry, London (in preparation).
 Joint Committee for Guides in Metrology. International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM), JCGM 200:2012, BIPM, Sèvres.
 République Française. Decree on weights and measures, France (7 April 1795).
 International Bureau of Weights and Measures (BIPM). Draft of the ninth SI Brochure (11 December 2015), BIPM (2016).
 N. Fletcher, R. S. Davis, M. Stock, M. J. Milton. arXiv preprint arXiv:1510.08324 (2015).
 B. Andreas, Y. Azuma, G. Bartl, P. Becker, H. Bettin, M. Borys, I. Busch, M. Gray, P. Fuchs, K. Fujii. Phys. Rev. Lett. 106, 030801 (2011). doi:10.1103/PhysRevLett.106.030801
 D. B. Newell, F. Cabiati, J. Fischer, K. Fujii, S. G. Karshenboim, H. S. Margolis, E. de Mirandes, P. J. Mohr, F. Nez, K. Pachucki, T. J. Quinn, B. N. Taylor, M. Wang, B. Wood, Z. Zhang. “The CODATA 2017 Values of h, e, k, and NA for the Revision of the SI”, Metrologia, accepted, online 20 Oct 2017 (2017). doi:10.1088/1681-7575/aa950a
 European Directorate for the Quality of Medicines and Health Care. Qualification of equipment, Annex 8: Qualification of balances: 2013, OMCL Network of the Council of Europe, Paris.
 International Organization of Legal Metrology (OIML). R 76: Non-automatic weighing instruments, Part 1: Metrological and technical requirements – Tests: 2006, OIML, Paris.
 International Organization of Legal Metrology (OIML). D 28: Conventional value of the result of weighing in air: 2004, OIML, Paris.
 International Organization of Legal Metrology (OIML). R 111-1: Weights of classes E1, E2, F1, F2, M1, M1–2, M2, M2–3 and M3, Part 1: Metrological and technical requirements: 2004, OIML, Paris.
 Joint Committee for Guides in Metrology. Evaluation of measurement data – The role of measurement uncertainty in conformity assessment, JCGM 106:2012, BIPM, Sèvres.
 J. Inczédy, T. Lengyel, A. M. Ure. IUPAC Compendium of Analytical Nomenclature, Definitive Rules 1997 (Third Edition of the Orange Book), Port City Press, Baltimore, USA (1998).
 G. D. Christian, P. K. Dasgupta, K. A. Schug. Analytical Chemistry, 7th ed., Wiley & Sons, New York (2014).
 NIST. Certificate of Analysis, Standard Reference Material 84L, potassium hydrogen phthalate: 2010, National Institute of Standards and Technology, Gaithersburg, MD.
 E. R. Cohen, T. Cvitaš, J. G. Frey, B. Holmström, K. Kuchitsu, R. Marquardt, I. Mills, F. Pavese, M. Quack, J. Stohner, H. L. Strauss, M. Takami, A. Thor. Quantities, Units and Symbols in Physical Chemistry (IUPAC Green Book), 3rd ed., The Royal Society of Chemistry, Cambridge (2007). doi:10.1039/9781847557889
 E. Bishop. Indicators: International Series of Monographs in Analytical Chemistry, Elsevier, Amsterdam (2013).
 ISO. 9963-1: Water quality – Determination of alkalinity – Part 1: Determination of total and composite alkalinity: 1994, International Organization for Standardization, Geneva.
 ISO/IEC. 17025: General requirements for the competence of testing and calibration laboratories: 2005, International Organization for Standardization, Geneva.
 EURACHEM/CITAC. CG4: Quantifying Uncertainty in Analytical Measurement, 3rd ed.: 2012, Laboratory of the Government Chemist, London.
 Joint Committee for Guides in Metrology. Evaluation of measurement data – Guide to the expression of uncertainty in measurement, JCGM 100:2008, BIPM, Sèvres.
 ASTM International. E177-14: Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods: 2014, American Society for Testing and Materials, Philadelphia.
 J. Meija, T. B. Coplen, M. Berglund, W. A. Brand, P. De Bièvre, M. Gröning, N. E. Holden, J. Irrgeher, R. D. Loss, T. Walczyk, T. Prohaska. Pure Appl. Chem. 88, 265 (2016). doi:10.1515/pac-2015-0305
 ISO. Quality management systems – Fundamentals and vocabulary, 9000:2015, International Organization for Standardization, Geneva.
The online version of this article offers supplementary material (https://doi.org/10.1515/pac-2017-0410).
©2018 IUPAC & De Gruyter. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. For more information, please visit: http://creativecommons.org/licenses/by-nc-nd/4.0/