Issue Date: March 2015
A Time Before Nipples

What was milk like long ago in evolutionary history? In the absence of a time machine, the next best way to answer this question is to take what is known about the diversity of living mammals and work backwards using deductive logic, just like Sherlock Holmes. Recently, progress in this area has received a major boost from two papers about the different sugars found in monotreme milk—monotremes being the wackiest and most ancestral-like group of mammals, with membership so exclusive it is limited to only two kinds of creature, the platypus and the echidna.
These two recent papers investigate the mid-sized sugar molecules, called oligosaccharides, that occur in platypus [1] and echidna [2] milk, respectively. The chemical structures of these oligosaccharides would likely have been innocuous details of milk evolution if it weren’t for one thing. In both the platypus and the echidna oligosaccharides, a very unusual structure was observed: one that goes by the tag line ‘4-O-acetylation’, meaning that an oxygen atom in a particular position is attached to an acetyl group.
Why would this matter? On the face of it, the finding didn’t make biological sense. Scientists consider one of the main roles of the oligosaccharides in milk to be acting as food for certain species of bacteria, allowing these bacteria to become more common in the infant gut and thus outcompete pathogenic species. At least, that is what happens in humans. But bacteria cannot chomp through oligosaccharides with 4-O-acetylation. That sets up the question of why monotremes (and, most likely, also ancestral mammals) would bother to devote resources to making such strange chemical arrangements. At this point in the puzzle, it helps to think about nipples—in fact, eggs and nipples.
One of the things that make monotremes unique among modern mammals is that they lay eggs. Their eggs are not hard on the outside like chickens’ eggs, but have shells composed of a material rather like parchment, leaving the developing embryo inside prone to dehydration. Creatures called synapsids (the ancestors of mammals) also laid eggs of this nature, and female synapsids are thought to have secreted a kind of primitive precursor to milk from glands in their skin, to stop their eggs from drying out [3]. Their permeable eggs would also likely have taken up nutrients such as calcium from these secretions.
As in synapsids, monotreme milk oozes out from glands and runs all over the skin because platypuses and echidnas do not have nipples. No one really knows when in evolution the nipple came about, but nipples certainly evolved after milk. In this sense, nipples matter because—rather like drinking beer from a bottle, as opposed to an open glass, prevents a drink from being spiked—nipples keep milk away from all the stuff in the environment that might pollute or metabolize it. Without nipples for suckling, newly hatched monotremes drink milk in a way that is more akin to licking a nightclub floor—they lap or vacuum up what has been secreted on the warm, exceedingly hairy, probably sweaty, skin of their mother’s pouch. (Echidnas have proper pouches, while platypuses wrap their beaver-like tails under their abdomens to form temporary pouches.)
So, the cost-benefit analysis of producing oligosaccharides with structures that bacteria cannot eat stacks up differently for monotremes than it does for mammals with nipples. If monotreme mothers were to make milk that bacteria thrived upon, bacteria would run amok in their pouches, probably threatening the survival of their young.
Curiously, there is a strong hint from an old piece of research that young monotremes have a particular enzyme capable of breaking down oligosaccharides with 4-O-acetylation, thus potentially enabling them to use these sugars for energy. Many years ago, a researcher called Michael Messer found this enzyme in a slurry of echidna gut cells. His finding needs backing up with more modern laboratory equipment, but if supported, it suggests a new way of viewing the oligosaccharides in milk. In modern humans, oligosaccharides may be mainly there to feed good bacteria as opposed to infants, but the signs are that their historical role in the mammal lineage was the other way round. The young of early mammals probably metabolized oligosaccharides, while the bacteria in their environment could not.
References
1. Urashima, T. et al. 4-O-acetyl sialic acid (Neu4,5Ac2) in acidic milk oligosaccharides of the platypus (Ornithorhynchus anatinus) and its evolutionary significance. Glycobiology.
2. Oftedal, O. T. et al. Can an ancestral condition for milk oligosaccharides be determined? Evidence from the Tasmanian echidna (Tachyglossus aculeatus setosus). Glycobiology 24(9), 826–839 (2014).
3. Oftedal, O. T. The origin of lactation as a water source for parchment-shelled eggs. Journal of Mammary Gland Biology and Neoplasia 7(3), 253–266 (2002).
Gout, Diet, and Dairy
- Since ancient times, gout has been linked to excess intake of food and alcohol.
- Dietary intervention is a common approach to managing and preventing gout, but recommendations are based on limited evidence.
- A risk factor for gout is high blood levels of uric acid—a metabolic byproduct of purines, which come from the body’s own cells as well as from the diet.
- Purine-rich foods, including meats and seafood, as well as alcohol, are associated with higher blood levels of uric acid and higher incidence of gout.
- Dairy products, vegetables, fruits, grains, and some other foods are associated with lower blood levels of uric acid and lower incidence of gout.
- However, more research is needed to clarify the role of dietary factors in the risk of gout.

Gout is one of the many diet- and lifestyle-related diseases on the rise globally. But the link between gout and food intake is far from new. One of the oldest known forms of arthritis, gout was connected with gluttony, drunkenness, and obesity as far back as the 5th century BC. Hippocrates, the ancient Greek ‘father of western medicine’, attributed the disease to excessive intake of food and wine, and recommended dietary restriction and reduction of alcoholic beverages as treatment.
While modern medicine is far removed from Hippocratic medicine, today’s physicians still uphold specific dietary recommendations for gout patients. For instance, the American College of Rheumatology Guidelines for Gout Management list a number of foods to avoid or limit, and encourage intake of vegetables and low-fat or non-fat dairy products. But what is the evidence behind these dietary recommendations?
Culprits and key ingredients
The culprit behind the classic symptoms of gout—pain and inflammation—is a chemical called uric acid. Uric acid is a natural byproduct of our metabolism, but in some people, it accumulates in the blood due to genetic or environmental risk factors. When blood levels get very high, the uric acid comes out of solution and forms salt crystals. These crystals build up in certain areas of the body—typically the joints—causing pain and inflammation.
Our body makes uric acid when breaking down substances called purines, which are found in all plant and animal cells. That means we metabolize purines coming from our own cells as well as from our diet, especially meats, seafood, and alcoholic beverages.
Around a third of the uric acid in our bodies comes from purines in our food and drink; this is the portion that can be changed to some extent and that varies between individuals. However, only a minority of those with hyperuricemia—a fancy way of saying too much uric acid in the blood—develop gout.
So, has modern science turned up any hard evidence for the ancient gout-and-gluttony link? And can our culinary customs reduce the risk, or even the symptoms, of gout?
Bad diet, bad health, bad uric acid
Several studies have shown a relationship between diet and hyperuricemia, as well as gout. Consumption of purine-rich foods and alcohol has been consistently associated with higher blood levels of uric acid. In contrast, dairy products, vegetables, fruits, grains, and some other foods are associated with lower blood levels of uric acid, as reviewed recently by Ekpenyong and Daniel (1).
For instance, a new study from Korea compared nutrient intake and diet quality between a group of people with hyperuricemia and a control group (2). This observational study analyzed data from more than 9,000 people who underwent health examinations during a specific period; all of them had completed a detailed three-day food record, a questionnaire on dietary habits, and laboratory tests.
The researchers found many clear differences between people with hyperuricemia and those with normal blood levels of uric acid (the controls). The hyperuricemia sufferers had higher body mass index, thicker waists, and unhealthier cholesterol levels than the controls. So it was perhaps not surprising that the hyperuricemia subjects drank more alcohol and ate fewer vegetables. They also had lower intake of dairy products and several important vitamins and minerals than their healthier counterparts.
The Korean study also found that hyperuricemia was around five times more common in men than in women. This is consistent with the fact that gout is more common in men than in women.
Of meat and men
In an earlier study seeking to examine the link between gout and various foods, US researchers followed a group of nearly 50,000 men over a 12-year period (3). At regular intervals, the men answered questionnaires about their eating habits and medical conditions. The researchers found that the men who ate more meat and seafood had a higher risk of gout, whereas those who ate more dairy products had a lower risk.
For example, those who ate more than 1.9 daily servings of meat had a 41 per cent greater risk of gout than those who ate less than 0.8 daily servings. An additional daily serving of meat increased the risk by 21 per cent. As for dairy, those who consumed more than 2.88 daily servings had a 44 per cent lower risk of gout than those who consumed less than 0.88 daily servings; an additional daily dose of dairy further reduced the risk by 18 per cent.
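To get a feel for how those per-serving figures combine, here is a minimal sketch, assuming a simple multiplicative dose-response in which each extra daily serving scales the risk by a fixed factor. The study itself reports adjusted relative risks from its own statistical models, so this is only an illustration of the arithmetic, not the authors’ method.

```python
# Illustrative only: compounding the per-serving relative risks reported above,
# under the simplifying assumption of a multiplicative (log-linear) dose-response.

RR_PER_MEAT_SERVING = 1.21   # +21% gout risk per additional daily serving of meat
RR_PER_DAIRY_SERVING = 0.82  # -18% gout risk per additional daily serving of dairy

def compounded_relative_risk(rr_per_serving: float, extra_servings: float) -> float:
    """Relative risk after a given number of extra daily servings,
    assuming each serving multiplies the risk by the same factor."""
    return rr_per_serving ** extra_servings

for extra in (1, 2, 3):
    meat = compounded_relative_risk(RR_PER_MEAT_SERVING, extra)
    dairy = compounded_relative_risk(RR_PER_DAIRY_SERVING, extra)
    print(f"{extra} extra daily serving(s): meat RR ~ {meat:.2f}, dairy RR ~ {dairy:.2f}")
```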
However, as is typically the case for this type of observational study—and similarly for the Korean study (2)—it was not possible to determine the cause and effect in the observed links between gout, hyperuricemia, and diet.
Trials and tribulations
To dissect the alleged uric acid-lowering effects of dairy products, scientists from New Zealand (4) recruited 16 healthy male volunteers for a randomized controlled trial—the gold standard for directly testing the health effects of various diets or treatments.
Each participant consumed doses of various milk products and a soy control in random order. For each test product, they had their uric acid levels measured immediately before ingestion and then hourly over the following three hours. It turned out that their uric acid blood levels rose by about 10 per cent after intake of the soy control product, but fell by about 10 per cent after intake of the milk products.
Having confirmed that milk could reduce the amount of uric acid in the blood, possibly by increased excretion in the urine, the scientists then tested whether milk products could also help alleviate gout symptoms. In a new randomized controlled trial involving 120 gout patients (5), they found that a daily dose of skimmed milk enriched with two milk components—glycomacropeptide and milk fat extract—somewhat reduced the frequency of gout flare-ups. The patients drinking the enriched skimmed milk also reported greater improvement in pain—a reduction of 10 per cent—compared with controls.
The researchers suggested that the enriched milk might have reduced gout flares not only by increasing excretion of uric acid in the urine, but also by inhibiting the body’s inflammatory response to uric acid crystals in the joints. However, the exact mechanisms and effective doses of bioactive milk components—alone or in combination—remain uncertain.
Whether the effects observed during the three-month trial period would be sustained in the long term is also uncertain. An even more important question is whether the small effects observed in the study—the first of its kind and one of the most carefully designed to date—would be clinically relevant. Scientists who independently evaluated the results (6) doubted their clinical significance and deemed the evidence ‘low-quality’.
The verdict?
As we have seen, several observational studies show a clear association between various dietary factors, hyperuricemia, and gout. But the evidence behind widely used dietary recommendations for alleviating gout is scarce (6). More and larger intervention studies are needed to clarify the role of dietary supplements in managing the risk and symptoms of gout.
Some scientists argue that low-purine diets are not only questionable in terms of their long-term efficacy in gout patients but also too hard to stick to and therefore less likely to succeed than drug-based therapies (7).
And while the jury is still out on the exact role of diet in hyperuricemia and gout, it seems that the general recommendations for a healthy diet may also help prevent high blood levels of uric acid—an important risk factor for not only gout, but also kidney and cardiovascular disease, obesity, and diabetes. As always, prevention is better than cure.
References
1. Ekpenyong CE, Daniel N (2014). Roles of diets and dietary factors in the pathogenesis, management and prevention of abnormal serum uric acid levels. PharmaNutrition: In Press.
2. Ryu KA, Kang HH, Kim SY, Yoo MK, Kim JS, Lee CH, Wie GA (2014). Comparison of nutrient intake and diet quality between hyperuricemia subjects and controls in Korea. Clin Nutr Res 3:56–63.
3. Choi HK, Atkinson K, Karlson EW, Willett W, Curhan G (2004). Purine-rich foods, dairy and protein intake, and the risk of gout in men. N Engl J Med 350:1093–1103.
4. Dalbeth N, Wong S, Gamble GD, Horne A, Mason B, Pool B, Fairbanks L, McQueen FM, Cornish J, Reid IR, Palmano K (2010). Acute effect of milk on serum urate concentrations: a randomised controlled crossover trial. Ann Rheum Dis 69:1677–1682.
5. Dalbeth N, Ames R, Gamble GD, Horne A, Wong S, Kuhn-Sherlock B, MacGibbon A, McQueen FM, Reid IR, Palmano K (2012). Effects of skim milk powder enriched with glycomacropeptide and G600 milk fat extract on frequency of gout flares: a proof-of-concept randomised controlled trial. Ann Rheum Dis 71:929–934.
6. Andrés M, Sivera F, Falzon L, Buchbinder R, Carmona L (2014). Dietary supplements for chronic gout. Cochrane Database Syst Rev 10:CD010156.
7. Robinson PC, Horsburgh S (2014). Gout: Joints and beyond, epidemiology, clinical features, treatment and co-morbidities. Maturitas 78:245–251.
School Children Prefer Their Milk with Added Flavor
- Milk provides valuable nutrients for school children.
- Chocolate milk is favored by school children.
- Removal of chocolate milk from schools reduces total dairy consumption by children.
- The nutritional consequences of removing chocolate milk from schools should be studied in the context of total diet.

School lunches have been a focal point of childhood nutrition for almost a century. Many of my peers recall the school-based bottle-a-day approach to complementing our dietary needs. In recent years, the composition of all foods offered in schools has attracted close scrutiny—especially regarding the consumption of high-sugar drinks. Consideration of total caloric intake has led to a revision of available school beverages in many places around the world, and bans on the sale of drinks based on their sugar content are becoming widespread. This change includes flavored milk products, prompting a series of studies that have assessed the impact of chocolate milk withdrawal on total milk consumption by school children, and the consequences for nutrient intake. The latest results on this question come from a study conducted in Saskatoon, Canada, by Henry et al. [1].
Milk provides valuable nutrients to consumers, especially protein, vitamins, calcium, and other minerals [2]. Flavored milk contains these same nutrients and, despite the increased sugar, is not associated with increased weight gain in children and adolescents [2]. In efforts to reduce sugar consumption, the removal of flavored milk could be like “throwing out the baby with the bathwater.” Indeed, studies have demonstrated that removing chocolate milk from schools decreases school milk consumption and that additional nutritional effects can follow [4].
When there is a fundamental change in the way milk is accessed by school children, nutritionists immediately think of the bigger picture, that is, the broader nutritional consequences [3]. The landscape is not simple, because a diverse range of public and private school programs bears on this issue. Indeed, the intersection between food access and the behavior of children of all ages is terribly complex.
The study by Henry et al. [1] was concerned with these issues in a Canadian context. The scientists were interested both in the impact of a ban on providing chocolate milk in schools and in the behind-the-scenes drivers that alter patterns of milk consumption. The drivers that emerged from the study were most clearly around taste and consumer behavior; in this case, how children choose to spend their money. Interestingly, even young children are very discerning with their expenditure.
The study focused on elementary school children in Saskatoon and compared patterns of milk consumption during periods when chocolate milk was available with periods, in the same schools, when it was not. The first goal was to measure the children’s uptake of plain milk when chocolate milk was removed, and then to calculate the impact on nutrient intake. The scientists recruited children from six schools in the Saskatoon area, both urban and rural; in total, there were over 12,200 students. Around one quarter of all students (22-26%) chose milk to drink when chocolate milk was provided; this dropped to 14% when it was withdrawn and only plain milk was available, a consistent decrease of 40-50% across the schools. Younger children (years 1-4) were more likely to drink milk than older children (years 5-8), and, surprisingly, more urban kids chose to drink milk than rural kids. There was also a small drop in the volume of milk consumed by those who drank the plain milk. Clearly, there was a preference for chocolate milk over plain, but approximately half of the children who preferred chocolate milk swapped to plain milk when there was no choice, and the proportion consuming plain milk increased from 2% to 14%.
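For readers puzzling over how a fall from roughly a quarter of students to 14% amounts to a 40-50% decrease, the short sketch below works through the arithmetic with illustrative round numbers (not the per-school figures from the paper): the reported decrease is relative to the starting share, not a drop of 40-50 percentage points.

```python
# Illustrative arithmetic only: distinguishing a percentage-point drop from a
# relative decrease, using round numbers close to those reported in the text.

def relative_decrease(before: float, after: float) -> float:
    """Decrease expressed as a fraction of the starting value."""
    return (before - after) / before

before_share = 0.25  # ~25% of students chose milk while chocolate milk was on offer
after_share = 0.14   # 14% chose milk once only plain milk was available

print(f"Percentage-point drop: {100 * (before_share - after_share):.0f} points")
print(f"Relative decrease: {100 * relative_decrease(before_share, after_share):.0f}%")
```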
As part of a wider school program, some of the schools in the study had milk available at no cost. Free milk was consumed by over 50% of students when on offer, but their choice was overwhelmingly for chocolate milk. Consumption dropped by 20% when only plain milk was offered, even though it was still free. Among the children who had to pay for their milk, approximately 13% chose to drink milk, but as few as 1% chose plain milk; when chocolate milk was not available, only 6% of these children chose to buy plain milk.
The scientists then spoke to representative groups of children from the schools, asking a series of questions to try to understand what affected their decisions when choosing to drink milk. Taste was a major factor. Sweetness is a taste sensation that is conditioned early in life, and milk has a natural sweetness because of its lactose content. This could mean that the children had a preference for a heightened sweet taste, or that they preferred the cocoa flavor. Another insightful finding was that children avoided using their own money to purchase plain milk. Presumably, chocolate milk was perceived as something more “special”, whereas they could access plain milk at home without spending their precious funds. This was reinforced when schools that had a free milk program were compared with those that did not: free school milk resulted in four times the level of consumption [1].
Most important was translating the change in milk consumption patterns into an understanding of the consequences for nutrient intake. Clearly, decreased consumption of milk leads to a decreased intake of the nutrients derived from that milk, but availability of milk and dairy products outside of school will also have an influence. Do children compensate when they no longer drink milk at school? This was addressed in a survey of the children, which provided a more complete picture of dairy intake. The survey showed an overall decrease of slightly less than 10% in total dairy intake when chocolate milk was removed from school menus. There was no measure of whether children increased intake of other foods to compensate for any decrease, but the scientists concluded that, based on modeling, it was unlikely that all nutrients would be easily substituted.
The Canadian study aligns with other studies performed in the USA, which have shown a decrease in overall milk consumption of approximately 10% when chocolate milk is withdrawn from schools [4,5]. The resulting cautionary message is that a ban on chocolate milk sales should be a carefully considered decision, weighed against the total dietary impact [6,7]. This impact is likely to be much more significant for some children than others, and the contribution of dairy to a balanced diet needs to be considered in this context. Rather than ban chocolate milk, it may make more sense to reduce the cost of plain milk options. The recent reintroduction of free milk programs in New Zealand schools could be one program to watch [8].
References
1. Henry C, Whiting SJ, Phillips T, Finch SL, Zello GA, et al. (2015) Impact of the removal of chocolate milk from school milk programs for children in Saskatoon, Canada. Applied Physiology Nutrition and Metabolism http://nrcresearchpress.com/doi/abs/10.1139/apnm-2014-0242.
2. Murphy MM, Douglass JS, Johnson RK, Spence LA (2008) Drinking flavored or plain milk is positively associated with nutrient intake and is not associated with adverse effects on weight status in US children and adolescents. J Am Diet Assoc 108: 631-639.
3. Black RE, Williams SM, Jones IE, Goulding A (2002) Children who avoid drinking cow milk have low dietary calcium intakes and poor bone health. Am J Clin Nutr 76: 675-680.
4. Hanks AS, Just DR, Wansink B (2014) Chocolate milk consequences: a pilot study evaluating the consequences of banning chocolate milk in school cafeterias. PLoS One 9: e91022.
5. Patterson J, Saidel M (2009) The removal of chocolate milk in schools results in a reduction in total milk purchases in all grades, K-12. J Am Diet Assoc 109: A97.
6. Frary CD, Johnson RK, Wang MQ (2004) Children and adolescents’ choices of foods and beverages high in added sugars are associated with intakes of key nutrients and food groups. J Adolesc Health 34: 56-63.
7. Johnson RK, Appel LJ, Brands M, Howard BV, Lefevre M, et al. (2009) Dietary sugars intake and cardiovascular health: a scientific statement from the American Heart Association. Circulation 120: 1011-1020.
8. Russell T (2013) Schools thirsty for milk scheme. http://www.stuff.co.nz/national/education/9314340/Schools-thirsty-for-milk-scheme.
Infants Take Active Role in Passive Immunity
- Human breast milk contains numerous immune factors that vary in concentration both within and across mothers.
- While maternal differences explain some of this variation, researchers are now investigating how milk composition may change in response to infant illness.
- A new study found that a higher concentration of milk secretory immunoglobulin A (sIgA) was associated with lower rates of infant illness, while higher milk lactoferrin concentration was associated with higher rates of infant illness.
- This suggests that the mammary gland may increase the production of milk lactoferrin, and potentially other immune factors, in response to specific infant needs during times of illness.

The transfer of immune components from a nursing mother to her offspring is called passive immunity. Calling this system passive, however, wrongly implies that antibodies, macrophages, and other anti-microbial factors in milk are simply along for the ride with the nutritional factors that transfer from the maternal blood stream. Numerous studies have demonstrated that maternal factors such as nutrition, stress, and illness influence the concentration of immunological constituents in milk (1-3). And now, a growing body of research, including a new study by Breakey et al. (4), indicates that passive immunity may be actively influenced by the health status of the breastfeeding infant (5-7). Can a sick infant actually increase the quantity of particular immune factors in their mother’s milk to help fight off infection?
Changing needs, changing milk
Decades of research on human milk have firmly established that its composition is dynamic and variable across mothers. What is less clear is where this variation comes from and how it affects offspring growth, development, and health, particularly for milk bioactive factors such as antibodies and other immune components. Hoping to fill in some of the gaps, Breakey et al. (4) test the hypothesis that variations in human milk immunological constituents are related to infant illness.
They present two frameworks for understanding the relationship between milk composition and infant health. In the first, milk immune factors are modeled as protective: an increase in particular immune factors either prevents infection or successfully fights infection to minimize symptoms of illness in offspring. This model predicts a negative relationship between the concentration of the immune factor and symptoms of illness in the infant (higher concentrations in milk, healthier infants). In contrast, the responsive framework predicts a positive relationship between symptoms of infant illness and milk immune factor concentration (higher concentrations in milk, more symptoms of illness). These factors are not causing the illness; rather, responsive components are called into action while the infant is symptomatic to help fight off the infection (4).
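As a toy illustration of the sign logic behind the two frameworks, the sketch below classifies a milk immune factor as ‘protective’ or ‘responsive’ from the direction of a simple correlation with symptom counts. The numbers are invented and the rule is deliberately crude; this is not the statistical approach used by Breakey et al., just a way of making the predicted patterns concrete.

```python
# Toy illustration of the protective vs. responsive frameworks.
# All values below are hypothetical; they are not data from the study.

import numpy as np

def classify_framework(concentrations, symptom_counts):
    """Label a factor 'protective' if its concentration correlates negatively
    with illness symptoms, and 'responsive' if the correlation is positive."""
    r = np.corrcoef(concentrations, symptom_counts)[0, 1]
    return ("protective" if r < 0 else "responsive"), round(r, 2)

# Hypothetical concentrations (arbitrary units) across six infants
sIgA = [9.1, 8.4, 7.9, 6.2, 5.5, 4.8]
lactoferrin = [2.1, 2.4, 3.0, 3.6, 4.2, 4.9]
symptoms = [0, 1, 1, 2, 3, 4]  # illness episodes reported per infant

print(classify_framework(sIgA, symptoms))         # negative correlation -> 'protective'
print(classify_framework(lactoferrin, symptoms))  # positive correlation -> 'responsive'
```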
Mother knows best
Both frameworks require that the mother (and her mammary gland) have access to information about the infant’s health status. But just how might this work? One possibility is that the infant communicates illness directly to the mammary gland while nursing (4). Specifically, infant saliva may contain heightened levels of immune factors when first encountering a pathogen or during times of infection. In response, the mammary gland may increase production of particular immune factors or increase the transport of other immune factors that are produced in the mother’s gut or lymphatic system. Although theoretically and physiologically possible, direct empirical evidence for this scenario is currently lacking. Even if the mammary gland does not come into direct contact with infant saliva (or other bodily fluids), mothers most certainly do, particularly if the infant is ill. Kissing, sharing utensils, changing diapers, co-sleeping, and many other nurturing activities could transmit immune cells (or pathogens) from offspring to mother, which could then influence production of milk immunological components.
Another possibility is that the mother anticipates (or assumes) illness in the infant because she herself is exposed to the pathogen. However, this scenario also presents an empirical challenge: determining the “intention” of the elevated immune components. Changes in immune components could be actively targeting the needs of the infant, or passively reflecting the increased concentration in the maternal bloodstream (4). And, of course, it is plausible that it could be a little bit of both. Teasing these two apart does not change the interpretation of an immune factor as protective or responsive, however, as Breakey et al. (4) are looking only at the influence of infant illness on milk composition.
Protective or responsive?
In this case, the milk samples and data on infant health came from the Toba population in northeastern Argentina. Previous research on the effect of infant illness on milk immune factors came from populations living in developed, Western countries (5-7) with lower pathogen exposure than that experienced by the Toba. Breakey et al. (4) propose that the increased pathogen exposure of the Toba may highlight changes in milk composition that might be obscured in more hygienic environments.
Thirty mother-infant pairs were visited once a month for 4-5 months during 2012 and 2013. Milk was collected at each visit, as was information about the infant’s health at the time of the visit and in the month prior to the visit. Specifically, researchers asked about symptoms related to gastrointestinal illness (vomiting, diarrhea) or respiratory illness (cough, cold, mucus). In order to test the protective and responsive frameworks, results were analyzed for three time periods: (1) symptoms in the month preceding the study visit, (2) symptoms at the time of the study visit, and (3) symptoms in the month following the study visit.
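To make the three time windows concrete, here is a minimal data-layout sketch with made-up values and hypothetical column names; it is not the authors’ dataset, only an illustration of how each milk sample can be lined up with illness reports before, at, and after the visit.

```python
# Hypothetical layout of monthly visit data for one mother-infant pair.
# Column names and values are invented for illustration.

import pandas as pd

visits = pd.DataFrame({
    "visit_month": [1, 2, 3, 4, 5],
    "sIgA": [5.2, 4.9, 6.1, 5.7, 5.0],   # concentration in that visit's milk sample
    "ill_past_month": [0, 1, 0, 0, 1],   # symptoms reported for the month before the visit
    "ill_at_visit": [0, 0, 1, 0, 0],     # symptoms present at the time of the visit
})

# Symptoms "in the month following" visit t are what gets reported as the
# "past month" at visit t+1, so they can be derived with a one-step shift.
visits["ill_next_month"] = visits["ill_past_month"].shift(-1)

print(visits)
```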
Milk compositional analyses focused on two well-studied immune factors, secretory immunoglobulin A (sIgA) and lactoferrin. SIgA is the primary antibody found in human breast milk and is derived from antibodies produced in the mother’s gut. SIgA works by binding to pathogens and keeping them from attaching to the infant’s mucosal surfaces (both respiratory and gastrointestinal). Lactoferrin is produced by epithelial cells in the mammary gland and has numerous functions in promoting host immune defense. It is perhaps best known for binding to iron in the infant’s gastrointestinal tract. Bacteria need iron for reproduction, so minimizing their access to this essential mineral inhibits their ability to proliferate.
Both sIgA and lactoferrin concentrations were significantly related to reported symptoms of infant illness. Higher levels of sIgA were associated with lower incidences of illness in both the preceding and the subsequent month, while higher levels of lactoferrin meant an infant was more likely to have experienced illness in the preceding month and to experience illness in the subsequent month. Based on these findings, the authors propose that sIgA fits best within the protective framework (higher concentration, healthier infants), whereas lactoferrin fits best within the responsive framework (higher concentration, more symptoms of illness). Moreover, they propose that the concentrations of these two factors may act as biomarkers for infant health: relatively higher sIgA can be thought of as a biomarker for a healthy infant, and higher lactoferrin as a biomarker for a sick infant (4).
Opposite effects
While these results are exciting, they are also a bit perplexing. Why might these two immune factors have opposite relationships with infant health? Why not elevate lactoferrin production when an infection is first detected? The authors offer several possible explanations. From an evolutionary perspective, energy savings are important. It may be that the cost to the mother of producing lactoferrin exceeds that of producing sIgA, or there may be costs (or consequences) to the infant from ingesting milk with consistently high levels of lactoferrin, and thus, levels would only be elevated when the infant is experiencing an infection. From a functional perspective, the two immune factors may differ in their lifespan or how long they are effective once ingested by the infant. If lactoferrin degrades quickly, it might not be the most efficient strategy to elevate production in order to prevent infection.
The authors also concede that their study design makes it difficult to fully explain the temporal changes in immune factors. Sampling at one-month intervals may have caused them to miss important changes in sIgA or lactoferrin concentration. Collecting milk samples more frequently and using a more objective measure of infant health are suggested for future studies. Breakey et al. (4) also suggest that a larger sample size might cancel out the “noise” of chronically ill infants, or of mothers who simply produce high levels of immune factors throughout lactation. But even with all of these caveats, Breakey et al. present a set of testable hypotheses about the relationship between milk immune factors and infant illness. They present strong evidence that sIgA and lactoferrin, and potentially other immunological constituents, are not simply passively transferred from mother to offspring.
References
1. Bachour P, Yafawi R, Jaber F, Choueiri E., Abdel-Razzak Z. 2012. Effects of smoking, mother’s age, body mass index, and parity number on lipid, protein, and secretory immunoglobulin A concentrations of human milk. Breastfeeding Medicine 7: 179-188.
2. Groer M, Davis M, Steele K. 2004. Associations between human milk SIgA and maternal immune, infectious, endocrine, and stress variables. Journal of Human Lactation 20: 153-158.
3. Kawano A, Emori Y. 2013. Changes in maternal secretory immunoglobulin A levels in human milk during 12 weeks after parturition. American Journal of Human Biology 25: 399-403.
4. Breakey AA, Hinde K, Valeggia CR, Sinofsky A, Ellison PT. 2015. Illness in breastfeeding infants relates to concentration of lactoferrin and secretory immunoglobulin A in mother’s milk. Evolution, Medicine, and Public Health.
5. Bryan D, Hart PH, Forsyth KD, Gibson RA. 2007. Immunomodulatory constituents of human milk change in response to infant bronchiolitis. Pediatric Allergy and Immunology 18: 495-502.
6. Hassiotou F, Hepworth AR, Metzger P, Lai CT, Trengove N, Hartmann PE, Filgueira L. 2013. Maternal and infant infections stimulate a rapid leukocyte response in breastmilk. Clinical & Translational Immunology 2: e3.
7. Riskin A, Almog M, Peri R, Halasz K, Srugo I, Kessel A. 2011. Changes in immunomodulatory constituents of human milk in response to active infection in the nursing infant. Pediatric Research 71: 220-225.