Age Progression Software

Illumination-Aware Age Progression
Ira Kemelmacher-Shlizerman, Supasorn Suwajanakorn, Steven M. Seitz
University of Washington
[Figure 1 panels: a single input photo of a 3-year-old (far left) and age-progressed renderings at 5-7, 14-16, 26-35, 46-57, 58-68, and 81-100 years.]
Figure 1. Given a single input photo of a child (far left), our method renders an image at any future age range between 1 and 80. Note the change in shape (e.g., nose gets longer, eyes narrow) and texture, while keeping the identity (and milk mustache!) of the input person.
Abstract

We present an approach that takes a single photograph of a child as input and automatically produces a series of age-progressed outputs between 1 and 80 years of age, accounting for pose, expression, and illumination. Leveraging thousands of photos of children and adults at many ages from the Internet, we first show how to compute average image subspaces that are pixel-to-pixel aligned and model variable lighting. These averages depict a prototype man and woman aging from 0 to 80, under any desired illumination, and capture the differences in shape and texture between ages. Applying these differences to a new photo yields an age-progressed result. Contributions include relightable age subspaces, a novel technique for subspace-to-subspace alignment, and the most extensive evaluation of age progression techniques in the literature.¹

¹ http://grail.cs.washington.edu/aging/

1. Introduction

What will a child look like in 20 years? Age progression, which seeks to "age" photographs of faces, is one of the most intriguing of digital image processing operations. It is also one of the most challenging, for a variety of reasons. First, the aging process is non-deterministic, depending on environmental as well as genetic factors that may not be evident in the input photos. Second, facial appearance and recognizability are strongly influenced by hair style, glasses, expression, and lighting, which is variable and unpredictable. Finally, there is relatively little data available from which to build effective models, as existing age analysis databases are relatively small, low resolution, and/or limited in age range.

Nevertheless, age progression techniques have enjoyed significant success in helping to solve missing children cases, where subjects have been recognized many years later based on age-progressed images. Described as "part art, part science, and a little intuition" [30], these images are produced by forensic artists who combine a background in art, physical anthropology, and expertise with image editing software to simulate the appearance of a person later in life [13]. Aging photos of very young children from a single photo is considered the most difficult case of all, where age progression beyond a few years is considered impractical [25]. We focus specifically on this very challenging case.

Our approach takes a single photo as input and automatically produces a series of age-progressed outputs between 1 and 80 years of age. Figure 1 shows an example result. Our approach has three primary contributions. First, we present the first fully-automated approach for age progression that operates "in the wild", i.e., without strong constraints on lighting, expression, or pose. Second, we present some of the first compelling (and most extensive) results for aging babies to adults. And third, we introduce a novel illumination-aware age progression technique, leveraging illumination modeling results [1, 31], that properly accounts for scene illumination and corrects surface shading without reconstructing 3D models or light source directions.

We build on prior work on age progression, notably the seminal work of Burt and Perrett [6], who created convincing average male faces for several ages (in the range of 20-54) by aligning and averaging photos together.



A new query photo was then age progressed by adding to it the difference in shape and texture between the average of the desired target age and the average for the age corresponding to the query. Their approach required manual alignment. Subsequent aging work in the computer vision literature introduced more automation, often using Active Appearance Models [17] or detecting fiducials [32]. Additional improvements included texture modeling for wrinkles [37] and person-specific models [35, 29]. More details can be found in these excellent survey papers [9, 33]. Early face analysis methods proposed to synthesize new faces using image-based models, e.g., [7], but did not focus on aging and uncalibrated conditions. There are now several commercial programs that will age photos taken with a webcam or mobile phone. Typically, however, these programs operate effectively only for photos of adults or older children; [23] requires a minimum age of 18, ageme.com lists 7 as the low end of its range, and the popular AgingBooth iPhone app suggests a minimum age of 15. Furthermore, both commercial offerings and state-of-the-art methods from the research literature still require frontal, simply-lit faces with neutral expression [9].

There is a body of work on automatic age estimation, e.g., [26, 11, 18]. These methods, however, did not pursue age progression or other synthesis applications.

Our results set a new bar for age-progression research, demonstrated by a comprehensive evaluation of prior art (the first of its kind in the age progression literature) and an extensive comparison to "ground truth," via large scale user studies on Amazon Mechanical Turk, as described in Section 4. The key components that make this advance possible are, first, a new database consisting of thousands of photos of people spanning age (0 to 100), variable lighting, and variable pose and expression (Section 2.1); second, relightable average images that capture changes in facial appearance and shape across ages, in an illumination-invariant manner (Section 2.2); and third, a novel technique for aligning illumination subspaces that enables capturing and synthesizing age transformations (Section 3).

2. Building an Aging Basis

As we age, our faces undergo changes in shape and appearance. The transformation from child to adult is dominated by craniofacial growth, in which the forehead slopes backward, the head expands, and the lower portion of the face extends downward [8]. Changes in later years are dominated by growth of the nose, narrowing of the eyes, and the formation of wrinkles and other textural changes.

One of the most compelling ways to model and view these changes across people is by creating a sequence of composite faces, where each composite is the average of several faces of the same gender and age. This idea dates back more than a century; Galton [10] generated average age images by taking several exposures of portraits on the same photographic plate. Benson and Perrett [2] showed that dramatically better composites can be obtained by first aligning facial features (208 fiducials) and warping the images to a reference prior to averaging. Producing composites for aging studies is hampered, however, by the lack of good photographic data for young children, as existing databases are relatively small, low resolution, and limited in age range [9]. In the remainder of this section, we introduce an approach for creating and analyzing a large dataset of human faces across ages, based on thousands of photos from the Internet.

2.1. Data collection

To analyze aging effects we created a large dataset of people at different ages, using Google image search queries like "Age 25", "1st grade portrait," and so forth. We additionally drew from science competitions, soccer teams, beauty contests, and other websites that included age/grade information. The resulting database spans ages 0 to 100, pooled into 14 age groups (we call them clusters), separated by gender. The clusters correspond to ages 0, 1, 2-3, 4-6, 7-9, 10-12, 13-15, 16-24, 25-34, 35-44, 45-56, 57-67, 68-80 and 81-100. The total number of photos in the dataset is 40K, and each cluster includes, on average, 1500 photos of different people in the same age range. This database captures people "in the wild" and spans a large range of ages.
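For reference, the cluster boundaries listed above can be written down directly as a small lookup table. The sketch below only restates the ranges given in the text; the helper name is hypothetical and not from the authors' code.

```python
# Age clusters used to pool the photo collection (Section 2.1).
# Each entry is (min_age, max_age), inclusive; clusters are kept per gender.
AGE_CLUSTERS = [
    (0, 0), (1, 1), (2, 3), (4, 6), (7, 9), (10, 12), (13, 15),
    (16, 24), (25, 34), (35, 44), (45, 56), (57, 67), (68, 80), (81, 100),
]

def cluster_index(age):
    """Return the index of the cluster containing `age` (illustrative helper)."""
    for idx, (lo, hi) in enumerate(AGE_CLUSTERS):
        if lo <= age <= hi:
            return idx
    raise ValueError("age outside 0-100")
```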
2.2. Aligned, re-lightable averages

To obtain dense correspondence between the photos in each cluster, we use the "collection flow" method [15], which enables accurate dense correspondence across images with large illumination variation. The input photos to collection flow are first aligned and warped to approximately frontal pose using the pipeline of [16]. Figure 2 shows the average image for each age, and the average of flow-warped photos using collection flow. Note how much sharper the flow-aligned averages look. While these aligned averages can appear remarkably lifelike, the lighting is dull and unrealistic, as it is averaged over all images in the collection. We instead produce relightable average images, which may be re-illuminated from any direction with realistic shading effects. We propose to match the lighting of any new input image I by first pose-aligning the image [16], and projecting that image onto every age subspace. Specifically, for an age cluster j with flow-aligned average A_j, we compute a rank-4 basis via singular value decomposition on the flow-aligned images, i.e., M_j = U_j D_j V_j^T, where M_j is the f x p matrix representation of the cluster's flow-aligned photos (f is the number of photos and p the number of pixels in each photo). As described in [15], this rank-4 approximation retains the lighting and shading of the input photos, but neutralizes the changes due to identity and facial expression, producing a set of images in nearly perfect alignment with a common, average face pose.


[Figure 2 layout: columns show ages 0, 2-3, 7-9, 13-15, 25-34, 45-56, and 68-80 for males (left) and females (right); rows show averages before flow, after flow, and two relit versions matched to the relighting reference images at far left.]
Figure 2. Average images of people at different ages. Each image represents an average of about 1500 individuals. Results in the top row are aligned only to place the eyes, nose, and mouth in rough correspondence. The second row shows averages after pixel-to-pixel alignment. These are much sharper, but the tone is variable, the lighting is unnatural, and subtle shape differences (e.g., wrinkles) are averaged out (to see it, zoom in to the last column). The bottom two rows show re-lit averages, matched to two reference frames (far left) with opposite lighting directions. The re-lit results have proper shading, are tone-matched to allow easier comparison across ages, and reveal 3D shape changes (note the nose and forehead).
Next, solving

    min_alpha ||I - alpha V_j^T||^2        (1)

for the coefficients alpha yields a re-lit average that matches the illumination of I:

    A_j^I = alpha V_j^T        (2)

(V_j is truncated to rank 4). Figure 2 (rows 3-4) shows this capability. Two key advantages of relighting are that 1) it generates a more realistic set of average images, bringing out fine details that are only visible with proper shading, and 2) we can align the lighting across the set of averages, to enable comparing changes at different ages. We use this relighting capability to estimate flow across clusters as described below.
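A minimal NumPy sketch of the rank-4 relighting described by Eqs. (1)-(2). The variable names (flow_aligned_photos, input_photo) are illustrative rather than the authors' code, and in practice each color channel is handled with its own subspace, as noted in Section 3.

```python
import numpy as np

def build_rank4_basis(flow_aligned_photos):
    """flow_aligned_photos: f x p matrix M_j (one flow-aligned photo per row,
    single color channel). Returns V_j^T as a 4 x p matrix with orthonormal rows."""
    _, _, vt = np.linalg.svd(flow_aligned_photos, full_matrices=False)
    return vt[:4]                      # top-4 right singular vectors

def relight_average(vt4, input_photo):
    """Solve min_alpha ||I - alpha V_j^T||^2 (Eq. 1) and return A_j^I = alpha V_j^T (Eq. 2).
    input_photo: pose-aligned photo I flattened to a length-p vector."""
    alpha = input_photo @ vt4.T        # least-squares coefficients (rows of vt4 are orthonormal)
    return alpha @ vt4                 # re-lit average matching the lighting of I
```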
2.3. Illumination Subspace Flow

We have so far focused on aligning photos within each age cluster. Next, we show how to estimate flow across age clusters, to measure face shape changes over time. Each cluster has many photos under different illumination conditions and thus captures an illumination subspace, representing how an average person at a particular age appears under all illuminations [1]. A key contribution of our paper is how to align two such illumination subspaces V_i and V_j.

We seek the (single) optical flow field that aligns V_i and V_j. Our insight is to use relighting for flow estimation. As shown in Fig. 2 (last column), relighting brings out 3-dimensional shape differences that are otherwise invisible when averaging many photos. We therefore propose an optical flow method that optimizes over many different lighting conditions. The challenge here is twofold: 1) each illumination subspace represents a continuum of different images, and 2) their coefficient spaces are not aligned, i.e., any physical lighting direction may map to different lighting coefficients in each illumination subspace.

We introduce a solution that can be easily implemented within the traditional two-view optical flow framework. Let K be the number of database images in the union of clusters i and j. For each image I_k in this union, we project it to each of the two illumination subspaces, resulting in average images A_k^i and A_k^j. The resulting set of images {A_k^i}, k = 1..K, can be represented as a single K-channel image A^i, and similarly for A^j. Unlike the original illumination subspaces V_i and V_j, these two multi-channel images are illumination-aligned; the k-th channels of A^i and A^j have the same lighting. Hence, our method can work with any optical flow algorithm that supports multi-channel images (including more complex methods like SIFT flow [22, 12]) to compute the lighting-aware flow field.

When K is large, a smaller representative set of images can be chosen using either discrete sampling, clustering, or dimensionality reduction techniques. We leveraged the fact that the illumination subspaces are low-dimensional [14] (V_i is 4D) and computed an orthogonal 24D basis (two 4D clusters times 3 color channels) for the K images using PCA. Each basis vector (mean + principal vector) was weighted in proportion to its principal value (we modified [21] to support weighted multi-channel images).
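The core of the alignment idea above is simple enough to sketch: project every image in the union of the two clusters onto both rank-4 bases, stack the relit averages as channels, and hand the two illumination-aligned stacks to any flow solver that accepts multi-channel images. The sketch below is illustrative, assuming single-channel images flattened to rows; the flow solver itself (here a hypothetical multichannel_flow) is left abstract, since the paper uses a modified version of [21].

```python
import numpy as np

def illumination_aligned_pair(images, vt4_i, vt4_j):
    """images: K x p matrix of database photos drawn from clusters i and j (one channel).
    vt4_i, vt4_j: 4 x p rank-4 bases of the two clusters (from an SVD of each
    cluster's flow-aligned photos). Returns two K x p stacks; channel k of both
    stacks shares the lighting of images[k]."""
    A_i = (images @ vt4_i.T) @ vt4_i   # relit averages of cluster i, one per database photo
    A_j = (images @ vt4_j.T) @ vt4_j   # relit averages of cluster j, same lightings
    return A_i, A_j

# The aligned stacks can then be fed to any multi-channel optical flow solver,
# e.g. a hypothetical wrapper around a variational flow implementation:
#   flow_ij = multichannel_flow(A_i.reshape(K, h, w), A_j.reshape(K, h, w))
```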
2.4. Age Transformations

To align all age clusters, we compute subspace flow between each pair of successive age clusters i and i + 1.



[Figure 3 diagram: the input photo is pose aligned, the low-rank (relit) child and adult averages provide a texture difference that is applied, then flow and an aspect-ratio adjustment are applied, and the result is blended into a head to produce the output.]
Figure 3. Steps of illumination-aware age progression.

Longer range flows between more disparate ages i and j are obtained by concatenation of the flow fields between i and i + 1, i + 1 and i + 2, ..., j - 1 and j. This concatenation approach gives more reliable flow fields than direct, pairwise flow computation between i and j. These flows enable estimating differences in texture and shape between the different age groups, as we describe in the next section.
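One common way to realize the concatenation just described is to resample each successive flow field at the positions reached so far and accumulate the displacements. The sketch below assumes flows are stored as (h, w, 2) displacement fields in (dy, dx) order; it is an illustrative composition rule under that assumption, not code from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_flows(flow_ab, flow_bc):
    """Concatenate two displacement fields (h x w x 2):
    flow_ac(x) = flow_ab(x) + flow_bc(x + flow_ab(x)),
    where flow_bc is sampled bilinearly at the positions reached by flow_ab."""
    h, w = flow_ab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = [ys + flow_ab[..., 0], xs + flow_ab[..., 1]]
    dy = map_coordinates(flow_bc[..., 0], coords, order=1, mode='nearest')
    dx = map_coordinates(flow_bc[..., 1], coords, order=1, mode='nearest')
    return flow_ab + np.stack([dy, dx], axis=-1)

def long_range_flow(successive_flows):
    """successive_flows: [F(i, i+1), F(i+1, i+2), ..., F(j-1, j)].
    Returns the concatenated flow from cluster i to cluster j."""
    flow = successive_flows[0]
    for nxt in successive_flows[1:]:
        flow = compose_flows(flow, nxt)
    return flow
```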
3. Illumination-Aware Age Progression

Given an input photo of a 2 year old, we can render her at age 60 by computing the difference in flow and texture between the cluster of ages 2-3 (source) and the cluster of ages 57-67 (target) and applying it to the input photo. This task is challenging for images "in the wild," as it requires taking into account variations in lighting, pose, and identity. Illumination and shading are inherently 3D effects that depend upon light source direction and surface shape; e.g., as the nose becomes more angular, its shading should change in a manner that depends on light source direction. We show, however, that it is possible to utilize our rank-4 relightable aging basis to work entirely in the 2D domain, without reconstructing 3D models.

To age progress a face photo we perform the following steps, as illustrated in Figure 3.

Pose correction: The input face is warped to approximately frontal pose using the alignment pipeline of [16] (step 1 in the figure). Denote the aligned photo I.

Texture age progress: Relight the source and target age cluster averages to match the lighting of I as described in Section 2.2, yielding A_s^I and A_t^I. Compute the flow F_source-input between A_s^I and I and warp A_s^I to the input image coordinate frame, and similarly for F_target-input. This yields a pair of illumination-matched projections, J_s and J_t, both warped to the input. The texture difference J_t - J_s is added to the input image I.

Flow age progress: Apply the flow from the source cluster to the target cluster, F_target-source, mapped to the input image, i.e., apply F_input-target ◦ F_target-source to the texture-modified image I + J_t - J_s. For efficiency, we precompute bidirectional flows from each age cluster to every other age cluster.

Aspect ratio progress: Apply the change in aspect ratio, to account for variation in head shape over time. Per-cluster aspect ratios were computed as the ratio of the distance between the left and right eye to the distance between the eyes and mouth, averaged over the fiducial point locations of images in each of the clusters.

We also allow for differences in skin tone (albedo) by computing a separate rank-4 subspace and projection for each color channel.
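To make the sequence of steps above concrete, here is a minimal sketch that strings together the texture and flow stages for a single pose-aligned color channel, reusing the relight_average and compose_flows helpers sketched earlier. The names and structure are illustrative, not the authors' code; pose correction, the warping of the relit averages into the input frame, the aspect-ratio adjustment, and head blending described in the text are omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_by_flow(image, flow):
    """Warp a single-channel image by an (h x w x 2) displacement field (bilinear)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    return map_coordinates(image, [ys + flow[..., 0], xs + flow[..., 1]],
                           order=1, mode='nearest')

def age_progress(I, vt4_src, vt4_tgt, flow_tgt_to_src, flow_input_to_tgt):
    """Texture and flow age progression for one pose-aligned channel I (h x w).
    vt4_src / vt4_tgt: rank-4 bases of the source and target age clusters.
    The two flows are assumed precomputed as in Sections 2.3-2.4."""
    h, w = I.shape
    # Texture: relight both cluster averages to match the lighting of I (Section 2.2)
    # and add the texture difference J_t - J_s to the input.  (The full method also
    # warps the relit averages into the input's coordinate frame before differencing.)
    J_s = relight_average(vt4_src, I.ravel()).reshape(h, w)
    J_t = relight_average(vt4_tgt, I.ravel()).reshape(h, w)
    textured = I + J_t - J_s
    # Shape: compose the cluster-to-cluster flow with the input-to-cluster flow
    # (Section 3) and warp the texture-modified image by the result.
    flow = compose_flows(flow_input_to_tgt, flow_tgt_to_src)
    return warp_by_flow(textured, flow)
```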
4. Experiments

We now describe implementation details, results, and evaluation based on a large scale user study.

Implementation details. For all flow computations, we modified Ce Liu's implementation [21] (based on Brox et al. [4] and Bruhn et al. [5]) to work with weighted multi-channel photos. We used the following parameters: alpha = 0.005, ratio = 0.85, minWidth = 20, nOuterFPIterations = 10, nInnerFPIterations = 1, nSORIterations = 20. We used randomized SVD [34] for fast low-rank computations. Processing the photo database required 30 minutes (on 14 compute nodes) per age cluster of 300 photos, including flow, averages, and subspace computation. Given the precomputed aging basis, age progression of a new input photo takes 0.1 seconds. For blending aged faces into adult heads, we estimate fiducials in the adult head photo (computed during pose correction) to match fiducials between the input and target photos, and then run graph cuts to find an optimal seam, followed by Poisson blending to blend the aged face into the adult head photo [3].
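For reference, the flow parameters reported above can be collected into a plain Python dictionary. The key names simply follow the parameter names as printed in the text; the exact option names in the flow implementation of [21] may differ.

```python
# Optical flow parameters reported in the implementation details above.
FLOW_PARAMS = {
    "alpha": 0.005,
    "ratio": 0.85,
    "minWidth": 20,
    "nOuterFPIterations": 10,
    "nInnerFPIterations": 1,
    "nSORIterations": 20,
}
```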
Cropped progression results. Figures 1 and 4 show age progressed images generated automatically using our method. The input images were taken from the FGNET database [17] and were not part of the training set used to create the flow and texture age differences. The results shown here focus on extremely challenging photos of children, with examples that cover a wide range of face types and imaging conditions: neutral, smiling, and laughing facial expressions, frontal and non-frontal pose, lower quality scans as well as higher quality photos, female and male children, and a variety of lighting conditions. All results are cropped to the face area to show the raw output of the method. Note how the face shape changes with age in these sequences, e.g., the nose stretches, eyes narrow, and wrinkles appear. Textural changes include facial hair, "shadows" in male faces, eye makeup in female faces, and stronger eyebrows. Many more examples can be found in the supplementary material.



[Figure 4 grid: for each input photo of a child aged 0-8, an age-progressed output at an older range (46-57, 58-68, 69-80, or 81-100).]
Figure 4. Age progression results. For each input image we automatically generate age progressed images for a variety of ages. Note the realistic progression results even with strong directional lighting, non-frontal pose, and non-neutral expressions.
4.1. Evaluation

We performed a large scale user study on Mechanical Turk, the most extensive of its kind in the age progression literature. In particular, we had human subjects compare our results to every prior age progression result we could find in the literature, and to ground truth (photos of 82 people at different ages). Each subject was shown a photo of a person at age X (e.g., 4), and two additional photos: A) a photo of the same person at an older age Y (e.g., 25), and B) our age-progressed result. The user was asked which of A or B is more likely to be the same person at age Y. They also had the option of selecting "both are equally likely" or "neither is likely." Please refer to the supplementary material for a screenshot of the interface and exact wording. The order of our result and the ground truth was randomly chosen to prevent order bias. All photos were cropped to the face area only. If the progressed image at age Y is generated from the reference at age X, it will have the same lighting and expression. To avoid this similarity bias, our age progression result was generated not from the reference shown to the user, but instead from a photo of the same person at the closest age to the reference.

Comparison with ground-truth. We ran our method on every photo in the FGNET dataset, and compared to every older photo available for each person. FGNET consists of photos of the same person over time, and several span baby to adult, resulting in a total of 2976 comparisons. Each user was presented three images: a photo of the subject at age X, an older photo at age Y, and an age progressed photo at age Y. They were asked to specify which of the latter two photos was more likely the same person at age Y by choosing: photo A, photo B, both are equally likely, or neither is likely to be the same person at age Y. Each comparison was evaluated by 3 different people, and 12 comparisons were left blank, making the total number of comparisons we received 8916. The number of unique workers was 72. The results are as follows: we received 3288 votes (out of 8916, i.e., 37%) that our result is more likely, 3901 (44%) that ground truth is more likely, 1303 (15%) that both are equally likely, and 424 (5%) that neither is likely.

This result is so surprising that it led us to question how proficient humans are at this task, i.e., maybe we are just not good at face recognition across large age differences. To test this hypothesis, we conducted a perceptual study in which each user was shown two real (ground truth) images of the same person, separated by at least 5 years, and asked to specify if it is the same or a different person.


Figure 5. Comprehensive comparison to prior work, plotting user study ratings of our method vs. all 120 results from prior work. Blue cells (> 0.55) are where our method scored higher, red cells (< 0.45) have prior method(s) scoring higher, and gray cells are ambiguous. Our method excels for aging children, while prior techniques that target adults perform better for that category.

We used all pairs (at least 5 years apart) of each person in FGNET, and repeated each test three times on Mechanical Turk (8928 tests in total). The results indicate that people are generally good at recognizing adults across different age ranges, but poor at recognizing children after many years. In particular, across children aged 0-7, participants performed barely better than chance (57%) at recognition for roughly 10 year differences, at chance for 20 years (52%), and worse than chance for 50 years (33%). See the supplementary material for the full details of the experiment and results. These studies point to the limits of human evaluation for assessing age progression results.

Ground-truth-blended comparisons. While the Mechanical Turk study focuses on cropped faces, we also experimented with blending age progressed faces onto the ground truth head; representative results are shown in Figure 7 (additional results appear in the supplementary material). In each case, we take an input photo in the 0-3 age range and compare the ground truth image at each age (right) with our result (left). We blended our result into the ground truth head, using the process described earlier in this section. (We also include unblended results cropped to only the face area in the supplementary material.) The similarity is impressive, especially given that each sequence (column) is produced from a single baby photo. Note that the facial expression and lighting are fixed from the baby photo and therefore differ from the ground truth. As a strawman, we also blended the input child's face onto the older ground truth for comparison (Figure 6 (b)); clearly, age progressing the input face prior to blending yields much more realistic composites.

Comparison to prior work. We compared our results to all prior papers that demonstrate age progression results, with the exception of Lanitis et al. [17], whose results do not specify ages. These papers are: (p1) [37], (p2) [35], (p3) [33], (p4) [27], (p5) [28], (p6) [19], (p7) [20], (p8) [36].

While we are most interested in long range age progression of very young children, for comparison we ran our method on every result we found in these papers (including adults and older children). The number of age progression results in papers p1-p8 was 56, 2, 8, 5, 7, 4, 30 and 8 respectively, for a total of 120 comparisons. Each comparison was performed by 10 workers, and there were on average 13 unique workers per paper. Figure 5 plots the results of the user study: the x-axis is the input age group and the y-axis is the output age group. The score is calculated as follows: as in the ground-truth experiment, workers were asked to choose one of the four options. We added 1 point when our result was chosen, 0.5 when "both are likely" was chosen, and 0 when a result from prior work was chosen. The score was then normalized by the number of responses per cell (we did not include examples for which the option "neither" was chosen here, as the ground truth evaluation captures similar statistics).
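The per-cell score described above is simple enough to state as code. This is only a restatement of the scoring rule (1 for our result, 0.5 for "both are likely", 0 for the prior method, with "neither" responses excluded); the vote labels are hypothetical.

```python
def cell_score(votes):
    """votes: responses for one (input age, output age) cell, each one of
    'ours', 'prior', 'both', or 'neither' (illustrative labels)."""
    counted = [v for v in votes if v != "neither"]   # 'neither' responses excluded
    if not counted:
        return None
    points = {"ours": 1.0, "both": 0.5, "prior": 0.0}
    return sum(points[v] for v in counted) / len(counted)

# Example: 6 votes for ours, 2 for the prior method, 2 "both" -> (6 + 0 + 1) / 10 = 0.7
```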
As can be seen from Figure 5, our approach almost uniformly outperforms prior work for aging young children, and clearly dominates for aging children to adult. The one "red" box corresponds to an age change of only three years. Note that there are no prior results in the literature for aging children beyond age 25; we are the first to attempt this task. On the other hand, techniques that focus on modeling older people (modeling wrinkles, hair color, etc.) do better for that category. Note that all previous works typically focus on one of two age ranges, child to teenager or adult to older person, while our method is general and spans ages 0 to 100 (e.g., Fig. 4). While beyond the scope of this paper, incorporating wrinkle or hair lightening models could yield further improvements in the upper age ranges.

Very few age progression papers address young children [17, 19, 32, 35, 36], and those that do include only a handful of results. See the supplementary material for a figure that compares our results to all results in the literature for children under 9 years of age.

Figure 6 (a) compares our results to Perrett et al.'s FaceTransformer tool at http://morph.cs.st-andrews.ac.uk/Transformer/ and the PsychoMorph tool by Tiddeman et al. [38] at the Face Research Lab website http://www.faceresearch.org/demos/. As can be seen, they do not perform well on young children. As a baseline, we also compare to applying only the aspect ratio change to the input face (compare columns 2 and 5). Both of these tools require manual placement of facial features, whereas our approach is fully automated.


[Figure 6 panels: (a) input, our result (aged to adult), Perrett et al. (aged to adult), FaceResearch PsychoMorph (aged to adult), and correcting for aspect ratio only; (b) input, our result blended into the ground-truth head, ground truth, and the baby's face blended into the ground-truth head.]
Figure 6. Comparison to other methods: (a) to Perrett et al. and the FaceResearch online tool, (b) to mapping the baby's face (far left) onto the ground truth (column 3) to produce a blended result (far right). The aged results (column 2) look much more similar to the ground truth, indicating that simply blending a face into the head of an older person does not produce a satisfactory age progression; additional shape and texture changes must be added.
5. Conclusion

We presented a method for automatic age progression of a single photo to any age between 1 and 80, by leveraging thousands of Internet photos across age groups. The method works remarkably well, in particular for the challenging case of young children, for which few prior results have been demonstrated. A key contribution is the ability to handle photos "in the wild," with variable illumination, pose, and expression. Future improvements include: modeling wrinkles and hair whitening [37] to enhance realism for older subjects; outputting a set of progressed images per single input, building on face editing techniques, e.g., [24]; and having a database of heads and upper torsos of different ages to composite our result onto.

Acknowledgements. We thank Google and Intel for supporting this research.

[Figure 7 grid: five columns; each column starts from a single input photo of a child aged 1-3 and shows age progressed results beside ground truth photos at ages ranging from 2 to 41.]
Figure 7. Comparison to ground truth images. In each case a single photo of a child (top) is age progressed (left) and compared to photos of the same person (right) at the corresponding age (labeled at left). The age progressed face is composited into the ground truth photo to match the hairstyle and background (see supplementary material for comparisons of just the face regions). Facial expression and lighting are not matched to the ground truth, but retained from the input photo. Note how well the age progressed face matches the ground truth face, given that the full sequence is synthesized from a single baby photo.

References

[1] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. PAMI, 25(2):218-233, 2003.
[2] P. Benson and D. Perrett. Extracting prototypical facial images from exemplars. Perception, 22:257-262, 1993.
[3] D. Bitouk, N. Kumar, S. Dhillon, P. N. Belhumeur, and S. K. Nayar. Face swapping: Automatically replacing faces in photographs. ACM Trans. on Graph., 2008.
[4] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In ECCV, pages 25-36, 2004.
[5] A. Bruhn, J. Weickert, and C. Schnörr. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. IJCV, 61:211-231, 2005.
[6] D. Burt and D. Perrett. Perception of age in adult Caucasian male faces: computer graphic manipulation of shape and color information. Proc. Royal Soc. London B, 259:137-143, 1995.
[7] T. Ezzat and T. Poggio. Facial analysis and synthesis using image-based models. In FG, pages 116-121, 1996.
[8] L. G. Farkas. Anthropometry of the Head and Face. 1994.
[9] Y. Fu, G. Guo, and T. Huang. Age synthesis and estimation via faces: A survey. PAMI, 32(11):1955-1976, 2010.
[10] F. Galton. Composite portraits. Nature, 18:97-100, 1878.
[11] G. Guo and G. Mu. Simultaneous dimensionality reduction and human age estimation via kernel partial least squares regression. In CVPR, 2011.
[12] T. Hassner. Viewing real-world faces in 3D. In ICCV, 2013.
[13] H. Heafner. Age-progression technology and its application to law enforcement. In SPIE, pages 49-55, 1996.
[14] I. Kemelmacher-Shlizerman and S. M. Seitz. Face reconstruction in the wild. In ICCV, pages 1746-1753, 2011.
[15] I. Kemelmacher-Shlizerman and S. M. Seitz. Collection flow. In CVPR, pages 1792-1799, 2012.
[16] I. Kemelmacher-Shlizerman, E. Shechtman, R. Garg, and S. M. Seitz. Exploring photobios. SIGGRAPH, 30(4), 2011.
[17] A. Lanitis, C. J. Taylor, and T. F. Cootes. Toward automatic simulation of aging effects on face images. PAMI, 24, 2002.
[18] C. Li, Q. Liu, J. Liu, and H. Lu. Learning ordinal discriminative features for age estimation. In CVPR, 2012.
[19] Y. Liang, C. Li, H. Yue, and Y. Luo. Age simulation in young face images. In Bioinf. and Biomed. Eng., 2007.
[20] Y. Liang, Y. Xu, L. Liu, S. Liao, and B. Zou. A multi-layer model for face aging simulation. In Transactions on Edutainment VI, pages 182-192, 2011.
[21] C. Liu. Beyond Pixels: Exploring New Representations and Applications for Motion Analysis. PhD thesis, MIT, 2009.
[22] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman. SIFT flow: Dense correspondence across different scenes. In ECCV, pages 28-42, 2008.
[23] Merrill Lynch. http://faceretirement.merrilledge.com/, 2013.
[24] U. Mohammed, S. J. D. Prince, and J. Kautz. Visiolization: generating novel facial images. ACM Trans. Graph., 28(3):57:1-57:8, July 2009.
[25] NCMEC. Age progression. Technical report, National Center for Missing and Exploited Children, 2010.
[26] B. Ni, Z. Song, and S. Yan. Web image mining towards universal age estimator. In Proc. ACM Multimedia, 2009.
[27] U. Park, Y. Tong, and A. K. Jain. Face recognition with temporal invariance: A 3D aging model. In FG, 2008.
[28] E. Patterson, A. Sethuram, M. Albert, and K. Ricanek. Comparison of synthetic face aging to age progression by forensic sketch artist. In Vis. Img. Proc., pages 247-252, 2007.
[29] P. Paysan. Statistical Modeling of Facial Aging Based on 3D Scans. PhD thesis, University of Basel, 2010.
[30] A. Prince. Age progression, forensic and medical artist. http://aurioleprince.wordpress.com/, 2013.
[31] R. Ramamoorthi and P. Hanrahan. A signal-processing framework for inverse rendering. In SIGGRAPH, 2001.
[32] N. Ramanathan and R. Chellappa. Modeling age progression in young faces. In CVPR, volume 1, pages 387-394, 2006.
[33] N. Ramanathan, R. Chellappa, and S. Biswas. Age progression in human faces: A survey. J. of Vis. Lang. Comp., 2009.
[34] V. Rokhlin, A. Szlam, and M. Tygert. A randomized algorithm for PCA. SIAM J. Mat. Anal., 31(3):1100-1124, 2009.
[35] K. Scherbaum, M. Sunkel, H.-P. Seidel, and V. Blanz. Prediction of individual non-linear aging trajectories of faces. EUROGRAPHICS, (3):285-294, 2007.
[36] C.-T. Shen, W.-H. Lu, S.-W. Shih, and H.-Y. Liao. Exemplar-based age progression prediction in children faces. In IEEE Int. Symp. on Multimedia, pages 123-128, 2011.
[37] J. Suo, S.-C. Zhu, S. Shan, and X. Chen. A compositional and dynamic model for face aging. PAMI, 2010.
[38] B. Tiddeman, M. Stirrat, and D. Perrett. Towards realism in facial transformation: results of a wavelet MRF method. Computer Graphics Forum, Eurographics, 24, 2005.



If you could see how you would look when you're old, would you want to? After all, it's human nature to be curious about how we and our loved ones will look when we're old.

Luckily, with new age-progression technology, you have apps that make you look old right at your fingertips. In recent years, these apps have become more popular, with people becoming obsessed with trying to look elderly.

All you have to do is search for 'age progression', and you'll find countless videos of people aging within seconds. It's fun, interesting, and an easy way to pass the time these days.

The good thing is, you can try it out without having to pay. Most of these 'make me older' apps are free.

However, figuring out which apps are more accurate or user-friendly is difficult. Therefore, we've listed the five best 'what will you look like when you're old' apps below. These apps have the best features and are free for Android.

What App Makes You Look Older?

Before we get into our list, let’s talk about how these apps work. Contrary to what you may think, it’s very simple.

The software sifts through thousands of images and picks up an overall trend of how people’s faces change as they age. Then, all you have to do is upload a photo, and the app will use that trend to show you an aged image of yourself.

However, don’t expect these apps to be 100% accurate as they don’t take into account individual differences, particularly if you have a facial scar. Consequently, don’t take the app too seriously and just accept it for the fun that it is.


With that said, let’s get into our list of the five best age progression apps for Android:

5 Best Apps That Make You Look Old

1. Oldify

First up, we have Oldify, which ages your face up to 80 years. It comes with a fully-functional camera and an in-built scanner, giving you results within seconds. There are different age brackets you can choose from, starting from 40 years and ending at 80 years.

Moreover, each age bracket has different filters attached to it, allowing you to edit your photo after you’ve taken it. With the app’s in-built share option, you can instantly give friends and family a view of an older, more wrinkly you.

According to most users, the app is best at giving instant results. Also, it doesn’t take up much storage space, making it a more convenient option. However, while the photos and effects are decent, other apps have more advanced technology and yield better results.

2. AgingBooth

Secondly, AgingBooth is a highly popular app with many advanced features. It was developed by PiVi & Co., a French app development company specializing in entertainment apps. The company puts heavy emphasis on improvement, frequently updating its apps to fix and enhance their features, and AgingBooth is no exception.

It has many great features, including:

  • Auto-crop
  • Auto-save
  • Before-and-after views
  • Auto-link to email/Facebook sharing


Moreover, the app works without an internet connection, so you don't need WiFi. You can either upload a photo from your gallery or take one using the app's camera. The results are funny and entertaining, though they don't mimic realistic aging processes.

3. FaceApp

Next, FaceApp is one of the best apps out there for age progression. Users find the app very entertaining, even in the non-pro version. The app was created by Wireless Lab, a Russian tech company.

The app is popular all over the world, especially amongst the LGBTQ community because of its gender change feature.

Accordingly, the app is for more than just age progression, making it a more versatile addition to your phone. There are over 35 free filters to choose from, each with different colors and contrasts.

Additionally, you can use the app to:

  • Blur your background
  • Change your hairstyle and color
  • Edit facial features such as a beard/mustache
  • Add tattoos

The age progression feature is easy to use, yielding more realistic results for those aged 20 to 60. While the non-pro version is very impressive, you can also opt for FaceApp Pro if you want more advanced filters.

4. Face Secret

Another great app, Face Secret, has a wide array of fun features. From age progression to even palm reading, this app lets you know more about yourself. Therefore, you can try and make predictions on your future and have fun with your friends.

Like FaceApp, it’s more versatile than other age progression apps. It also has a gender swap option, an in-built face scanner, and a palm reading option. The app uses enhanced AI to accurately detect and predict your ‘Future Face’.

In sum, the app lets you:

  • Read your zodiac horoscope
  • Create a baby photo (by putting in the photos of the potential parents)
  • Make aging predictions with the face scanner
  • Use the palm-reading scan to learn about your future

All in all, it’s a very unique app, being something new to try with your friends.

5. Old Face

Lastly, we have Old Face, another ideal app for Android users. It’s optimized for all Android devices, including phones, tablets, and smartwatches. Moreover, if you’re looking for a straightforward age progression app, Old Face won’t disappoint.


It's a simple upload-and-share app. While there aren't any fancy editing or palm-reading features, the app does have highly realistic aging software. You can compare photos of your grandparents with a photo of yourself to get a sense of how accurate the software is.


If you want more dramatic results such as thinned out hair or sagging skin, you can change the age settings in the software. In this way, you get a look into your old self as well as a good laugh with your friends.


Conclusion

To sum it up, these apps are purely for entertainment. They’re fun to play with, whether you’re playing a prank on someone or just curious about your future self. With these five apps, you can easily see yourself as an old person. They all do the job well, being user-friendly and quick.

However, if you’re looking for a recommendation, our top pick is undoubtedly FaceApp. It not only has the best technology for facial changing but it also has the most photo-editing features.

All in all, you can’t go wrong with any of these apps. We hope our list was helpful and informative.