Directions: Fill in each blank with a proper word chosen from the box. Each word can be used only once. Note that there is one more word than you need.
A. massively B. potential C. figures D. fake E. manually F. sprang G. captured H. paste I. extreme J. generated K. profound
Today, the events {#blank#}1{#/blank#} in realistic-looking or realistic-sounding video and audio recordings need never have happened. They can instead be {#blank#}2{#/blank#} automatically, by powerful computers and machine-learning software. The catch-all term for these computational
and machine-learning software. The catch-all term for these computational
productions is "deepfakes".
The term first appeared on Reddit, a message board, as the username for an account which was producing {#blank#}3{#/blank#} videos. An entire community {#blank#}4{#/blank#} up around the creation of these
videos, writing software tools that let anyone automatically {#blank#}5{#/blank#} one person's face onto the body of
another. Reddit shut the community down, but the technology was out there. Soon
it was being applied to political {#blank#}6{#/blank#} and actors.
Tools for editing media {#blank#}7{#/blank#} have existed for decades—think
Photoshop. The power and peril of deepfakes is that they make fakery cheaper
than ever before. Before deepfakes, a powerful computer and a good chunk of a
university degree were needed to produce a realistic fake video of someone. Now
some photos and an Internet connection are all that is required.
The consequences of cheap, widespread
fakery are likely to be {#blank#}8{#/blank#}, albeit slow to unfold. Plenty worry
about the possible impact that believable, fake footage of politicians might
have on civil society—from a further loss of trust in media to the {#blank#}9{#/blank#} for electoral distortions. These
technologies could also be deployed against softer targets: they might be used,
for instance, to bully classmates by creating imagery of them in embarrassing
situations. In a world already saturated with {#blank#}10{#/blank#} imagery, deepfakes make it plausible to push that even further.