Fix crash on hotplug disconnect

ComposerClient destroys its internal model of the display while handling
the onHotplug event from the HWC. Subsequently, SurfaceFlinger destroys
its own model of the display and destroys all HWC layers associated with
the display.

This fixes the layer-destruction code so that it does not dereference an
invalid iterator when the display no longer exists, allowing destruction
to continue.

It also fixes a similar issue that could occur if an HWC layer is created
for a display at around the same time as the disconnect event.

Test: hotplug disconnect no longer crashes
Bug: 38464421
Change-Id: I0f2d28fe89fccf997b4bbb9fa6b5c0e6a6e49b93
Lloyd Pique 2017-12-14 17:59:48 -08:00
parent f9b98e52b2
commit 2765f9d406

@@ -299,10 +299,17 @@ Return<void> ComposerClient::createLayer(Display display,
    Error err = mHal.createLayer(display, &layer);
    if (err == Error::NONE) {
        std::lock_guard<std::mutex> lock(mDisplayDataMutex);
        auto dpy = mDisplayData.find(display);
        // The display entry may have already been removed by onHotplug.
        if (dpy != mDisplayData.end()) {
            auto ly = dpy->second.Layers.emplace(layer, LayerBuffers()).first;
            ly->second.Buffers.resize(bufferSlotCount);
        } else {
            err = Error::BAD_DISPLAY;
            // Note: We do not destroy the layer on this error as the hotplug
            // disconnect invalidates the display id. The implementation should
            // ensure all layers for the display are destroyed.
        }
    }
    hidl_cb(err, layer);
@@ -316,8 +323,11 @@ Return<Error> ComposerClient::destroyLayer(Display display, Layer layer)
        std::lock_guard<std::mutex> lock(mDisplayDataMutex);
        auto dpy = mDisplayData.find(display);
        // The display entry may have already been removed by onHotplug.
        if (dpy != mDisplayData.end()) {
            dpy->second.Layers.erase(layer);
        }
    }
    return err;
}