In this paper, we consider the route coordination problem in the emergency evacuation of large smart buildings. The building evacuation time is crucial for saving lives in emergencies caused by imminent natural or man-made threats and disasters. Conventional approaches to evacuation route coordination are static and predefined: they rely on evacuation plans posted at a limited number of building locations and possibly on trained evacuation personnel to resolve unexpected contingencies. Smart buildings today are equipped with sensory infrastructure that can be used for autonomous, situation-aware evacuation guidance optimized in real time. A system providing such guidance can help avoid additional casualties caused by the flaws of conventional evacuation approaches. Such a system should be robust and scalable, dynamically adapting to the number of evacuees and to the size and safety conditions of a building. In this respect, we propose a distributed route recommender architecture for situation-aware evacuation guidance in smart buildings and describe its key modules in detail. We illustrate its dynamics with a use case.
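The core of situation-aware route recommendation can be sketched as shortest-path search over a building graph whose edge costs combine walking time with a real-time hazard penalty from the sensory infrastructure. The following is a minimal sketch, not the paper's actual architecture; the floor graph, node names, and hazard penalties are hypothetical.

```python
import heapq

def safest_route(graph, start, exit_nodes):
    """Dijkstra over a building graph whose edge weights combine
    walking time with a real-time hazard penalty (higher = riskier)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        if node in exit_nodes:
            # Reconstruct the recommended route back to the start.
            path = [node]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return list(reversed(path)), d
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(pq, (nd, neighbor))
    return None, float("inf")

# Hypothetical floor graph: each edge cost = walking time + hazard penalty.
floor = {
    "room_101": [("corridor_A", 1.0)],
    "corridor_A": [("stairs_N", 2.0), ("stairs_S", 1.0 + 8.0)],  # smoke near stairs_S
    "stairs_N": [("exit_N", 1.0)],
    "stairs_S": [("exit_S", 1.0)],
}
route, cost = safest_route(floor, "room_101", {"exit_N", "exit_S"})
# route avoids the smoke-affected south staircase
```

Recomputing the routes whenever sensor readings change the hazard penalties is what makes the guidance situation-aware.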
Smart Cities require reliable means of managing installations that offer essential services to citizens. In this paper we focus on the problem of evacuating smart buildings in case of emergency. In particular, we present an abstract architecture for situation-aware evacuation guidance systems in smart buildings, describe its key modules in detail, and provide concrete examples of its structure and dynamics.
In recent years, generative models have attracted broad public attention due to the high quality of the images they generate. In short, a generative model learns a distribution from a finite number of samples and can then generate arbitrarily many new ones. This can be applied to image data. In the past, generative models were unable to produce realistic images, but nowadays the results are almost indistinguishable from real images.
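The "learn a distribution from finite samples, then sample indefinitely" idea can be illustrated with the simplest possible generative model: fitting a Gaussian to one-dimensional data and drawing fresh samples from it. This is a toy sketch, not any of the models discussed below; the data and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": a finite set of samples from an unknown distribution.
data = rng.normal(loc=3.0, scale=2.0, size=1000)

# "Training": here simply estimating the parameters of a Gaussian.
mu_hat, sigma_hat = data.mean(), data.std()

# "Generation": draw as many new samples as we like from the learned model.
new_samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5000)
```

VAEs, GANs, and DMs replace the two-parameter Gaussian with a deep network, but the principle is the same.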
This work provides a comparative study of three generative models: the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN), and Diffusion Models (DMs). The goal is not to provide a definitive ranking of the three, but to decide qualitatively, and where possible quantitatively, how well each model performs with respect to a given criterion. The criteria include realism, generalization and diversity, sampling, training difficulty, parameter efficiency, interpolation and inpainting capabilities, semantic editing, and implementation difficulty. After a brief introduction to how each model works internally, the models are compared against each other. The provided images help to illustrate the differences among the models with respect to each criterion.
To give a short outlook on the results of the comparison: DMs generate the most realistic images; they seem to generalize best and show high variation among the generated samples. However, they are based on an iterative process, which makes them the slowest of the three models in terms of sample generation time. GANs and VAEs, on the other hand, generate their samples in a single forward pass. The images generated by GANs are comparable to those of DMs, while the images from VAEs are blurry, which makes them less desirable in comparison to GANs or DMs. However, both the VAE and the GAN stand out from the DMs with respect to interpolation and semantic editing: they have a latent space, which makes latent-space walks possible, and the resulting changes are not as chaotic as with DMs. Furthermore, concept vectors can be found that transform a given image along a given feature while leaving other features and structures mostly unchanged, which is difficult to achieve with DMs.
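The latent-space walks and concept vectors mentioned above can be sketched as follows. This is a toy illustration under strong assumptions: `decode` stands in for a trained VAE/GAN decoder (here just a fixed linear map, so the geometry is exact), and the `smile` concept vector is invented for the example.

```python
import numpy as np

def decode(z):
    """Hypothetical stand-in for a trained decoder: maps a latent
    vector to an 'image' (here a deterministic linear projection)."""
    rng = np.random.default_rng(42)          # fixed "weights"
    W = rng.standard_normal((4, z.shape[0]))
    return W @ z

# Two latent codes, e.g. obtained by encoding two real images.
z_a = np.zeros(8)
z_b = np.ones(8)

# Latent-space walk: linear interpolation between the codes yields
# a smooth morph between the two decoded images.
steps = [decode((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, 5)]

# Concept vector (e.g. an "add smile" direction found in latent space):
# adding it edits one attribute while leaving the rest mostly intact.
smile = np.array([0.0] * 7 + [1.0])
edited = decode(z_a + 1.5 * smile)
```

With a real decoder the map is nonlinear, but the same recipe applies: interpolate between codes for morphs, and add a learned direction for semantic edits. DMs lack this compact latent space, which is why their interpolations behave more chaotically.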