330 Economics
Document Type
- Article (50)
- Working Paper (12)
- Conference Proceeding (10)
- Book (7)
- Bachelor Thesis (4)
- Report (4)
- Master's Thesis (3)
- Lecture (2)
- Part of a Book (1)
- Doctoral Thesis (1)
This article first proposes a maturity model for measuring the degree of digitalization of agricultural enterprises. The model builds on existing maturity models, which were adapted to the specific characteristics of agriculture. The second part reports the results of a survey of 151 farmers in Germany in which the respondents were asked to place themselves on the levels of the maturity model. In addition, they were asked why they assigned themselves to a particular maturity level and what prevents them from reaching a higher one.
This edited volume provides start-up advisors at (higher-education) support institutions with practical tips and tricks for designing their fempreneurship activities. Based on 11 success factors in 7 categories, readers gain insights, in the spirit of a short motivational handbook, into the diverse fempreneurship practice of start-up support units across Germany.
Based on classical contagion models, we introduce an artificial cyber lab: the digital twin of a complex cyber system in which possible cyber resilience measures may be implemented and tested. Using the lab, in numerical case studies, we identify two classes of measures to control systemic cyber risks: security-based and topology-based interventions. We discuss the implications of our findings for selected real-world cybersecurity measures currently applied in insurance and regulation practice or under discussion for future cyber risk control. To this end, we provide a brief overview of current cybersecurity regulation and emphasize the role of insurance companies as private regulators. Moreover, from an insurance point of view, we provide first attempts to design systemic cyber risk obligations and to measure the systemic risk contribution of individual policyholders.
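A minimal sketch of the kind of contagion dynamics such a cyber lab builds on, here a standard SIS-type epidemic on a random graph; the network, rates, and parameters are illustrative assumptions, not the paper's model:

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(n=200, p=0.03, seed=0)    # toy cyber network (assumption)
beta, gamma, steps = 0.05, 0.10, 100               # infection / recovery rates (assumption)

infected = set(rng.choice(G.number_of_nodes(), size=5, replace=False))
for _ in range(steps):
    new_infected = set(infected)
    for node in G.nodes:
        if node in infected:
            if rng.random() < gamma:               # recovery, e.g. the node is patched
                new_infected.discard(node)
        else:
            # infection pressure from currently infected neighbours
            k = sum(1 for nb in G.neighbors(node) if nb in infected)
            if rng.random() < 1 - (1 - beta) ** k:
                new_infected.add(node)
    infected = new_infected

print(f"share of infected nodes after {steps} steps: {len(infected)/G.number_of_nodes():.2%}")

In this toy setting, security-based interventions would correspond to lowering beta or raising gamma, while topology-based interventions would correspond to removing or rewiring edges of G.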
We study the statistical properties of the Bitcoin return series and provide a thorough forecasting exercise. In addition, we calibrate state-of-the-art machine learning techniques and compare the results with econometric time series models. The empirical assessment provides evidence that machine learning techniques outperform econometric benchmarks in terms of forecasting precision for both in-sample and out-of-sample forecasts. We find that neither deep architectures nor complex layers, such as LSTM, increase the precision of daily forecasts. Specifically, a simple recurrent neural network is a sensible choice for forecasting daily return series.
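A minimal sketch of the kind of simple recurrent network referred to above, applied to a synthetic daily return series; the synthetic data, window length, and layer sizes are illustrative assumptions:

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.04, size=2000).astype("float32")   # stand-in for daily returns (assumption)

window = 20                                                     # look-back window (assumption)
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = returns[window:]
X = X[..., None]                                                # shape: (samples, window, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.SimpleRNN(16),                              # plain recurrent layer instead of LSTM
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-200], y[:-200], epochs=5, batch_size=32, verbose=0)

mse = model.evaluate(X[-200:], y[-200:], verbose=0)             # out-of-sample forecast error
print(f"out-of-sample MSE: {mse:.6f}")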
In this paper, we provide a thorough study of the relevance of liquidity-adjusted value-at-risk (LVaR) and expected shortfall (LES) forecasts. We measure the additional liquidity cost of an asset via the difference between its respective bid and ask prices, and we assess the non-normality of bid-ask spreads, especially in turbulent market times. The empirical assessment comprises German stocks in both calm and turbulent market times, and our results provide evidence that liquidity risk is crucial for the quality of regulatory risk assessment in times of market turmoil. We find that a Cornish-Fisher approximation is a sensible choice for LVaR forecasts, whereas an extreme value approach yields adequate LES forecasts.
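A sketch of a liquidity-adjusted VaR along these lines, combining a Cornish-Fisher quantile correction with a bid-ask-spread add-on; the simulated data and the spread scaling factor are illustrative assumptions, not the paper's calibration:

import numpy as np
from scipy.stats import norm, skew, kurtosis

rng = np.random.default_rng(2)
returns = rng.standard_t(df=4, size=1000) * 0.01           # fat-tailed mid-price returns (assumption)
rel_spread = np.abs(rng.normal(0.002, 0.001, size=1000))   # relative bid-ask spreads (assumption)

alpha = 0.01
z = norm.ppf(alpha)
s, k = skew(returns), kurtosis(returns)                    # kurtosis() returns excess kurtosis

# Cornish-Fisher adjusted quantile accounting for skewness and excess kurtosis
z_cf = z + (z**2 - 1) * s / 6 + (z**3 - 3 * z) * k / 24 - (2 * z**3 - 5 * z) * s**2 / 36

var_cf = -(returns.mean() + z_cf * returns.std(ddof=1))    # Cornish-Fisher VaR (loss, positive)
liq_cost = 0.5 * (rel_spread.mean() + 3 * rel_spread.std(ddof=1))   # spread add-on; factor 3 is an assumption
lvar = var_cf + liq_cost

print(f"VaR(99%): {var_cf:.4f}, liquidity add-on: {liq_cost:.4f}, LVaR: {lvar:.4f}")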
We simulate economic data to apply state-of-the-art machine learning algorithms and analyze the economic precision of competing model-agnostic explainable artificial intelligence (XAI) techniques. In addition, we assess empirical data and discuss the competing approaches in comparison with econometric benchmarks when the data-generating process is unknown. The simulation assessment provides evidence that the applied XAI techniques provide similar economic information on relevant determinants when the data-generating process is linear. We find that the adequate choice of XAI technique is crucial when the data-generating process is unknown. In comparison to econometric benchmark models, the application of boosted regression trees in combination with Shapley values combines a superior fit to the data with interpretable insights into nonlinear impact factors. It therefore represents a promising alternative to the econometric benchmark approach.
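A sketch of the boosted-regression-trees-plus-Shapley-values pipeline favoured above, here on simulated data with one linear and one nonlinear driver; the data-generating process, model settings, and library choice are illustrative assumptions:

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 3))
# simulated data-generating process: one linear driver, one nonlinear driver, one noise feature
y = 0.5 * X[:, 0] + np.sin(2 * X[:, 1]) + rng.normal(0, 0.1, size=n)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)      # Shapley values for tree ensembles
shap_values = explainer.shap_values(X)

# mean absolute Shapley value per feature as a global importance measure
print(np.abs(shap_values).mean(axis=0))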
Decisions on asset allocations are often determined by covariance estimates from historical market data. In this paper, we introduce a wavelet-based portfolio algorithm, distinguishing between newly embedded news and long-run information that has already been fully absorbed by the market. Exploiting the wavelet decomposition into short- and long-run covariance regimes, we introduce an approach that focuses on particular covariance components. Using simulated data, we demonstrate that short-run covariance regimes comprise the relevant information for periodic portfolio management. In an empirical application to US stocks and other international markets for weekly, monthly, quarterly, and yearly holding periods (and rebalancing), we present evidence that portfolio allocations based on wavelet-based short-run covariance estimates outperform allocations based on covariance estimates from historical data.
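A sketch of a wavelet-based covariance estimate in this spirit, keeping only the finest (short-run) detail scale of a discrete wavelet transform; the two synthetic return series, wavelet choice, and decomposition level are illustrative assumptions:

import numpy as np
import pywt

rng = np.random.default_rng(4)
T = 512
common = rng.normal(size=T)
r1 = 0.6 * common + 0.8 * rng.normal(size=T)    # two synthetic return series (assumption)
r2 = 0.4 * common + 0.9 * rng.normal(size=T)

def shortrun_cov(x, y, wavelet="db4", level=4):
    # decompose both series and keep only the finest detail coefficients (short-run regime)
    dx = pywt.wavedec(x, wavelet, level=level)[-1]
    dy = pywt.wavedec(y, wavelet, level=level)[-1]
    return np.cov(dx, dy)[0, 1]

print("full-sample covariance:  ", np.cov(r1, r2)[0, 1])
print("short-run (finest scale):", shortrun_cov(r1, r2))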
The paper presents a comprehensive model of a banking system that integrates network effects, bankruptcy costs, fire sales, and cross-holdings. For the integrated financial market, we prove the existence of a price-payment equilibrium and design an algorithm for the computation of the greatest and the least equilibrium. The number of defaults corresponding to the greatest price-payment equilibrium is analyzed in several comparative case studies. These illustrate the individual and joint impact of interbank liabilities, bankruptcy costs, fire sales, and cross-holdings on systemic risk. We study policy implications and regulatory instruments, including central bank guarantees and quantitative easing, the significance of last wills of financial institutions, and capital requirements.
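A sketch of the fixed-point iteration behind such clearing models, here the classical Eisenberg-Noe payment equilibrium without fire sales or cross-holdings; the three-bank liability matrix and external assets are illustrative assumptions, and the paper's integrated model is considerably richer:

import numpy as np

# nominal interbank liabilities: L[i, j] is what bank i owes bank j (toy numbers)
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [1.0, 1.0, 0.0]])
external_assets = np.array([1.0, 0.5, 2.0])

p_bar = L.sum(axis=1)                              # total nominal obligations per bank
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L), where=p_bar[:, None] > 0)

# iterate downwards from full payment -> converges to the greatest clearing vector
p = p_bar.copy()
for _ in range(1000):
    assets = external_assets + Pi.T @ p            # external assets plus interbank inflows
    p_new = np.minimum(p_bar, assets)              # pay in full if possible, else pro rata
    if np.allclose(p_new, p):
        break
    p = p_new

print("clearing payment vector:", np.round(p, 4))
print("defaulting banks:", np.where(p < p_bar - 1e-9)[0])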
During the coronavirus pandemic, information traditionally used for corporate credit risk analysis (e.g. from the analysis of balance sheets and payment behavior) became less valuable because it only reflects past circumstances. Therefore, the use of currently published data from social media platforms, which has been shown to contain valuable information on the financial stability of companies, should be evaluated; such data may, for example, contain additional signals from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, this paper analyzes Twitter data on the ten largest insolvencies of German companies in 2020 and on solvent counterparts. The results of t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in line with previous research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant difference in the number of tweets between the two groups can be shown, which contrasts with findings from studies conducted before the pandemic. The results can be used by practitioners and researchers to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to principal-agent theory, can be reduced on the basis of user-generated content from social media platforms. Future studies should evaluate to what extent this data can be integrated into established credit decision processes, analyze additional social media platforms and company samples, and take the authenticity of user-generated content into account to ensure that credit decisions rely only on truthful information.
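A sketch of the k-nearest-neighbor classification step described above, applied to synthetic monthly aggregated sentiment scores; the feature construction, number of companies, and choice of k are illustrative assumptions, not the study's actual Twitter data:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_companies, n_months = 20, 6

# monthly mean sentiment per company; later-insolvent firms get a worse drift (assumption)
solvent = rng.normal(0.10, 0.15, size=(n_companies // 2, n_months))
insolvent = rng.normal(-0.05, 0.15, size=(n_companies // 2, n_months))
X = np.vstack([solvent, insolvent])
y = np.array([0] * (n_companies // 2) + [1] * (n_companies // 2))   # 1 = later insolvent

clf = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(clf, X, y, cv=5)           # hold out part of the companies in each fold
print(f"mean cross-validated accuracy: {scores.mean():.2f}")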
The Gesundheitsdatennutzungsgesetz (Health Data Use Act): Potential for Better Research and Healthcare
(2023)
The coalition agreement of Germany's "Ampel" (traffic light) coalition announces a Health Data Use Act (Gesundheitsdatennutzungsgesetz, GDNG) for the current legislative period. The act is intended to lead to "better scientific use in accordance with the GDPR". As is well known, our healthcare system faces major challenges (demographics, digitalization, shortage of skilled workers, the climate crisis, regional disparities, etc.) and is already the most expensive in Europe while delivering only mediocre performance. These challenges can be addressed more efficiently and in an evidence-based manner if, as envisaged in the planned GDNG, the available data resources are used optimally for evaluating and further developing the healthcare system and healthcare provision. In the following, prerequisites and desiderata for an optimal design of the act are formulated from the perspective of health services researchers. The paper was prepared by the German Network for Health Services Research (Deutsches Netzwerk Versorgungsforschung, DNVF) and the working group on the Collection and Use of Secondary Data (AGENS) of the German Society for Social Medicine and Prevention (DGSMP) and the German Society for Epidemiology (DGEpi), and it is supported by the signing professional societies. This position paper and the demands raised in it were formulated before publication of, and thus without knowledge of, the ministerial draft of the GDNG.