core
Generating Subjective Content Descriptions
We compare two approaches: the SCD-matrix-based MPSCD versus iSCD.
Use the code
- `core.corpus` contains the classes to access the corpora and annotate them with SCDs
- `core.download` contains the code to download all needed files, like corpora, models, and other files
- `core.evaluation` contains the code to evaluate the different settings (wrapper around `core.model`)
- `core.experiments` contains the code to run the different experiments (entry-point files!)
- `core.model` contains the models and all code related to each individual model
- `core.utils` contains helper classes, functions, and global constants
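As a quick sanity check of this layout inside the container, the subpackages can be imported directly. The sketch below assumes only the module names listed above; no classes or functions inside them are assumed.

```python
# Minimal import check for the package layout described above;
# nothing beyond the subpackage names is assumed here.
import core.corpus
import core.download
import core.evaluation
import core.experiments
import core.model
import core.utils

print("core subpackages imported successfully")
```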
Run the code
We assume the Docker container has been started and a shell is open inside it.
1. Download the needed files with `core.download` by running `/home/user/core/setup.py`
2. Run a sample setup (consisting of fine-tuning/training models and evaluating their performance) via `/home/user/core/main.py`
3. Results will be written as JSON into `/home/user/core/results/`
4. Run the real evaluation and combine multiple results in `/home/user/core/results/` via `/home/user/core/experiments/*.py` (see the sketch after this list)
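Steps 3 and 4 work on the JSON files under `/home/user/core/results/`. The following is a minimal sketch of collecting those results for further analysis; the file naming (`*.json`) and the inner structure of each file depend on the executed setup and are assumptions here.

```python
import json
from pathlib import Path

# Directory that main.py and the experiment scripts write their results to.
RESULTS_DIR = Path("/home/user/core/results/")

# Gather all result files; their exact schema depends on the executed setup,
# so we only collect the raw dictionaries together with their file names.
results = []
for result_file in sorted(RESULTS_DIR.glob("*.json")):
    with result_file.open() as handle:
        results.append({"file": result_file.name, "data": json.load(handle)})

print(f"Loaded {len(results)} result file(s) from {RESULTS_DIR}")
```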
To create your own runs, take a look at `core.model.exec.Executor` and the source of `core.main`.
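Since the constructor and methods of `core.model.exec.Executor` are not documented here, one way to study them is to print their source directly, for example with Python's `inspect` module; only the import paths named above are assumed.

```python
import inspect

import core.main
from core.model.exec import Executor

# Print the source of the Executor class and of core.main to see how the
# provided runs are wired together before defining your own.
print(inspect.getsource(Executor))
print(inspect.getsource(core.main))
```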