Open
Description
Create a new branch for development and do the following:
- Load the textual (tweet) dataset.
- Build a pipeline that runs the tweets through the CryptoBERT model to obtain embeddings.
- Stack the daily embeddings into a new daily-tweets dataset containing one embedding matrix of X × 768 dimensions per day, padding each day's embeddings to the maximum length in the dataset.
- Implement the parallel CNN model described in the Zou et al. paper.
- Run the embeddings through the CNN model to obtain the classes.
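The stack-and-pad step above could look roughly like the sketch below. The 768-dimensional embeddings and per-day grouping come from the task description; the function name, zero-padding choice, and example day sizes are assumptions.

```python
import numpy as np

def stack_daily_embeddings(daily_embeddings, dim=768):
    """Stack per-day tweet embeddings into one padded array.

    daily_embeddings: list with one array per day, each of shape
    (n_tweets_that_day, dim). Days with fewer tweets are zero-padded
    up to the maximum tweet count across the dataset.
    Returns an array of shape (n_days, max_tweets, dim).
    """
    max_len = max(e.shape[0] for e in daily_embeddings)
    out = np.zeros((len(daily_embeddings), max_len, dim), dtype=np.float32)
    for i, emb in enumerate(daily_embeddings):
        out[i, : emb.shape[0]] = emb
    return out

# Example: three days with 2, 5, and 3 tweets respectively.
days = [np.random.rand(n, 768).astype(np.float32) for n in (2, 5, 3)]
padded = stack_daily_embeddings(days)
print(padded.shape)  # (3, 5, 768)
```

Zero-padding keeps every day the same shape so the matrices can be batched; if the CNN should ignore padded rows, a mask built from the original lengths can be carried alongside.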
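For the classification step, a minimal sketch of a parallel CNN over the padded daily embedding matrices is shown below. It uses the common design of several 1-D convolution branches with different kernel sizes running side by side; the actual kernel sizes, filter counts, and class count of the Zou et al. model must be taken from the paper, so every hyperparameter here is a placeholder.

```python
import torch
import torch.nn as nn

class ParallelCNN(nn.Module):
    """Parallel CNN: 1-D convolution branches with different kernel
    sizes run in parallel over the stacked tweet embeddings, are
    max-pooled over the tweet axis, concatenated, and classified.

    Kernel sizes, filter counts, and n_classes are placeholders;
    match them to the Zou et al. paper before use.
    """
    def __init__(self, emb_dim=768, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, x):
        # x: (batch, max_tweets, emb_dim); Conv1d expects (batch, emb_dim, length)
        x = x.transpose(1, 2)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.branches]
        return self.fc(torch.cat(pooled, dim=1))

model = ParallelCNN()
batch = torch.randn(4, 10, 768)  # 4 days, padded to 10 tweets each
logits = model(batch)
print(logits.shape)  # torch.Size([4, 2])
```

The logits can then be passed through a softmax (or straight into `nn.CrossEntropyLoss` during training) to get the class predictions for each day.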