Autoencoder for healthcare system, Study Guides, Projects, Research of Engineering

Supply and demand in healthcare increase in response to healthcare trends. In addition, Personal Health Records (PHRs) are managed by individuals; such records are collected through different means and vary widely in type and scope depending on the situation. As a result, some data may be missing, which negatively affects data analysis, so missing entries should be replaced with appropriate values. In this study, a method for estimating missing data using a multimodal autoencoder is proposed, which
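
As a loose illustration of the idea named in this summary (not the proposed model itself, which is described as multimodal), the sketch below uses a plain autoencoder to fill in missing entries of a fixed-length health record; the feature count, layer widths, and the zero-placeholder strategy are assumptions made for the example.

```python
# Minimal sketch (assumed design): an autoencoder that reconstructs a record
# and whose reconstruction is used to replace the record's missing entries.
import torch
import torch.nn as nn

class ImputingAutoencoder(nn.Module):
    def __init__(self, n_features=32, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def impute(model, record, missing_mask):
    """Replace entries where missing_mask is True with the reconstruction."""
    with torch.no_grad():
        filled = record.clone()
        filled[missing_mask] = 0.0            # neutral placeholder for unknowns
        reconstruction = model(filled)
        filled[missing_mask] = reconstruction[missing_mask]
    return filled

# Usage (hypothetical 32-feature record with two missing values):
model = ImputingAutoencoder()
record = torch.randn(32)
mask = torch.zeros(32, dtype=torch.bool)
mask[[3, 17]] = True
completed = impute(model, record, mask)
```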

Typology: Study Guides, Projects, Research

2022/2023

Uploaded on 02/13/2023

dpark123

In the first step, we obtain the image representations with the help of a CNN
that was pre-trained with supervision on the ImageNet 2012 dataset and
fine-tuned on the target dataset. The CNN model contains five convolutional
layers, two fully connected layers, and a softmax classifier (a classifier used
for feature extraction and parameter separation). This model incorporates a
large amount of semantic information, since it was trained on the ImageNet 2012
classification dataset, which consists of more than 1 million images, and it
can classify images into 1,000 categories. This wealth of semantic information
and the large number of categories make the model, with slight modifications,
suitable for our task. Note that the input to this pre-trained CNN is a
fixed-size, mean-subtracted 224 × 224 RGB image.
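
As a rough, illustrative sketch of this setup (not the document's own code), the snippet below loads an AlexNet-style network from torchvision, which likewise has five convolutional layers, two fully connected layers, and a softmax classifier, replaces its final 1,000-way classifier for fine-tuning on a target dataset, and applies the fixed-size, mean-subtracted 224 × 224 preprocessing; the class count n_classes and the normalization constants are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

n_classes = 10  # hypothetical number of categories in the target dataset

# CNN supervised-pre-trained on the ImageNet 2012 classification dataset
# (over 1 million images, 1,000 categories).
cnn = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the final 1,000-way layer with an n-way layer before fine-tuning
# on the target dataset.
cnn.classifier[6] = nn.Linear(4096, n_classes)

# Fixed-size, mean-subtracted 224 x 224 RGB input (torchvision's Normalize
# also divides by the channel standard deviations).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Fine-tuning objective: cross-entropy, i.e. softmax plus negative log-likelihood.
optimizer = torch.optim.SGD(cnn.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```
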
In contrast, instead of inserting a new hash layer, the semantic feature is
derived directly from the last fully-connected layer. The output of the last
fully-connected layer is split into two branches. One branch feeds an n-way
softmax classifier, where n is the number of categories in the target dataset.
The other branch forms a hash-like function that maps the features obtained by
the CNN to hash codes. The so-called mid-level features are obtained from the
last fully-connected layer, and softmax classifiers are trained for each
semantic label concurrently.
We interpret the output of the softmax classifiers as probabilities of the
different semantic labels. The layers FC6 and FC7 are connected to the deep
hash layer in order to encode a broad range of visual content information. In
the following subsections we will define our feature vector and the computation
of the hash function.
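
A minimal sketch of the two-branch split described above is given below, under assumed details: a 48-bit code length, a sigmoid activation on the hash-like branch, and a 0.5 threshold for binarization, none of which are specified in this excerpt (the actual hash function is defined in the subsections that follow in the original document).

```python
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    """Feeds the last fully-connected layer's output into a softmax branch
    and a hash-like branch (dimensions are illustrative)."""

    def __init__(self, feat_dim=4096, n_classes=10, hash_bits=48):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, n_classes)  # n-way softmax branch
        self.hash_layer = nn.Linear(feat_dim, hash_bits)  # hash-like branch

    def forward(self, fc_features):
        # Interpreted as probabilities of the semantic labels.
        label_probs = torch.softmax(self.classifier(fc_features), dim=1)
        # Hash-like activations in (0, 1); thresholding yields the binary code.
        hash_activations = torch.sigmoid(self.hash_layer(fc_features))
        binary_code = (hash_activations > 0.5).int()
        return label_probs, hash_activations, binary_code

# Usage with a batch of stand-in mid-level features from the last FC layer:
head = TwoBranchHead()
features = torch.randn(4, 4096)
probs, activations, codes = head(features)
```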