
Using a Keras model as a layer

  • March 29, 2021
  • AI

1 With custom models

How to concatenate two layers in keras? (stackoverflow.com)

from keras.models import Sequential, Model
from keras.layers import Dense, Concatenate
from keras.optimizers import Adagrad

first = Sequential()
first.add(Dense(1, input_shape=(2,), activation='sigmoid'))

second = Sequential()
second.add(Dense(1, input_shape=(1,), activation='sigmoid'))

third = Sequential()
third.add(Dense(1, input_shape=(1,), activation='sigmoid'))

# you can of course add a few more layers to first, second and third

# concatenate the outputs of the first two sub-models
merged = Concatenate()([first.output, second.output])

# then concatenate the result with the third sub-model's output
merged = Concatenate()([merged, third.output])

# a final layer turns the concatenated features into a single prediction
predictions = Dense(1, activation='sigmoid')(merged)

# wrap everything into one trainable Model; the three sub-models act like layers
result = Model(inputs=[first.input, second.input, third.input], outputs=predictions)

ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)
result.compile(optimizer=ada_grad, loss='binary_crossentropy',
               metrics=['accuracy'])
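
Trained this way, the merged model behaves like any multi-input Keras model: you pass one array per input, in the same order as the inputs. A quick usage sketch with made-up random data (the shapes, epochs and batch size are purely illustrative):

import numpy as np

# made-up data, one array per input branch
x1 = np.random.rand(100, 2)
x2 = np.random.rand(100, 1)
x3 = np.random.rand(100, 1)
y = np.random.randint(0, 2, size=(100, 1))

result.fit([x1, x2, x3], y, epochs=2, batch_size=16)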

Each Sequential here is an already-built model. We can also load pretrained models (those cannot simply be dropped in as layers like this, or you will get errors; there is an example below). Either way, this is easier to understand from the functional API point of view. I personally strongly recommend the functional API; the Sequential-style API is neither particularly flexible nor intuitive.

from keras.models import Model
from keras.layers import Concatenate, Dense, LSTM, Input, concatenate
from keras.optimizers import Adagrad

first_input = Input(shape=(2, ))
first_dense = Dense(1, )(first_input)

second_input = Input(shape=(2, ))
second_dense = Dense(1, )(second_input)

merge_one = concatenate([first_dense, second_dense])

third_input = Input(shape=(1, ))
merge_two = concatenate([merge_one, third_input])

model = Model(inputs=[first_input, second_input, third_input], outputs=merge_two)
ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)
model.compile(optimizer=ada_grad, loss='binary_crossentropy',
               metrics=['accuracy'])
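
This is really what the title is about: with the functional API, a finished custom Model can itself be called on a tensor exactly like a layer. A minimal sketch, with made-up layer sizes:

from keras.models import Model
from keras.layers import Input, Dense

# a small finished model (sizes are illustrative)
sub_in = Input(shape=(4,))
sub_out = Dense(2, activation='relu')(sub_in)
sub_model = Model(sub_in, sub_out)

# the whole model is called on a new tensor exactly like a layer
new_input = Input(shape=(4,))
features = sub_model(new_input)          # a Keras tensor, like any layer output
prediction = Dense(1, activation='sigmoid')(features)
full_model = Model(new_input, prediction)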

This approach makes multimodal problems convenient to code. For example, if the inputs are a table and an image, you can define a CNN model for the image branch, build another model for the tabular data, and simply concatenate the two; very convenient and easy (a minimal sketch of this pattern follows the multi-output example below). The functional API also makes it easy to write code for multi-objective optimization; you just write something like this, with descriptive placeholder names used to keep it readable:

Python">
models=Model(inputs=[图像输入, 表格输入, 时间序列数据], outputs=[二分类目标,多分类目标,回归目标])

losses = {
	"二分类目标": "损失函数1",
	"color_output": "损失函数2",
        "回归目标": "损失函数3"
}


lossWeights = {"损失函数1": 1.0, "损失函数2": 1.0,"损失函数2": 1.0}

model.compile(optimizer='adam', loss=losses, loss_weights=lossWeights,metrics=["accuracy"])
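
As promised above, here is a minimal sketch of the multimodal pattern itself, assuming the default channels-last image format; all shapes and layer sizes are made up for illustration. A small CNN Model handles the image, a dense Model handles the table, and both are used like layers and concatenated:

from keras.models import Model
from keras.layers import Input, Dense, Conv2D, GlobalAveragePooling2D, concatenate

# image branch: a tiny CNN wrapped as its own Model
img_in = Input(shape=(64, 64, 3))
x = Conv2D(16, (3, 3), activation='relu')(img_in)
x = GlobalAveragePooling2D()(x)
cnn_model = Model(img_in, x)

# tabular branch: a small dense Model
tab_in = Input(shape=(10,))
t = Dense(16, activation='relu')(tab_in)
tab_model = Model(tab_in, t)

# both sub-models are called like layers and their outputs concatenated
image_input = Input(shape=(64, 64, 3))
tabular_input = Input(shape=(10,))
merged = concatenate([cnn_model(image_input), tab_model(tabular_input)])
output = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[image_input, tabular_input], outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])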

If the multiple targets each need their own metrics, you can use train_on_batch or fit one epoch at a time, then call model.predict and write a few evaluation functions yourself to assess each task's performance. This is more convenient than defining custom metrics directly: the amount of code goes up a little, but it avoids quite a few bugs and is simple to write.
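
A rough sketch of that workflow, assuming the three-input, three-output model from the snippet above, separately prepared training and validation arrays (the X_*/y_* names are placeholders for your own data), and sklearn for the per-task metrics:

from sklearn.metrics import roc_auc_score, accuracy_score, mean_squared_error

n_epochs = 10  # illustrative

for epoch in range(n_epochs):
    # fit exactly one epoch at a time
    model.fit([X_img, X_tab, X_ts],
              [y_binary, y_multiclass, y_regression],
              epochs=1, batch_size=64, verbose=0)

    # predict returns one array per output, in the same order as the outputs
    p_binary, p_multiclass, p_regression = model.predict([X_img_val, X_tab_val, X_ts_val])

    # evaluate each task with whatever metric suits it (multiclass labels assumed one-hot)
    print("epoch", epoch,
          "auc:", roc_auc_score(y_binary_val, p_binary),
          "acc:", accuracy_score(y_multiclass_val.argmax(axis=1), p_multiclass.argmax(axis=1)),
          "mse:", mean_squared_error(y_regression_val, p_regression))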


2 Using a pretrained or already-trained model

Here is an example that uses a pretrained VGG16 model:

Combining Pretrained model with new layers · Issue #3465 · keras-team/keras (github.com)

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense, Dropout, BatchNormalization

#load vgg16 without dense layer and with theano dim ordering
base_model = VGG16(weights = 'imagenet', include_top = False, input_shape = (3,224,224))

#number of classes in your dataset e.g. 20
num_classes = 20

x = Flatten()(base_model.output)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.5)(x)
x = BatchNormalization()(x)
predictions = Dense(num_classes, activation = 'softmax')(x)

#create graph of your new model
head_model = Model(inputs=base_model.input, outputs=predictions)

#compile the model
head_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

head_model.summary()
.
.
.
#train your model on data
head_model.fit(X_train, y_train, batch_size=batch_size, verbose=1)

Using model.input and model.output (note include_top=False) together with the functional API syntax is all it takes. And of course, the first case can also be written in this second style.