The protocol buffers file produced by tf.saved_model.save does not contain a GraphDef message but a SavedModel. You could walk that SavedModel in Python to reach the graph embedded inside it, but that graph will not work as a frozen graph out of the box, so handling it correctly can be difficult. Instead, the C++ API now includes a LoadSavedModel call that lets you load a whole saved model from a directory. It should look something like this:
#include <iostream>
#include <string>
// TF headers for LoadSavedModel/SavedModelBundle (FreezeSavedModel is used below)
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/tools/freeze_saved_model.h"

using namespace std;
using namespace tensorflow;

int main()
{
    // Path to the saved model directory
    const string export_dir = "...";
    // Load the model
    Status s;
    SavedModelBundle bundle;
    SessionOptions session_options;
    RunOptions run_options;
    s = LoadSavedModel(session_options, run_options, export_dir,
                       // default "serve" tag set by tf.saved_model.save
                       {"serve"}, &bundle);
    if (!s.ok())
    {
        cerr << "Could not load model: " << s.error_message() << endl;
        return -1;
    }
    // Model is loaded
    // ...
    return 0;
}
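Once the model is loaded, you can inspect what it exports before doing anything else. Here is a minimal sketch, continuing from the bundle above (not part of the original listing), that prints every signature along with the tensor names behind its input and output keys:

// Print all signatures and the tensor names behind their input/output keys
for (const auto& entry : bundle.GetSignatures())
{
    cout << "Signature: " << entry.first << endl;
    for (const auto& in : entry.second.inputs())
        cout << "  input  " << in.first << " -> " << in.second.name() << endl;
    for (const auto& out : entry.second.outputs())
        cout << "  output " << out.first << " -> " << out.second.name() << endl;
}

This is handy for discovering the signature and key names used further below.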
From here you can go several ways. You may be most comfortable converting the saved model into a frozen graph with FreezeSavedModel, which should let you work the way you did before:
GraphDef frozen_graph_def;
std::unordered_set<string> inputs;
std::unordered_set<string> outputs;
s = FreezeSavedModel(bundle, &frozen_graph_def,
                     &inputs, &outputs);
if (!s.ok())
{
    cerr << "Could not freeze model: " << s.error_message() << endl;
    return -1;
}
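If you take this route, the frozen GraphDef can then be driven through a plain Session just as before. The following is a hedged sketch, assuming a model with exactly one input and one output; the tensor names come from the inputs and outputs sets that FreezeSavedModel filled in:

// Create a fresh session and install the frozen graph in it
std::unique_ptr<Session> frozen_session(NewSession(SessionOptions()));
s = frozen_session->Create(frozen_graph_def);
if (!s.ok())
{
    cerr << "Could not create session: " << s.error_message() << endl;
    return -1;
}
// Assumes a single input and a single output tensor
const string frozen_input_name = *inputs.begin();
const string frozen_output_name = *outputs.begin();
Tensor input = ...; // Build an input tensor with the model's dtype/shape
std::vector<Tensor> results;
s = frozen_session->Run({{frozen_input_name, input}},
                        {frozen_output_name}, {}, &results);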
Otherwise, you can use the saved model bundle directly:
// Default "serving_default" signature name set by tf.saved_model.save
const SignatureDef& signature_def = bundle.GetSignatures().at("serving_default");
// Get the input and output tensor names (these differ from the layer names);
// "my_input" and "my_output" are the keys the signature was exported with
const string input_name = signature_def.inputs().at("my_input").name();
const string output_name = signature_def.outputs().at("my_output").name();
// Run the model
Tensor input = ...;
std::vector<Tensor> outputs;
s = bundle.session->Run({{input_name, input}}, {output_name}, {}, &outputs);
if (!s.ok())
{
    cerr << "Error running model: " << s.error_message() << endl;
    return -1;
}
// Get the result
Tensor& output = outputs[0];
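To actually read the values out of the result, you can map the tensor's buffer; a small sketch, assuming the model produces a float tensor (substitute the dtype your model actually returns):

// Read the output values as a flat float array
auto values = output.flat<float>();
for (int i = 0; i < values.size(); ++i)
    cout << values(i) << " ";
cout << endl;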