First of all, regarding the UTF-16 encoding: there is no need to use it at all; one can use UTF-8 instead with no risk.
UTF-8 supports all CJK (Chinese, Japanese, Korean) characters, and in UTF-8 the byte 0x01 never means anything other than SOH. In UTF-16 and other multi-byte Unicode encodings, the byte 0x01 can appear as part of an encoded character and therefore inside the field content.
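For illustration, a minimal Python sketch (U+0141, 'Ł', is an arbitrary example character whose UTF-16 code unit contains the byte 0x01):

```python
# UTF-8 never produces the byte 0x01 for any character except U+0001 itself
# (every byte of a multi-byte UTF-8 sequence has the high bit set), while
# UTF-16 routinely does.
SOH = 0x01

ch = "\u0141"                         # 'Ł' -- arbitrary example character

utf8_bytes = ch.encode("utf-8")       # b'\xc5\x81' -- no 0x01 byte
utf16_bytes = ch.encode("utf-16-be")  # b'\x01A'    -- contains 0x01

print(SOH in utf8_bytes)   # False: cannot collide with the SOH delimiter
print(SOH in utf16_bytes)  # True: would be misread as a field delimiter
```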
Usage of multibyte encodings has been covered by the FIX protocol since FIX 4.2, with the following scheme:
- If the field has no Encoded analogue, there is no way to use non-ASCII characters in this field and still remain compliant with the FIX specification.
- If the field has an Encoded analogue, the field MessageEncoding(347) must be present and contain the name of the encoding used in the Encoded* fields of the message. The Encoded*Len field must contain the number of BYTES (important: not the number of characters) in the corresponding Encoded* field; see the sketch after this list.
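A minimal sketch of the length rule, assuming Text(58) and its encoded pair EncodedTextLen(354)/EncodedText(355); header, trailer, and checksum are omitted to keep the focus on the byte count:

```python
SOH = b"\x01"

text = "こんにちは"               # 5 characters
payload = text.encode("utf-8")   # 15 bytes -- byte length != character count

body = (
    b"347=UTF-8" + SOH                                   # MessageEncoding(347)
    + b"354=" + str(len(payload)).encode("ascii") + SOH  # EncodedTextLen(354): 15 BYTES, not 5
    + b"355=" + payload + SOH                            # EncodedText(355): the raw UTF-8 bytes
)

print(len(text), len(payload))   # prints: 5 15
```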
However, nothing prevents using UTF-8 in any text field. It is not a FIX-compliant approach, but the only requirement for such a trick is that the counterparty expects UTF-8 in that field; the protocol framing will not be violated in this case.
Regarding UTF-16 and other Unicode encodings: the same trick will lead to protocol violations, because in these encodings the byte 0x01 can occur inside the text body.
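A sketch of why the framing survives UTF-8 but not UTF-16, using a naive field splitter and the arbitrary value "Łódź":

```python
SOH = b"\x01"

def split_fields(body: bytes) -> list:
    """Naive FIX field splitter: cut the body on every SOH byte."""
    return body.rstrip(SOH).split(SOH)

value = "Łódź"  # arbitrary non-ASCII example

print(split_fields(b"58=" + value.encode("utf-8") + SOH))
# [b'58=\xc5\x81\xc3\xb3d\xc5\xba']   -- one field, framing intact

print(split_fields(b"58=" + value.encode("utf-16-be") + SOH))
# [b'58=', b'A\x00\xf3\x00d', b'z']   -- 0x01 bytes inside the value split it apart
```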
About support of Encoded fields:
FA and FE support and correctly process Encoded fields, and also support and correctly process UTF-8 in non-encoded fields.
For FA, it is the user's responsibility to convert the ASCII string carrying UTF-8 content to a proper UTF-8 string and vice versa.
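What that conversion means in practice: the engine hands the field value over byte-for-byte in an 8-bit string, and the application reinterprets those bytes as UTF-8. A conceptual Python sketch of the round trip (FA's actual API and string types differ; this only shows the idea):

```python
# Outgoing: application text -> byte-transparent ("ASCII") string for the engine
def to_wire(text: str) -> str:
    return text.encode("utf-8").decode("latin-1")   # each byte becomes one char

# Incoming: byte-transparent string from the engine -> application text
def from_wire(raw: str) -> str:
    return raw.encode("latin-1").decode("utf-8")    # reinterpret the bytes as UTF-8

wire_value = to_wire("こんにちは")     # 15 one-byte characters
print(len(wire_value))                 # 15
print(from_wire(wire_value))           # こんにちは
```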
Tag support tables for base TP ICAP FIX dictionaries can be found in the attachment.
For custom tags: if a text tag has a paired tag that specifies the text length, UTF-8, Unicode, or UTF-16 can be used there, provided the length is specified in bytes. If the length must be specified in characters, only UTF-8 can be used. In either scenario, the counterparty must expect that encoding in that field.
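A byte-count vs character-count comparison, with made-up custom tags 5001 (length) and 5002 (text) purely for illustration:

```python
text = "Grüße"                      # 5 characters

utf8 = text.encode("utf-8")         # 7 bytes: 'ü' and 'ß' take two bytes each
utf16 = text.encode("utf-16-be")    # 10 bytes

print(len(text), len(utf8), len(utf16))   # prints: 5 7 10

# Length tag defined in BYTES: any of the encodings works, because the
# receiver takes exactly that many bytes regardless of their values.
field = b"5001=" + str(len(utf8)).encode("ascii") + b"\x015002=" + utf8 + b"\x01"

# Length tag defined in CHARACTERS: the receiver must decode while reading to
# find the end of the value, which is only delimiter-safe with UTF-8 --
# UTF-16 would also put 0x00/0x01 bytes inside the value.
```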