If you are working with multi-byte character sets and/or other non-Western locales, you may need to encode, decode, or re-encode string values that are sensitive to the Specific Character Set (0008,0005) (SCS). If you are lucky, you can avoid much of this by encoding with the ISO_IR 192 character set, also known as UTF-8, which can encode all Unicode characters. If not so lucky, you may need to perform character set conversions.

This example demonstrates how to determine the best SCS value for a given DICOM dataset by making alterations to the dataset and re-analyzing it after each change. The progression for this example is as follows:
- Given an ISO_IR 6-encoded dataset, insert an ASCII patient name and view the encoding analysis.
- Insert a Japanese patient name, determine the best SCS, and re-encode the dataset.
- Add a Chinese string element, determine the best SCS, and re-encode the dataset.
- Re-analyze the dataset using an encoding list that does not include ISO_IR 192.
- As a final step, open a text editor to view the output.
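To illustrate why the Japanese and Chinese names force a change of SCS, the following sketch uses Python codecs as stand-ins for the DICOM character sets (ISO_IR 192 corresponds to UTF-8, ISO_IR 100 to Latin-1); the DCF API itself is not used here, and the sample names are hypothetical:

```python
def can_encode(value: str, codec: str) -> bool:
    """Return True if every character of value encodes under codec."""
    try:
        value.encode(codec)
        return True
    except UnicodeEncodeError:
        return False

japanese_name = "ヤマダ^タロウ"   # katakana patient name (example value)
chinese_name = "王^小明"          # Han-character string (example value)

for name in (japanese_name, chinese_name):
    # ISO_IR 192 (UTF-8) covers both names; ISO_IR 100 (Latin-1) covers neither.
    print(can_encode(name, "utf_8"),    # True
          can_encode(name, "latin_1"))  # False
```

This is the core reason ISO_IR 192 is the recommended escape hatch: it is the only single-valued SCS that covers every Unicode code point.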
Each pass first encodes any SCS-sensitive VRs (SH, LO, PN, ST, LT, UT) to ISO_IR 192. Then the dataset is re-encoded using the requested SCS set(s). The best SCS is determined by finding the first SCS encoding with no encoding failures. Failing that, the SCS with the fewest encoding failures is used.
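The selection rule above can be sketched as follows. This is a stand-in using Python codec names for a few SCS terms (a hypothetical mapping; the DCF library performs the equivalent analysis internally over the full dataset):

```python
# Candidate (SCS term, Python codec) pairs, tried in order.
CANDIDATES = [
    ("ISO_IR 6",   "ascii"),    # default repertoire
    ("ISO_IR 100", "latin_1"),
    ("ISO_IR 192", "utf_8"),
]

def count_failures(values, codec):
    """Count characters across all values that fail to encode under codec."""
    failures = 0
    for value in values:
        for ch in value:
            try:
                ch.encode(codec)
            except UnicodeEncodeError:
                failures += 1
    return failures

def best_scs(values, candidates=CANDIDATES):
    """First SCS with zero failures; otherwise the one with the fewest."""
    scored = [(count_failures(values, codec), scs) for scs, codec in candidates]
    for failures, scs in scored:
        if failures == 0:
            return scs
    return min(scored)[1]

print(best_scs(["Smith^John"]))        # -> ISO_IR 6
print(best_scs(["ヤマダ^タロウ"]))      # -> ISO_IR 192
```

When the candidate list omits ISO_IR 192 (as in the final analysis step of the example), no candidate may reach zero failures, and the fallback branch picks the least-bad encoding.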
The final example demonstrates a case where an imperfect encoding is chosen. To re-emphasize: if you can use ISO_IR 100 for Western locales and ISO_IR 192 for others, much of the character set encoding complexity can be avoided.
Assembly: CharacterSetEncoding (in CharacterSetEncoding.exe) Version: DCF34 r12431 DCF_3_4_38_20200923 NetFramework
public class Program
The Program type exposes the following members.
Initializes a new instance of the Program class.