Talk:Changing to UTF-8

From TNG_Wiki
Revision as of 04:40, 24 September 2017 by Rjlyders (talk | contribs) (Phoca Changing Collation Tool: updated index.php to skip tables/columns already updated: new section)

  I had a problem using French accented vowels in text (for notes, especially copied obituaries); I did not use accented vowels in the names of the people.
  I had used the recommended approach of running the Phoca dialogs to convert all the TNG tables to UTF-8 with the utf8_general_ci collation.
  My GEDCOMs were all UTF-8, produced by RootsMagic.  After uploading the GEDCOMs to TNG, the browsers (IE, Chrome, Firefox) did not display the accented vowels in the notes (tng_xnotes table); instead the text was still treated as if each two-byte UTF-8 character were a pair of single-byte Latin-1 characters, so the notes displayed with muddled characters.
  My solution was simple:
  Instead of using the Phoca approach to convert ALL of the tables, I focused on just the one table that had the problem (please note, the syntax in the following steps is not exact):

1) Back up the tng_xnotes table.
2) Truncate the tng_xnotes table (the table is now empty).
3) In MySQL, ALTER the tng_xnotes table to character set utf8 with collation utf8_swedish_ci.
4) In MySQL, ALTER the table's columns (the three text columns, not the ID column) to convert them to utf8_swedish_ci.
5) Tell TNG to (re)import every GEDCOM, replacing all data.
6) The accented vowels were then correctly displayed in the various browsers.

[Chuck Filteau]
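Since the note above warns that its syntax is not exact, the MySQL statements for steps 3 and 4 might look roughly like the following (a sketch only; the table name and target collation are taken from the note, and CONVERT TO is used in place of per-column ALTERs because it rewrites only character columns, leaving numeric columns such as the ID untouched):

```sql
-- Step 3: change the table's default character set and collation
-- (this affects columns added in the future, not existing ones).
ALTER TABLE tng_xnotes
  DEFAULT CHARACTER SET utf8 COLLATE utf8_swedish_ci;

-- Step 4: convert the existing character columns in place.
-- CONVERT TO only touches string columns, so the numeric ID
-- column is unaffected; since the table was truncated first,
-- only the column definitions change here.
ALTER TABLE tng_xnotes
  CONVERT TO CHARACTER SET utf8 COLLATE utf8_swedish_ci;
```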

Phoca Changing Collation Tool: updated index.php to skip tables/columns already updated

I found that the Phoca Changing Collation Tool was converting only the first 24 tables of my MySQL database and leaving the final 10 undone, while reporting no errors to the user. This turned out to be related to the size of the output being generated. I updated the PHP code in the tool's index.php so that I could re-run the same script against only the tables that still needed converting, which let me convert the remaining 10 tables. This did not fix the underlying output-size issue, but it worked around it: processing only the remaining 10 tables generated far less output, so the limit was never hit.

Basically, I updated the code to convert a table only if it lacks the desired collation or contains a column that lacks it. Additionally, while processing each table, it skips over columns that already have the desired collation.
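The skip test described above can be sketched as a query against MySQL's information_schema (a sketch only; 'your_tng_db' and the target collation utf8_general_ci are placeholders to check against your own setup). It lists just the tables that still contain a column with the wrong collation:

```sql
-- Tables that still need converting: any table with at least one
-- character column whose collation differs from the target.
SELECT DISTINCT TABLE_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_tng_db'          -- placeholder database name
  AND COLLATION_NAME IS NOT NULL            -- numeric/date columns have no collation
  AND COLLATION_NAME <> 'utf8_general_ci';  -- target collation
```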

If interested, you can see the code updates I posted in the Phoca forum: