FontNet: Closing the gap to font designer performance in font synthesis

Font synthesis has been a very active topic in recent years because manual font design requires domain expertise and is a labor-intensive and time-consuming job. While remarkably successful, existing methods for font synthesis have major shortcomings: they require fine-tuning for unobserved font styles with large numbers of reference images, and the recent few-shot font synthesis methods are either designed for specific language systems or operate on low-resolution images, which limits their use. In this paper, we tackle this font synthesis problem by learning the font style in the embedding space. To this end, we propose a model, called FontNet, that simultaneously learns to separate font styles in an embedding space where distances directly correspond to a measure of font similarity, and translates input images into the given observed or unobserved font style. Additionally, we design a network architecture and training procedure that can be adopted for any language system and can produce high-resolution font images. Thanks to this approach, our proposed method outperforms the existing state-of-the-art font generation methods in both qualitative and quantitative experiments.
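The abstract gives no implementation details, but the central idea it describes — an embedding space in which distances directly act as a measure of font similarity — is commonly realized with a metric-learning objective such as a triplet loss. The sketch below is a minimal, hypothetical illustration of that general technique, not the authors' actual method; the function name, margin value, and toy embeddings are all assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Metric-learning objective: pull embeddings of the same font style
    together and push embeddings of different styles apart by at least
    `margin` (so distances in the space reflect font similarity)."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same-style glyph
    d_neg = np.linalg.norm(anchor - negative)  # distance to other-style glyph
    return max(d_pos - d_neg + margin, 0.0)

# Toy 4-d "style embeddings": anchor and positive share a font style.
anchor   = np.array([1.0, 0.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0, 0.0])  # same font, slightly perturbed
negative = np.array([0.0, 1.0, 0.0, 0.0])  # different font

loss = triplet_loss(anchor, positive, negative)
```

Once such an objective has shaped the space, nearest-neighbor distances between style embeddings can serve directly as a font-similarity measure for both observed and unobserved styles.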

Bibliographic Details
Main Authors: Muhammad, Ammar Ul Hassan; Choi, Jaeyoung
Format: Article
Language: English
Online Access: Order full text
DOI: 10.48550/arxiv.2205.06512
Date: 2022-05-13
Full text: https://arxiv.org/abs/2205.06512
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition