Method for coding photonic crystal through deep neural network based on self-attention

The invention discloses a method for coding a photonic crystal based on a self-attention deep neural network and provides a POViT model, which is applied to the coded photonic crystal. The method comprises the following steps: acquiring a geometric structure parameter image of the photonic crystal, where the photonic crystal has a plurality of air holes and each pixel of the image encodes the position and radius of an air hole; reshaping the image dimensions to obtain a plurality of patch images; inputting the patch images into an embedding module and a position coding module to obtain a token sequence; inputting the token sequence into a Transformer coding module to obtain coding features; and inputting the coding features into a fully connected layer module to obtain the quality factor Q and the mode volume V. POViT applies the self-attention Transformer model to the field of photoelec…
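
As an illustration of the pipeline summarized above (patch reshaping, embedding plus positional encoding, a Transformer encoder, and a fully connected head that outputs Q and V), the following is a minimal PyTorch sketch. It is a hypothetical reconstruction, not the patent's implementation: the class names (PatchEmbedding, POViTRegressor), the image size, patch size, channel layout, and all hyperparameters are assumptions made for the example.

# Hypothetical sketch of the described pipeline; names and dimensions are
# illustrative assumptions, not taken from the patent.
import torch
import torch.nn as nn


class PatchEmbedding(nn.Module):
    """Reshape the geometric structure parameter image into patches and embed them."""

    def __init__(self, img_size=32, patch_size=4, in_channels=3, embed_dim=128):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution splits the image into non-overlapping patches
        # and linearly projects each patch in a single step.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        # x: (batch, channels, H, W) -> (batch, num_patches, embed_dim)
        x = self.proj(x)
        return x.flatten(2).transpose(1, 2)


class POViTRegressor(nn.Module):
    """Patch embedding + positional encoding + Transformer encoder + FC head
    predicting the quality factor Q and the mode volume V."""

    def __init__(self, img_size=32, patch_size=4, in_channels=3,
                 embed_dim=128, depth=4, num_heads=4):
        super().__init__()
        self.patch_embed = PatchEmbedding(img_size, patch_size, in_channels, embed_dim)
        # Learnable positional encoding, one vector per patch token.
        self.pos_embed = nn.Parameter(
            torch.zeros(1, self.patch_embed.num_patches, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Fully connected head with two outputs, interpreted here as Q and V.
        self.head = nn.Linear(embed_dim, 2)

    def forward(self, x):
        tokens = self.patch_embed(x) + self.pos_embed   # token sequence
        features = self.encoder(tokens)                 # coding features
        pooled = features.mean(dim=1)                   # average over tokens
        return self.head(pooled)                        # (batch, 2) -> [Q, V]


if __name__ == "__main__":
    # Each pixel of the (hypothetical) 3-channel input could carry, for example,
    # hole presence plus the position offset or radius of the corresponding air hole.
    images = torch.randn(8, 3, 32, 32)
    model = POViTRegressor()
    q_and_v = model(images)
    print(q_and_v.shape)  # torch.Size([8, 2])

Under these assumptions, the head averages the encoded tokens before the final linear layer; a learnable class token, as in the standard Vision Transformer, would be an equally plausible design choice.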

Bibliographic Details
Main authors: ZHANG ZHAOYU, LI RENJIE, YU YUEYAO, LI WENYE
Format: Patent
Language: Chinese; English
creator ZHANG ZHAOYU
LI RENJIE
YU YUEYAO
LI WENYE
format Patent
language chi ; eng
recordid cdi_epo_espacenet_CN115542433A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
OPTICS
PHYSICS
title Method for coding photonic crystal through deep neural network based on self-attention