Image Matting using Neural Networks
Image matting, also referred to as picture matting in the article, is the task of isolating a target subject in a picture or a sequence of pictures (i.e., video), and it has been used extensively in many photo and video editing applications. Image composition is the process of extracting an eye-catching subject from a photograph and blending it with a different background.
Saved in:
Published in: | International journal of advanced computer science & applications 2022-01, Vol.13 (12) |
---|---|
Main authors: | J, Nrupatunga ; S, Swarnalatha K |
Format: | Article |
Language: | eng |
Subjects: | Cameras ; Cell phones ; Computer science ; Deep learning ; Neural networks ; Pictures ; Propagation ; Semantics |
Online access: | Full text |
container_issue | 12 |
container_title | International journal of advanced computer science & applications |
container_volume | 13 |
creator | J, Nrupatunga ; S, Swarnalatha K |
description | Image matting, also referred to as picture matting in this article, is the task of isolating a target subject in a picture or a sequence of pictures (i.e., video), and it has been used extensively in many photo and video editing applications. Image composition is the process of extracting an eye-catching subject from a photograph and blending it with a different background. Two techniques of picture matting are currently known: (a) blue/green screen (curtain) matting, where the backdrop is uniform and the foreground (frontal area) is readily distinguished from the background (foundation); this is the most widely used type of image matting. (b) Natural picture matting, where the photos are taken casually with cameras or cell phones during everyday activities. In natural images it is difficult to discern the boundary between the foreground and the background. Current frameworks for natural picture matting require both an RGB image and a trimap as inputs, and the trimap is difficult to obtain because an additional framework is needed to compute it. This study introduces the Picture Matting Neural Net (PMNN) framework, which takes a single RGB image as input and produces the alpha matte without any human involvement between the framework and the user, overcoming the drawbacks of the prior frameworks. The generated alpha matte is evaluated against the ground-truth alpha matte from the PPM-100 data set using the PSNR and SSIM metrics. The framework performs well and can be fed ordinary pictures taken with cameras or mobile phones without reducing the clarity of the image. |
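The abstract describes two connected ideas: alpha matting/compositing (a matte alpha blends a foreground over a background) and evaluating a predicted matte against PPM-100 ground truth with PSNR. A minimal sketch of both, assuming mattes are normalized to [0, 1] and flattened to 1-D lists; the function names are illustrative and not taken from the paper:

```python
import math

# Alpha compositing: each composite pixel is I = alpha*F + (1 - alpha)*B,
# where F is the foreground, B the background, and alpha in [0, 1] the matte.
def composite(fg, bg, alpha):
    return [a * f + (1.0 - a) * b for f, b, a in zip(fg, bg, alpha)]

# PSNR between a predicted matte and a ground-truth matte:
# PSNR = 10 * log10(peak^2 / MSE), with peak = 1.0 on a [0, 1] scale.
def psnr(pred, truth, peak=1.0):
    mse = sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)
    if mse == 0.0:
        return float("inf")  # identical mattes
    return 10.0 * math.log10(peak ** 2 / mse)

if __name__ == "__main__":
    truth = [0.0, 0.5, 1.0, 1.0]
    pred = [0.1, 0.6, 0.9, 0.9]         # off by 0.1 at every pixel
    print(round(psnr(pred, truth), 2))  # MSE = 0.01, so PSNR = 20.0 dB
```

SSIM, the second metric the paper uses, additionally compares local luminance, contrast, and structure over sliding windows, so it is usually taken from an image-processing library rather than written inline.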
doi_str_mv | 10.14569/IJACSA.2022.0131221 |
format | Article |
publisher | Science and Information (SAI) Organization Limited, West Yorkshire |
rights | 2022. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
fulltext | fulltext |
identifier | ISSN: 2158-107X |
ispartof | International journal of advanced computer science & applications, 2022-01, Vol.13 (12) |
issn | 2158-107X ; 2156-5570 |
language | eng |
recordid | cdi_proquest_journals_2770373809 |
source | EZB-FREE-00999 freely available EZB journals |
subjects | Cameras ; Cell phones ; Computer science ; Deep learning ; Neural networks ; Pictures ; Propagation ; Semantics |
title | Image Matting using Neural Networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T07%3A11%3A08IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Image%20Matting%20using%20Neural%20Networks&rft.jtitle=International%20journal%20of%20advanced%20computer%20science%20&%20applications&rft.au=J,%20Nrupatunga&rft.date=2022-01-01&rft.volume=13&rft.issue=12&rft.issn=2158-107X&rft.eissn=2156-5570&rft_id=info:doi/10.14569/IJACSA.2022.0131221&rft_dat=%3Cproquest_cross%3E2770373809%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2770373809&rft_id=info:pmid/&rfr_iscdi=true |