Current Issue
CONTENTS of Volume 31, Number 2, December 2026
Feeling Ostracized and Powerless: The Moderating Role of Job Crafting
Authors TAE HYUN KIM, YE KANG KIM, HYEONGKI KIM, SUJIN LEE
Keywords ostracism, sense of power, job crafting, power dependence theory
Ostracism is a negative social event that diminishes employees’ sense
of power. Drawing on power dependence theory, this study investigates
how employees’ prior engagement in crafting either social or structural job
resources differentially moderates this relationship. We hypothesize that
employees who previously increased social resources experience a weaker
negative impact of ostracism on power, whereas those who increased
structural resources experience a stronger effect. An experimental study
supported these hypotheses. This research advances the ostracism, job
crafting, and power literatures by showing how pre-ostracism job crafting
behaviors can either mitigate or exacerbate ostracism’s negative effects on
sense of power.
Reactive Disclosure to Environmental Rating Downgrades
Authors KEUMAH JUNG, HYE-YEONG LEE, SINAE KIM, HEE-YEON SUNWOO
Keywords sustainability reporting, ESG ratings, carbon emissions, greenwashing
Analyzing Korean listed firms from 2014 to 2022, we report that 1)
firms are less likely to issue sustainability reports following environmental
rating downgrades, and 2) when they do, they include less green
terminology in the reports. Notably, these disclosure patterns are salient
only for firms with multiple downgrades (i.e., when multiple ESG
raters issue downgrades). Interestingly, downgraded firms with more
green terminology reduce carbon emissions to a lesser degree than those
with less green terminology. Our results suggest that downgraded firms’
reactive disclosures are more of a strategic response than a demonstration
of genuine commitment to environmental improvement.
Effects of AI Explanations on Human-AI Collaboration: An Experimental Study on Decision Performance and Reliance
Authors SOL JIN, SANGKYU RHO
Keywords Explainable AI (XAI), decision support systems, collaborative decision-making performance, intent classification, experimental study
This study examines how AI explanations affect human-AI collaborative
decision-making. We test whether explainable AI (XAI) improves accuracy,
speed, confidence, and reliance when distinguishing correct from incorrect
AI suggestions. Using call center agents in three conditions (human only,
human with AI, and human with XAI), we evaluate decisions made with
classifiers and LIME-based explanations. Results show that explanations
significantly increase decision accuracy, reduce overreliance on AI, and
promote appropriate non-reliance. These findings emphasize the critical
role and applicability of AI explanations in human-AI collaboration and
contribute practical insights into designing AI assistants.
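The abstract pairs an intent classifier with LIME-based explanations. As an illustration only (the abstract does not describe the authors' classifier, intent labels, or LIME configuration, so everything below is hypothetical), a minimal sketch of a toy call-center intent classifier with an occlusion-style local explanation follows; LIME proper fits a local surrogate model over many random perturbations, while this simplified version drops one word at a time in the same spirit:

```python
# Illustrative sketch, not the paper's implementation: the intent labels,
# keyword lexicon, and scoring below are hypothetical stand-ins.
import math

# Hypothetical keyword lexicon for a toy call-center intent classifier.
INTENT_KEYWORDS = {
    "billing": {"charge", "invoice", "refund", "bill"},
    "tech_support": {"error", "crash", "reset", "password"},
}

def predict_proba(text):
    """Score each intent by keyword overlap, then softmax-normalize."""
    tokens = set(text.lower().split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    exps = {intent: math.exp(s) for intent, s in scores.items()}
    total = sum(exps.values())
    return {intent: e / total for intent, e in exps.items()}

def word_importance(text, intent):
    """Occlusion-style local explanation: delete each word in turn and record
    how much the predicted probability of `intent` drops. (LIME proper fits a
    local surrogate over many perturbations; this conveys the same idea.)"""
    base = predict_proba(text)[intent]
    words = text.split()
    return {
        w: base - predict_proba(" ".join(words[:i] + words[i + 1:]))[intent]
        for i, w in enumerate(words)
    }

msg = "there is an error charge on my bill please refund"
probs = predict_proba(msg)
top_intent = max(probs, key=probs.get)          # "billing" wins on keyword overlap
explanation = word_importance(msg, top_intent)  # "charge" gets positive weight
```

An agent deciding whether to trust the suggested intent can inspect which words drove the prediction, which is the kind of calibrated reliance the study measures.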
Seoul Journal of Business

ISSN 1226-9816 (Print)
ISSN 2713-6213 (Online)